Experiments with An Improved Iris Segmentation Algorithm

Xiaomei Liu, Kevin W. Bowyer, Patrick J. Flynn
Department of Computer Science and Engineering
University of Notre Dame, Notre Dame, IN 46556, U.S.A.
{xliu5, kwb, flynn}@cse.nd.edu

Abstract

Iris is claimed to be one of the best biometrics. We have collected a large data set of iris images, intentionally sampling a range of quality broader than that used by current commercial iris recognition systems. We have re-implemented the Daugman-like iris recognition algorithm developed by Masek. We have also developed and implemented an improved iris segmentation and eyelid detection stage of the algorithm, and experimentally verified the resulting improvement in recognition performance on the collected data set. Compared to Masek's original segmentation approach, our improved segmentation algorithm leads to an increase of over 6% in the rank-one recognition rate.

1. Introduction

Iris texture patterns are believed to be different for each person, and even for the two eyes of the same person. It is also claimed that, for a given person, the iris pattern changes little after youth. Very high recognition/verification rates have been reported for iris recognition systems in studies to date. For Daugman's system, at a Hamming distance (HD) matching threshold of 0.32, the reported false accept rate (FAR) improved from 1 in 151,000 (1993) to 1 in 26 million (2003) [3] [4] [5]. On the basis of these conceptual claims and empirical reports, iris is often thought to be one of the highest-accuracy biometrics. Compared with some other biometrics, such as fingerprint and face, iris recognition has a relatively short history of use. There are few large-scale experimental evaluations reported in the literature, and essentially none where the image data set is available to other researchers.
One constraint of current iris recognition systems, which is perhaps not widely appreciated, is that they require substantial user cooperation in order to acquire an image of sufficient quality for use.

Figure 1. The Iridian LG EOU2200 system: (a) the Iridian LG EOU2200; (b) iris acquisition.

We re-implemented a Daugman-like algorithm originally implemented by Masek [8]. We also developed and implemented an improved iris segmentation and eyelid detection stage. Our improved system is denoted ND IRIS. We tested ND IRIS on a set of over 4,000 images of varying quality acquired using an Iridian LG 2200 iris imaging system. The results show that the rank-one recognition rate using the ND IRIS segmentation is about 6% higher than that using the Masek segmentation.

The remaining sections are organized as follows: section 2 introduces the data set used in the experiments; section 3 details our implementation and optimization; section 4 presents the experimental results; and section 5 concludes.

2. Dataset

As described in [7], we used the Iridian LG EOU2200 system [2] [6], shown in Figure 1, for our data acquisition. The image data sets collected will eventually be made available to the research community through the Iris Challenge Evaluation (ICE), a program jointly sponsored by several U.S. Government agencies interested in measuring improvements in iris recognition technologies [1]. The iris images are intensity images with a resolution of 640 x 480. Because of the user cooperation required by the system, the iris generally takes up a large portion of an image. The average diameter of an iris is 228 pixels.

We used only left iris images in the experiments reported here. There are a total of 317 iris images in the gallery set, corresponding to the 317 different subjects involved in our experiments, and a total of 4,249 left iris images in the probe set. The gallery images are all of good quality; Figure 2(a) is one example. The probe images are of varying quality levels. Image quality can vary due to the percentage of the iris area that is occluded, the degree of blur in the image, or both.

Figure 2. An example gallery image (a) and its segmentation result (b).

3. Implementation and Optimization

An iris recognition process can be represented as three parts: iris segmentation, iris encoding and iris matching. The iris segmentation step localizes the iris region in the eye image; Figure 2 shows an example of an iris image and the segmentation result from ND IRIS. The encoding stage uses filters to encode iris image texture patterns into digital codes. The similarity of two irises is defined by the Hamming distance between their two digital codes; a smaller distance means a better match.

We started from Masek's open-source implementation of a Daugman-like recognition algorithm [8]. Masek's implementation was written in Matlab; we rewrote the program in C. We compared 250 templates generated by the Matlab code and by our C code. The maximum Hamming distance (HD) between a Masek template and the corresponding ND IRIS template is 0.0053, and the mean HD is 0.00015. We assume that these small observed differences are due primarily to differences in floating-point calculation error.

This paper focuses on creating an improved segmentation stage. The other two stages are plain translations of the Masek stages; they are used here to make it easy to have a complete system for the experiments.

3.1. Iris Segmentation
For most algorithms, and assuming near-frontal presentation of the pupil, the iris boundaries are modeled as two circles, which are not necessarily concentric. The inner circle is the boundary between the pupil and the iris; the outer circle is the boundary between the iris and the sclera.

3.1.1. Masek's algorithm

In Masek's segmentation algorithm, the two circular boundaries of the iris are localized in the same way. The Canny edge detector is used to generate the edge map. Then, after a circular Hough transform, the maximum value in the Hough space corresponds to the center and the radius of the circle.

3.1.2. Optimization

From examining instances of incorrect recognition with Masek's algorithm, it became clear that the performance of the iris segmentation step could be improved. As indicated in [7], for the 4,249 probe images used in the experiment, the rank-one recognition rate of our re-implementation of Masek's algorithm was 90.92%. However, if the iris location reported by the Iridian system [2] is substituted for that found by our re-implementation, the rank-one recognition rate increases to 96.61%. It therefore seems that there is substantial room for improvement in the segmentation. We developed and implemented an improved segmentation algorithm with the features described below.

Reverse the Detection Order. Masek's algorithm detects the outer iris boundary first, then detects the inner iris boundary within the detected outer boundary. However, the contrast between the iris and the pupil is usually stronger than that between the sclera and the iris: in an iris image, the pupil is the largest dark area, with a specular highlight within it. Compared to the outer boundary, the inner boundary is therefore relatively easier to localize. After the pupil boundary is detected, the outer iris boundary is detected in an area centered on the detected pupil. Figure 3 shows the steps in ND IRIS segmentation.

Reduce Edge Points. In looking at segmentation errors of Masek's algorithm, it appeared that edge pixels not from the iris boundary often caused the Hough transform to find an incorrect iris boundary. The specular highlight that typically appears in the pupil region was one source of such edge pixels. These can generally be eliminated by removing Canny edges at pixels with a high intensity value (240 in this case). Edge pixels inside the iris region can also pull the Hough transform result away from the correct result; these can generally be eliminated by removing edges at pixels with an intensity below some value (30 in this case). Figure 4 shows an example of the edge points before and after this edge-reduction procedure when detecting the outer iris boundary.
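As a concrete illustration, this edge-reduction rule can be sketched in a few lines (a minimal pure-Python sketch; the data layout and function name are illustrative rather than taken from the paper's C implementation, and the 240/30 thresholds are the values quoted above):

```python
def reduce_edge_points(edges, gray, high=240, low=30):
    """Suppress edge pixels at extreme intensities before the Hough step.

    edges: set of (row, col) edge pixels from a Canny detector.
    gray:  2-D list of grayscale values (0-255).
    Pixels brighter than `high` (specular highlights inside the pupil)
    or darker than `low` (pixels inside the dark pupil/iris region)
    are discarded, since they tend to pull the circular Hough
    transform away from the true iris boundary.
    """
    return {(r, c) for (r, c) in edges if low <= gray[r][c] <= high}

# Toy example: one highlight pixel (250) and one dark pixel (20)
# are removed from the edge map; the mid-intensity pixels survive.
gray = [[250, 100, 20],
        [100, 100, 100]]
edges = {(0, 0), (0, 1), (0, 2), (1, 1)}
print(sorted(reduce_edge_points(edges, gray)))  # [(0, 1), (1, 1)]
```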

Figure 3. Illustration of the steps in ND IRIS segmentation: (a) original iris image; (b) Step 1: detect the inner (pupil) boundary; (c) Step 2: detect the outer (iris) boundary; (d) final result.

Figure 4. The effects of reducing edge points: (a) before reducing edge points; (b) after reducing edge points.

Modification to the Hough Transform. In the Masek implementation, the Hough transform, for each edge point and a given radius r, votes for center location candidates in all directions. A well-known improvement to the Hough transform for circles is to restrict the votes for center locations based on the direction of the edges. In our algorithm, each edge point therefore votes for possible center locations only within 30 degrees on each side of the local normal direction. Figure 5 shows an example of the center locations voted for by a single edge pixel in the two cases. Our improved algorithm also requires more votes for a circular boundary with a larger radius. Additionally, the search for a maximum in Hough space, to represent an iris boundary, uses a sum over a sliding window of three values of r.

Hypothesize and Verify. The iris segmentation step in Masek's algorithm is a simple search for peaks in the Hough space created from the edge pixels found by a Canny edge detector. Peaks in the Hough space can be regarded as hypothesized boundaries in the image, but they need to be verified as meaningful boundaries. We implemented a simple hypothesize-and-verify approach to filter out some of the incorrect candidate segmentations found by searching the Hough space for peaks. For a peak that corresponds to a candidate sclera-iris boundary, a test checks that the iris is darker than the sclera; this check uses a small region on the left and right sides of the candidate boundary. For a peak that corresponds to a candidate iris-pupil boundary, a test checks that the pupil is darker than the iris, again using a small region on the left and right sides of the candidate boundary. It is also required that the radius of the iris-pupil boundary be within a reasonable range relative to the detected sclera-iris boundary, and that the centers of the two circular boundaries be closer than half of the radius of the iris-pupil boundary.

Segmentation Improvements. Figure 6 shows some examples of segmentations that were incorrect in Masek's results but are corrected in ND IRIS. The current version of ND IRIS is not perfect: Figure 7 shows some incorrect segmentation results from the current version. We are continuing to work on improved iris segmentation.

Figure 5. Modification to the Hough transform: (a) Masek's algorithm; (b) ND IRIS.

Figure 6. Examples of improved segmentation: (a) Masek; (b) ND IRIS.

Eyelid Detection. In our experiments, we considered occlusion by the eyelids. In Masek's algorithm, the eyelids are modeled as two horizontal lines: when detecting the top and bottom lids, a Canny edge detector generates the edge map, and each line is then located using a linear Hough transform. In ND IRIS, each eyelid is modeled as two straight lines. After the iris boundaries are detected, we split the detected iris area into four windows of equal size: left top, right top, left bottom and right bottom, with an overlap of half of the pupil radius between adjacent windows. We detect the eyelid in each of these four windows and connect the results. Figure 8 compares our eyelid detection result with Masek's.

3.2. Encoding

To reduce the effect of scale differences between iris images, normalization is applied before encoding. Gabor filters are used to encode the iris image; each selected sector is encoded as two bits.

3.3. Matching

The HD is used to indicate the similarity of two iris codes. The HD is defined as the number of differing bits between the two codes divided by the total number of valid bits:

HD(A, B) = sum_i ((A_i xor B_i) and Valid_i) / sum_i Valid_i    (1)

Valid_i = 1 if noisemaskA_i = 0 and noisemaskB_i = 0; 0 otherwise.    (2)

A smaller distance means a better match. To overcome rotation variation, shifting is used when calculating the HD: we fix code A, shift code B from -15 to +15 in increments of 1.5, and report the minimum HD over these shift positions.
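The masked Hamming distance of Eqs. (1)-(2), combined with the shift-and-take-the-minimum matching just described, can be sketched as follows (a pure-Python sketch over bit lists; the function names are illustrative, and the shift here is expressed in code-bit positions rather than the angular increments used in the paper):

```python
def masked_hd(code_a, mask_a, code_b, mask_b):
    """Eqs. (1)-(2): fraction of disagreeing bits over valid bits.

    A bit position is valid only where neither noise mask flags it."""
    valid = [not (ma or mb) for ma, mb in zip(mask_a, mask_b)]
    n_valid = sum(valid)
    if n_valid == 0:
        return 1.0  # no usable bits: treat as a non-match
    disagree = sum((a != b) and v
                   for a, b, v in zip(code_a, code_b, valid))
    return disagree / n_valid

def best_hd(code_a, mask_a, code_b, mask_b, max_shift=8):
    """Circularly shift code B (and its mask) and keep the minimum HD,
    compensating for head rotation between acquisitions."""
    n = len(code_b)
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        rot_code = code_b[s % n:] + code_b[:s % n]
        rot_mask = mask_b[s % n:] + mask_b[:s % n]
        best = min(best, masked_hd(code_a, mask_a, rot_code, rot_mask))
    return best

# Usage: a code matched against a rotated copy of itself.
a = [0, 1, 1, 0, 1, 0, 0, 1]
b = a[1:] + a[:1]              # same "iris", rotated by one position
print(best_hd(a, [0] * 8, b, [0] * 8))  # 0.0 once the shift realigns B
```

The shift search is what makes the matcher tolerant of in-plane eye rotation: without it, the unshifted comparison of the two codes above would report a large distance.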

Figure 7. Examples of incorrect segmentations from the ND IRIS algorithm.

Figure 8. ND IRIS eyelid detection: (a) left top; (b) right top; (c) left bottom; (d) right bottom; (e) ND IRIS eyelid detection result; (f) Masek eyelid detection result.

4. Experimental Results

In the context of verification, we compute the HD between a gallery image and a probe image and compare it with a threshold. If the computed HD is smaller than the threshold, the probe image is accepted; otherwise it is rejected. If an accepted image and the gallery image are not from the same subject, it is a false accept, and the percentage of false accepts is the false accept rate (FAR). If a rejected image and the gallery image are from the same subject, it is a false reject, and the percentage of false rejects is the false reject rate (FRR). An ROC curve plots the trade-off between the FAR and the FRR; the equal error rate (EER) is the point at which the FAR equals the FRR.

In the context of identification, we compare each probe iris image with all gallery iris images and choose the gallery image with the smallest HD as the prediction. If the probe image and the selected gallery image are from the same subject, it is a correct match. The percentage of correctly matched probe images is the rank-one recognition rate.

The experiments reported here do not use the Iridian software for enrollment and recognition. We experimented with three different segmentation results: our implementation of Masek's algorithm (denoted Masek), our improved algorithm (denoted ND IRIS), and the localization reported by the LG 2200 system. The different segmentation results were all run through the same encoding and matching stages, using our C re-implementation of Masek's algorithm. Initially we used Masek's eyelid detection model. Table 1 shows the rank-one recognition rate and the EER for these three segmentation results.
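The rank-one and FAR/FRR computations just described can be sketched as follows (an illustrative pure-Python sketch with toy distances, not the paper's data):

```python
def rank_one_rate(hd, probe_ids, gallery_ids):
    """Fraction of probes whose closest gallery entry (minimum HD)
    belongs to the same subject.  hd[i][j] is the distance between
    probe i and gallery entry j."""
    correct = 0
    for i, row in enumerate(hd):
        j = min(range(len(row)), key=row.__getitem__)
        correct += (probe_ids[i] == gallery_ids[j])
    return correct / len(hd)

def far_frr(genuine, impostor, threshold):
    """Verification errors at a fixed HD threshold: a comparison is
    accepted when its HD falls below the threshold."""
    far = sum(d < threshold for d in impostor) / len(impostor)
    frr = sum(d >= threshold for d in genuine) / len(genuine)
    return far, frr

# Toy example with two gallery subjects and three probes.
hd = [[0.12, 0.45],   # probe 0: closest to gallery 0 (same subject)
      [0.48, 0.30],   # probe 1: closest to gallery 1 (same subject)
      [0.40, 0.44]]   # probe 2: closest to gallery 0 (wrong subject)
print(rank_one_rate(hd, ["s1", "s2", "s2"], ["s1", "s2"]))  # 2 of 3 correct
```

Sweeping the threshold in `far_frr` over the range of observed distances traces out the ROC curve; the EER is read off where the two rates cross.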
Figure 9 shows the ROC curves for the iris verification experiments. The results show that the ND IRIS segmentation works much better than Masek's segmentation, and a little better than the Iridian reported segmentation results. When using our improved eyelid detection model, the rank-one recognition rates increase to 96.75% (Iridian segmentation) and 97.34% (ND IRIS segmentation).

Table 1. The experimental results of using different segmentation methods.

SEGMENTATION    RANK-ONE RECOGNITION    EER
Masek           90.92%                  5.60%
ND IRIS         97.08%                  1.79%
Iridian         96.61%                  2.14%

The breakdown of male versus female among the 317 persons in this data set is 56% to 44%. There are varying numbers of iris images per person, depending on the number of data acquisition sessions and the image quality control screening. The breakdown of probe images from male versus female persons is 52% to 48%, and the corresponding recognition rates are 97.26% and 97.43%. This difference is not statistically significant at the 0.05 level.

The iris color of the persons participating in the study was not recorded at the time of image acquisition, and since the images from the Iridian system are acquired under infra-red illumination, iris color cannot be obtained from them. However, subjects participating in this image acquisition were also part of the image acquisition for the Face Recognition Grand Challenge [9], for which high-resolution color face images were acquired. Looking at the color face images, it is possible to retrospectively assign a dark (black or brown) or light (blue, green, or hazel) iris color to each person. Using this retrospective assignment, the breakdown of dark versus light irises among persons in this data set is 50% to 50%, and the breakdown of probe images from dark versus light irises is 54% to 46%. The corresponding recognition rates are 96.70% and 98.10%; the recognition rate for light irises is higher than that for dark irises, and this difference is significant at the 0.01 level. However, this comparison is not controlled for image quality between the two groups, so further study is needed before assigning any importance to it.

Figure 9. ROC comparison of segmentations (FAR versus FRR for the Masek, ND IRIS, and Iridian system reported segmentations).

5. Conclusion

We re-implemented Masek's iris recognition system in C, and developed and implemented an improved iris segmentation stage. The ND IRIS segmentation leads to a rank-one recognition rate about 6% higher than the Masek segmentation. The results using our segmentation are even a little better than those using the Iridian reported segmentation.

This paper looks at alternatives in the segmentation stage to see what can be done to get the best performance from the overall system. An initial performance range for the overall system is determined by using the Masek segmentation and the Iridian segmentation. Feeding the Masek segmentation into the remainder of the system gives a kind of lower bound; feeding the Iridian segmentation into the remainder of the system gives, not an upper bound, but an indication of current industrial-strength performance. The goal, of course, is to find a segmentation algorithm that gives better performance with this system than the Iridian segmentation. That does not necessarily mean that the performance of the overall system is better than the complete Iridian commercial system, since the last two stages (encoding and matching) used in the experiments are plain Masek stages.

It is important to be clear that these results do not represent the performance of the Iridian commercial iris recognition system. It is quite possible that some of the inaccurate segmentations reported by the Iridian system are compensated for later in that system. Our experimental results also suggest that more work is needed on iris segmentation, especially for iris images of relatively lower quality.

Acknowledgments

This work is supported by National Science Foundation grant CNS-0130839, by the Central Intelligence Agency, and by Department of Justice grant 2004-DD-BX-1224.

References

[1] http://iris.nist.gov/ice/.
[2] http://www.iridiantech.com/.
[3] J. Daugman. High confidence visual recognition of persons by a test of statistical independence. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(11), November 1993.
[4] J. Daugman. Statistical richness of visual phase information: Update on recognizing persons by iris patterns. International Journal of Computer Vision, 45(1):25-38, 2001.
[5] J. Daugman. The importance of being random: Statistical principles of iris recognition. Pattern Recognition, 36(2):279-291, 2003.
[6] J. Daugman. How iris recognition works. IEEE Transactions on Circuits and Systems for Video Technology, 14(1):21-30, 2004.
[7] X. Liu, K. Bowyer, and P. Flynn. Experimental evaluation of iris recognition. Proc. Face Recognition Grand Challenge Workshop, 2005.
[8] L. Masek. Recognition of Human Iris Patterns for Biometric Identification. The University of Western Australia, http://www.csse.uwa.edu.au/~pk/studentprojects/libor/.
[9] P. J. Phillips. Overview of the face recognition grand challenge. IEEE Conference on Computer Vision and Pattern Recognition, 1:947-954, 2005.