Impact of out-of-focus blur on iris recognition

Nadezhda Sazonova 1, Stephanie Schuckers 2, Peter Johnson 2, Paulo Lopez-Meyer 1, Edward Sazonov 1, Lawrence Hornak 3

1 Department of Electrical and Computer Engineering, The University of Alabama, Tuscaloosa, AL, USA
2 Department of Electrical and Computer Engineering, Clarkson University, Potsdam, NY, USA
3 Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV, USA

ABSTRACT

Iris recognition has expanded from controlled settings to uncontrolled settings (on the move, from a distance) where blur is more likely to be present in the images. More research is needed to quantify the impact of blur on iris recognition. In this paper we study the effect of out-of-focus blur on iris recognition performance using images with out-of-focus blur produced at acquisition. A key aspect of this study is that we create a range of blur by changing the focus of the camera during acquisition. We quantify the produced out-of-focus blur with a measure based on the Laplacian of Gaussian operator and compare it to the gold standard, the modulation transfer function (MTF) of a calibrated black/white chart. The sharpness measure uses unsegmented iris images from a video sequence with changing focus and offers a good approximation of the standard MTF. We examined the effect of 9 blur levels on iris recognition performance. Our results show that for moderately blurry images (sharpness of at least 50%) the drop in performance does not exceed 5% from the baseline (100% sharpness).

Keywords: iris recognition, out-of-focus blur, modulation transfer function

I. INTRODUCTION

Iris recognition has been suggested as a biometric to verify or identify an individual. In a controlled environment, it is possible to require an image to have high quality. However, in non-ideal, uncontrolled settings, iris images may be of degraded quality that cannot be corrected by requiring a second image. Blur is one of the main factors contributing to errors in iris recognition. One focus of iris quality research stratifies iris quality metrics in order to estimate performance, with the goal of developing a satisfactory quality measure which predicts the performance of iris recognition systems [1-5]. In this research we study the extent to which imperfect images can be useful in iris recognition, with an emphasis on out-of-focus blur. There has been some research in this direction. For example, in [6] the effect of blur on iris recognition was studied using simulated blur. Another study [7] involved real data but measured the amount of blur as the focus value provided by the image acquisition system (Iridian LG EOU 2200). To our knowledge, there is no study of the impact of blur based on real images with a range of blur created for each subject at each visit, with blur measured objectively, i.e., relative to a gold standard. In this work the amount of blur was estimated in standard units (MTF) and quantified by a measure, linked to the MTF, that is computed from the result of convolution with the 2-D Laplacian of Gaussian (LoG) operator. The purpose of the study is to investigate the effect of different levels of real-world blur on the performance of an iris recognition system.

II. DATA

Data consisted of videos of 103 subjects collected over two visits (with at least 2 weeks in between) from the Q-FIRE dataset. The original Q-FIRE dataset includes 175 subjects.
Since the purpose of the study was to investigate the sole effect of blur on iris recognition, we excluded subjects who wore glasses and those for whom eye localization failed. This left 103 subjects.

To separate the effect of blur from other deteriorating factors (such as resolution and illumination), we selected the experiments with the same resolution (images obtained at a distance of 7 ft, resulting in 200-220 pixels across the iris) and the same illumination level (4 near-infrared 840 nm LED lights positioned at a 30° angle, 2 ft away from the subject). The dataset also contained images acquired at 5, 11, 15 and 25 ft; in this study we considered only a single distance to simplify the analysis. At each visit, for each subject, a 6-second video at 25 frames per second was recorded using a near-infrared filter (removing wavelengths of 680 nm and below), resulting in 150 frames. During the video recording, a gradual change of focus was created by manually rotating the focus ring of the camera so that frames with different levels of blur, including the sharpest image, were captured.

Fig. 1. Subject wearing a lens-less frame with the attached chart.

III. METHOD

A standard method to measure blur uses a test chart (ISO 12233). The modulation transfer function (MTF) compares a blurred chart image to a baseline chart image:

MTF = M_image / M_original,    (1)

where the modulation M of a chart image is

M = (I_max - I_min) / (I_max + I_min).    (2)

Here I_max and I_min are the maximum and minimum intensity of the chart image. In this study, the MTF is computed at level 10 of the chart, where the optical resolution at 60% MTF equals 4-5 lines/mm, in order to estimate blur of the fine detail of an iris spanning 200-220 pixels. Level 10 appears in the finest fragment of the ISO 12233 test chart. In order to link the chart blur metric to iris blur, we conducted preliminary experiments on 10 subjects by introducing the chart into the picture alongside the subject (Fig. 1). To provide adequate analysis, the experiment was performed in near-perfect conditions in an attempt to minimize iris-chart misalignment and possible movement of the subject. This was done by using a chinrest and having each subject wear a lens-free glass frame with the chart attached to the side, aligned with the iris plane. This setup ensured that the blur changed at the same rate for the chart and both irises during the 6-second video sequence. To compute the MTF for the chart, we used the sharpest image in the sequence as the baseline image in formula (1); thus, the computed MTF is called the relative MTF.
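As a concrete illustration of formulas (1)-(2), the relative MTF of a chart crop can be computed from its maximum and minimum intensities. This is only a minimal sketch under assumptions not stated above: it takes already-cropped grayscale patches of chart level 10 as NumPy arrays and uses the raw min/max of each crop (a more robust implementation might average line profiles or use percentiles to suppress noise).

```python
import numpy as np

def modulation(chart_patch: np.ndarray) -> float:
    """Modulation M = (I_max - I_min) / (I_max + I_min) of a cropped chart patch (eq. 2)."""
    patch = chart_patch.astype(float)
    i_max, i_min = patch.max(), patch.min()
    return (i_max - i_min) / (i_max + i_min)

def relative_mtf(blurred_patch: np.ndarray, sharpest_patch: np.ndarray) -> float:
    """Relative MTF (eq. 1): modulation of a frame's chart crop divided by that of the
    sharpest frame in the same video sequence (the baseline)."""
    return modulation(blurred_patch) / modulation(sharpest_patch)
```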

Typically, the original MTF (computed relative to the original chart) of an in-focus image is within 40-60%, depending on the illumination level. For our data the original MTF of the sharpest images was around 60%; therefore, the relative MTF differed from the original MTF by a factor of 0.6. The relative MTF is used because it corresponds to the sharpness level relative to the actual sharpest image and is comparable between videos with different illumination settings. However, despite the elaborate setup to ensure iris-chart alignment, for some subjects the alignment was not perfect. This is not surprising: on the scale of such small details as those in the iris pattern (and the corresponding chart level 10), the chart and iris planes can easily become misaligned through slight, visually unnoticeable head movements. Alignment of the chart with the irises of both eyes is even harder to control. Since computing the MTF simultaneously with the iris image is difficult for all subjects due to the high probability of chart-iris misalignment, there is a need for a blur measure which does not rely on the chart.

Since the standard MTF measurement of blur was difficult to perform in all images as described, we propose a measure (logscore) based on the result of convolution with the 2-D Laplacian of Gaussian (LoG) operator [5]:

LoG(x, y) = -(1 / (π σ^4)) [1 - (x^2 + y^2) / (2 σ^2)] exp(-(x^2 + y^2) / (2 σ^2)).    (3)

The two parameters that control the 2-D LoG filter implementation, window size (hs) and standard deviation (σ), were initially set at hs = 10, σ = 1.0. It is worth noting that selection of the sharpest frame within a video sequence is not particularly affected by the choice of parameters. The logscore of an iris image was calculated as the mean value of the LoG-filtered cropped iris image. Correspondingly, the relative MTF value of a specific frame was calculated by cropping the image of level 10 of the chart and comparing it to the baseline (sharpest) chart image using formulas (1)-(2). As suggested in [5], the LoG operator can be used for the assessment of blur in iris images, so we expected to find some correspondence between the logscore and the relative MTF.

Fig. 2. Relative MTF and normalized logscore for blur simulated by a Gaussian filter for a sample subject.

Fig. 3. Relative MTF and normalized logscore for blur simulated by a gradual pillbox filter for a sample subject.

In tests, the logscore (normalized to [0.1, 1]) of a cropped, unsegmented iris image closely follows the computed relative MTF of the chart in the same video sequence. We performed a grid search of the LoG parameters that provide the best match between the relative MTF of the chart and the normalized logscore of the iris, using real and simulated out-of-focus blur. Simulated blur was used to choose the settings for the logscore. The sharpest chart (level 10) image and the sharpest right-iris image were selected using separate logscores for the chart and the iris, respectively. Most often these images corresponded to the same frame within the video sequence, but sometimes they did not, which indicated iris-chart misalignment.
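The logscore just described can be sketched as follows. This is a minimal illustration under assumptions the text does not spell out: the LoG kernel of eq. (3) is built on a square window with hs treated as the half-width, the filtered response is summarized by the mean of its absolute value (the text says "mean value of the LoG-filtered image" without specifying whether the magnitude is taken), the normalization to [0.1, 1] is assumed to be a linear min-max rescaling per video, and each crop is a grayscale NumPy array.

```python
import numpy as np
from scipy.signal import convolve2d

def log_kernel(hs: int, sigma: float) -> np.ndarray:
    """2-D Laplacian of Gaussian kernel (eq. 3) on a (2*hs+1) x (2*hs+1) window."""
    ax = np.arange(-hs, hs + 1)
    x, y = np.meshgrid(ax, ax)
    r2 = (x ** 2 + y ** 2) / (2.0 * sigma ** 2)
    return -1.0 / (np.pi * sigma ** 4) * (1.0 - r2) * np.exp(-r2)

def logscore(cropped_iris: np.ndarray, hs: int = 5, sigma: float = 2.0) -> float:
    """Sharpness score of a cropped (unsegmented) iris image: mean magnitude of the
    LoG-filtered crop. Higher values correspond to sharper frames."""
    response = convolve2d(cropped_iris.astype(float), log_kernel(hs, sigma), mode="valid")
    return float(np.mean(np.abs(response)))

def normalize_logscores(scores) -> np.ndarray:
    """Min-max normalize the per-frame logscores of one video to [0.1, 1]
    (assumed form of the normalization; the sharpest frame maps to 1)."""
    s = np.asarray(scores, dtype=float)
    return 0.1 + 0.9 * (s - s.min()) / (s.max() - s.min())
```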

The sharpest chart and iris images were then simultaneously subjected to the same controlled amounts of blur, produced by: a Gaussian filter with increasing window size, always applied to the sharpest image (20 levels of blur); and a pillbox filter with increasing window size, applied to the image from the previous iteration (20 levels of blur). The relative MTF was calculated for the obtained sequence of chart images, and the logscore (using a fixed set of parameters) was calculated for the iris images and normalized to [0.1, 1]. Both the relative MTF and the normalized logscore were plotted on the same graph. The grid search of the LoG parameters revealed that, for our video data, the normalized logscore with parameters hs = 5 and σ = 2.0 gives an almost perfect match to the relative MTF, as shown in Figs. 2 and 3 for a sample subject. We also examined real out-of-focus blur, matching the relative MTF of the chart against the normalized logscore of the iris using the LoG parameters found by the grid search (hs = 5, σ = 2.0). Most subjects from the preliminary data set demonstrated that the normalized logscore was highly correlated with the relative MTF (Fig. 4). Thus, even in these hard-to-control conditions, we observed a very close match between the chart-image-based MTF and the iris-image-based logscore.

Fig. 4. Relative MTF and normalized logscore for actual out-of-focus blur for a sample subject.

Fig. 5. Selected images corresponding to the 9 levels of sharpness for a sample iris image.

The preliminary study thus established the correspondence between the MTF and the logscore and justified replacing the relative MTF with the normalized logscore of the iris (with hs = 5, σ = 2.0) for blur quantification. To study the relationship between performance and increasing blur, we selected 9 levels of iris image blur corresponding to ranges of sharpness expressed as normalized logscores (and, therefore, relative MTF): 10-20%, 20-30%, 30-40%, 40-50%, 50-60%, 60-70%, 70-80%, 80-90%, and 100% (sharpest image). For each video sequence, the logscore with the optimum parameters (hs = 5, σ = 2.0) was computed for the cropped iris images and normalized to [0.1, 1] (cropping was performed automatically using our own eye localization algorithm). One iris image for each blur/sharpness level was selected so that its logscore was the closest to the midpoint of the corresponding range. Thus, for each video sequence, at most 9 images for each eye (left and right) were selected for each of the 103 subjects (see Fig. 5).
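A minimal sketch of this frame-selection step, under the assumption that the per-frame normalized logscores of one video are available as an array; the 100% level is simply the frame with the maximum score, and bins containing no frames are skipped (which is why some videos yield fewer than 9 images).

```python
import numpy as np

# Sharpness bins on the normalized logscore; the ninth "bin" is the sharpest frame itself.
BINS = [(0.1, 0.2), (0.2, 0.3), (0.3, 0.4), (0.4, 0.5),
        (0.5, 0.6), (0.6, 0.7), (0.7, 0.8), (0.8, 0.9)]

def select_frames(norm_scores) -> dict:
    """Return {sharpness level: frame index}, picking per bin the frame whose normalized
    logscore is closest to the bin midpoint, plus the sharpest frame as the 100% level."""
    s = np.asarray(norm_scores, dtype=float)
    selected = {"100%": int(np.argmax(s))}
    for lo, hi in BINS:
        in_bin = np.where((s >= lo) & (s < hi))[0]
        if in_bin.size:                       # a bin may be empty for some videos
            mid = 0.5 * (lo + hi)
            label = f"{round(lo * 100)}-{round(hi * 100)}%"
            selected[label] = int(in_bin[np.argmin(np.abs(s[in_bin] - mid))])
    return selected
```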

Segmentation was performed using our own approach based on the relative entropy of grayscale values across the circular boundary. We applied the encoding originally developed by Masek [8, 9], which is an implementation of the Gabor-wavelet encoding proposed by Daugman [10]. The size of the extracted template for each normalized iris image was 20x480. Matching was performed using the traditional Hamming distance. Genuine scores were obtained by matching the sharpest image of a given subject and eye (left or right) from one visit to the images with the corresponding amounts of blur for the same subject and eye in the other visit. Impostor scores were obtained by matching the sharpest image of one subject from the first visit to all the images of all the other subjects in the second visit.

IV. RESULTS

We based our results on the following selected number of images. For each eye (left and right) of each subject, up to 9 iris images (corresponding to the 9 sharpness levels) were selected; occasionally it was not possible to select all 9 images, but for most subjects all 9 were available. Since there were also two visits for each subject, the data available for analysis comprised between 360 and 103 x (2 eyes) x (2 visits) = 412 genuine match scores for each sharpness level. We also obtained 1300 impostor match scores to use in the analysis. Visually detected segmentation error was estimated at less than 1.5% of all images. The distributions of genuine and impostor scores are shown in Fig. 6. The mean genuine score for every sharpness level and the mean impostor score are plotted in Fig. 7. In addition, the decreasing performance due to increasing blur is demonstrated by the DET curves in Fig. 8. Verification performance is reported in Table 1 (equal error rates (EER) and verification rates at 0.1% FAR).

Fig. 6. Distributions of matching scores for impostors and for genuine images of different degrees of sharpness.
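The verification metrics reported in Table 1 can be computed from the genuine and impostor score distributions. The sketch below is an illustration only, assuming Hamming-distance scores (lower means a better match) held in NumPy arrays; it sweeps all observed score values as thresholds, takes the EER as the point where FAR and FRR are closest, and reports the verification rate (1 - FRR) at the largest threshold whose FAR does not exceed the 0.1% target.

```python
import numpy as np

def eer_and_verification_rate(genuine, impostor, far_target=0.001):
    """EER and verification rate (1 - FRR) at a target FAR for distance-like scores
    (a lower Hamming distance means a better match)."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    far = np.array([(impostor <= t).mean() for t in thresholds])   # false accept rate
    frr = np.array([(genuine > t).mean() for t in thresholds])     # false reject rate
    i = int(np.argmin(np.abs(far - frr)))
    eer = 0.5 * (far[i] + frr[i])
    ok = np.where(far <= far_target)[0]                            # thresholds meeting the FAR target
    verification_rate = 1.0 - frr[ok[-1]] if ok.size else 0.0
    return eer, verification_rate
```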

These results show the extent to which blur deteriorates iris recognition performance. In particular, very high amounts of blur (normalized logscore and relative MTF below 40%) are still not sufficient to completely mask the information expressed by the iris patterns. This is demonstrated by the separation of the impostor and genuine scores even at the 10-20% sharpness level and by the tolerable EER (Table 1, column 2), although at very low FAR levels the verification rate drops significantly (Table 1, column 3) and becomes meaningless for all practical applications. Another noticeable observation is that at the higher end of FAR, the verification rate (as 1-FRR) for the sharpest images is lower than that of the 60-70% sharp images. This effect may be due to measurement errors (when the sharpness of eyelashes was emphasized rather than that of the actual iris area) and requires further study. At the same time, the loss in performance cannot be attributed, even partially, to an increase in segmentation errors, since those were approximately uniformly distributed across all considered sharpness levels. Thus, the gradual decline of the performance characteristics reported in Table I was solely due to the loss of high-frequency features in the blurred images.

TABLE I
VERIFICATION UNDER DIFFERENT SHARPNESS LEVELS

Sharpness level, % of norm. logscore | EER, % | Verification rate at 0.1% FAR, % | Drop in verification rate, % of baseline rate*
100   | 3.58 | 85.94 | 0
80-90 | 2.96 | 84.80 | 1.33
70-80 | 3.1  | 84.92 | 1.19
60-70 | 3.35 | 81.96 | 4.63
50-60 | 2.11 | 81.58 | 5.07
40-50 | 2.8  | 74.15 | 13.72
30-40 | 3.05 | 62.44 | 27.34
20-30 | 4.95 | 40.78 | 52.55
10-20 | 8.51 | 22.57 | 73.74
* The baseline verification rate is that at 100% sharpness.

Fig. 7. Mean matching scores for the impostor and genuine distributions for 9 sharpness levels.

Fig. 8. DET curves for 9 sharpness levels.

V. CONCLUSION

In this study we presented performance results for out-of-focus blur created with the lens focus ring at acquisition. We proposed a sharpness metric based on the LoG operator, normalized with respect to the best (sharpest) image so as to eliminate differences such as those due to eyelashes, illumination, etc. This metric proved to be in very good agreement with the gold-standard MTF, which allowed us to use the LoG-based metric as a proxy for the MTF and to use the two measures interchangeably with respect to the sharpness level. The method was applied to video sequences covering the full range of out-of-focus blur.

We examined the effect of the 9 blur levels on iris recognition performance. Our results show that the drop in verification rate (at 0.1% FAR) from the 100% sharpness baseline is only around 1.2-1.3% for 70-90% sharpness and around 5% for 50-70% sharpness. Also, even for very blurry images (relative MTF, approximated by the normalized logscore, below 40%) there is enough information to achieve an EER of 3 to 8.5% and a verification rate at 0.1% FAR of up to 62.4%; however, the drop in the verification rate (relative to the 100% sharpness baseline) becomes pronounced: 27% and higher. We therefore argue that iris images acceptable for iris recognition may have lower than expected sharpness, although requirements may vary depending on the particular operational region. In this study we applied only one iris recognition algorithm and concentrated on the trend in iris recognition performance with respect to objectively measured out-of-focus blur rather than on the recognition rates per se. Thus, there is no claim that the chosen algorithms for segmentation and matching necessarily show the best performance. In addition to out-of-focus blur, other factors may influence iris recognition performance. In particular, off-angle gaze was not an issue for the data considered in this study due to the specific acquisition setup. Motion blur was minimized, as the subjects were encouraged not to move, but could be present to some extent due to blinking. The most important factor not specifically accounted for in this study was iris occlusion. However, differences in occlusion do not change the major findings of this study, as the average degree of occlusion was approximately constant across the different sharpness levels. In this study we computed the logscore using unsegmented iris images, which included non-iris areas and in particular such noise as eyelashes and eyelids. This may have contributed to measurement error in the logscore and also affected the performance of the iris recognition algorithm (due to differences in occlusion). In the future, we would like to address these issues by applying the logscore metric to the normalized iris with eyelashes and other noisy pixels masked, and to establish an acceptable occlusion-level threshold. Future work will also include analysis of iris images with different resolutions and application of a variety of available iris recognition algorithms to support our current results.

REFERENCES

[1] Z. Wei, T. Tan, Z. Sun, J. Cui, "Robust and Fast Assessment of Iris Image Quality," LNCS, Vol. 3832, Advances in Biometrics, pp. 464-471, 2005.
[2] L. Pan, M. Xie, "The Algorithm of Iris Image Quality Evaluation," Intl Conf. Circuits and Syst., 11-13 July 2007, pp. 616-619, 2007.
[3] G. Lu, J. Qi, Q. Liao, "A New Scheme of Iris Image Quality Assessment," Proc. Third Intl Conf. IIH-MSP, Vol. 01, pp. 147-150, 2007.
[4] Y. Chen, S. C. Dass, and A. K. Jain, "Localized Iris Image Quality Using 2-D Wavelets," Intl Conf. Biometrics, vol. 3832, pp. 373-381, 2006.
[5] J. Wan, X. He, P. Shi, "An Iris Image Quality Assessment Method Based on Laplacian of Gaussian Operation," IAPR Conf. Machine Vision Applications, May 16-18, 2007, Tokyo, Japan, pp. 48-51, 2007.
[6] N. D. Kalka, V. Dorairaj, Y. N. Shah, N. A. Schmid, B. Cukic, "Image Quality Assessment for Iris Biometric," Proc. of SPIE, vol. 6202, 62020D, 2006.
[7] X. Liu, Optimizations in Iris Recognition, Thesis, The University of Notre Dame, 2006.
[8] L. Masek, Recognition of Human Iris Patterns for Biometric Identification, Thesis, The University of Western Australia, 2003.
[9] L. Masek, http://www.csse.uwa.edu.au/pk/studentprojects/libor/, 2003.
[10] J. Daugman, "How Iris Recognition Works," IEEE Trans. CSVT, 14(1), pp. 21-30, 2004.