IRIS RECOGNITION OF DEFOCUSED IMAGES FOR MOBILE PHONES


International Journal of Pattern Recognition and Artificial Intelligence
© World Scientific Publishing Company

IRIS RECOGNITION OF DEFOCUSED IMAGES FOR MOBILE PHONES

BO LIU 1, SIEW-KEI LAM 2, THAMBIPILLAI SRIKANTHAN 3 and WEIQI YUAN 4

Computer Vision Group, Shenyang University of Technology, Shenyang, China, 110870
Centre for High Performance Embedded Systems, Nanyang Technological University, Singapore, 637553
1 liuboapp@hotmail.com  2 assklam@ntu.edu.sg  3 astsrikan@ntu.edu.sg  4 yuan60@126.com

In this paper, we introduce a novel iris recognition approach for mobile phones, which takes into account imaging noise arising from image capture outside the Depth of Field (DOF) of cameras. Unlike existing approaches that rely on special hardware to extend the DOF, or on computationally expensive algorithms to restore the defocused images prior to recognition, the proposed method performs recognition on the defocused images based on the stable bits in the iris code representation that are robust to imaging noise. To the best of our knowledge, our work is the first to investigate the characteristics of iris features for varying degrees of image defocus when the images are captured outside the DOF of cameras. Based on our findings, we present a method to determine the stable bits of an enrolled image. When compared to iris recognition of defocused images that relies on the entire code representation, the proposed recognition method increases the inter-class variability while reducing the intra-class variability of the samples considered. This leads to smaller intersections between the intra-class and inter-class distance distributions, which results in higher recognition performance. Experimental results based on over 15,000 images show that the proposed method achieves an average recognition performance gain of about 2 times.
It is envisioned that the proposed method can be incorporated as part of a multi-biometric system for mobile phones due to its lightweight computational requirements, which are well suited to power-sensitive solutions.

Keywords: Iris Recognition; Defocused Iris Image; Depth of Field.

1. Introduction

The prevalence of internet-enabled mobile phones has given rise to many applications, such as mobile banking, contactless payment, and mobile marketing, which require tight security protection of user information. As such, traditional security features on mobile phones, such as personal identification numbers, are expected to make way for more sophisticated biometric systems. One of the most promising biometric

authentication systems for mobile phones is iris recognition, which has been shown to be an effective method for recognizing individuals. In addition, iris recognition can be readily adopted in mobile phones due to the availability of built-in cameras in mobile phones today. 7,24,32 Most iris recognition methods adopt a strategy similar to the well-known Daugman's approach, which employs binary coding of phase information to represent the iris features. 8-11 The phase information is obtained by applying a mathematical transformation (texture filtering) to the iris images. 1,4,8-11,25,27,42 For example, Daugman extracts the iris texture's local phase features using Gabor filtering. 8-11 The work in Ref. 42 extracts features by applying four levels of processing to the iris images using the Laplace pyramid decomposition algorithm. In Ref. 4, Boles and his group extract the zero-crossings information using a 1-D cubic spline wavelet. Ma makes use of the dyadic wavelet method to extract the binary features. 27 In Ref. 25, Lim and his group extract the iris images' high-frequency information as features by adopting the two-dimensional Haar wavelet transformation. Azizi makes use of the contourlet wavelet transformation to obtain the iris images' intrinsic texture structure and then chooses the optimal feature sequences. 1 While all of the above methods can achieve good recognition performance, they are unable to work with images that are affected by imaging noise introduced when the images are captured in an unconstrained environment. One of the key problems with iris image acquisition on mobile phones is the limited DOF of the camera system. Iris camera systems must be capable of capturing the iris's texture details with a high camera shutter speed so that the eye is not exposed to long periods of lighting.
In order to achieve this, the camera system requires a long-focus lens to magnify the iris, and a high numerical aperture to enable sufficient lighting. 3,8-11,28,36 Such camera systems have a short DOF, which increases the difficulty of using the iris recognition system for untrained and non-cooperative users, as images captured outside the DOF of cameras lead to low recognition performance. 30 This problem arises because iris patterns of images captured outside the DOF are transformed by blurring caused by optical defocusing. 21 Using an auto-focusing camera to overcome this problem will lead to an increase in the cost, size, and complexity of the system. 15,21 In addition, techniques based on the use of audio/visual cues to guide the user to an appropriate distance from the camera during image acquisition are not practical for the mobile phone user, as they can be time-consuming and will therefore hinder the acceptance of the process in daily use. 16,40 Previous work on extending the DOF of iris recognition systems can be categorized into two main approaches: 1) using special hardware such as aspheric optics, and 2) applying image restoration to the defocused images prior to recognition. The former approach employs specially designed aspheric optics to increase the focus invariance along the axis of the lens, while the latter uses mathematical models that are constructed based on certain image capture conditions to restore the defocused images. These two approaches have a common aim: to enhance the quality of the

captured images in order to improve the recognition performance of robust iris recognition systems. In the first approach, computational imaging systems that commonly employ wave-front coding techniques (originally proposed by Dowski and Cathey) are used. 6,12 These systems introduce a cubic phase mask into the standard optical system to produce large focus invariance, and employ signal processing algorithms on the captured images. Various wave-front coded lenses have been proposed for iris recognition, which take into account the non-linear steps required for iris feature extraction and identification. 29,30,33 The work in Refs. 36 and 40 examined the utilization of wave-front coded imaging systems for increasing the operational range of iris recognition by considering the absence and presence of post-processing methods to restore the optical defocus that is introduced into the images by the aspheric optic. They demonstrated that the use of such optical elements is capable of relaxing the DOF requirements without the need to restore the defocused images. The work in Refs. 2 and 3 proposed a pattern matching strategy for iris recognition of defocused images based on correlation filtering. However, this method can only be used for images captured with wave-front coded systems. The methods discussed above require the insertion of special optical elements, involve large computations, and in some cases generate images with a lower Signal-to-Noise Ratio (SNR), which must undergo post-processing prior to recognition. 36 The second approach aims to construct a mathematical model for restoring the defocused images prior to iris recognition. A real-time iris image restoration method based on inverse filtering was introduced in Refs. 22 and 23. However, this method did not consider noise factors.
In addition, the point spread function (PSF) was heuristically determined without taking the camera optics into consideration, and this resulted in recognition performance degradation. A non-blind de-convolution algorithm to restore iris images and extend the DOF was proposed in Ref. 21. The authors estimated the PSF using a focus score that was measured from the high-frequency components of the iris regions after irrelevant objects such as eyelashes and eyelids were removed. However, the Gaussian smoothness term used in the algorithm tends to over-smooth the restored image. An iris restoration algorithm using more accurate information pertaining to the image capture conditions, computed from the iris images themselves, was proposed in Ref. 20; it achieves better performance on both synthetic and real data sets when compared with the state-of-the-art iris restoration algorithms. While some of the image restoration methods have shown promising results, the biggest challenge in these approaches lies in obtaining suitable information pertaining to the image capture conditions, which is required to construct the mathematical model for restoring the defocused images. 20 In addition, similar to the first approach, techniques relying on image restoration algorithms have high computational complexity and do not lend themselves well to the power-sensitive requirements of mobile phones. The method proposed in this paper does not require special hardware or image restoration algorithms for iris recognition of defocused images that are captured

outside the camera DOF. The proposed method is based on our investigation, which reveals that certain iris features are not sensitive to optical defocusing, and that these features can be represented as stable bits in the iris code representation. In addition, we also present our analysis of the iris feature characteristics under varying degrees of image defocus. Based on these findings, we propose a method to identify the stable bits of a given enrolled image, and show that good recognition performance can be achieved by limiting the pattern matching to these stable bits instead of using the entire iris code representation. As only a single enrolled image is required to obtain the stable bits, the proposed approach is highly practical. In this paper, we assume that the enrolled image used in the proposed method is a clear image that is captured within the camera DOF. Finally, the proposed method is well suited to be incorporated as part of a multi-biometric system for mobile phones, as it does not require the power-hungry computations of existing approaches for extending the DOF of iris recognition systems. While previous works (Refs. 5, 13, 14, 17-19, 35, 38 and 41) have reported on the prospects of using stable iris features for recognition, to the best of our knowledge, this work is the first to investigate the characteristics of iris features under varying degrees of image defocus when the images are captured outside the DOF of cameras. Our findings have led to the development of the proposed method, which is capable of recognizing defocused iris images without the need for special hardware or time-consuming image restoration algorithms. The paper is organized as follows. In the following section, we discuss our study, which reveals that certain iris features are not sensitive to optical defocusing and that these features can be represented as stable bits in the iris code representation.
These stable bits can then be exploited in the recognition process. In Sec. 3, we present an efficient method to identify stable bits from a single enrolled image based on our analysis of the iris feature characteristics under varying degrees of image defocus. Next, we present experimental results to show that the proposed method can lead to significantly higher recognition performance when compared to iris recognition of defocused images that relies on the entire code representation. Sec. 5 concludes the paper with a discussion on future work.

2. Exploiting Stable Features in Defocused Images for Iris Recognition

In this section, we discuss our experimental setup and analysis, which reveal that certain iris features in images captured outside the camera DOF are invariant to optical defocusing. We present a simple method to identify these features and show that they can be used for iris recognition.

2.1. Image acquisition

In order to obtain an iris image dataset for our study, we employed an iris image sampling system that is controlled by a stepping motor. The system is capable of

capturing a sequence of 61 iris images with a sampling interval of 2 mm for a single eye. The sequence of 61 images contains a single clear image (captured in sharp focus within the camera DOF), while the remaining images exhibit different degrees of defocus, as they are captured outside the camera DOF at varying distances from the camera. In order to identify the clear image in a sequence, we employ the clarity evaluation function presented in Refs. 26, 31, 34 and 45. We then selected the 15 images (which are subjected to optical defocus) before and after the clear image to be included in the dataset. Fig. 1 shows an example of an iris image sequence, where the 16th image is in sharp focus as it is captured within the camera DOF. The 30 images captured before and after the clear image are subjected to varying degrees of defocusing. The resolution of each image is 640 × 480 pixels.

Fig. 1. A sequence of iris images.

Fig. 2 shows the clarity evaluation function curve of a particular sequence of iris images. The x-axis denotes the image number in the sequence, which corresponds to the sampling distance. The first half of the curve shows the gradual decrease in defocus from the 1st image to the 16th image, while the second half shows the gradual increase in defocus of the remaining images. As the sampling process of a sequence is performed with a fixed interval (i.e., 2 mm apart), we can represent the degree of defocus using the image number in the sequence. For this study, we selected 50 people, and each of their left and right eyes was sampled 5 times using the iris image sampling system. Each sampling sequence of the same eye may differ due to lighting conditions, position of the iris during

image capture, etc. Incorporating such variations during image acquisition of a single eye enables us to obtain a dataset that provides more realistic evaluations. Hence, our dataset consists of 100 classes, where each class has 5 sampling sequences and there are 31 images in each sequence. The total number of images in the dataset is 15,500 (100 × 5 × 31).

Fig. 2. Clarity evaluation function curve of a sequence of iris images.

2.2. Image pre-processing

Each image in the dataset undergoes a pre-processing step consisting of localization and normalization to eliminate any non-related information (e.g., sclera, pupil, etc.) and to obtain an iris area with a uniform size and shape. We have adopted the pre-processing method presented in Ref. 43 to obtain a normalized image with a resolution of 512 × 64. Fig. 3 shows an example of normalized iris images in a particular sequence. Performing iris feature extraction on the whole normalized image will lead to the inclusion of pseudo features caused by irrelevant information that may still exist, such as eyelids and eyelashes. In order to ensure that the image used for recognition contains minimal irrelevant information, we can extract a smaller region of the normalized image for feature extraction. However, if the chosen region is too small, the features may not be sufficient for iris recognition. Based on the work in Ref. 44, we chose the mid-top area, i.e., about 40% of the whole normalized image, as it has been shown to exclude features caused by irrelevant objects most of the time. The region of the normalized images chosen for feature extraction is shown by the rectangles in Fig. 3. The resolution of the reduced normalized image is 260 × 32 pixels.
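The region selection described above can be sketched as follows. This is a minimal sketch, not the authors' code: the exact horizontal placement of the mid-top band within the 512 × 64 normalized image is an assumption, since the text only specifies a mid-top region of about 40% with a resolution of 260 × 32.

```python
import numpy as np

def crop_mid_top(normalized, out_w=260, out_h=32):
    """Extract the mid-top band of a normalized iris image.

    `normalized` is the unwrapped iris texture (64 rows x 512 cols).
    Centering the band horizontally is an assumption; the paper only
    states that roughly 40% of the image (260 x 32) is retained.
    """
    h, w = normalized.shape
    x0 = (w - out_w) // 2  # center the band along the angular axis
    return normalized[:out_h, x0:x0 + out_w]

# Hypothetical usage with a dummy normalized image:
roi = crop_mid_top(np.zeros((64, 512), dtype=np.uint8))
print(roi.shape)  # (32, 260)
```

The top rows are kept because the mid-top band of the unwrapped texture is the region least likely to contain eyelids and eyelashes.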

Fig. 3. Normalized iris image.

2.3. Feature extraction

Research in iris feature extraction over more than a decade reveals that the Gabor filter used by Daugman is unanimously considered the most efficient band-pass filter for discriminating the iris texture when compared to other wavelet extraction methods. 8,39 A Gabor filter in rectangular coordinates is given by Eq. (1):

Gabor(x, y) = exp{-π[(x - x_0)^2/α^2 + (y - y_0)^2/β^2]} · exp{-2πi[u_0(x - x_0) + v_0(y - y_0)]}.  (1)

In Eq. (1), (x_0, y_0) denotes the centre position of the filter, (α, β) denote the wavelet size in length and width, and the frequency and direction of the filter depend on (u_0, v_0). Binary quantification based on the complex phase vector is also widely adopted for representing the iris features. 37 As such, we have applied the 2D Gabor filter to the normalized iris texture images, extracted the complex phase information, and mapped it to binary codes based on its location on the complex plane, as shown in Fig. 4. For example, if the phase vector (consisting of real and imaginary parts) lies in the top-right quadrant of the complex plane, it is mapped to 11. As a result, a feature code matrix of 260 × 32 × 2 bits is generated for each iris image. The real part and imaginary part, each a 260 × 32 × 1 bit matrix, are stored separately to facilitate our analysis of the changes in the individual bit characteristics due to imaging noise. In a non-invasive iris recognition system, different eye images acquired from a single person are commonly subjected to a rotational excursion perpendicular to the main optical axis of the camera. This problem needs to be rectified so that we can analyze the corresponding bit characteristics across the varying degrees of defocus of a particular iris image sequence.
This is achieved by using the clear image (the 16th image in the 1st sequence of a particular class) as the reference for shifting the bits in the other images from the same class to the left/right so that the Hamming distance between them is minimized.
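The phase coding and shift-based alignment steps above can be sketched as follows. This is a hedged sketch rather than the authors' implementation: the Gabor parameters (alpha, beta, u0, v0), the FFT-based circular convolution, and the maximum shift range are all illustrative assumptions not given in the text.

```python
import numpy as np

def gabor_phase_code(img, alpha=8.0, beta=4.0, u0=0.1, v0=0.0, ksize=17):
    """Quantize 2D Gabor responses into two binary code matrices.

    Filter parameters are illustrative placeholders, not the values
    used in the paper. Returns (real_code, imag_code), one bit per pixel.
    """
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = (np.exp(-np.pi * (x**2 / alpha**2 + y**2 / beta**2))
         * np.exp(-2j * np.pi * (u0 * x + v0 * y)))
    # Circular convolution via FFT; adequate for a sketch.
    resp = np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(g, img.shape))
    # The quadrant of the complex response determines the 2-bit code:
    # sign of the real part and sign of the imaginary part.
    return (resp.real >= 0).astype(np.uint8), (resp.imag >= 0).astype(np.uint8)

def align_to_reference(ref_code, code, max_shift=8):
    """Circularly shift `code` along the angular axis so that its
    Hamming distance to `ref_code` is minimized (rotation compensation)."""
    best = min(range(-max_shift, max_shift + 1),
               key=lambda s: int(np.count_nonzero(ref_code != np.roll(code, s, axis=1))))
    return np.roll(code, best, axis=1)
```

Shifting along the second axis corresponds to a rotation of the eye about the optical axis, since the normalized image unwraps the iris angularly along its columns.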

Fig. 4. Phase coding.

2.4. Proposed method 1: identifying stable bits for iris recognition

The traditional pattern matching process for iris recognition attempts to cluster features from the same individual and perform classification of different individuals based on focused iris images. In this study, we aim to determine iris features that are robust to imaging noise. These features are prevalent in all the iris images of a particular sequence. As discussed above, each iris image can be converted into two feature code matrices consisting of bit patterns. Features that are invariant to imaging noise can therefore be identified by a constant bit value at the same position of the feature code matrices across the entire image sequence. Hence, our goal is to identify feature clusters from the iris image sequences of an individual and perform classification of sequences from different individuals. We first make the following definitions:

Definition 1. Exactly Consistent Bit (ECB): A bit position in the feature code matrix is denoted as an ECB if it is always 1 or always 0 in two or more feature code matrices.

Definition 2. Exactly Consistent Bit Rate (ECBR): The percentage of ECBs (in both real and imaginary feature code matrices) with respect to the size of the real and imaginary feature code matrices (i.e., 260 × 32 × 2).

As ECBs represent bit positions in the feature code matrix that are invariant to imaging noise, they form the ideal bits for pattern matching of defocused images. ECBR gives an indication of the similarity of iris features in two or more feature code matrices. Fig. 5 compares the average ECBR of 100 classes between 1) two images with the same degree of defocus (as denoted by the x-axis) from different sequences of iris images of the same class (asterisk line), and 2) two images from the same class, one the defocused image and the other the clear image (rhombic line).
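Definitions 1 and 2 can be computed directly from a stack of binary feature code matrices. A minimal sketch, assuming the codes are given as 0/1 NumPy arrays:

```python
import numpy as np

def ecb_mask(codes):
    """Exactly Consistent Bits: positions whose value is identical
    (all 0 or all 1) across every matrix in `codes` (a k x H x W stack)."""
    s = codes.sum(axis=0)
    return (s == 0) | (s == codes.shape[0])

def ecbr(real_codes, imag_codes):
    """ECBR: fraction of ECBs over both the real and imaginary
    feature code matrix stacks (e.g. each code 260 x 32)."""
    n_ecb = ecb_mask(real_codes).sum() + ecb_mask(imag_codes).sum()
    return n_ecb / (real_codes[0].size + imag_codes[0].size)
```

For two matrices this reduces to counting positions where the codes agree, so the ECBR of a code stack with itself is 1.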
Fig. 5. Average ECBR of iris images from 100 classes.

For the rhombic line, it can be observed that the average ECBR of a defocused image (with respect to the 16th image, which is the clear image) decreases with increasing degree of defocus. This implies that the number of ECBs in a defocused image is inversely proportional to the amount of defocus in the iris image. Hence, if the ECB bits are used for recognition, we can expect the recognition performance to degrade as the amount of defocus increases. On the other hand, the asterisk line shows that the ECBR between two defocused images from different sequences (but of the same class and same degree of defocus) remains fairly consistent. ECBs correspond to the features that are invariant to imaging noise. However, these features are rare in images obtained from camera systems with a small DOF. Our experimental results show that, on average, the ECBR is only 9.86%, which is not sufficient for iris recognition. Hence, there is a need to relax the constraints for identifying stable bits in the feature codes for iris recognition.

Definition 3. Sensitivity: Each bit position in a feature code matrix is defined as an intrinsic feature if its value (1 or 0) is dominant across the entire sequence of images; it is defined as a sensitive feature otherwise. Sensitivity is the probability (in %) of a particular bit position in the feature code matrix being a sensitive feature in a sequence.

Definition 4. Stable Bit and Sensitive Bit: A bit position in the feature code matrix is defined as a stable bit when its sensitivity is less than a predefined sensitive threshold. On the other hand, if the sensitivity is higher than the predefined

sensitive threshold, we denote the corresponding bit position as a sensitive bit.

Based on Def. 3 and Def. 4, Eq. (2) is used to determine whether the bit position of the feature code matrix at coordinate (x, y) is a stable bit. F_k(i, j) is the feature code matrix of the k-th image in the sequence, n is the number of images in the sequence, and t is the predefined sensitive threshold.

(x, y) ∈ {(i, j) | G(i, j) ≤ T_a or G(i, j) ≥ T_b}, where G(i, j) = Σ_{k=1}^{n} F_k(i, j), T_a = n·t, T_b = n − n·t + 1.  (2)

Based on Eq. (2), Fig. 6 shows the iris feature codes consisting of stable and sensitive bits for images in two classes (i.e., 1-1 and 1-2 are images from different sequences in the same class, and 9-1 and 9-2 are images from different sequences of another class). The sensitive threshold is set at 30%. The black regions correspond to the stable bits, while the white regions correspond to the sensitive bits.

Fig. 6. Iris feature codes from two different classes (1 and 9).

We can observe from Fig. 6 that the distributions of stable bits from different sequences in the same class are similar to each other. In addition, the distribution of stable bits from different classes varies significantly. These observations demonstrate the feasibility of using stable bits for pattern matching to discriminate defocused iris images of different individuals.

3. Proposed Method 2: Identifying Stable Bits from a Single Enrolled Image

In the previous section, we described a simple method to identify stable bits that correspond to features which are invariant to optical noise from a sequence of iris images. The method, however, requires a sequence of iris images to be captured with varying degrees of defocus during the enrollment phase. This is not a practical

approach, especially for mobile phone users. In this section, we propose a method to identify stable bits from a single enrolled image. The proposed method is based on our analysis of the displacement of the feature phase vector in the complex plane for varying degrees of image defocus. We will show in the following section that this method can lead to good recognition performance. The proposed method for identifying the stable bits is based on the likelihood that the corresponding phase vectors will move across the axes in the complex plane when the images are subjected to varying degrees of defocus. Fig. 7 shows an example of the displacement of a phase vector in a sequence of 31 iris images with different degrees of defocus. It can be observed that the phase vector lies in the bottom-left quadrant of the complex plane for most of the defocused images and in other quadrants (such as the top-left) for the remaining defocused images. Based on the phase coding process described in Sec. 2.3, the displacement of the phase vector from one quadrant to another causes a change in the corresponding bit value of the feature code. In other words, we can deduce the stability of a feature bit in the iris code by analyzing the distribution of the corresponding phase vector under different degrees of image defocus. For example, in Fig. 7, we can observe that the real part of the phase vector has higher stability than the imaginary part. In particular, the sensitivity of the real part of the phase vector is 7%, while that of the imaginary part is 33%. A crude method for determining the stable bit in this example would be to choose the real part of the feature code as the stable bit. However, this method still requires the availability of a sequence of images with different degrees of defocus. Based on the same example, Fig. 8 shows how the real and imaginary parts of the phase vector evolve with the degree of defocus.
It can be observed that the displacement of the phase vector of the defocused images from the clear image (the 16th image) increases gradually with the degree of defocus.

Fig. 7. Displacement of a complex phase vector in a sequence of iris images.

Fig. 8. Gradual displacement of the phase vectors with respect to the degree of defocus: (a) the real part and (b) the imaginary part of the phase vector against the image number.

In particular, the distance of displacement from the clear image for the real/imaginary parts increases until it crosses the imaginary/real axis of the complex plane. At this point, the corresponding bit value in the iris code will toggle. Therefore, based on a clear image, we can

deduce that the real/imaginary part of a phase vector at a particular position in the iris phase feature template is less likely to cross the axes if its absolute distance from the imaginary/real axis is large enough. In other words, we can identify the stable bits of a single enrolled image based on the absolute distance of the real/imaginary part of the corresponding phase vector to the imaginary/real axis.

Based on this, we devised the following method for identifying stable bits from an enrolled image. We first compute the absolute distances of the real/imaginary parts of the phase vectors of the enrolled image from the imaginary/real axis of the complex plane, and sort the absolute distances. Based on a predefined stable bit extraction rate, we identify the stable bits that correspond to the real/imaginary phase vectors with the highest absolute distances. Eq. (3) describes the method to identify the stable bit at coordinate (x, y) that corresponds to the real/imaginary phase vector. P is the real/imaginary part of the phase vector of size 260 × 32, F_abs computes the absolute distance, F_vecsort is the function that sorts the absolute distances in descending order and rearranges the matrix into a vector, N is the number of bits, and T is the stable bit extraction rate.

(x, y) ∈ {(i, j) | F_abs(P)(i, j) ≥ A(N_stable)}, where A = F_vecsort(F_abs(P)) and N_stable = N · T.   (3)

4. Experimental Results

In this section, we present experimental results based on the 15,500 images in the iris image dataset (as discussed in Sec. 2.1) to demonstrate the feasibility of the proposed approach for iris recognition, which relies on the stable bits identified using the methods discussed in this paper. The experiments were undertaken using Matlab on a 2.53-GHz PC with 3 GB RAM. We first evaluate the recognition performance of the proposed method that is based on identifying the stable bits from a sequence of images (Proposed Method 1 as discussed in Sec.
2.4) and the proposed method that is based on identifying the stable bits from a single enrolled image (Proposed Method 2 as discussed in Sec. 3). Next, we evaluate the effects of changing the stable bit extraction rate on the recognition performance of the proposed method. Finally, we evaluate the recognition performance of the proposed method when the images that are subjected to recognition are captured within a certain predefined range from the camera.

4.1. Recognition performance

In order to evaluate the recognition performance of the proposed methods, we have implemented a conventional method based on Daugman's approach (as described in Secs. 2.2 and 2.3) for iris recognition of defocused images that relies on the entire code representation (as opposed to using only the stable bits in the proposed methods)

for the purpose of comparison. In Proposed Method 1, we used a sensitive threshold of 30% to obtain the stable bits from one sequence per class (the 1st sequence in each class). Hence, a total of 3,100 images (i.e. 100 × 31) were used to obtain the stable bits. In Proposed Method 2, we used a stable bit extraction rate of 70% to obtain the stable bits from the clear image of each class (the 16th image in the first sequence of each class). Hence, 100 images were used to obtain the stable bits in Proposed Method 2. The experiments to evaluate the recognition performance of the proposed methods are based on the 12,400 images (i.e. 100 × 4 × 31) in the dataset that were not used for identifying the stable bits. Fig. 9 shows the inter-class and intra-class Hamming distances of the images in the dataset. Note that for the conventional method, the entire feature code matrix is used to calculate the Hamming distance, while only the stable bits are used in the proposed methods.

Fig. 9. Distribution of inter-class and intra-class Hamming distances for (a) the conventional method, (b) Proposed Method 1 and (c) Proposed Method 2.

It can be observed from

Fig. 9, the intersection between the intra-class and inter-class distance distributions of the proposed methods is smaller than that of the conventional method. It can be seen in Table 1 that although both the average inter-class and intra-class distances are reduced in the proposed methods, the intra-class reduction is significantly larger (e.g. an intra-class reduction of 0.0523 as compared to an inter-class reduction of 0.0046 in Proposed Method 2). This results in a sharp increase in the average difference between the inter-class and intra-class distances in the proposed methods (e.g. 0.1392 for Proposed Method 2 as compared to 0.0915 for the conventional method). As shown in Table 1, this leads to higher recognition performance in the proposed methods. In particular, when the False Acceptance Rate (FAR) is 0.5% for both methods, the False Rejection Rate (FRR) of the conventional method and Proposed Method 2 is 65.86% and 39.52% respectively. In addition, Proposed Method 2 has a recognition performance gain of about 2 times over the conventional method, as the Equal Error Rate (EER) of the conventional and proposed method is 23.55% and 12.71% respectively. These results demonstrate that the proposed methods, which rely on stable bits that are tolerant to optical defocusing, are capable of discriminating the iris features of individuals from different classes. It can be observed from Table 1 that the average difference between the inter-class and intra-class distances in Proposed Method 1 is larger than that in Proposed Method 2. The recognition performance of Proposed Method 1 is also higher than that of Proposed Method 2.

4.2. Execution time

We have also compared the registration and matching time of the proposed methods with the conventional method. Registration time is the time taken to generate the iris code representation during the enrollment process.
Matching time refers to the time taken for recognition. Note that the time for image acquisition and pre-processing is not considered, as these are the same for all the methods considered.

Table 1. Comparison of recognition performance between conventional and proposed methods.

Evaluation Standard        | Conventional Method | Proposed Method 1   | Proposed Method 2
                           | Intra     | Inter   | Intra     | Inter   | Intra     | Inter
Average Hamming Distance   | 0.2448    | 0.3363  | 0.1792    | 0.3263  | 0.1925    | 0.3317
FRR (%) (FAR = 0.5%)       |      65.86          |      29.78          |      39.52
EER (%)                    |      23.55          |       9.015         |      12.71
Registration Time (ms)     |     282.65          |   10095.17          |     359.22
Recognition Time (ms)      |      13.90          |      11.23          |      10.06

Table 1

shows that the registration time for the conventional method is the lowest. The registration time for Proposed Method 1 is the highest, as it requires a sequence of images to obtain the stable bits. However, it is noteworthy that registration is a one-time process, and even then, the time taken by Proposed Method 1 is only about 10 seconds. The recognition times for the proposed methods are lower than that of the conventional method, as they do not require the entire iris code representation for matching. In particular, Proposed Method 1 and Proposed Method 2 achieve reductions in recognition time of 19.2% and 27.6% respectively when compared to the conventional method.

From Table 1, it is evident that even though the recognition performance of Proposed Method 1 is marginally better than that of Proposed Method 2 (less than 4% improvement), Proposed Method 2 is a more feasible solution. In contrast to Proposed Method 1, which requires a longer time to identify the stable bits, Proposed Method 2 only requires a single enrolled image for identifying the stable bits, resulting in a significantly lower registration time. In addition, the recognition time of Proposed Method 2 is lower than that of Proposed Method 1.

4.3. Effects of stable bit extraction rate on performance

The stable bit extraction rate T, which is used to determine whether a bit position in the feature code matrix is a stable bit, is an important parameter as it affects the recognition performance. Table 2 shows how the average recognition performance (in terms of EER) changes when different stable bit extraction rates are used in Proposed Method 2. In particular, we used stable bit extraction rates of 55%, 60%, 65%, 70% and 75% to obtain the stable bits and evaluated the iris recognition performance for all the images in the dataset.
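The single-image selection rule of Eq. (3), parameterized by the extraction rate T, can be sketched as follows. This is a minimal illustration under our own reading of Eq. (3) (a descending sort with a threshold at the N_stable-th largest absolute distance); the function and variable names are our own.

```python
import numpy as np

def stable_bits_single_image(P, T=0.70):
    """Select the fraction T of bit positions whose real (or imaginary)
    filter response lies furthest from the quadrant boundary.

    P: real-valued array (e.g. 260 x 32) holding the real or imaginary
    part of the phase vectors of the single enrolled image.
    Returns a boolean mask over P marking the stable bit positions.
    """
    dist = np.abs(P)               # distance from the imaginary/real axis
    n_stable = int(dist.size * T)  # number of bits to keep (N_stable = N * T)
    # threshold at the n_stable-th largest absolute distance
    thresh = np.sort(dist, axis=None)[::-1][n_stable - 1]
    return dist >= thresh
```

With T = 0.70, as used in the experiments, roughly 70% of the bit positions of the enrolled image are retained as stable; ties at the threshold may admit a few extra bits in this simple sketch.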
It can be observed that the recognition performance improves as the stable bit extraction rate is increased from 55% to 65%. Thereafter, the recognition performance degrades with further increases in the stable bit extraction rate. This effect of the stable bit extraction rate on recognition performance is expected: choosing a very small extraction rate restricts the number of stable bits available for recognition, which can lead to low inter-class variability. On the other hand, a very large extraction rate admits a higher number of bits that are sensitive to imaging noise, resulting in large intra-class variability and hence lower recognition performance.

Table 2. Recognition performance with varying stable bit extraction rate.

Performance Standard    | Stable Bit Extraction Rate (%)
                        |   55   |   60   |   65   |   70   |   75
EER (%)                 | 14.65  | 13.14  | 12.02  | 12.71  | 13.80
Recognition Time (ms)   |  7.69  |  8.51  |  9.46  | 10.06  | 10.88

Table 2

also shows that the recognition time increases with the stable bit extraction rate. This is due to the increase in the number of stable bits that are used for matching when the stable bit extraction rate increases. In practice, we can determine a suitable stable bit extraction rate to obtain a good trade-off between the recognition performance and the computational complexity of the iris recognition process. For example, in a multi-biometric system for mobile phones, where the recognition performance of a single biometric modality is less crucial, we may choose a lower stable bit extraction rate (e.g. 60%), which does not give the best EER, in order to reduce the number of computations for iris recognition (due to the smaller number of stable bits for pattern matching). This may also lead to notable power savings for the mobile phone.

4.4. Effects of sampling range on recognition performance

In order to increase the recognition performance in a non-intrusive iris recognition system, the user can be guided to position himself within a certain image capture distance from the camera. It is noteworthy that such a system should not impose strict requirements on the user to position himself such that the image is captured within the DOF of the camera. Instead, the system should provide a distance range for image capture such that the entire process is comfortable to the user and can be completed quickly. In this section, we evaluate the effects of varying the distance range of image capture on the recognition performance. We have chosen images under different sampling ranges in the iris image dataset to create the test sets. The sixteen test sets are shown in Table 3, where each set contains images from a different distance range of image capture. We evaluated the recognition performance of each test set using the conventional and proposed methods. It can be observed from Fig.
10 that the EER of all three methods increases as the distance range of image capture increases. However, the EER of the proposed methods is lower than that of the conventional method in all cases. In addition, the EER of the conventional method increases more drastically than that of the proposed methods. These results further justify the effectiveness of using stable features to extend the DOF of an iris recognition system.

Table 3. Selection of the testing database.

Test Set No.                  |  1  |   2   |   3   |   4   |   5   |   6   |   7   |   8
Image No. in Sequence         | 16  | 15-17 | 14-18 | 13-19 | 12-20 | 11-21 | 10-22 | 9-23
Range of Image Capture (mm)   |  0  |   4   |   8   |  12   |  16   |  20   |  24   |  28

Test Set No.                  |  9   |  10  |  11  |  12  |  13  |  14  |  15  |  16
Image No. in Sequence         | 8-24 | 7-25 | 6-26 | 5-27 | 4-28 | 3-29 | 2-30 | 1-31
Range of Image Capture (mm)   |  32  |  36  |  40  |  44  |  48  |  52  |  56  |  60

Fig. 10. Relation diagram of AR and EER.

5. Conclusion

In this paper, we proposed a method to identify stable features in iris images that are tolerant to the imaging noise which is introduced when the images are captured outside the camera's DOF. Our analysis reveals that the stable bits corresponding to the stable features in the iris image can be identified based on the displacement of the feature phase vector from the axes in the complex plane. This enabled us to devise a method for identifying stable bits in a single enrolled image for iris recognition of defocused images. Unlike existing methods, the proposed method lends itself well to robust, cost-effective and power-efficient iris recognition systems for mobile phones, as it does not require special hardware or time-consuming image restoration algorithms. Experimental results demonstrate the superiority of the proposed method over a conventional method that uses the entire feature code representation for iris recognition. Future work includes exploring the benefits of incorporating more accurate localization, feature extraction and matching in the proposed method.

Acknowledgments

This work is partially supported by the National Natural Science Foundation of China under Grant No. 60672078 and the China Scholarship Council (CSC).

References

1. J. Azizi and H. R. Pourreza, A novel and efficient method to extract features and vector creation in iris recognition system, 32nd Annual German Conference on Artificial Intelligence, KI 2009, Paderborn, Germany, Sep. 2009, pp. 114-122.
2. N. Boddeti and B. V. K. V. Kumar, Extended depth of field iris recognition with correlation filters, Proc. 2nd IEEE Int. Conf. Biometrics: Theory, Applications and Systems, Arlington, VA, 2008, pp. 1-8.
3. V. N. Boddeti and B. V. K. V. Kumar, Extended-depth-of-field iris recognition using unrestored wavefront-coded imagery, IEEE Trans. Syst. Man Cybern. A, Syst. Humans 40 (2010) 495-508.
4. W. W. Boles and B. Boashash, A human identification technique using images of the iris and wavelet transform, IEEE Trans. Signal Process. 46 (1998) 1185-1188.
5. R. M. Bolle, S. Pankanti, J. H. Connell and N. K. Ratha, Iris individuality: A partial iris model, Proc. Int. Conf. Pattern Recognition, Cambridge, United Kingdom, 2004, pp. 927-930.
6. W. T. Cathey and E. R. Dowski, New paradigm for imaging systems, Appl. Opt. 41 (2002) 6080-6092.
7. D. Cho, K. R. Park, D. W. Rhee, Y. Kim and J. Yang, Pupil and iris localization for iris recognition in mobile phones, Proc. 7th Int. Conf. Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing, Las Vegas, NV, United States, Jun. 2006, pp. 197-201.
8. J. Daugman, High confidence visual recognition of persons by a test of statistical independence, IEEE Trans. Pattern Anal. Mach. Intell. 15 (1993) 1148-1161.
9. J. Daugman, Statistical richness of visual phase information: Update on recognizing persons by iris patterns, Int. J. Comput. Vision 45 (2001) 25-38.
10. J. Daugman, Demodulation by complex-valued wavelets for stochastic pattern recognition, Proc. 3rd Int. Conf. Wavelet Analysis and Its Applications (WAA), Chongqing, China, May 2003, pp. 511-530.
11. J. Daugman, How iris recognition works, IEEE Trans.
Circuits Syst. Video Technol. 14 (2004) 21-30.
12. E. R. Dowski and W. T. Cathey, Extended depth of field through wave-front coding, Appl. Opt. 34 (1995) 1859-1866.
13. G. Dozier, K. Frederiksen, R. Meeks, M. Savvides, K. Bryant, D. Hopes and T. Munemoto, Minimizing the number of bits needed for iris recognition via bit inconsistency and GRIT, IEEE Workshop on Computational Intelligence in Biometrics: Theory, Algorithms, and Applications, Nashville, TN, United States, 2009, pp. 30-37.
14. J. E. Gentile, N. Ratha and J. Connell, SLIC: Short-length iris codes, Proc. 3rd Int. Conf. Biometrics: Theory, Applications and Systems, Washington, DC, United States, 2009, pp. 1-5.
15. Y. He, J. Cui, T. Tan and Y. Wang, Key techniques and methods for imaging iris in focus, Proc. 18th Int. Conf. Pattern Recognition, Hong Kong, China, Sep. 2006, pp. 557-561.
16. Z. He, Z. Sun, T. Tan and X. Qiu, Enhanced usability of iris recognition via efficient user interface and iris image restoration, Proc. 15th Int. Conf. Image Processing, San Diego, CA, USA, Oct. 2008, pp. 261-264.
17. K. Hollingsworth, K. W. Bowyer and P. J. Flynn, All iris code bits are not created equal, Proc. 1st Int. Conf. Biometrics: Theory, Applications, and Systems, Crystal City, VA, Sep. 2007, pp. 1-6.
18. K. P. Hollingsworth, K. W. Bowyer and P. J. Flynn, The best bits in an iris code, IEEE Trans. Pattern Anal. Mach. Intell. 31 (2009) 964-973.

19. K. P. Hollingsworth, K. W. Bowyer and P. J. Flynn, Using fragile bit coincidence to improve iris recognition, Proc. 3rd Int. Conf. Biometrics: Theory, Applications, and Systems, Washington, DC, United States, Sep. 2009, pp. 1-6.
20. X. Huang, L. Ren and R. Yang, Image deblurring for less intrusive iris capture, IEEE Comput. Soc. Conf. Computer Vision and Pattern Recognition Workshops, Miami, FL, United States, Jun. 2009, pp. 1558-1565.
21. B. J. Kang and K. R. Park, Real-time image restoration for iris recognition systems, IEEE Trans. Syst. Man Cybern. B, Cybern. 37 (2007) 1555-1566.
22. B. J. Kang and K. R. Park, A study on iris image restoration, Proc. 5th Int. Conf. Audio- and Video-Based Biometric Person Authentication, Hilton Rye Town, NY, United States, Jul. 2005, pp. 31-40.
23. B. J. Kang and K. R. Park, A study on fast iris restoration based on focus checking, Proc. 4th Int. Conf. Articulated Motion and Deformable Objects, Port d'Andratx, Mallorca, Spain, Jul. 2006, pp. 19-28.
24. S. Kurkovsky, T. Carpenter and C. MacDonald, Experiments with simple iris recognition for mobile phones, Proc. 7th Int. Conf. Information Technology - New Generations, Las Vegas, NV, United States, Apr. 2010, pp. 1293-1294.
25. S. Lim, K. Li, O. Byeon and T. Kim, Efficient iris recognition through improvement of feature vector and classifier, ETRI J. 23 (2001) 61-70.
26. J. Lin, C. Zhang and Q. Shi, Estimating the amount of defocus through a wavelet transform approach, Pattern Recogn. Lett. 25 (2004) 407-411.
27. L. Ma, T. Tan, Y. Wang and D. Zhang, Efficient iris recognition by characterizing key local variations, IEEE Trans. Image Process. 13 (2004) 739-750.
28. J. R. Matey, Iris recognition: On the move, at a distance and related technologies, Proc. Biometric Consortium Conf., Baltimore, MD, Sep. 2006, pp. 19-21.
29. R. Narayanswamy, G. E. Johnson, P. E. X. Silveira and H. B.
Wach, Extending the imaging volume for biometric iris recognition, Appl. Opt. 44 (2005) 701-712.
30. R. Narayanswamy, P. E. X. Silveira, H. Setty, V. P. Pauca and J. van der Gracht, Extended depth-of-field iris recognition system for a workstation environment, Proc. SPIE - The International Society for Optical Engineering, Orlando, FL, United States, Mar. 2005, pp. 41-50.
31. L. Pan and M. Xie, The algorithm of iris image quality evaluation, Proc. 5th Int. Conf. Communications, Circuits and Systems, Kokura, Japan, Jul. 2007, pp. 616-619.
32. K. R. Park, H. Park, B. J. Kang, E. C. Lee and D. S. Jeong, A study on iris localization and recognition on mobile phones, Eurasip J. Adv. Sign. Process. 2008 (2008) 1-12.
33. R. Plemmons, M. Horvath, E. Leonhardt, P. Pauca, S. Prasad, S. Robinson, H. Setty, T. Torgersen, J. van der Gracht, E. Dowski, R. Narayanswamy and P. E. X. Silveira, Computational imaging systems for iris recognition, Proc. SPIE - The International Society for Optical Engineering, Denver, CO, United States, Aug. 2004, pp. 346-357.
34. J. Ren and M. Xie, Research on clarity-evaluation-method for iris images, Proc. 2nd Int. Conf. Intelligent Computation Technology and Automation, Piscataway, NJ, USA, Oct. 2009, pp. 682-685.
35. S. Ring and K. W. Bowyer, Detection of iris texture distortions by analyzing iris code matching results, Proc. 2nd IEEE Int. Conf. Biometrics: Theory, Applications and Systems, Arlington, VA, United States, 2008, pp. 1-6.
36. K. N. Smith, V. P. Pauca, A. Ross, T. Torgersen and M. C. King, Extended evaluation of simulated wavefront coding technology in iris recognition, Proc. 1st Int. Conf. Biometrics: Theory, Applications, and Systems, Piscataway, NJ, USA, Sep. 2007, pp. 316-322.

37. Z. Sun, T. Tan and Y. Wang, Robust encoding of local ordinal measures: A general framework of iris recognition, Proc. Biometric Authentication, ECCV 2004 International Workshop, BioAW 2004, Piscataway, NJ, USA, May 2004, pp. 270-282.
38. P. Thoonsangngam, S. Thainimit and V. Areekul, Relative iris codes, IEEE Int. Symposium on Signal Processing and Information Technology, Cairo, Egypt, Dec. 2007, pp. 40-45.
39. J. Thornton, M. Savvides and B. V. K. V. Kumar, An evaluation of iris pattern representation, Proc. 1st Int. Conf. Biometrics: Theory, Applications, and Systems, Piscataway, NJ, USA, Sep. 2007, pp. 196-201.
40. J. van der Gracht, V. P. Pauca, H. Setty, R. Narayanswamy, R. J. Plemmons, S. Prasad and T. Torgersen, Iris recognition with enhanced depth-of-field image acquisition, Proc. SPIE - The International Society for Optical Engineering, Orlando, FL, United States, Apr. 2004, pp. 120-129.
41. V. Velisavljevic, Low-complexity iris coding and recognition based on directionlets, IEEE Trans. Inf. Forensics Sec. 4 (2009) 410-417.
42. R. P. Wildes, Iris recognition: An emerging biometric technology, Proc. IEEE 85 (1997) 1348-1363.
43. W. Yuan and X. Bai, A new iris edge extraction method, Acta Optica Sinica 29 (2009) 2158-2163.
44. W. Yuan, Y. Bai and L. Ke, Analysis of relationship between region of iris and the accuracy rate, Acta Optica Sinica 28 (2008) 937-942.
45. X. Zhu, Y. Liu, X. Ming and Q. Cui, A quality evaluation method of iris images sequence based on wavelet coefficients in region of interest, Proc. 4th Int. Conf. Computer and Information Technology, Los Alamitos, CA, USA, Sep. 2004, pp. 24-27.

Bo Liu received her Bachelor's degree in Engineering from Shenyang University of Technology (SUT), Shenyang, China, in 2006. Currently, she is a PhD candidate in the School of Information Science and Engineering, SUT.
She was also a joint PhD student at the Centre for High Performance Embedded Systems of Nanyang Technological University (NTU), Singapore, from Sep. 2010 to Sep. 2011. Her research interests include biometric identification, computer vision, image processing and pattern recognition, as well as intelligent embedded systems and their applications.

Siew-Kei Lam received the BASc (Hons.), MEng and PhD degrees in computer engineering from Nanyang Technological University (NTU), Singapore. Since 1994, he has been with NTU, where he is currently a Research Fellow with the Centre for High Performance Embedded Systems and has worked on a number of challenging projects that involved porting complex algorithms to VLSI. He is also familiar with rapid prototyping and application-specific integrated-circuit design flow methodologies. His research interests include embedded system design algorithms and methodologies, algorithm-to-architecture translations, and high-speed arithmetic units. He is a member of the IEEE.

Thambipillai Srikanthan joined Nanyang Technological University (NTU), Singapore, in 1991. At present, he is a full professor and the director of the 100-strong Centre for High Performance Embedded Systems (CHiPES). He founded CHiPES in 1998 and elevated it to a university-level research centre in February 2000. He has published more than 250 technical papers. His research interests include design methodologies for complex embedded systems, architectural translation of compute-intensive algorithms, computer arithmetic, and high-speed techniques for image processing and dynamic routing. He is a senior member of the IEEE.

Weiqi Yuan received his PhD degree from Northeastern University (NEU), China, in 1997. He is currently a professor at Shenyang University of Technology (SUT) and the dean of the School of Information Science and Engineering. He is the director of the Computer Vision Group of SUT and also the director of the SUT-Texas Instruments DSP Joint Laboratory. He has published more than 200 technical papers. His main research interests include biometric identification, computer vision, image processing and pattern recognition, and DSP-based image capture and processing systems. He is a standing director of the China Instrument and Control Society.