
A BIOMETRIC AUTHENTICATION SYSTEM

PAUL GREEN
Computing with Management Studies BSc
SESSION 2004/2005

The candidate confirms that the work submitted is their own and the appropriate credit has been given where reference has been made to the work of others. I understand that failure to attribute material which is obtained from another source may be considered as plagiarism.

(Signature of student)

SUMMARY

The aim of this project is to investigate whether images of a person's hand, obtained from a flatbed scanner, are sufficiently distinctive to be used as the basis of an authentication technique. A software prototype is to be planned and implemented that can analyse a picture of a hand and from it draw conclusions as to who the hand belongs to. The project report will also include an evaluation of the experimental and data analysis techniques that are used to test the prototype.

ACKNOWLEDGMENTS

First of all I would like to thank my supervisor, Nick Efford, for his guidance and enthusiasm for the project and the topic area. Without his help I wouldn't have known how to get the project off the ground. The advice provided in the mid-project report and the progress meeting by Kristina Vuskovic was very insightful and very much appreciated. I would also like to thank all the participants who allowed me to scan their hands (and free of charge too!). Without their kindness I would have struggled to test the system properly. Last, but by no means least, I'd like to thank my friends and family for their support throughout the project, and indeed my time at Leeds University.

CONTENTS

Chapter 1: INTRODUCTION
1.1. Overview 1
1.2. Project Objectives 1
1.2.1. Minimum Requirements 2
1.2.2. Possible Extensions 2
1.3. Motivation 2
1.4. Report Structure 2

Chapter 2: BACKGROUND RESEARCH
2.1. Introduction 3
2.2. Biometric Authentication 3
2.3. Analysis of Current Systems 4
2.3.1. Hand Geometry 4
2.3.2. Hand Shape 5
2.3.3. Hand Texture 6
2.4. Hardware Requirements 7
2.5. Programming Language Selection 8
2.6. Curvature Calculation 8
2.7. Matching Techniques 9
2.7.1. Euclidean Distance 10
2.7.2. Hamming Distance 10
2.7.3. Gaussian Mixture Models 10

Chapter 3: METHODOLOGY
3.1. System Framework Design 11
3.2. User Interface 12
3.3. Hand Image Acquisition 12
3.4. Pre-Processing 13
3.5. Feature Extraction 13
3.6. Storage or Matching 15
3.7. Testing Plans 15
3.8. Schedule for Project Management 15

Chapter 4: IMPLEMENTATION
4.1. Scanner Control 16
4.2. Pre-Processing Operations 16
4.3. Feature Extraction 18
4.3.1. Producing the Border 19
4.3.2. Landmark Identification 19
4.3.3. Calculation of Biometrics 23
4.3.3.1. Finger Length 23
4.3.3.2. Finger Width 24
4.4. Matching 27
4.4.1. Verification 28
4.4.2. Identification 28

Chapter 5: EVALUATION
5.1. Data Collection 29
5.2. Feedback Assessment Criteria 29
5.3. Finger Length Experiments 31
5.3.1. Comparison of Inner-Finger Lengths to Outer-Finger Lengths 31
5.3.2. Two Innermost Finger Length Results 31
5.4. Finger Width Results 33
5.5. Finger Length and Width Combined Results 36
5.6. Summary of Results 38
5.7. Robustness Testing 40
5.7.1. Hand Orientation 40
5.7.2. Hand Pressure 40
5.7.3. Finger Separation 42
5.7.4. Impact of Finger Nail Length 42
5.7.5. Time Lapse Experiments 43
5.7.6. Spoofing Identity 43

Chapter 6: CONCLUSION
6.1. General Conclusion 44
6.2. Potential Improvements and Further Work 45
6.2.1. Finger Surface Extraction 47

REFERENCES 51

APPENDIX A:
A.1. Personal Reflection 54

APPENDIX B:
B.1. Computer Hardware Specification 56
B.2. Scanner Hardware Specification 56

APPENDIX C:
C.1. Example Scans 59

APPENDIX D:
D.1. Width Sample Variation Graphs 66
D.2. Results Tables 68

APPENDIX E:
E.1. Code Snippets 74
E.2. Configuration File 83
E.3. Screen Dumps of the System 84

APPENDIX F:
F.1. Hand Images Used in Texture Extraction Tests 89

APPENDIX G:
G.1. Project Schedule 90
G.2. Progress Log 90
G.3. Original Schedule 91
G.4. Revised Schedule 92

Chapter 1: INTRODUCTION

1.1. Overview

The automated society that we live in is ever changing. We all seem to be leading increasingly fast-paced lives, where waiting around is seen as tedious and often frustrating. At the same time, more and more places require access control: logical logins at computer workstations, clocking on and off at work, access to restricted areas and passport control are all examples where security issues are abundant and automation is desirable. No longer are tokens required, where an individual must possess a key or card to be granted access. No longer are knowledge-based approaches the only viable option. Both of these are prone to abuse; it is possible for an unauthorised user to acquire either and fraudulently gain access to protected areas. A better way is required of ensuring that the user is who they claim to be, or is at least on a list of allowed users. Biometric authentication offers a solution: a way of distinguishing an individual uniquely from all others. Scanning and identification can be carried out in well under a second with some of the current commercially available systems, making access control a much more secure, efficient and user-friendly process. A topical subject, biometrics continues to receive attention in the news with the Government's proposals to incorporate the technique into passports in the near future, and potentially into a compulsory national identity card scheme [1,2]. Computer manufacturer IBM has recently marketed a new laptop model [3] with a biometric finger scanner for higher-security login control without the need for passwords. Microsoft has also recently produced a computer keyboard with a built-in CCD sensor for scanning fingerprints [4]. At around £58, and sure to come down in price, these keyboards have the potential to be available at all login screens and either replace or complement current knowledge-based passwords, increasing the security of workstations.

1.2. Project Objectives

This report aims to investigate whether images of a person's hand, obtained from a flatbed scanner, are sufficiently distinctive to be used as the basis of an authentication technique. The minimum requirements and possible extensions are provided below, and the main objectives are discussed further in the Methodology section (Chapter 3).

1.2.1. Minimum Requirements

The minimum requirements of the project are to:

- Produce a system to perform biometric analysis of scanned images of the hand.
- Provide an evaluation of system accuracy and reliability using a selection of hand images.
- Conduct some experiments to investigate the potential for spoofing.

1.2.2. Possible Extensions

Possible extensions to the project are to:

- Implement scanner control using the chosen programming language.
- Produce a graphical user interface.
- Allow the scan to be conducted in real-time, displaying the captured image on the screen.
- If in authentication mode, display the matched username on the screen, and if the user is authorised provide some indication of acceptance, i.e. "access granted".
- Extensively test the system, trying all possible ways to cause it to fail, and where it does fail, look at ways of improving it.

1.3. Motivation

As a Computing student, artificial intelligence and computer vision have fascinated me. The topics covered in the AI modules studied at levels two and three have fuelled my interest in this field. I have also enjoyed the challenges posed by the programming modules throughout my degree programme, and felt that this project title encompassed both of these areas pleasingly. I wanted to develop a working prototype rather than simply analyse a current system, as I feel that producing a tangible product is very rewarding.

1.4. Report Structure

A further look into biometrics and an analysis of systems currently available is provided in the Background Research section (page 3), coupled with a literature review of the relevant references obtained from the library where necessary. This is followed by a discussion of the proposed Methodology (page 11) and Implementation (page 16) of a software prototype to analyse an image of a hand and from it draw conclusions as to who the hand belongs to. The results of extensive testing of the system, together with an investigation into whether it can be compromised, are detailed in the Evaluation section (page 29). An evaluation of the experimental and data analysis techniques used to test the prototype is also provided. Finally, a summary, followed by suggested improvements to overcome failures (where necessary) and further work, is given in the Conclusion section (page 44).

Chapter 2: BACKGROUND RESEARCH

2.1. Introduction

This project is highly specialised, so there are few relevant research papers in this field. Those considered useful are outlined in the bibliography, and a literature review is provided below. Firstly, an explanation of biometrics and an analysis of systems currently available are provided, then a look at hardware requirements, followed by a discussion of programming language alternatives. There are several major obstacles to overcome in producing a biometric authentication system, as outlined in Chapter 3. Feature extraction requires the landmarks of the hand image to be known (discussed later in Chapter 4), and to aid this process information on curvature calculation is provided here. Once calculated, the system must then compare the biometrics of the scanned image with those stored in the database, therefore research relating to matching techniques is also provided in this chapter.

2.2. Biometric Authentication

As discussed briefly in the introduction, this topic is receiving an increasing amount of interest, namely in the news with the Government's plans to incorporate the technique into their proposed compulsory national identity card scheme [1,2]. Although this has received a great deal of political debate, controversially a strategy has very recently been put into place which looks highly likely to require biometric data to be stored on all new passports within the next five years [5,6], potentially using fingerprints as the chosen biometric. Civil liberties campaigners see this as an intrusion on privacy and undemocratic; as passports are issued under Royal Prerogative, the scheme will bypass the objections aired in Parliament. A pilot scheme took place between April and December 2004 with 10,000 volunteers using facial biometrics, the results of which are anticipated soon. Authentication is the process of verifying a person is who they say they are, or at least recognising his or her identity. In order for the system to make this decision the user must provide at least one of the following:

a) Proof of knowledge (for example, a password, PIN or answer to a secret question)
b) Proof of possession (for example, an ID card or some other hardware token)
c) Proof of being (the claimed person)

The problem is that an unauthorised individual can compromise any one of these security requirements. A password can be acquired in various ways, including through packet sniffing, a brute-force attack, social engineering techniques where an unsuspecting authorised user discloses their login credentials, or observation over the shoulder of an authorised user.

Possession of a hardware token, such as an identity card, is also susceptible to security vulnerabilities. An unauthorised user could steal an ID card, or skim an authorised card and make a copy for himself. Although proof of being is considerably more difficult for an attacker to masquerade, it is not impossible. Physically forcing an authorised user to comply, or even measures as extreme as removing the features required (such as fingers, hands, etc.), could be carried out. It is arguable, however, that login stations should be under supervision, so such a threat should not exist. So what is biometric authentication? And why are the proposed Government plans receiving so many objections? Biometric authentication can be used stand-alone for an access-granting system, or can offer an extra layer of security for an existing system. A set of biometric measurements should be unique to each individual. They can be measured from physical features, such as the retina, iris [7], face [8,9,10], hand geometry, hand texture and, perhaps the most commonly known, fingerprints [11,12,13,14]. Biometrics can also be obtained from behavioural characteristics, such as the way an individual signs their name [15], their speech signature (based on the movement of vocal organs) or gesture, for example. The more well-known techniques were available long before computers. Fingerprint recognition in particular dates back as far as the seventh century, when in China fingerprints were accepted as a legal alternative to a seal or signature [16]. The main objection to such systems is generally with regard to a user's privacy. People feel uneasy about their personal information being stored in a database and, with regard to the Government's compulsory identification card scheme, in a national biometric database. There are also psychological reasons for rejection; for example, some people link fingerprint scanning to criminals, as it is common knowledge that the police use this identification technique at crime scenes. Generally, however, biometric authentication offers the opportunity for automatic access control, and a way of eliminating the need to carry, or the risk of losing, a token, key or ID card. It also prevents an unauthorised user learning or guessing a password or PIN. A combination of knowledge, possession and being has the benefit of further mitigating the threat of unauthorised access to a system. A discussion of currently available systems relating to hand biometrics is provided below.

2.3. Analysis of Current Systems

As mentioned earlier, there are very few references available relating to hand biometric systems. However, the prior work in this field can be divided into three subsections: hand geometry, hand shape and hand texture based. Each of these is explored below.

2.3.1. Hand Geometry

This is the most popular and most commonly found of the hand biometric systems. Generally, measurements are made from features such as finger lengths and widths, and also the width of the palm. Such systems are normally only awarded a medium security confidence level, as the measurements made are not assumed to have a very high discriminative power.

Despite this, there are a number of commercially available systems on the market, with applications ranging from tracking employee attendance and punctuality [17] to verifying the identity of border-crossing travellers [18] and staff at schools, hospitals and nuclear power plants. Recognition Systems Inc. [19] (supplier of the technology for the INSPASS programme [18]) offers a range of hand geometry and fingerprint readers, and complementary software. The user's hand is used as a replacement for a conventional card reader, therefore the risk of time-card fraud (or "buddy punching") is eliminated. However, details of how a commercial system's software is configured, and the algorithms used, are often protected by patents [20,21,22,23]. Although few papers exist, researchers have produced similar systems with comparable results, and published details of their findings. Sanchez-Reillo et al. [24] discuss one such system, where the hand is guided to a fixed position on a platform and the image is acquired using a CCD colour camera positioned above. A mirror is fixed to the edge of this platform to capture a side view of the hand as well as the top-down contour. This enables not only measurements of the finger widths, separation of inter-finger points and the width of the palm to be extracted, but also the heights of the palm and the little and middle fingers. The prototype produced achieves results of up to 97% success in identification, and error rates well below 10% in verification. Further details are also provided in this paper relating to feature selection and matching techniques; these are discussed later in this chapter, in section 2.7. Another prototype developed by researchers is the HaSIS system [25]. This system captures the image in much the same way, again with a side elevation acquired with the aid of a mirror. A platform is again used to guide the hand to a fixed location, with the aid of pegs. These ensure the hand is not only correctly placed, but that the separation of the fingers is also consistent. Seventeen features are then extracted from the hand image, with measurements made from finger widths, palm width, and finger and palm heights. However, unlike the system by Sanchez-Reillo et al. [24], finger lengths are also measured, and furthermore the thumb is used in the extraction process. This system has been tested with 800 images (taken from 100 people, 8 images each). Results from this prototype also look promising, with a false acceptance rate of 0.57% and a false rejection rate of 0.68%. Hand geometry systems have some major advantages, which make them an attractive technique to develop. Firstly, the cost of the required hardware is fairly low; generally only a low-resolution CCD camera (for example) is required. The template used to store each user's measurements is small, potentially the smallest of all the biometric systems, therefore storage requirements are low. As an added benefit, the computational cost is also reduced.

2.3.2. Hand Shape

As the location where the hand will be placed in the above systems is known, measurements can be made fairly easily. However, this is a huge limitation of such systems. It would be far better to have no such requirement, making the system more robust and appealing to users. Jain and Duta [26] solve this problem by using the whole hand shape, or contour, as the basis from which to make the measurements. The hand is represented by a series of points on the perimeter.

Before extracting the features, the image is aligned. The mean alignment error between point pairs is used to indicate the match score, or quality. This system was tested using 353 images, taken from 53 people (varying from 2 to 15 images each). The results are very good, with a false acceptance rate of 2% and a false rejection rate of 1.5%. These are comparable to the commercial systems discussed above. Although this approach is more flexible, it unfortunately attracts a disadvantage. Because the template is represented by hundreds of points on the perimeter of the hand, not only is more storage space required for each user, but more computing time is necessary to manipulate those points and extract the measurements. When matching a probe image to the database, more comparisons will also be necessary, again having an impact on computational cost. Another system researched [27] pursues an altogether different approach to the use of hand shape. Instead of representing the hand image as a contour (a series of points), the hand is projected onto a plane and the difference between the hand being present and an empty scene is used to derive the shape of the hand. The user is responsible for the positioning of their hand, and a real-time live view of the current scene is provided as feedback. Once the user is happy that their hand will be acquired acceptably, the image is captured. Unlike the system developed by Jain and Duta, this prototype has a much smaller storage requirement for the templates produced, and encodes the captured binary image using a quad-tree for efficient comparisons in the matching stage. Tested using 100 images, results are also excellent, with a verification rate of 99.04% and both false acceptance and false rejection rates of 0.48%.

2.3.3. Hand Texture

Instead of making measurements of finger length, width, etc., some researchers have attempted to develop a recognition system based on the texture of the hand surface. The most obvious texture analysis is based on the pattern of the fingertips. However, other parts of the hand have proven to offer effective identification potential. One such area is the palm. Several palm-print systems exist, one of which was developed commercially by NEC [28] for use in criminal applications. For civil use, where the system is required to be robust to a much higher number of users, Zhang et al. [29] offer a prototype which provides reliable performance using a large database, in real-time. This system is an adaptation of that discussed in [30]. Because of the large size of the palm-print area compared to, say, the fingertip, the technique was proven to be robust to noise. The area contains copious information to extract, including the principal lines and wrinkles, in addition to the texture itself. After testing with 400 palm images, the system achieves a commendable false acceptance rate of 0.02%, and a high genuine acceptance rate of 98.83%. In verification tests, the prototype produces an even better false acceptance rate of 0.017% and a false rejection rate of 0.86%, showing this is a viable biometric technique. Woodard and Flynn [31] investigate another texture-based recognition approach. Instead of the palm, the texture of the finger surface is used. Shape index images are extracted using the first (index) finger, middle finger and ring finger. The hand images are captured using a colour 3D scanner [32]. For testing purposes, a space of one week between captures was allowed, to investigate the effects of time on robustness.

Results obtained are promising, with the middle finger providing better results than the other two, mainly due to its larger size and consequently greater surface area. Same-day test images performed much better (94.2-99.4% matching probability) than those collected on different days (60-70%).

2.4. Hardware Requirements

The systems discussed above use a range of different hardware set-ups. Most, however, involve some variation of a hand-scanning device, which uses a camera or sensor to capture the hand image. Figure 2.1 below shows the Recognition Systems Inc. terminal discussed in section 2.3.1. Notice the pegs in all of these scanners; these are used to guide the hand into a specific, fixed position.

Figure 2.1 IR Recognition Systems scanner [17]. Figure 2.2 Handpunch Biometric Terminal [33]. Figure 2.3 VeryFast Access Control Terminal [34].

The devices used in [24] and [25] also capture the side view of the hand, with the aid of a mirror attached to the right of the scanning platform. When the camera takes the photograph of the hand, the image produced shows a side elevation in addition to the top-down view of the hand silhouette (Figures 2.4-2.5).

Figure 2.4 The hardware used in [24]. Figure 2.5 A diagram of the hardware used in the HaSIS system [25]. Notice the pegs to position the hand, and the mirror that is used to capture the side elevation for measuring finger and palm height.

The system in this paper is required to operate using a standard flatbed scanner. Details of the hardware used for the prototype are provided in Appendix B. No pegs or mirrors are attached, and therefore no supposition of hand location can be made. The only valid assumption is that of the general hand orientation, as the hand must be placed in the scanner from the top (as described in Appendix B).

2.5. Programming Language Selection

There is an abundance of programming languages available to the software developer, and a major decision before designing a system is the choice of which language to use. Each has its own advantages and disadvantages, and these must be examined in relation to the problem at hand. The first influencing factor with this system is the requirement for image handling. In many respects Java offers superior support through its class libraries and APIs. Although this should not be the principal reason for choosing the language, it is a valid factor. Image libraries are available for Python and C++; however, there is no built-in support by default, unlike Java. Another factor is the familiarity of the language to the developer. Again, this on its own should not be used to influence the decision, as there is an abundant source of books and tutorials available for learning the various languages. On this occasion, through other modules in the School of Computing, Java is the most familiar of the languages available. Java also has the benefit of being freely available and platform-independent. As discussed in the introduction, control of the scanner from within the software is desirable. On investigation, an API for Java is available to assist with this process: Morena [35] offers control of the hardware through the standard platform-independent TWAIN architecture. This means the software would be portable and, coupled with the other aspects above, is the deciding factor in choosing Java as the programming language to use.

2.6. Curvature Calculation

Two methods have been studied for calculating curvature. The first is vector-based and is founded on work by Rosenfeld et al. [36,37]. Equation (1) below is used to compute the curvature $C_k$, measured between 0 and 1. This is calculated at a scale $k$, and the sign $S_k$ gives the sense of that curvature (either convex or concave):

$$C_k = \frac{1}{2}\left(1 + \frac{\mathbf{a}_k \cdot \mathbf{b}_k}{|\mathbf{a}_k|\,|\mathbf{b}_k|}\right) \quad (1)$$

$$S_k = \operatorname{sgn}\left([\mathbf{a}_k \times \mathbf{b}_k]_z\right) \quad (2)$$

The vectors $\mathbf{a}_k$ and $\mathbf{b}_k$ are defined by the diagram in Figure 2.6 below:

Figure 2.6 Vector-based curvature calculation: $\mathbf{a}_k$ joins $(x_i, y_i)$ to $(x_{i+k}, y_{i+k})$, $\mathbf{b}_k$ joins $(x_i, y_i)$ to $(x_{i-k}, y_{i-k})$, and $\theta$ is the angle between them.

The second technique incorporates a Gaussian smoothing operation during the curvature calculation and is therefore likely to be a more efficient method. This alternative approach, based upon Ansari et al. [38,39], uses the standard parametric formula:

$$k(t, \sigma) = \frac{\dot{x}\ddot{y} - \dot{y}\ddot{x}}{(\dot{x}^2 + \dot{y}^2)^{3/2}} \quad (3)$$

where the derivatives $\dot{x}$, $\ddot{x}$, $\dot{y}$ and $\ddot{y}$ are computed from coordinates that have been smoothed with a Gaussian kernel, and $t$ is the path length along the curve. The width of this kernel, $\sigma$, controls the scale at which the curvature is estimated. This one-dimensional Gaussian filter $g$ is shown in equation (4) below:

$$g(t, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left[-0.5\,(t/\sigma)^2\right] \quad (4)$$

The approach used in the adapted Border algorithm (included in the AI31 libraries [40]) involves calculating the curvature whilst traversing the boundary of an object. This is performed using the vector-based method, and the values are stored in an array. This array is then smoothed with a Gaussian filter of specified width.

2.7. Matching Techniques

For the various systems discussed above, once the features have been extracted and stored in a database, probe images must be compared with those enrolled and a level of closeness should be provided. Depending on how close the match is, the system should be able to make a decision as to the identity of the user. The hand geometry prototype discussed in [24] explores four different matching techniques: Euclidean distance, Hamming distance, Gaussian mixture models and radial basis function neural networks. The first three are considered the most relevant to this system, and are discussed below:

2.7.1. Euclidean Distance

This comparison technique is the most widely found of those mentioned above. It takes each measured feature in turn and compares a probe image with a gallery (or enrolled) image. The squared differences between corresponding features are totalled and then square-rooted to produce a distance, or matching score. A small score signifies a close match. Equation (5) below expresses this process mathematically:

$$d = \sqrt{\sum_{i=1}^{N} (p_i - g_i)^2} \quad (5)$$

where $N$ is the total number of features in the template, $p_i$ and $g_i$ represent corresponding features in the probe and gallery images respectively, and $d$ is the total distance for that particular probe-gallery pair.

2.7.2. Hamming Distance

Instead of comparing each feature in turn and calculating an overall distance for a probe-gallery pair, the Hamming distance is based on the number of corresponding features that differ in value. To use this method, multiple templates are required for each user. It is assumed that the feature components follow a Gaussian distribution, and therefore each feature consists of not only the mean of the values for a particular component, but a standard deviation as well. As a result, twice the storage space is needed for each template, which is a disadvantage of this technique:

$$d(p, g) = \#\left\{\, i \in \{1, \dots, N\} : |p_i - g_i^m| > \sigma_{g_i} \,\right\} \quad (6)$$

where $d$ is the Hamming distance, $i$ is the current feature, $N$ the total number of features in the template, $\#$ the count of features satisfying the condition, and $g_i^m$ and $\sigma_{g_i}$ the mean and standard deviation of the $i$th feature component respectively.

2.7.3. Gaussian Mixture Models

This technique is based on the approach taken by Reynolds and Rose [41,42]. The method must be trained on all the enrolled users and models each user's templates as a separate Gaussian mixture model (GMM). When testing a probe image against those enrolled, the probability $x$ of the probe belonging to each particular GMM is computed, and the highest-probability GMM (potentially above a certain threshold) is used to identify the user. Equation (7) below shows the probability of a sample $\vec{p}$ belonging to a class $u$:

$$x(\vec{p}\,/\,u) = \sum_{i=1}^{M} \frac{w_i}{(2\pi)^{N/2}\,|\Sigma_i|^{1/2}} \exp\left[-\tfrac{1}{2}(\vec{p} - \vec{\mu}_i)^T \Sigma_i^{-1} (\vec{p} - \vec{\mu}_i)\right] \quad (7)$$

where $w_i$ and $\Sigma_i$ represent the weighting and covariance of each of the GMM components respectively, $\vec{\mu}_i$ the mean of the current component $i$, $M$ is the number of models (and therefore enrolled users), and $N$ is the total number of features in the template.
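As a concrete illustration of equation (5), the following minimal Java sketch (class and method names are illustrative, not taken from any of the systems above) computes the Euclidean distance between two feature templates and applies it to identify the closest enrolled user:

```java
// Minimal sketch of Euclidean-distance matching (equation 5). A template is
// assumed to be a double[] of N geometric measurements.
public final class EuclideanMatcher {

    // Distance between a probe template and one gallery (enrolled) template.
    public static double distance(double[] probe, double[] gallery) {
        double sum = 0.0;
        for (int i = 0; i < probe.length; i++) {
            double diff = probe[i] - gallery[i];
            sum += diff * diff;                     // (p_i - g_i)^2
        }
        return Math.sqrt(sum);                      // small score = close match
    }

    // Identification: index of the closest enrolled template, or -1 if even
    // the best match is worse than the acceptance threshold.
    public static int identify(double[] probe, double[][] enrolled, double threshold) {
        int best = -1;
        double bestDistance = Double.MAX_VALUE;
        for (int u = 0; u < enrolled.length; u++) {
            double d = distance(probe, enrolled[u]);
            if (d < bestDistance) {
                bestDistance = d;
                best = u;
            }
        }
        return bestDistance <= threshold ? best : -1;
    }
}
```

Verification is the degenerate case in which the probe is compared against only the claimed user's template and the resulting distance is tested against the threshold.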

Chapter 3: METHODOLOGY

3.1. System Framework Design

To illustrate the design process, the task must be broken down into various stages. The Biometric Authentication System must consist of the following six principal components (a skeletal outline in code is given after Figure 3.1):

1. User interface (ideally graphical)
2. Hand image acquisition
3. Pre-processing
4. Feature extraction
5. Storage
6. Matching

Each of these elements is explored in detail below, including a discussion of design issues. The UML activity diagram below (Figure 3.1) shows how these processes are linked together to form the system.

Figure 3.1 UML activity diagram illustrating the system framework.
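As an illustration of how these six components might fit together in Java (the language chosen in section 2.5), here is a skeletal outline. Every name below is hypothetical; it is a sketch of the framework in Figure 3.1, not the project's actual code.

```java
import java.awt.image.BufferedImage;

// Hypothetical interfaces, one per pipeline stage (names are illustrative).
interface HandScanner      { BufferedImage acquire(); }                   // 2. acquisition
interface PreProcessor     { boolean[][] toSilhouette(BufferedImage i); } // 3. pre-processing
interface FeatureExtractor { double[] extract(boolean[][] silhouette); }  // 4. extraction
interface TemplateStore    { void enrol(String user, double[] template);  // 5. storage
                             double[] lookup(String user); }
interface Matcher          { boolean matches(double[] probe, double[] gallery); } // 6. matching

// The user interface (component 1) drives the pipeline below: enrolment runs
// stages 2-5, while authentication runs stages 2-4 and then matches (stage 6).
final class AuthenticationService {
    private final HandScanner scanner;
    private final PreProcessor pre;
    private final FeatureExtractor features;
    private final TemplateStore store;
    private final Matcher matcher;

    AuthenticationService(HandScanner s, PreProcessor p, FeatureExtractor f,
                          TemplateStore t, Matcher m) {
        scanner = s; pre = p; features = f; store = t; matcher = m;
    }

    void enrol(String user) {
        store.enrol(user, features.extract(pre.toSilhouette(scanner.acquire())));
    }

    boolean verify(String claimedUser) {
        double[] probe = features.extract(pre.toSilhouette(scanner.acquire()));
        return matcher.matches(probe, store.lookup(claimedUser));
    }
}
```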

3.2. User Interface

For the system to be broadly accepted, an interface must be provided; a graphical interface is desirable. Essentially two separate systems should be produced: one to authenticate and the other to enrol users. The identification/verification system can then be deployed at all access points (for example, login stations, secure doors, etc.), with the enrolment system deployed at a secure location, accessible only to authorised personnel.

3.3. Hand Image Acquisition

The hardware to be used to capture the hand images is a standard flatbed scanner (Appendix B provides further details). To acquire the image, the user will place his/her hand as flat as possible on the glass surface and the scanner will then obtain the image. The first major design decision regards the image resolution and colour depth. There are obvious processing issues with using large images, and colour images will have a greater impact on performance. Firstly, is colour information required? If the images are acquired in colour then more information is available for further work, for example analysing skin tonality or pattern recognition. However, if the system is only to analyse the geometric shape of the hand, a silhouette is adequate. The resolution must then be chosen. A larger image will hold more information, containing more hand texture detail, such as clearer fingerprints, lines and patterns on the palm. Again, this is at the expense of extra computing time, with larger images requiring more processing. Not only do larger images take longer for the scanner to acquire, scanning in colour increases the capture time and also the amount of disk space required for (albeit possibly temporary) storage. A compromise must be established so as to have enough detail, yet still allow the system to function at a reasonable speed. Users become frustrated waiting for an unresponsive system, so for a biometric system like this to be accepted it must work as quickly as possible. The results in Figure 3.2 below illustrate the relative effect of varying these attributes on the time required to capture images from the scanner. The absolute time required will differ depending on the hardware used, therefore a hardware specification is also provided in Appendix B.

Figure 3.2 Chart illustrating the effect of increasing resolution and colour information on capture time from the scanner (greyscale vs. colour capture plus a 7x7 median filter pass, at resolutions of 850x960, 1700x1920, 2549x2880 and 3400x3840; times range up to approximately 120 seconds).

From Figure 3.2 above, colour images at a resolution of 850x960 meet this compromise and will be used. There is little variation in the time required to capture the image in colour or greyscale at this resolution, therefore images will be captured in colour to allow future work (relating to skin tonality/pattern, for example) on the dataset. The time required to perform a median filter operation is also substantially smaller than for the higher-resolution images. Appendix C provides some examples of images acquired using these settings.

3.4. Pre-Processing

There shall be no requirement to align the images before processing, unlike systems such as [24,25], which attempt to fix the orientation and location of the hand and require alignment if the image deviates from this specified position. Alignment is constrained in such systems by adding pegs to the scanning device. As the flatbed scanner has no such pegs, there can be no assumption that the hand will be placed in a fixed place, or that the fingers will be spread consistently across all scans. Instead the system will use the whole shape [26,27] of the hand to calculate the biometrics. To obtain the silhouette of the hand it must first be segmented from the background. A black surround for the scanner (see Appendix B) simplifies this process, as the acquired image can be thresholded above a specified grey level to remove the black background. After thresholding, however, there may be artefacts in the binary image produced by dust or marks on the scanner. Also, the further away the subject is from the glass plate of the scanner, the darker the image. Therefore, if the wrist is not placed as flat as possible this will produce a dark image which, after thresholding, can lead to disjointed portions of the hand.

3.5. Feature Extraction

The next stage is feature extraction, which uses the binary image produced by the previous two stages. This involves computing those features suitable for distinguishing the individual from other users: the biometrics.

Figure 3.5 An example of a hand pressing hard against the glass of the scanner surface. Figure 3.6 An example of the same hand applying less pressure against the glass of the scanner surface.

Finding suitable areas to measure is crucial, and those chosen will impact the rest of the system considerably. It is important to use measurements that will not only be consistent across different scans of the same person, but will also separate users significantly enough to identify them accurately. Variations in the pressure of the hand on the scanner plate affect the quality of the images obtained. The harder the hand is pressed against the scanner, the whiter it becomes and the more detail is lost. Therefore texture analysis of the palm [28,29,30] or fingers [31] will not produce consistent results. Although a texture analysis technique is discussed later (see Conclusion: Further Work), it will not be explored further for this particular system. [31] describes two different categories of hand-based biometric systems: hand geometry and hand shape. The approach of this system involves a combination of both, taking measurements of various features based on the hand shape. Hand geometry systems such as [24,25] are often based on an expectation of where the hand will be placed (as described in Chapter 2). However, as the exact position and orientation of the hand are unknown here, there can be no such assumption. Potential biometrics to consider are illustrated in Figure 3.7 below, adapted from the hand geometry system discussed in [24]:

Figure 3.7 Example of possible biometric features that could be used to identify the user. Not all of these will be suitable for this system, however (discussed in Chapter 4).

Choosing which measurements to make, and experiments demonstrating the effectiveness of those chosen, are detailed in Chapters 4 and 5 respectively.

3.6. Storage or Matching

Once the template of the user has been established, the system will have two options: either to store the details in a database, and thereby enrol the user on the system; or to match the scanned template against those already stored in the database. To enrol a user on the system, a storage procedure must be established. Whether to use only one scan to enrol, or several, is an issue discussed in Chapter 5. For matching there are also choices to be made as to how the system should function. To verify, a user must claim to be someone stored in the database. A comparison of how close the scanned template is to the claimed template will establish whether the user is an impostor. Identification is more of a challenge, however, and involves comparing the scanned image with all of those stored. The system must then decide who is the closest match, but perhaps more importantly reject the user if they do not match anyone enrolled on the system closely enough. The procedure is broadly similar for identification and verification, the only difference being the number of templates the probe image is compared with. Results and a discussion of the two are provided in the Evaluation section (Chapter 5).

3.7. Testing Plans

Once the system has been produced, rigorous testing is required to ensure the robustness of the technique. These tests will consist of:

- Threshold acceptance level variation
- Variation of hand orientation
- Hand pressure variation
- Separation of fingers
- Impact of finger nail length
- Effect of time lapse on results
- Attempts at spoofing identity

A series of experiments relating to these is discussed in Chapter 5.

3.8. Schedule for Project Management

Appendix G details the proposed schedule for managing the project. This is taken from the mid-project report, and modified where necessary. A project log containing progress updates will be kept online. Further information is provided in Appendix G.

Chapter 4: IMPLEMENTATION

4.1. Scanner Control

A major influencing factor in choosing Java over other programming languages was the extensive range of libraries available. One such API is Morena [35]. This framework is built upon the standard platform-independent TWAIN interface that provides hardware control of the scanner from the computer. As Java is also platform-independent, the system produced should be portable between different operating systems. An image acquisition class named GetHand was developed to incorporate the features offered by Morena. The following settings are specified:

- Image frame area: the target area of the scanner surface is fixed, and scanning begins two inches from the start position to ensure that as much of the hand and wrist is scanned as possible, but as little of the arm.
- Brightness/contrast: both of these values are fixed at 0 to ensure consistent images are produced. The results for the particular scanner used (see Appendix B) were ideal with brightness and contrast at 0, but depending on the hardware these values may need adjusting to improve segmentation results.
- Resolution: this is set to 100 dpi in both X and Y to produce images of size 850x960 (as chosen in section 3.3).
- Scanner dialogue box: this is disabled so that no direct scanning control is provided to the user. All that is displayed is a progress bar while the scanner is operating.

The constructor for the GetHand class allows specification of the filename to store the acquired image to, and the format is JPEG.

4.2. Pre-Processing Operations

As mentioned in section 3.4, the hand must be segmented from the background before any processing operations can take place. The first stage of the segmentation is a threshold operation. As the images are acquired in RGB format, this must be applied across all three colour channels. The minimum level chosen for each channel will obviously affect the resulting binary image considerably, and a balance must be reached so as to provide consistent results for different scans, segmenting the whole hand and as much of the wrist as possible. The images below illustrate the effect of thresholding at different levels, with the threshold value applied equally to all three colour channels:

Figure 4.1 Original scan. Figure 4.2 Threshold at 10. Figure 4.3 Threshold at 20. Figure 4.4 Threshold at 30. Figure 4.5 Threshold at 50. Figure 4.6 Threshold at 70. Figure 4.7 Threshold at 90. Figure 4.8 Threshold at 110.

It is clear that the higher the threshold, the more the hand shape erodes. This is due to the hand not lying completely flat on the scanner plate. Objects further away from the plate are captured darker than those flush with the glass. Therefore, because of the rounded shape of the fingers, when scanned they appear darker towards the edges. The centre of the palm is also notably darker, as this does not lie completely flat against the plate. A threshold level of 23 across all three channels provides optimum results for all of the images acquired; this value may be hardware-dependent, however. Altering the brightness and contrast settings will also have an effect on the threshold value chosen. For this reason, all variables that affect the system's behaviour are defined in a configuration file (see Appendix E). If different hardware is used at a later date, the settings can be modified accordingly.

Figure 4.9 Close-up of a colour scanned image after applying a threshold of (23, 23, 23). Artefacts detected on the scanner pane have not been removed by thresholding.
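A minimal sketch of this channel-wise threshold is given below, reading the level from a properties file to mirror the configuration-file approach just described. The file name and property name are assumptions for illustration only; the project's actual configuration format is shown in Appendix E.

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.FileInputStream;
import java.util.Properties;
import javax.imageio.ImageIO;

// Sketch of the segmentation threshold of section 4.2: a pixel is kept as
// "hand" only if all three RGB channels exceed the configured minimum level.
public final class Segmenter {

    public static boolean[][] threshold(BufferedImage scan, int level) {
        boolean[][] hand = new boolean[scan.getHeight()][scan.getWidth()];
        for (int y = 0; y < scan.getHeight(); y++) {
            for (int x = 0; x < scan.getWidth(); x++) {
                int rgb = scan.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF;
                int g = (rgb >> 8) & 0xFF;
                int b = rgb & 0xFF;
                hand[y][x] = r > level && g > level && b > level;
            }
        }
        return hand;
    }

    public static void main(String[] args) throws Exception {
        Properties config = new Properties();
        config.load(new FileInputStream("config.properties")); // assumed file name
        int level = Integer.parseInt(config.getProperty("threshold", "23"));
        BufferedImage scan = ImageIO.read(new File(args[0]));
        boolean[][] silhouette = threshold(scan, level);       // ready for median filtering
    }
}
```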

Once the thresholding is complete, the next stage is to apply a median filter. This is required to remove any artefacts remaining after the threshold operation, resulting for example from dust or marks present on the scanning surface (shown in Figure 4.9 above). The outline of the silhouette is very jagged after applying the threshold operation. By median filtering the image, not only are the small artefacts removed, but the boundary is also smoothed. Figure 4.10 below shows the effect of applying a 7x7 median filter to the image shown in Figure 4.9.

Figure 4.10 Close-up of the same scan after a median filter is applied to the thresholded image above. Notice the artefacts have been removed and the edge of the hand (especially the wrist) is smoother.

After image acquisition from the scanner, the median filter is the next major bottleneck for the system. Operations involving a kernel being passed across an image run much slower in Java than if they were written and executed in other languages, such as C++. The larger the kernel, and the slower the computer, the longer it will take for the process to complete. Therefore it is essential to use a filter just big enough to remove the artefacts. A 7x7 window is successful for all the test images, and only takes approximately 3 seconds to complete, so this is implemented (a sketch of such a filter follows below). As the configuration file allows the scanning resolution to be altered, the size of the median filter kernel can also be modified if necessary, but the default value is set to 7. Using a fairly low resolution of 850x960 has the benefit that not only are there fewer pixels to process, but a smaller kernel can be used. With higher-resolution images the same unwanted artefacts appear much larger, and therefore a larger kernel is required to remove them by median filtering. Using an increased image, and therefore kernel, size would obviously increase the computing time required considerably.

4.3. Feature Extraction

This stage will completely affect the rest of the system. The features extracted are those that will be used to distinguish between the users enrolled on the system. Choices of what measurements to use and how these will be computed are discussed below.
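Before detailing feature extraction, the median-filter step above can be sketched as follows, assuming the thresholded silhouette is held as a 2D boolean array (true = hand). For a binary image the median of a window is simply its majority value, so no sorting is needed:

```java
// Sketch of the square median filter from section 4.2 (default kernel 7x7).
public final class MedianFilter {

    public static boolean[][] apply(boolean[][] img, int kernelSize) {
        int radius = kernelSize / 2;               // 3 for the 7x7 default
        int h = img.length, w = img[0].length;
        boolean[][] out = new boolean[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int white = 0, total = 0;
                for (int dy = -radius; dy <= radius; dy++) {
                    for (int dx = -radius; dx <= radius; dx++) {
                        int yy = y + dy, xx = x + dx;
                        if (yy < 0 || yy >= h || xx < 0 || xx >= w) continue;
                        total++;
                        if (img[yy][xx]) white++;
                    }
                }
                out[y][x] = white * 2 > total;     // majority vote = binary median
            }
        }
        return out;
    }
}
```

Note that the loop visits every pixel once with a (2r+1)x(2r+1) window, so doubling the resolution in each direction roughly quadruples the work even before any increase in kernel size, which matches the performance behaviour described above.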

4.3.1. Producing the Border

As mentioned earlier, the system does not know the exact location of the hand, the spread of the fingers, or the orientation. Any calculation of measurements must therefore be based on the entire shape of the hand. From the median-filtered binary image, the outline of the hand silhouette is produced using an adaptation of the border algorithm [40] provided in the AI31 libraries. The default algorithm requires a white object on a black background. It starts from the top left of the image and searches line by line until it locates the first white pixel, then traces the outline of the object found anti-clockwise. The original border algorithm also contains unused functionality relating to curvature. This is calculated using the vector-based method described in section 2.6. The algorithm was adapted to start searching from a specified position (described below), and to store the curvature values at each pixel on the boundary in an array.

4.3.2. Landmark Identification

From the curvature array, points of extreme curvature can be identified. These points define the landmarks of the hand, i.e. fingertips, inter-finger points and where the wrist intersects the image (highlighted with numbered black squares in Figure 4.11 below):

Figure 4.11 Extremes of curvature are the landmarks of the image; eleven should be located on a normal hand.
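For context before the landmark search, the following sketch shows the vector-based curvature of section 2.6 (equations 1 and 2) and a Gaussian smoothing pass over the resulting array, assuming the traced border is held as parallel coordinate arrays. All names here are illustrative; the adapted Border algorithm itself belongs to the AI31 libraries, with the project's code in Appendix E.

```java
// Sketch of the vector-based curvature measure and the Gaussian smoothing
// applied to the whole curvature array. The border is assumed to be stored
// as parallel coordinate arrays bx[] and by[]; indices wrap around because
// the border is a closed contour.
public final class Curvature {

    // C_k at border point i: 0 along a straight run, 1 at a complete turn-back.
    public static double at(int[] bx, int[] by, int i, int k) {
        int n = bx.length;
        int fwd = (i + k) % n;
        int bck = ((i - k) % n + n) % n;
        double ax = bx[fwd] - bx[i], ay = by[fwd] - by[i];  // vector a_k
        double cx = bx[bck] - bx[i], cy = by[bck] - by[i];  // vector b_k
        double dot = ax * cx + ay * cy;
        double mags = Math.hypot(ax, ay) * Math.hypot(cx, cy);
        return 0.5 * (1.0 + dot / mags);                    // equation (1)
    }

    // S_k: the sign of the z-component of a_k x b_k gives the sense of the
    // curvature, distinguishing convex from concave (equation 2).
    public static int sense(int[] bx, int[] by, int i, int k) {
        int n = bx.length;
        int fwd = (i + k) % n;
        int bck = ((i - k) % n + n) % n;
        double ax = bx[fwd] - bx[i], ay = by[fwd] - by[i];
        double cx = bx[bck] - bx[i], cy = by[bck] - by[i];
        return (int) Math.signum(ax * cy - ay * cx);
    }

    // Gaussian smoothing of the curvature array (kernel width sigma,
    // truncated here at three standard deviations).
    public static double[] smooth(double[] c, double sigma) {
        int n = c.length, r = (int) Math.ceil(3 * sigma);
        double[] out = new double[n];
        for (int i = 0; i < n; i++) {
            double sum = 0.0, norm = 0.0;
            for (int d = -r; d <= r; d++) {
                double w = Math.exp(-0.5 * (d / sigma) * (d / sigma));
                sum += w * c[((i + d) % n + n) % n];
                norm += w;
            }
            out[i] = sum / norm;
        }
        return out;
    }
}
```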

The border algorithm is set to start at row 250 (shown by the red arrow in Figure 4.11 above). This is to ensure that no landmark is detected in the area identified by the green circle. Depending on how much the hand is rotated in relation to the wrist, this area will range from being relatively smooth to having quite a sharp curve in the outline (see also section 4.3.3.1). As described in section 2.6, there are two variables that influence the result of the curvature array: the window size used, and the amount of Gaussian smoothing of the calculated values. With a small window size and no smoothing, the curvature array produces a fairly noisy graph (Figure 4.12); however, using a window size of 65 and a Gaussian kernel of width 40, a graph is produced that clearly identifies peaks (Figure 4.13). These peaks are the extremes of curvature and correspond to the landmarks illustrated in Figure 4.11.

Figure 4.12 Using the default window size and no Gaussian smoothing (as in the original Border algorithm), the curvature array produced is too noisy to use. Note the gap between 3467 and 3678 corresponds to the flat section between landmarks 9 and 10.

Figure 4.13 By altering the window size and the amount of Gaussian blurring of the array, a much clearer graph is produced, showing the extremes of curvature. The numbers at each peak correspond to the points identified in Figure 4.11.

At each point on the boundary, not only is the curvature calculated, but the x and y coordinates are also stored. It is therefore possible to work out exactly where the peaks on the graph are in relation to the border. The problem, however, is how to locate these peaks in the curvature array. At first glance it appears that they could be roughly in the same position on the graph, and that simply finding the highest curvature value between eleven specified ranges would work.

However, under closer inspection, variances in hand position and between different hands produce peaks in different positions. Also, the total border length for individual hands will differ depending on the overall size, and therefore perimeter, of the hand. Therefore, although the graphs produced are of similar shape, there can be no assumption of where the peaks will be, only of the order in which they will appear, as the start position of the border and the direction of tracing are known. The graph of curvature and the border for three separate hands are shown below. The darker sections of the hand outline relate to curvature that is above a threshold of 0.12, shown as a horizontal dotted line on the graphs.

Figure 4.14 Hand 1. Notice the total length is 3930 and the peaks are not located in the same position as in the other hands below.

Figure 4.15 Hand 2. The total border length for this hand is only 3665, although the pattern looks similar.

Figure 4.16 Hand 3. The border length here is 4348, the longest of the three. Again a similar pattern is produced; however, notice that the peaks here (and in Hand 2 above) are not in the same place as in Hand 1 (shown by the three vertical dashed lines on the left-hand side of the graphs).

In an attempt to normalise the length of the curvature array, the above three graphs are shown below on a percentage scale (Figure 4.17). The grey shaded regions show the ranges to search for the peaks, each a certain percentage distance along the array. Although the peaks are in similar positions, only three hands are shown here. With hundreds of hands the peaks could overlap these ranges, so an alternative method of extracting them is required.

Figure 4.17 All three graphs above scaled to the same length along the x-axis. Grey regions show potential ranges from which to extract the peaks.

The technique implemented is as follows. The curvature array is read through, checking whether the next value in the array is larger than the previous value. If it is, then the current array location is on the left-hand side of a peak. Provided the value is above a specified threshold (0.1), the curvature is compared with the maximum found so far. If it is greater than this maximum, the current maximum is updated and the location on the border where this maximum was discovered is recorded in the temporary variables plotx and ploty. The next array value is then fetched, and if its curvature value is less than the current maximum, it is assumed that the peak has been reached. Once the peak has been found, the location is stored as a landmark, and the current maximum is reset to 0. The process repeats until the end of the curvature array is reached. The result of applying this algorithm (see Figure E.1 of Appendix E) to the same three hand images above (Figures 4.14-4.16) produces the following results; notice the landmark positions are relatively stable across all the images:

Figure 4.18 Hand 1 landmarks. Figure 4.19 Hand 2 landmarks. Figure 4.20 Hand 3 landmarks.
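The peak-extraction pass just described could be sketched as follows (the project's actual code is in Figure E.1 of Appendix E; this minimal Java version assumes the curvature array c and the matching border coordinates x and y are supplied, and names such as findPeaks are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the landmark search over the smoothed curvature array.
public final class LandmarkFinder {

    public static List<int[]> findPeaks(double[] c, int[] x, int[] y, double threshold) {
        List<int[]> landmarks = new ArrayList<>();
        double max = 0.0;                // highest curvature on the current rise
        int plotX = -1, plotY = -1;      // border location of that maximum
        for (int i = 1; i < c.length; i++) {
            if (c[i] > c[i - 1] && c[i] > threshold && c[i] > max) {
                max = c[i];              // still climbing the left side of a peak
                plotX = x[i];
                plotY = y[i];
            } else if (max > 0.0 && c[i] < max) {
                landmarks.add(new int[] { plotX, plotY }); // peak passed: store landmark
                max = 0.0;               // reset for the next peak
            }
        }
        return landmarks;
    }
}
```

Called with a threshold of 0.1, this pass should yield the eleven landmark positions for a well-formed hand scan.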

4.3.3. Calculation of Biometrics

Now that the landmarks and their locations are known, the next stage is to calculate the biometrics, and therefore the features to be extracted. From section 3.5., possible measurements to use are finger length and finger width. These were implemented separately, finger length first. Comparisons of the two methods are discussed later, in Chapter 5.

4.3.3.1. Finger Length

From the known landmarks it is possible to work out the length of the two innermost fingers (shown in blue in Figure 4.21 below). This is achieved by using the fingertip and the midpoint of the inter-finger points. The distance between these points (labelled χ in Figure 4.21) is then computed using Pythagoras' theorem. Calculating the length of the outermost two fingers proves more difficult, however, as the landmarks at the bottom of these fingers on the outer boundary are not known. A proposed way of finding these points is to plot a line through two of the known inter-finger landmarks and then project onto the boundary the point that lies on this line. This provides the green landmarks illustrated in Figure 4.21; the midpoint can then be calculated and the lengths of these outer fingers measured in the same way as the innermost fingers.

Figure 4.21 Possible ways of measuring the finger lengths. The method used to extract the lengths identified by the blue lines is more likely to be stable across various images; the spread of the fingers should not affect the results substantially.

The lengths produced by this method rely heavily on where the inter-finger landmarks (numbered 1, 3, 5 in Figure 4.11) are located. Landmark 7 varies greatly depending on how much the thumb is moved between scans. The landmarks at the top of the image are also highly likely to be inconsistent, due to how far the hand is placed into the scanner and the pressure of the wrist on the pane. Depending on the placement of the hand, and its rotation in relation to the wrist, there is also a possibility of extra landmarks being detected in the regions highlighted by the green circles above. Therefore landmarks 7, 9 and 10 cannot be reliably used in the processing of measurements. As the proposed methods for calculating biometrics only involve the four fingers, extra landmarks detected in the wrist area will not affect the processing. Results from experiments using only finger length are provided in section 5.3., and the algorithm to calculate all four lengths can be found in Appendix E (Figure E.2).
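For illustration, the innermost-finger measurement reduces to a short calculation (a hypothetical Python helper, not the Appendix E code; the quantity computed is the distance χ described above):

```python
import math

def finger_length(tip, valley_a, valley_b):
    """Length of a finger: the distance from its fingertip landmark to
    the midpoint of the two inter-finger landmarks at its base.

    Each argument is an (x, y) coordinate pair.
    """
    mid_x = (valley_a[0] + valley_b[0]) / 2.0
    mid_y = (valley_a[1] + valley_b[1]) / 2.0
    # Pythagoras' theorem, as in the report
    return math.hypot(tip[0] - mid_x, tip[1] - mid_y)
```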

4.3.3.2. Finger Width

The more measurements taken for each user, the more likely there will be separation between individuals. Each measurement adds an extra dimension to the user, an extra vector component to distinguish them from others. The prototypes proposed by [24, 25] use finger widths in the feature extraction process, but this is aided by the position and stretching of the fingers being known to the system. As this is not the case with this system, a way of measuring widths using the landmarks calculated earlier is desirable. Although the inter-finger landmarks are not overly stable, depending on the separation of the fingers (see section 5.7.3. later), the landmarks detected at the fingertips appear to be. Several images of the same user's hand, with the fingers spread differently and the hand positioned in alternative places, produce fingertip landmarks that are consistent among the images. These points are therefore taken as constants among different scans of the same person.

Figure 4.22 [Yellow] Position 1, little finger extracted.

Figure 4.23 [Red] Position 2, little finger extracted and aligned.

Figure 4.24 [Dark Purple] Position 3, little finger extracted and aligned.

To prove that the finger widths vary between individuals, but are consistent for the same user regardless of their hand placement in the scanner, fingers from separate scans were extracted and aligned. Figures 4.22-4.24 (above) show the little finger cut from each image and aligned to the same position as that in Figure 4.22. Figure 4.25 below shows these three aligned fingers stacked on top of each other in the order yellow, red, dark purple. Notice that although the finger is positioned differently in each of the scans, there is little variation in the width extracted, as expected.

Figure 4.25 Little fingers from Figures 4.22-4.24 aligned and then stacked on top of each other. The length of the top (yellow) finger has been trimmed on the right-hand side so as to show the colour of the finger(s) layered underneath.

Although this consistency makes measuring the widths a viable option, there must also be differences between users. To prove that this is indeed the case, fingers from other users were aligned in the same way and stacked on top of each other. Where there is little difference between scans of the same user (Figure 4.25 shows that the edges are mostly green, with very little deviation from the blue and red images underneath), there should be much more significant deviations between individuals.

Figures 4.26 and 4.27 below show two different users and the alignment of their little fingers matched to the same position as that in Figure 4.22. All three individuals' little fingers are stacked on top of each other in Figure 4.28, in narrowest-to-widest order.

Figure 4.26 [Dark Blue] A different user; notice the little finger extracted appears wider than that in Figure 4.23.

Figure 4.27 [Magenta] Another user, again with the little finger positioned to the same alignment as that in Figure 4.23.

It is clear that the differences in width between the stacked fingers in Figure 4.28 are much greater than those in Figure 4.25 (i.e. those of the same user). Although the separation between DB (dark blue) and Y (yellow) may not be huge, that of DB and M (magenta) is. The results from using finger width for all four fingers are discussed in section 5.4 (Chapter 5).

Figure 4.28 The little finger extracted from three different users, aligned and stacked on top of each other. Notice the difference in the widths, making finger width a potentially viable biometric to use.

The proposed method to extract the widths uses the fingertip landmark as a starting point. The idea is to extract approximately the same length of finger regardless of the user, and to sample the widths of that length at specified intervals. Although some users will have wider fingers than others, the length of a user's fingers will also play a part in the width measured. Variations in width due to where the finger joints are positioned will produce different results, depending on how long the individual's fingers are. Only the uppermost part of longer-fingered individuals' fingers will be extracted, whereas the whole finger may be sampled for shorter-fingered users. Figure 4.29 (below) shows the dark shaded finger region used to sample the widths from. The two minimum points reached for each finger, i.e. the most distant shaded sections before and after the fingertip landmark, are determined by the constant value specified for the particular finger. These values are different for each finger, as generally the little finger is the smallest, the middle finger the largest, and the ring and index fingers approximately the same length. The regions extracted in Figure 4.29 seem fairly small compared to the length of the fingers, but the hand shown has long fingers in comparison with others.

Fixed values for α, β and λ must be chosen that are suitable for all the users enrolled in the database. The lengths shown may look small, but this is necessary for the criteria to be met. The next major decision is how many samples to take from the highlighted regions. Obviously the more samples taken, the closer the approximation to the actual finger shape, and the more information obtained. Storing more measurements, however, not only takes up more disk space; the amount of time required to extract and match them also increases. The system in [24] makes four measurements for the index finger, ring finger and little finger, and five for the middle finger. Other measurements are also taken, made possible by the fixed location of the hand. As these other biometrics are not possible with this system, more measurements per finger are implemented here, and how varying the number of measurements taken affects the results is discussed in section 5.4. Although the system is tested with different sampling amounts, the final implementation uses six width measurements for each finger.

Firstly, two points on the border are computed, and their locations in the border array and co-ordinates stored. These are the furthest-away points that make α, β and λ (Figure 4.29) equal to the specified values. The function takes as a parameter the position in the border array where the fingertip landmark is located. Using this as a starting point, the border is searched forwards until a point is found ('f' in Figure 4.30, right) whose distance from the start is within ±1 of α, β or λ (depending on the finger). Once the co-ordinates are obtained and the forward_border location is calculated, the algorithm then searches backwards from the fingertip starting point and calculates the point on the border before the fingertip that again makes α, β or λ within ±1 ('b' in Figure 4.30). The co-ordinates of both of these points can be calculated from their locations in the border array, as each element in the array is a coord object.

Figure 4.29 The regions to sample the finger widths from are highlighted by the darker border. Constants α, β and λ are used to extract these regions.

Figure 4.30 The border array is first incremented, then decremented, until the distance from the start point equals α.

Once the positions in the border array before ('b') and after ('f') the fingertip are known, the next stage is to sample the distance between these points at specified intervals. The number of measurements to make for the finger is passed to the function. The difference in the border array between the start position and f, and between the start position and b, is divided by the number of measurements to give equal increments (f*) or decrements (b*), depending on whether the point is forward or backward of the fingertip (start):

f* = (f - start position) / number of measurements
b* = (start position - b) / number of measurements

Figure 4.31 The widths are measured at equal intervals forwards/backwards from the start point.

The point distances between the consecutive increment-decrement pairs (starting from the fingertip) are the measurements made (Figure 4.31 above). The function is called for all four fingers, and for each finger the six measurements made are stored in the UserMeasurements object which is passed to the function. This object type holds the biometrics required for the matching stage, discussed below.
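A minimal Python sketch of the whole width-sampling procedure described above (illustrative only; the actual implementation is reproduced in Appendix E, Figure E.3):

```python
import math

def dist(p, q):
    """Straight-line distance between two (x, y) coord pairs."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def measure_widths(border, tip_index, target, num_measurements=6):
    """Sample finger widths as described in the text.

    border:      the traced outline as a list of (x, y) coord pairs.
    tip_index:   the fingertip landmark's position in the border array.
    target:      the finger's constant (α, β or λ).
    First the border points f and b, whose distance from the fingertip
    is within ±1 of the target, are located; then the point distance
    between matching increment/decrement pairs is measured.
    """
    tip = border[tip_index]

    def search(step):
        i = tip_index
        while 0 <= i + step < len(border):
            i += step
            if abs(dist(border[i], tip) - target) <= 1:
                return i
        raise ValueError("target distance never reached on the border")

    f = search(+1)  # the forward_border location
    b = search(-1)  # the backward location

    f_step = (f - tip_index) / num_measurements  # f* in Figure 4.31
    b_step = (tip_index - b) / num_measurements  # b* in Figure 4.31

    widths = []
    for k in range(1, num_measurements + 1):
        forward = border[tip_index + round(k * f_step)]
        backward = border[tip_index - round(k * b_step)]
        widths.append(dist(forward, backward))
    return widths
```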

There is an alternative way of locating f and b so as to ensure (as far as possible) that exactly the same length of finger is extracted regardless of finger thickness. This involves increasing the distance between the start position and f and b until the specified length (l in Figure 4.32, left) is reached. Using this technique, however, requires a lot more processing: the algorithm discussed above (shown in Appendix E, Figure E.3) has to repeat with increasing forward/backward-point distances until l is attained. For all the test images available, however, the difference between the length of finger extracted for the widest-fingered user and the narrowest-fingered user, using this method compared with the implemented one, is only 3 pixels. There is still likely to be a potential error of ±1 pixel with this alternative method, as the image is essentially made up of a grid of pixels.

Figure 4.32 An alternative method of calculating f and b that ensures the length of finger extracted is consistent regardless of finger width.

The values used for α, β and λ are large enough (180, 230 and 260 respectively) that differences in w (Figure 4.32) due to finger thickness have an insignificant effect on the actual length of finger extracted. Implementing this alternative technique was therefore not considered necessary.

4.4. Matching

Once the biometrics have been calculated and stored in a UserMeasurements object, the next task is to either authenticate or identify the user. This involves a comparison of the measurements made from a live scan with those stored in the database of enrolled users. The closeness of the match determines the system behaviour - whether to grant or deny access.

Of the systems discussed in the background research, various methods are used for this process. For example, in [30] a normalised Hamming distance is implemented, and in [31] a simple correlation score is computed. [24] discusses and compares several techniques. More information is provided in the Background Research (Chapter 2). The method chosen here is based on Euclidean distance. Each measurement of the live image is compared with the enrolled image(s). Depending on whether verification or identification is used, and how many enrolment scans are stored for each user, the matching process differs (discussed below). The formula to calculate the distance is as follows:

d = \sqrt{\sum_{i=1}^{N} (p_i - g_i)^2}    (8)

The live image to test against the database is labelled a probe, and those stored in the database are gallery images. For each measurement (i) the difference is calculated between the probe and the gallery biometrics; N is the total number of features to compare. For both verification and identification a decision must be made based on the distance calculated. If the smallest distance produced is above a certain (specified) threshold, then access should be denied. This threshold is varied in the testing stage, and its effect on the results is shown in Chapter 5.

4.4.1. Verification

Where the system is set up to verify, the user must supply some sort of claim to the program. This could be the supplying of a username, or the swiping of an ID card, for example. Either way, the comparison is only made between the probe and the gallery image(s) of the claimed user. If more than one enrolment image is stored, the lowest distance produced (d) is taken as the match score. The smaller the score, the closer the match.

4.4.2. Identification

To identify, no claim is made to the system as to who the user is. The system must therefore compare the probe image to all of the gallery images and then decide on the closest match. The number of enrolment images stored for each user determines how many potential correct matches there should be, but the lowest distance is used as the match score. Identification therefore involves a lot more processing than verification and, depending on the number of users enrolled on the system, can have a detrimental effect on performance. This is one of the reasons why only a limited set of biometrics is stored. Comparing hundreds of features for each probe-gallery pair, for example, might provide more accurate results, but at the expense of an unrealistic processing time. The effect of varying the number, and locations, of the biometrics is illustrated in the testing phase, discussed in Chapter 5. System performance and a comparison of the verification and identification authentication modes are also discussed there.
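A minimal Python sketch of equation (8) and the two matching modes (a simplified view, not the prototype's actual code; the gallery layout shown is a hypothetical one):

```python
import math

def euclidean_distance(probe, gallery_template):
    """Equation (8): distance between a probe's N measurements and one
    gallery template (both plain sequences of numbers)."""
    return math.sqrt(sum((p - g) ** 2 for p, g in zip(probe, gallery_template)))

def verify(probe, claimed_templates, threshold):
    """Verification: compare the probe only against the enrolment
    image(s) of the claimed user. The lowest distance is the match
    score; access is granted if it falls below the threshold."""
    score = min(euclidean_distance(probe, t) for t in claimed_templates)
    return score <= threshold

def identify(probe, gallery, threshold):
    """Identification: compare the probe against every enrolled
    template (gallery maps each username to a list of templates) and
    return the closest-matching user, or None if even the best score
    is above the threshold."""
    best_user, best_score = None, float("inf")
    for user, templates in gallery.items():
        for t in templates:
            d = euclidean_distance(probe, t)
            if d < best_score:
                best_user, best_score = user, d
    return best_user if best_score <= threshold else None
```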

Chapter 5 EVALUATION

5.1. Data Collection

In order to evaluate the system thoroughly, test data is required. Using the same hardware as described in Appendix B, 179 images were acquired over a period of six weeks. The participants involved in the data collection ranged from nineteen to fifty-five years old, with a mean age of twenty-eight. The 179 images came from twenty-two individuals, 68% of whom were male. Between five and ten images were acquired from each participant, with three of these chosen for enrolment purposes. The average number of images acquired from each user was seven. For the majority of participants, all of their scans were performed in one session. However, for the purposes of testing robustness, extra scans were made of some of the participants, with up to a six-week gap between those and their original scans. The effect of time lapse on results is discussed later in this chapter. All images acquired are stored in .jpeg format, RGB colour, and of size 850 × 960 (for reasons discussed in Chapters 3 and 4).

5.2. Feedback Assessment Criteria

Some way of measuring system performance is essential to draw conclusions on how robust each technique described in Chapter 4 is, and therefore to decide how to configure the system for optimum performance. Common criteria for measuring authentication systems are based on four possible system outcomes, and therefore there are four rates to measure:

Genuine Acceptance Rate (true positive):
GAR = (genuine accepted / genuine probes) × 100

Genuine Rejection Rate (true negative):
GRR = (impostors rejected / impostor probes) × 100

False Acceptance Rate (false positive):
FAR = (false acceptance total* / (genuine probes + impostor probes)) × 100

False Rejection Rate (false negative):
FRR = (genuine rejected / genuine probes) × 100

* false acceptance total = accepted with incorrect name + impostors accepted

These rates are provided for all the biometric systems discussed in the background research (Chapter 2). Ideally the GAR and GRR should be as close to 100%, and the FRR and FAR as close to 0%, as possible. The rates will differ between identification and verification modes, and also with the number of enrolment images stored. To test the FAR and GRR, images must be used that do not correspond to anyone enrolled on the system. For this purpose, three individuals have been isolated from the rest of the test data and their hand images are used as impostors, to see how the system reacts. There are therefore 82 probe images (70 genuine and 12 impostor), and up to 57 gallery images, depending on the number of enrolment images required in each particular experiment. Testing of the various implementations discussed in Chapter 4 now follows, with results from using finger length, width, and both finger length and width combined. The results for each experiment are broken down into identification and verification. The same scans are used as gallery and probe images throughout, to ensure a fair testing environment. For each experiment there are several variables that will change the outcome of the results. These are altered to see how the system performs:

- The threshold acceptance level: the maximum score accepted as a match.
- The number of enrolment images stored in the database for each user: either 1, 2 or 3.
- Whether to take the lowest overall score of all the enrolled images, or the lowest average score. When using the lowest overall score, the smallest distance is returned from {number of users × no. of enrolment images}. When using the lowest average score, the smallest distance is returned from {number of users}.
- Authentication mode. In identification mode, the system compares the genuine probe against all those enrolled and can grant access (GA), deny access (FR) or grant access under a different username (FA). If the probe is of an impostor there are two possibilities: if access is granted it is a false acceptance (FA); if denied it is a genuine rejection (GR). In verification mode, the probe image is compared only to the enrolled images for the claimed user. If there is a match below the acceptance threshold then access is granted (GA); otherwise access is denied (FR).

In identification mode an extra rate is computed in addition to the GAR, GRR, FAR and FRR. This is the FMBT rate, or false match below threshold rate. Each time a genuine probe image is tested against the database, the system can return one of three possibilities: GA, FR or FA. This decision is made based on the lowest (overall or average) match score of the probe against all those enrolled; all the other scores are discarded, even if they are below the threshold. If these other scores correspond to a different user than the genuine probe image, there is a risk that on subsequent tests the lowest score returned could be of another user. The system could therefore mistake the genuine user for someone else, potentially providing extra privileges upon login. This rate illustrates the discriminating power of each technique in distinguishing users from each other, and should be as low as possible to ensure robustness of the system.
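As a sketch of the two scoring rules in the list above (reusing the euclidean_distance helper from the section 4.4 sketch; a hypothetical layout, not the prototype's code):

```python
def lowest_overall_score(probe, user_templates):
    """Overall-score rule: the single smallest probe-gallery distance
    across all of a user's enrolment images."""
    return min(euclidean_distance(probe, t) for t in user_templates)

def lowest_average_score(probe, user_templates):
    """Average-score rule: the mean distance over a user's enrolment
    images, so each user contributes exactly one score."""
    scores = [euclidean_distance(probe, t) for t in user_templates]
    return sum(scores) / len(scores)
```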

5.3. Finger Length Experiments

Chapter 4 describes a possible way of extracting the finger lengths for all four fingers. However, the technique described for measuring the innermost two fingers differs from that for the two outer fingers (the little finger and the first/index finger). To calculate the outer finger lengths, two extra landmarks must be derived, and the reliability of the method depends on this process.

5.3.1. Comparison of Inner-Finger Lengths to Outer-Finger Lengths

The two techniques discussed in Section 4.3.3.1. are compared below to see whether the measurements produced for the outer fingers are stable between images of the same user. If the results are consistent then further experiments will be carried out using all four fingers; otherwise only the innermost finger lengths will be used.

Figure 5.1 A graph illustrating the consistency of the measurements of the four fingers; the ring finger and middle finger are the most consistent.

The graph in Figure 5.1 above compares the lengths computed for each finger from eleven images (labelled 'a' to 'k') of the same user's hand. Between scans the hand was re-positioned to a different orientation, and the finger separation altered. The average length of each finger was computed and then the deviation from this average (shown as the 'normal') plotted for each scan. Obviously, the closer the line plotted for each finger is to the normal of zero mean deviation, the more consistent the measurements of that particular finger are. The graph illustrates that the inner two fingers, the ring finger and the middle finger, produce the most stable results of the four, with the lengths deviating only ±3 from the normal. The lengths produced for the outer two fingers are much less stable. Outer-finger biometric calculation is subsequently abandoned at this stage. Results from using only the two innermost finger lengths as the sole basis of distinction between users follow.

5.3.2. Two Innermost Finger Length Results

The results below show how increasing the threshold acceptance level affects the performance of both identification and verification. Lowest overall score and lowest average score results are shown graphically, and a table of system performance at an acceptance threshold of 120 follows in Table 5.1.

Identification

Figures 5.2 & 5.3 Overall and average score results, increasing the threshold and number of enrolment images used.

By increasing the threshold, the false rejection rate decreases. However, the false acceptance rate increases, either identifying a genuine image as belonging to a different user, or granting access to a user who is not enrolled on the system. Using the lowest average score, results improve for the FAR, halving from 14.6% to 7.3% when using three enrolment images. This is at the expense of denying more genuine images at thresholds of less than 70, but that is far better than granting unauthorised access. The FAR is smaller in both graphs when more enrolment images are used, with three enrolment images providing optimum results here.

Verification

As a claim must be made as to whom the probe image belongs to, there are only two possible outcomes from this type of test: either a genuine accept, or a false reject. The graphs in Figures 5.4 and 5.5 below show how increasing the threshold, and the number of enrolment images, affects these results:

Figures 5.4 & 5.5 Overall and average verification results: the effect of increasing the threshold and number of enrolment images used on GAR & FRR.

The overall score rule achieves better results at lower thresholds than the average score. Using three enrolment images, the overall score tests return a GAR of 100% at a very low threshold of 20. The same number of enrolment images requires a threshold acceptance level of 60 using the average score test. Performance is poorest using only one enrolment image.

Performance

A full table of results is provided in Appendix D. The largest threshold tested is 120, and Table 5.1 below shows how identification and verification compare at this level. Values shown are rates, so they are relative to the number of genuine and impostor test images used.

Enrol.  IDENTIFICATION                                                 VERIFICATION
Images  GAR          GRR          FAR          FRR        FMBT*        GAR            FRR
1       84.3 / -     58.3 / -     19.5 / -     0.0 / -    17.8 / -     100.0 / -      0.0 / -
2       91.4 / 92.9  41.7 / 66.7  15.9 / 11.0  0.0 / 0.0  17.9 / 17.3  100.0 / 100.0  0.0 / 0.0
3       92.9 / 97.1  41.7 / 66.7  14.6 / 7.3   0.0 / 0.0  17.9 / 17.3  100.0 / 100.0  0.0 / 0.0

Table 5.1 Comparison of results at a threshold acceptance level of 120, varying the no. of enrolment images and scoring type (values shown as overall / average score).

As discussed in Section 5.2. above, the GAR and GRR should both be as close to 100% as possible, and the FAR and FRR as close to 0% as possible. In identification mode, the best performance is achieved using the lowest average score method, with three enrolment images providing optimum results: a GAR of 97.1%, GRR of 66.7%, FAR of 7.3% and FRR of 0%. The FAR is fairly low, but not very impressive. As the prototype system only contains 19 enrolled users, this rate is likely to scale dramatically with a larger database, producing unsatisfactory results. The FRR of 0% makes it look as though the system is performing well, but instead of a genuine probe being rejected for being above the threshold, it is being accepted as belonging to a different user. This is a problem, and illustrates how unreliable this method is on its own.

Verification mode performs well at this acceptance level; however, this is expected at such a high threshold. As the probe image is only compared with the 1, 2 or 3 enrolled images of the claimed user, the GAR is based on the closeness of this match. If the threshold is too high, all probes will be accepted regardless of whom they belong to, as the control of distance between the probe-gallery pairs will be too loose.

An indication of how well the technique separates users is useful, and as mentioned earlier this is calculated as the FMBT rate. This is only applicable in identification mode, and is fairly high at close to 18%, regardless of the number of enrolment images. Ideally this rate should be as near to 0% as possible to ensure the robustness of the system. Finger length as the sole biometric is therefore not likely to be stable. To reduce this high FMBT rate, other features of hand geometry should be included as well as, or instead of, the length of the two innermost fingers.

5.4. Finger Width Results

The experiments that follow show how the system performs using only finger widths as the distinguishing features between users. Firstly, however, as described in Section 4.3.3.2., the number of samples to be extracted for each finger must be specified. The graphs in Appendix D show how varying this number affects the separation of users in vector space.

The main difference noticed between the graphs is how the lines for each image drop for the measurements made at the start of each finger. This shows that the widths near the fingertips are too similar between users, and therefore do not help in distinguishing users from each other. Other than these troughs, the general shapes of the lines are similar between the graphs. Extracting more features will make the template a closer match to the actual shape of the fingers, but this is at the expense of extra storage required for each user. More measurements mean more comparisons are required in the matching stage. For a system with few enrolled users the effect on performance is likely to be negligible; however, upon scaling, with perhaps hundreds or thousands of user templates stored in the database, identification will undoubtedly take much longer with more features to compare. As described in section 4.3.3.2., and for this reason, six samples per finger are used in the following tests. The graph representing six samples is shown in Figure 5.6 below; graphs showing different sampling rates are detailed in Appendix D.

Figure 5.6 [Compared to the graphs shown in Appendix D, Figures D.1-D.3.] Using six samples per finger improves the [separation] situation, however. There is clearer separation of user 'a' from 'd' in the middle and first finger measurements. 'c' is still fairly close to 'a', but there are more features further away, which it is hoped will impact the score during the matching stage significantly enough to distinguish between the two users. Notice that the first feature measured for each finger seems to be converging to a similar value.

Each line on the graph represents a complete set of finger widths from one image, extracted from all four fingers. Lines coloured the same correspond to different images of the same user. The x-axis represents the particular feature measured (i.e. little finger width measurement one ('lf_1'), little finger width measurement two ('lf_2'), etc.). Ideally, the lines of different users should be as far away from each other as possible, with the distance between them significant at enough features to clearly differentiate individuals from each other. Using the same probe images as the previous experiments, the results of the system using finger widths as the only biometrics are shown below. Following the graphs, a brief discussion of how widths perform compared to finger length is provided.

Identification

Figures 5.7 & 5.8 Overall and average identification results: the effect of increasing the threshold and no. of enrolment images used on FAR & FRR.

The false acceptance rate is fairly consistent irrespective of the number of enrolment images or score mode. In overall score mode, the FRR is significantly higher (the poorest performance noticed) using only one enrolment image, compared to two or three. The FRR using the average score rule produces very similar results to the overall score rule when using only one enrolment image. In comparison to the results obtained for finger length only, the false acceptance rates are similar above a threshold acceptance level of approximately 30. However, the false rejection rates are much higher and, regardless of the number of enrolment images or the threshold level, never fall to 0%. Unlike the results for finger length, the overall score rule achieves better performance than the average score. In general, however, performance is better using finger length, three enrolment images and the lowest average score for matching.

Verification

Again, by contrast with the finger length results, verification performance is poorer. The GAR curves in the length experiments reach 100% by a low threshold of approximately 30 for two or three enrolment images, unlike the steady increase shown in the graphs below, which never reach 100%. Following the trend so far, three enrolment images produce the best results and, as in the length experiments, using the overall score provides better performance for verification.

Figures 5.9 & 5.10 Overall and average verification results: the effect of increasing the threshold and no. of enrolment images used on GAR & FRR.

Performance

As with the finger length experiments, a full table of results is provided in Appendix D. Table 5.2 below shows how this technique performs at the same threshold as the results provided in Table 5.1:

Enrol.  IDENTIFICATION                                               VERIFICATION
Images  GAR          GRR          FAR          FRR        FMBT*      GAR          FRR
1       84.3 / -     58.3 / -     14.6 / -     5.7 / -    4.6 / -    94.3 / -     5.7 / -
2       91.4 / 92.9  50.0 / 58.3  12.2 / 9.8   2.9 / 2.9  4.6 / 4.2  97.1 / 97.1  2.9 / 2.9
3       90.0 / 91.4  50.0 / 58.3  13.4 / 11.0  2.9 / 2.9  4.3 / 3.7  97.1 / 97.1  2.9 / 2.9

Table 5.2 Comparison of results at a threshold acceptance level of 120, varying the no. of enrolment images and scoring type (values shown as overall / average score).

It should be clear from the graphs that, although this technique does not appear to perform as well at lower levels, above a threshold of 100 the results are comparable to the length-only results. Strangely, two enrolment images seem to produce a better overall genuine acceptance rate (in identification mode) than three images. Like the length results, the average score rule performs better in identification mode, with optimum performance of 92.9% GAR, 58.3% GRR, 9.8% FAR and 2.9% FRR. Notice that the FMBT rate is significantly lower using this technique than with the finger length method (which was around 18%). This is a good indication that, although the FAR is still high, the separation of users' measurements in feature space is greater, so the technique is likely to be more robust against false acceptances with further probe scans. Verification rates, although not as successful as those achieved using the length method, are still admirable. With an increased threshold the verification scores will undoubtedly improve, at the expense of being less strict, however, and at an increased risk of granting access to an unauthorised user.

5.5. Finger Length and Width Combined Results

The next logical step is to see if combining the lengths of the two innermost fingers and the widths of all four fingers can improve results. After all, the more biometrics used, the more unique the template produced is likely to be. Some people may have short but broad fingers, whilst others have long but narrow ones, or vice versa. The higher discriminative power of the widths may add enough differentiation between users to considerably reduce the FAR and FMBT rates of finger length on its own. Again using the same probe images as the previous tests, the results below show how combining the two innermost finger lengths and the widths of all four fingers performs. Identification and verification graphs are provided in Figures 5.11-5.14, and a performance table follows in Table 5.3.
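In feature terms, combining the two methods amounts to concatenating the measurements into one longer vector before matching (a hypothetical sketch, not the prototype's code; with two lengths and six widths per finger this gives 26 features per hand):

```python
def combined_template(inner_lengths, finger_widths):
    """Concatenate the two innermost finger lengths with the six width
    samples from each of the four fingers: 2 + 4 * 6 = 26 features.

    inner_lengths: [ring_length, middle_length]
    finger_widths: four lists of six width samples each
    """
    return list(inner_lengths) + [w for finger in finger_widths for w in finger]
```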

Identification

Figures 5.11 & 5.12 Overall and average identification results: the effect of increasing the threshold and no. of enrolment images used on FAR & FRR.

In comparison with the results obtained for the other experiments, this approach looks very promising in significantly distinguishing users from each other. The false acceptance rates are zero across all tests and all thresholds. This is a great improvement over using length or width in isolation. Although the false rejection rates are higher than for the other techniques, it is more important to have a low false acceptance rate. At higher thresholds the FRR is close to zero and, as in the other experiments, three enrolment images produce better results. As with the width-only results, overall score mode produces the best performance, which only really shows in the improved genuine acceptance rate and reduced false rejection rate. The false acceptance rate and genuine rejection rate are at their ideal values of 0% and 100% respectively, regardless of the threshold used or the number of enrolment images.

Verification

Although the FRR does not reach 0%, increasing the acceptance threshold further would result in the FRR becoming 0% and consequently the GAR 100%. Above a threshold of 100, though, the GAR is 97.1% for two or three enrolment images, whether using the overall or average score set-up, which is still very impressive.

Figures 5.13 & 5.14 Overall and average verification results: the effect of increasing the threshold and no. of enrolment images used on GAR & FRR.

Performance

The results of combining the length and width measurements are shown in Table 5.3 below (further results in Appendix D). Notice that two or three enrolment images produce the best results, and both produce identical genuine acceptance rates and false rejection rates for identification and verification.

Enrol.  IDENTIFICATION                                              VERIFICATION
Images  GAR          GRR            FAR        FRR        FMBT*     GAR          FRR
1       85.7 / -     100.0 / -      0.0 / -    14.3 / -   0.4 / -   85.7 / -     14.3 / -
2       97.1 / 95.7  100.0 / 100.0  0.0 / 0.0  2.9 / 4.3  0.4 / 0.4 97.1 / 95.7  2.9 / 4.3
3       97.1 / 95.7  100.0 / 100.0  0.0 / 0.0  2.9 / 4.3  0.4 / 0.4 97.1 / 95.7  2.9 / 4.3

Table 5.3 Comparison of results at a threshold acceptance level of 120, varying the no. of enrolment images and scoring type (values shown as overall / average score).

Although there is no clear difference between the results for two and three enrolment images, only 82 test images are used against 19 enrolled users. Looking at the results from the previous experiments, however, three enrolment images produce optimum results, so this should be the chosen setting. Upon scaling, testing with a larger database and more probe images may further quantify this decision. Notice that the FMBT rate is extremely low, at only 0.4%. This provides clear evidence that combining length and width separates users substantially from each other. As this rate is so low, the risk of the system producing a false acceptance is greatly reduced. Even at the highest tested threshold of 120, only 16 matches out of a possible 4,674 comparisons were of a different user than the probe image; compared with 677 when using length only, or 163 with width only, this is a substantial improvement.

5.6. Summary of Results

Although the GAR using length only seems quite high at 92.9%, unfortunately the FAR and FMBT rates are also high. Using the overall score rule the FAR is as high as 14.6%. Coupled with this, the GRR is too low, and therefore these results are unacceptable. Finger length in isolation is not very distinguishing as the basis of a biometric representation; consequently, the verification results are the highest using this approach. This is because the maximum difference between the shortest-fingered registered user and the longest-fingered is approximately 80, when using only two fingers. Width-only produces slightly inferior (but still admirable) verification results, but with the benefit of more robust identification performance. In identification mode, although the GAR is slightly lower than length-only, the GRR performs better using the overall score rule; and, perhaps more importantly, the FAR is lower. The FMBT is substantially lower, demonstrating the potential of widths as more distinguishing features. This is quite predictable, though: the more features extracted, the more unique the representation is likely to be. Using only two finger lengths to determine the authenticity of a user is unlikely to remain reliable as the number of enrolled users scales up.

By combining the lengths and widths, identification performance improves considerably. With a GAR of 97.1%, GRR of 100%, FAR of 0% and FRR of 2.9%, the robustness of the technique looks very promising; this is supported by an extremely low FMBT rate of 0.4%. Table 5.4 below shows the results of the three approaches discussed above in a single table for clearer comparison. This summary table only provides details of the performance with three enrolment images, as across all the experiments the results produced were best with this setting.

                  IDENTIFICATION                                                VERIFICATION
                  GAR          GRR            FAR          FRR        FMBT*        GAR            FRR
Length Only       92.9 / 97.1  41.7 / 66.7    14.6 / 7.3   0.0 / 0.0  17.9 / 17.3  100.0 / 100.0  0.0 / 0.0
Width Only        90.0 / 91.4  50.0 / 58.3    13.4 / 11.0  2.9 / 2.9  4.3 / 3.7    97.1 / 97.1    2.9 / 2.9
Length and Width  97.1 / 95.7  100.0 / 100.0  0.0 / 0.0    2.9 / 4.3  0.4 / 0.4    97.1 / 95.7    2.9 / 4.3

Table 5.4 Overall summary table, showing results from all three experiments. Values shown correspond to a threshold acceptance level of 120 (overall / average score).

The three ROC curves (shown left) compare the genuine acceptance rate with the false acceptance rate for each technique. Three lines are plotted on each graph, representing the number of enrolment images used in each experiment. Looking at the first graph, the largest FAR is fairly high, at close to 20% for one enrolment image. Ideally this rate should be minimised, as close to 0% as possible. Using finger width (graph two) improves this rate to a maximum of approximately 15%, as demonstrated by the shift of the three curves to the left. By combining the lengths and widths (graph three), the FAR is reduced to zero across all thresholds, and is minimised as much as possible. As a result, increasing the acceptance threshold improves the GAR without any effect on the FAR. This is ideal, and system performance using both length and width combined provides the most robust results.

5.7. Robustness Testing

Now that length and width combined have been shown to provide the best results, the system is configured to this set-up. Although the results discussed above are very encouraging, the next stage is to test how robust the technique is to various usage conditions. These are explained below, and the resulting system behaviour is discussed afterwards.

5.7.1. Hand Orientation

Placement of the hand on the scanner plate is uncontrolled; however, it is assumed that the hand will be placed fairly straight up. Although the system calculates the measurements based on the entire shape of the hand, the effects of varying the positioning as much as possible are shown in Figures 5.15-5.17, and graphically in Figure 5.18. These images are of an enrolled user. The normal on the graph represents the values expected for each biometric feature for the particular individual, based on their three enrolled images. Each test image is shown as a line on the graph.

Figure 5.15 Test 1 - Match score = 93.52
Figure 5.16 Test 2 - Match score = 17.27
Figure 5.17 Test 3 - Match score = 93.55

Figure 5.18 How the test image measurements differ from the average of three enrolled images (the 'normal'). Notice how test 2 is closer to the normal compared to 1 and 3.

The more the lines deviate from the normal, the greater the effect of hand orientation on results. Test image 2 (Figure 5.16) is expected to be the most common orientation of the hand, and is the most natural way of placing the hand in the scanner. Test images 1 and 3 required the hand to be twisted as far as it could go clockwise and anti-clockwise; these two images therefore represent the extremes of values expected from varying hand orientation. Although test image 2 produces the lowest score, using a minimum threshold of 100 all three images would be accepted by the system.

5.7.2. Hand Pressure

As mentioned earlier (Chapter 3, section 3.5.), the amount of pressure applied to the scanner plate affects the quality of the captured image. The harder the hand is pressed, the whiter the image produced. Therefore, if the fingers (in particular) are not placed as flat as possible, they could appear thinner after thresholding due to this darkening effect at the finger edges. Like the orientation tests above, Figure 5.22 compares three test images against the average of three enrolled images. Figures 5.19 and 5.21 are examples of little/no pressure, and of pressing as hard as possible against the scanner plate without breaking the glass. Test 2 is again an example of an ordinary scan, with the pressure applied as naturally as possible.

Figure 5.19 Test 1 - Match score = 2411.12
Figure 5.20 Test 2 - Match score = 29.52
Figure 5.21 Test 3 - Match score = 48.27

The results are good for the normal pressure, and still good for pressing as hard as possible. Problems occur, however, with too little pressure applied by the user. Looking at test image 1 (Figure 5.19), the hand is very dark, particularly near the bottom of the fingers and the palm. When the threshold operation is applied, it can remove portions of the hand like this, since they appear too dark. If this is the case, feature extraction will generally fail, as an incorrect number of landmarks will be detected. However, even if feature extraction succeeds (as it does with test 1), the result is unlikely to be satisfactory. The score produced for Figure 5.19 (over 2,400) is far beyond any sensible threshold and will consequently produce a false rejection.

Figure 5.22 How the test image feature measurements differ from the average of three enrolled images. Notice how test 2 is closer to the normal compared to 1 and 3.

Looking at the border produced for this test image makes it clear why the score is so high (Figure 5.23 below). The middle finger is captured darkest of the four, and after thresholding a lot of the finger shape is lost. As a result, the width measurements for this finger are very different from the ordinary scan (Figures 5.20/5.24), and this shows distinctly as the large trough in the test 1 line in Figure 5.22 above. As the fingertips and the little finger are produced sufficiently well in test 1, the line produced is quite near to the normal for these measurements. It is only near the bottom of the fingers that the measurements deviate substantially, due to the inadequate pressure of the hand against the plate during scanning.

Figure 5.23 Border produced from Figure 5.19

Figure 5.24 Border produced from Figure 5.20

5.7.3. Finger Separation

The third major factor that could influence results is the separation of the fingers whilst scanning. As the system calculates the biometrics based on the landmarks, the fingers must be sufficiently far from each other that the border can be traced round the whole shape of the hand successfully. If, after thresholding, the fingers appear to join up, then the inter-finger landmarks will be plotted in the wrong location (see Figures 5.25-5.26).

Figure 5.25 The two innermost fingers (ring and middle) are too close together in this scanned image.

Figure 5.26 Therefore landmark 3 is plotted in the wrong location, and the finger widths are incorrectly calculated.

This aside, assuming there is a sufficient gap between the fingers, how does the size of this gap affect the results? Test images 1 to 3 (shown in Figures 5.27-5.29) show various finger poses, ranging from very close together (but not so close as to cause an error) to as far stretched as possible. Test image 2 is again an ordinary scan, with the fingers separated as naturally as possible. The results show that the scores produced for all three tests are still acceptable regardless of finger separation, with a minimum acceptance threshold of 80 required in order to accept these images.

Figure 5.27 Test 1 - Match score = 75.34
Figure 5.28 Test 2 - Match score = 14.61
Figure 5.29 Test 3 - Match score = 70.55

Figure 5.30 How the test image feature measurements differ from the average of three enrolled images. Notice how test 2 is closer to the normal compared to 1 and 3.

5.7.4. Impact of Finger Nail Length

Finger length and width calculation relies on the fingertip landmark being stable across multiple scans of a user. However, fingernail length could cause inconsistency in this location over time. This is more likely to impact female users of the system, who may have longer nails on occasion. To see how nail length impacts the matching score of genuine users, two of the individuals enrolled on the system grew their nails longer than they were when the enrolment images were captured.

The tests carried out show how extreme nail length impacts the performance of the system for enrolled users. After thresholding, the fingers appear to be longer than they actually are, due to the extra-long nails. Consequently the landmarks are plotted in the wrong location, and the lengths and widths measured are not close enough to the enrolled images for the system to accept the probe images. The only way to overcome this problem, using the measurement method implemented here, is to renew the enrolment images for the user if they intend to have long nails on a long-term basis. Appendix D details the results.

5.7.5. Time Lapse Experiments

As discussed in the background research, the prototype developed by [29] reported better results for test images captured on the same day as opposed to those captured on different days. To see how time lapse affects the performance of the system, extra images from some enrolled users were acquired several weeks after the original enrolment images were captured, and tested. Appendix D, again, details the results. The results suggest that same-day images do indeed perform better. Although the system performed reasonably well with the more recently scanned images, those captured on the same day produced much better results: the GAR dropped to 84% and the FAR increased to 4% using the recent images. This could be because a different amount of pressure was applied by the test participants, compared to their original scans. Further investigation is required to better understand the impact of time lapse.

5.7.6. Spoofing Identity

The tests above show how robust the prototype is to conventional usage. However, unauthorised users may try to misuse the system in an attempt to gain access. As the feature extraction technique relies on the silhouette of the hand, this section discusses spoofing identity by attempting to log in with a drawn outline, photocopy, or stencil of an authorised user's hand. Firstly, testing using a sheet of A4 paper containing only the outline of the hand was carried out. This test failed, however, due to the pre-processing operations: when the image is acquired, a median filter is applied after thresholding, and this causes the thin outline to be removed. An outline of the hand is therefore insufficient to break the security of the system. Next, a colour photocopy of the hand was tested. Due to the way in which the toner dried on the photocopies, they were unsuitable for use with the system, as the background segmentation stage failed. Instead, colour printouts were tested (see Appendix C). All scores were above the acceptance threshold, though not convincingly high enough to rule out the potential of an unauthorised user spoofing the system by this method. Following this, an attempt at using a cut-out (or stencil) of the hand shape was tested. From a photocopy of a genuine user's hand, the shape was cut out of a sheet of A4 and placed into the scanner (an example is shown in Appendix C, on page 65). The system accepted this image and granted access, with a match score of 63. This is obviously a problem: assuming an unauthorised user can acquire an outline of an authorised user's hand and produce a stencil like that shown in Appendix C, they could be granted access. Further discussion is provided in the Conclusion, section 6.3.

Chapter 6 CONCLUSION

6.1. General Conclusion

Looking back at the introduction, requirements were set for the system to meet. Using the minimum requirements and the possible extensions as a checklist, the first issue to address is whether a solution to the initial problem has been produced. The minimum requirements specified that a system be developed to perform biometric analysis of scanned images of the hand. The prototype produced investigated the possibilities of using finger lengths and finger widths as the basis of biometric authentication. It also incorporated the extension of scanner control from within the software, and a graphical user interface (screen dumps are provided in Appendix E, section E.3). This interface allows the user to scan their hand for identification purposes. A separate program is available for the administrator to enrol new users to the database. The administrator must log in to receive this privilege, by scanning his/her hand; if the system verifies that the captured image is of an authorised user, then access is granted.

Once the software prototype was implemented, the system was extensively tested, as shown in the previous chapter (Evaluation). Various measurement methods were investigated, and their results detailed and discussed. Several other prototype systems were discussed in the Background Research chapter. The best performance for this system was achieved using length and width measurements combined. Table 6.1 below provides a comparison between the published systems and the prototype developed in this report. Most of the published papers do not state all of the rates indicated in the table, so blanks have been printed where a rate is unknown.

                 IDENTIFICATION                       VERIFICATION
                 GAR      GRR       FAR     FRR       GAR       FRR
Reference [24]   97.00%   -         -       -         > 90.00%  < 10.00%
Reference [25]   -        -         0.57%   0.68%     -         -
Reference [26]   -        -         2.00%   1.50%     -         -
Reference [27]   -        -         0.48%   0.48%     99.04%    0.96%
Reference [29]   98.83%   -         0.02%   -         99.14%    0.86%
Reference [31]   -        -         -       -         96.80%*   3.20%
THIS SYSTEM      97.10%   100.00%   0.00%   2.90%     97.10%    2.90%

Table 6.1 A comparison of performance against the systems discussed in the background research chapter. *Reported results ranged from 94.20-99.40% in Reference [31].

The performance of this system is comparable to those discussed in the background research section, showing that the techniques developed in this report provide good discriminative power between users, and that finger lengths and widths combined are suitable as the basis of a biometric authentication system.

Part of the possible extensions listed in the introduction chapter specified that, where possible, ways of improving the system should be suggested; these follow.

6.2. Potential Improvements and Further Work

The first major decision made during the configuration of the system concerned the use of all four fingers for extracting measurements. However, in section 5.3.1. the inconsistencies in the lengths of the two outer fingers resulted in only the two innermost fingers receiving further analysis. Although the outer fingers were used later for width extraction, there is the potential for exploiting the lengths of these fingers to help further separate users in vector space. Another method for calculating the finger lengths could be investigated, to see if these extra two measurements add any additional distinguishing power to the system.

The next major influencing factor was the number of samples to take from each finger when calculating the finger widths. After investigation, the decision was made to use six samples per finger. Changing this value will undoubtedly impact the performance of the system, and a trade-off between the quantity of stored measurements, matching time and accuracy resulted in six being the chosen value. Further analysis may lead to a different value being selected, which may yield better results. It may make sense, for example, to extract more measurements from the middle finger than from the others, it being the largest finger.

Further investigation into the threshold acceptance level for the length-and-width-combined set-up may prove that a higher threshold improves the results without any negative impact on system performance. The threshold could also be altered according to the security level required, or perhaps learnt from analysis of all of the enrolled users, choosing a value that ensures false acceptances are minimal, ideally zero.

The results from all the experiments illustrated that three enrolment images provided the best system performance. Further investigation may prove that four, five or six could improve the situation further, enhancing results. Using more images may have no impact on performance, however, and could even have a negative effect: more enrolled images require more storage space and more processing in the matching stage. Deeper analysis would be required before any conclusions could be drawn as to whether more enrolment images would be beneficial.

Perhaps one of the most obvious further developments to explore is the use of the right hand. The system has been implemented to function based on the left hand; the expected order of the finger landmarks is the foundation on which the feature extraction is built. If the right hand is scanned instead, either the thumb or the first finger will be identified first, and the other fingers in reverse order to how the system is currently set up. This problem can be overcome either by configuring the border program to begin from the opposite side of the image and traverse the boundary clockwise instead of anti-clockwise, or by simply reversing the settings specified for the fingers so that they relate to the corresponding fingers on the right hand. Either way, incorporating the right hand into the user template would add extra dimensions to the vector space and undoubtedly improve identification performance. This is obviously at the expense of requiring extra enrolment scans, additional storage and more time to log in.

Alternatively, the right hand could be used in situations where there is no clear distinction between two users. Perhaps there are two individuals with close left-hand templates; requesting an extra scan of the right hand may then add enough separation to make it clearer which user it is. This could be implemented as an extra option to be used where necessary, rather than built into the standard login procedure.

An important issue surrounding the use of biometrics is how the details are stored. People are often sceptical about their personal details being kept on a central database, and public concerns over security can hinder the deployment of such systems. Once the biometrics have been calculated and a template produced, secure storage is essential.

As mentioned in section 4.2., after acquiring the image from the scanner, the median filter operation (part of the pre-processing stage) is the next major bottleneck of the system. As shown in Figure 3.2 of Chapter 3, it takes approximately 3 seconds to median filter an 850 × 960 pixel image; once pre-processed, the matching stage takes less than a second. Another way of removing the artefacts and smoothing the image is therefore desirable. Median filters sort the pixels in the neighbourhood by their grey level; however, the median filter implemented in this prototype works on a binary rather than a greyscale image. An alternative technique could involve a binary morphological operation. By applying a BinaryOpen operation [43], using a circular 7x7 array as the structuring element (the same size as the median filter used), the artefacts are removed in approximately 1.8 seconds, with similar results. This is almost half the time the median filter requires, but further investigation is necessary to see whether this operation is suitable and will provide consistent results for all of the test data.
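As a sketch of this alternative (using SciPy's binary_opening as a stand-in for the BinaryOpen operation of [43]; whether the two behave identically is an assumption):

```python
import numpy as np
from scipy import ndimage

def circular_structuring_element(size=7):
    """A circular structuring element within a size x size array,
    matching the 7x7 footprint of the median filter used."""
    r = (size - 1) / 2.0
    y, x = np.ogrid[:size, :size]
    return (x - r) ** 2 + (y - r) ** 2 <= r ** 2

def clean_silhouette(binary_image):
    """Binary opening (an erosion followed by a dilation): small white
    artefacts are removed while the hand silhouette is preserved."""
    return ndimage.binary_opening(binary_image,
                                  structure=circular_structuring_element())
```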
The main drawback of the system, as identified in the evaluation, is that the authentication of users is based on their hand silhouette. In section 5.7.6., when testing whether an authorised user's identity could be spoofed, scanning a 2D stencil of an authorised user's hand shape allowed access to be granted. This is obviously a problem for the system, but there are potential ways of solving it. One way is to capture a side elevation of the hand, like two of the systems discussed in the background research chapter [24, 25]. The platform where the hand is placed for these systems has a mirror attached, so when the camera above takes the photo of the top-down view of the hand, the side elevation (and therefore the heights of the fingers and wrist) is also acquired. Not only does this ensure an actual 3D hand is present in the capture device, but further biometrics can be extracted, such as the finger heights. There are still unresolved issues here, however. Although it may sound absurd, an unauthorised user could place a 3D mould of an authorised user's hand in the scanner and, assuming the shape is a close enough match, gain access. The argument here, though, is that access points should be supervised, and if any suspicious behaviour is detected, security personnel should carry out further investigation. Attaching a mirror to a flatbed scanner is unlikely to achieve suitable results, but this idea warrants further thought.

Another extension is to incorporate skin tonality into the user templates. All of the test images were acquired in colour to allow further work on the colour of the hand surface to be investigated. Issues with the hand pressure making the captured image appear whiter could cause problems, however.
Nevertheless, certain areas of the hand could produce more consistent results across images, meaning colour analysis could be incorporated into the system to further distinguish between users.

One stage further would involve some degree of texture analysis of the hand surface. As discussed in the background research, some biometric systems are based on palm patterns [28, 29, 30]. Another system uses the surface texture from three fingers to identify a user [31]. In the Methodology (Chapter 3) and Evaluation (Chapter 5), examples are provided that show how the pressure applied by the user affects the quality of the image captured. Generally, however, the pressure of the fingers seems fairly consistent across the test images acquired in the data collection stage. Most of the pressure is applied at the fingertips and around the palm of the hand, so fingerprint or palm print analysis would not be possible; an investigation into the analysis of finger textures could prove useful, however. If results close to (or better than) those of the prototype discussed in [31] can be obtained, this approach would be a very valuable extension to the system. Like the suggestion of using the right hand when the system cannot clearly decide which user an image belongs to, texture analysis could add an additional, optional level to the system. Where a user's biometrics are similar to those of one or more other users, the textures of the fingers could also be compared. This should make distinguishing between this short-list much easier, and with increased accuracy. It would also mean no performance hit for the majority of logins; only where necessary would the scanned image be analysed further before making an authentication decision.

6.2.1. Finger Surface Extraction

Realising the potential benefits of incorporating finger surface analysis into the prototype, some preliminary work to extract the textures from a scanned image has already been carried out. Based on the method for extracting the finger widths, this technique relies on first calculating the points on the hand border corresponding to a specified distance in front of, and behind, the fingertip landmarks ('f' and 'b' in Figure 6.1). Once these points are located, the texture is extracted from the original scanned image, scan-line by scan-line, until the fingertip is reached. How this is carried out depends on whether b or f has the smaller y value (measured from the top of the image). If b has the smaller y value (as in Figure 6.1), it is selected as x1 and the finger outline is traversed anti-clockwise until the point on the border is reached that corresponds to the same y value. Where this point does not lie in the darker region shown in Figure 6.1, the x value for f is used as x2. All the pixels between x1 and x2 are then extracted. Following this, the point b on the border is incremented to the next y value and the process is repeated until the fingertip is reached.

Figure 6.1 An example of how the texture is extracted scan-line by scan-line, between x1 and x2, calculated in relation to the finger border
If f has the smaller y value, the border is traversed clockwise until the corresponding point is found on the opposite finger edge. The extraction process is the same once x1 and x2 have been identified for the particular scan-line. Appendix E includes some program code snippets illustrating how this algorithm works.
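Since those snippets appear in the appendix only as screenshots, a much-simplified sketch of the scan-line loop is given here for convenience. The RowEdges lookup is a hypothetical stand-in for the border-walking logic described above, and the real implementation differs in detail:

```java
import java.util.ArrayList;
import java.util.List;

public class TextureExtractor {

    /** Hypothetical per-row lookup of the finger outline's two edges. */
    interface RowEdges {
        int left(int y);   // border crossing on one side of the finger
        int right(int y);  // corresponding point on the opposite edge
    }

    /**
     * Copies the greyscale pixels lying between the two finger edges,
     * row by row, from the start row up to the fingertip row (the
     * fingertip has the smaller y value, measured from the image top).
     */
    static List<int[]> extract(int[][] grey, int startY, int tipY,
                               RowEdges edges) {
        List<int[]> scanLines = new ArrayList<>();
        for (int y = startY; y > tipY; y--) {
            int x1 = edges.left(y);
            int x2 = edges.right(y);
            int[] line = new int[x2 - x1 + 1];
            for (int x = x1; x <= x2; x++) {
                line[x - x1] = grey[y][x];
            }
            scanLines.add(line);
        }
        return scanLines;
    }
}
```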
Figures 6.4-6.7 below show the fingers extracted from the scanned image shown in Figure 6.2.

Figure 6.2 A scanned image of a user's hand
Figure 6.3 After producing the outline and identifying the landmarks of the hand image shown in Figure 6.2
Figure 6.4 The little finger extracted (corresponds to the example shown in Figure 6.1)
Figure 6.5 The ring finger
Figure 6.6 The middle finger
Figure 6.7 The first/index finger

Once the texture has been extracted, the next stage is to rotate it so the finger is vertical, then align it and write it to a separate file. The textures could instead be stored in matrices within the user object rather than output to separate files, meaning no storage of the images would be required; this also offers higher security, as the matrices could be encrypted as part of the user template.
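The rotation step could be performed with Java's standard AffineTransform support. The sketch below assumes the finger's axis angle has already been estimated elsewhere (for instance from the fingertip and valley landmarks); the prototype's own approach may differ:

```java
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

public class FingerAligner {

    /**
     * Rotates an extracted finger image so its axis becomes vertical.
     * Rotating by the negative of the estimated axis angle brings the
     * axis parallel to the image's y axis. The output keeps the source
     * bounds, so production code would also compute the rotated bounds
     * to avoid clipping.
     */
    static BufferedImage rotateToVertical(BufferedImage finger,
                                          double axisAngleRadians) {
        AffineTransform tx = AffineTransform.getRotateInstance(
                -axisAngleRadians,
                finger.getWidth() / 2.0, finger.getHeight() / 2.0);
        AffineTransformOp op =
                new AffineTransformOp(tx, AffineTransformOp.TYPE_BILINEAR);
        return op.filter(finger, null);
    }
}
```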
Figures 6.8-6.11 show the same extracted fingers rotated and aligned. The widths of all the images are constant, and the heights are also fixed but differ depending on the particular finger. This is because the method described in Chapter 4 for calculating the finger widths extracted a smaller length from the little finger (being the smallest), a larger length for both the ring and first fingers, and the largest length from the middle finger.

Figure 6.8 Little finger; Figure 6.9 Ring finger; Figure 6.10 Middle finger; Figure 6.11 First finger

Once aligned, the next stage is to see whether the textures are suitable for comparison with those from other users. To illustrate how the patterns of the finger surface vary between individuals, Figures 6.12-6.17 below compare corresponding fingers from several scans of different users. The fingers are extracted from various different images (shown in Appendix F), all with varying hand placement, finger spread and hand orientation. Notice how the textures produced are almost identical for sets from different scans of the same user. This is very promising, and further validates the stability of the fingertip landmark locations.

Figure 6.12 Three different middle fingers extracted from user 'a'; notice how similar they are after alignment
Figure 6.13 Two different middle fingers extracted from user 'b'
Figure 6.14 Two different middle fingers extracted from user 'c'
Figure 6.15 Three different ring fingers extracted from user 'a'; notice how similar they are after alignment
Figure 6.16 Two different ring fingers extracted from user 'b'
Figure 6.17 Two different ring fingers extracted from user 'c'
Using the extracted textures, the final stage is to compare the images and come to some conclusion as to how closely they match. One such way is to compare corresponding pixels, one by one, between the two images and then total the number of identical matches. This is the approach taken in the system described in [31], where the following formula is used:

\mathrm{score}(p, g) = \frac{1}{N} \sum_{(i,j)\,\mathrm{valid}} I\big( p(i,j) = g(i,j) \big) \qquad (9)

p and g represent the probe and gallery images, N is the number of valid pixels in both images, and i and j are the co-ordinates of the current pixel. I is the indicator function, returning unity (1) if the pixels are identical and zero otherwise. The score produced is therefore the proportion of identical pixel matches between the two images. To compensate for potential alignment error, this formula is applied several times, with the probe image shifted by a pixel in each direction each time. The highest score produced is taken as the match score, and this is tested against an acceptance threshold to decide whether the images correspond to the same individual.

Another matching technique is to use normalised greyscale correlation. This method subtracts the mean value calculated for each image from each pixel, to reduce the effects of global lighting changes (although this should be irrelevant using a flatbed scanner with fixed settings). The correlation between the two images (probe and gallery) is then computed as follows:

\mathrm{NGC}(p, g) = \frac{\sum_i (p_i - \bar{p})(g_i - \bar{g})}{\sqrt{\sum_i (p_i - \bar{p})^2 \, \sum_i (g_i - \bar{g})^2}} \qquad (10)

where p_i represents the current pixel (i) in the probe image, g_i represents the corresponding pixel in the gallery image, and \bar{p} and \bar{g} are the mean grey-level values of the probe and gallery images respectively. For normalised greyscale correlation to produce reasonable results using the images shown in Figures 6.12-6.17, only the middle region of the images is suitable: using all of the pixels in the image would skew the results, as a high percentage of the pixels are black and this would affect the mean calculation.
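As a concrete illustration, the sketch below implements both measures. Treating zero-valued (background) pixels as invalid in the equation (9) score is an assumption made here for illustration, and the NGC is computed over whatever sub-images the caller supplies (for example, the middle region just mentioned):

```java
public class TextureMatcher {

    /** Equation (9) score, maximised over +/-1 pixel shifts of the probe. */
    static double pixelMatchScore(int[][] p, int[][] g) {
        double best = 0.0;
        for (int sy = -1; sy <= 1; sy++)
            for (int sx = -1; sx <= 1; sx++)
                best = Math.max(best, score(p, g, sx, sy));
        return best;
    }

    /** Fraction of valid pixel positions with identical grey levels. */
    private static double score(int[][] p, int[][] g, int sx, int sy) {
        int matches = 0, valid = 0;
        for (int i = 0; i < g.length; i++)
            for (int j = 0; j < g[0].length; j++) {
                int pi = i + sy, pj = j + sx;
                if (pi < 0 || pi >= p.length || pj < 0 || pj >= p[0].length)
                    continue;
                if (p[pi][pj] == 0 || g[i][j] == 0) continue; // invalid (background)
                valid++;
                if (p[pi][pj] == g[i][j]) matches++;
            }
        return valid == 0 ? 0.0 : (double) matches / valid;
    }

    /** Equation (10): normalised greyscale correlation of two regions. */
    static double ngc(int[][] p, int[][] g) {
        double pMean = mean(p), gMean = mean(g);
        double num = 0, pVar = 0, gVar = 0;
        for (int i = 0; i < p.length; i++)
            for (int j = 0; j < p[0].length; j++) {
                double dp = p[i][j] - pMean, dg = g[i][j] - gMean;
                num += dp * dg;
                pVar += dp * dp;
                gVar += dg * dg;
            }
        return num / Math.sqrt(pVar * gVar);
    }

    private static double mean(int[][] img) {
        double sum = 0;
        for (int[] row : img)
            for (int v : row) sum += v;
        return sum / (img.length * img[0].length);
    }
}
```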
The main problem with any matching technique relating to texture is the consistency and quality of the images captured. The images shown in Figures 6.12-6.17 above have lost a lot of detail at the fingertips due to the pressure applied on the scanner surface, so the usable area is reduced to quite a small region. Whether the amount of detail in this region is enough to distinguish between users needs further investigation. Acquiring the hand images at a higher resolution could be one way of obtaining more useful features, but at the expense of the additional time required to capture, process and store (if necessary) the images.
REFERENCES

1. Home Office: Identity Cards. http://www.homeoffice.gov.uk/comrace/identitycards/index.html, April 2005.
2. Identity Cards Bill. http://www.publications.parliament.uk/pa/ld200405/ldbills/030/2005030.pdf, April 2005.
3. IBM ThinkPad X41. http://www.ibm.co.uk. Released 5th April 2005.
4. Microsoft Wireless Optical Desktop with Fingerprint Reader. http://www.microsoft.com/uk/mouseandkeyboard/. Released 21st September 2004.
5. The United Kingdom Passport Service: Entitlement Cards. http://www.ukpa.gov.uk/identity.asp, April 2005.
6. The United Kingdom Passport Service: Improving Passport Security and Tackling ID Fraud. http://www.ukpa.gov.uk/press_240305.asp, April 2005.
7. W. Shen, R. Khanna. Iris Recognition: An Emerging Biometric Technology. Proceedings of the IEEE, 85(9), September 1997.
8. J. Kim, J. Choi, J. Yi. Face Recognition Based on Locally Salient ICA Information. Proceedings of the ECCV 2004 International Workshop, BioAW, May 2004.
9. L. Zhang, D. Samaras. Pose Invariant Face Recognition Under Arbitrary Unknown Lighting Using Spherical Harmonics. Proceedings of the ECCV 2004 International Workshop, BioAW, May 2004.
10. L. Nanni, A. Franco, R. Cappelli. Towards a Robust Face Detector. Proceedings of the ECCV 2004 International Workshop, BioAW, May 2004.
11. W. Shen, R. Khanna. Fingerprint Features: Statistical Analysis and System Performance Estimates. Proceedings of the IEEE, 85(9), September 1997.
12. W. Shen, R. Khanna. An Identity-Authentication System Using Fingerprints. Proceedings of the IEEE, 85(9), September 1997.
13. A. Ross, A. Jain. Biometric Sensor Interoperability: A Case Study in Fingerprints. Proceedings of the ECCV 2004 International Workshop, BioAW, May 2004.
14. H.C. Lee, R.E. Gaensslen (eds.). Advances in Fingerprint Technology. 2nd ed., Boca Raton, Fla.: CRC Press, 2001.
15. S. Hangai, T. Higuchi. Writer Identification Using Finger-Bend in Writing Signature. Proceedings of the ECCV 2004 International Workshop, BioAW, May 2004.
16. R.J. Anderson. Security Engineering. Wiley, 2001.
17. A.K. Jain, R. Bolle, S. Pankanti. Biometrics: Personal Identification in a Networked Society. Kluwer Academic, 1998.
18. R.J. Hays. INS Passenger Accelerated Service System (INSPASS). http://www.biometrics.org/reports/inspass.html, December 2004.
19. Recognition Systems Inc. http://www.recogsys.com; http://www.handreader.com/products/handgeometry.htm, December 2004.
20. I.H. Jacoby, A.J. Giordano, W.H. Fioretti. Personal Identification Apparatus. U.S. Patent No. 3648240, 1972.
21. R.H. Ernst. Hand ID System. U.S. Patent No. 3576537, 1971.
22. D. Sidlauskas. 3D Hand Profile Identification Apparatus. U.S. Patent No. 4736203, 1988.
23. R.P. Miller. Finger Dimension Comparison Identification System. U.S. Patent No. 3576538, 1971.
24. R. Sanchez-Reillo, C. Sanchez-Avila, A. Gonzalez-Marcos. Biometric Identification through Hand Geometry Measurements. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(10):1168-1171, October 2000.
25. University of Bologna. HaSIS: A Hand Shape Identification System. http://www.csr.unibo.it/research/biolab/hand.html, December 2004.
26. A.K. Jain, N. Duta. Deformable Matching of Hand Shapes for Verification. Proceedings of the International Conference on Image Processing, October 1999.
27. Y.L. Lay. Hand Shape Recognition. Optics and Laser Technology, 32(1):1-5, February 2000.
28. NEC automatic palmprint identification system. http://www.necsam.com/idsolutions/download/palmprint/palmprint.html, April 2005.
29. D. Zhang, G. Lu, A.W.-K. Kong, M. Wong. Palmprint Authentication System for Civil Applications. Proceedings of the ECCV 2004 International Workshop, BioAW, May 2004.
30. D. Zhang, W.K. Kong, J. You, M. Wong. On-line Palmprint Identification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(9):1041-1050, 2003.
31. D.L. Woodard, P.J. Flynn. 3D Finger Biometrics. Proceedings of the ECCV 2004 International Workshop, BioAW, May 2004.
32. Konica Minolta Vivid 910 3D scanner. http://kmpi.konicaminolta.us/eprise/main/kmpi/content/isd/isd_product_pages/vivid_910, April 2005.
33. Handpunch Biometric Hand-Geometry Recognition Terminal. http://www.acroprintstore.com, December 2004.
34. VeryFast Access Control Terminal. http://www.biomet.ch/, December 2004.
35. Gnome. Morena 6: Image Acquisition Framework for the Java Platform. http://www.gnome.sk/twain/jtp.html, December 2004.
36. A. Rosenfeld, E. Johnston. Angle Detection in Digital Curves. IEEE Transactions on Computers, 22:875-878, 1973.
37. N. Ansari, K.-W. Huang. Non-parametric Dominant Point Detection. Pattern Recognition, 24(9):849-862, 1991.
38. N. Ansari, E.J. Delp. On Detecting Dominant Points. Pattern Recognition, 24(5):441-451, 1991.
39. F. Mokhtarian, A. Mackworth. Scale-based Description and Recognition of Planar Curves and 2D Shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(1):34-43, 1986.
40. A. Bulpitt, N. Efford. Border algorithm. AI31 Libraries, School of Computing, University of Leeds, October 2000.
41. D.A. Reynolds, R.C. Rose. Robust Text-Independent Speaker Identification Using Gaussian Mixture Speaker Models. IEEE Transactions on Speech and Audio Processing, 3(1):72-83, 1995.
42. D.A. Reynolds. Speaker Identification and Verification Using Gaussian Mixture Speaker Models. Speech Communication, 17:91-108, 1995.
43. N. Efford. BinaryOpen algorithm. Digital Image Processing: A Practical Introduction Using Java. Pearson Education Ltd., 2000.

APPENDIX A

A.1. Personal Reflection

When choosing a final year project, I wasn't sure exactly what I wanted to do. Searching through the list of proposed ideas seemed very daunting at first, until I noticed the title 'Biometric Authentication System' posted by Nick Efford. I was immediately intrigued by the topic area and hoped that I would be allocated the assignment. I didn't think it would be possible to identify an individual from their hand shape, and was very sceptical at first, but after my first supervisor meeting I went away with some ideas and was eager to start implementing a prototype.

One of the goals I wanted to reach from the beginning was scanner control from within a prototype program. I was very interested in getting this to work and investigated possible ways of doing it near the start of the project development. Which programming language to use was a key factor and influenced how to tackle this problem. After searching the Internet, a library for Java called Morena [35] was found, and this proved very useful. Although scanner control was achieved fairly early on, one of the major problems was the actual hardware used. Originally I was using my very old parallel-port controlled scanner. This was very slow and, regardless of the image resolution chosen, proved to be very frustrating. I knew that I would need to acquire as many hand images as possible for testing the system. After a few weeks of attempting various ways of speeding up the image acquisition, I was still unsatisfied that the hardware was suitable. I therefore decided to purchase a more up-to-date scanner in the hope that it would work faster. Luckily the money was well spent: the new scanner boasted USB 2.0 capabilities, meaning image acquisition (even in colour and at high resolutions) was very fast and much more efficient than the old parallel-port relic. In hindsight, instead of wasting time with the old scanner I should have bought the new one at the beginning of the project.

As part of looking at ways of extending the system for improved reliability, more background research was carried out after the January exam period. Originally, only finger lengths were investigated as the basis of biometric authentication. After reading these extra papers, however, new ideas were discovered, and subsequently finger widths and texture analysis were explored. Looking back, I should perhaps have carried out deeper research at the beginning of the project rather than at this later stage. However, I was happy with my progress and the stage I had reached by the Christmas vacation. Having achieved scanner control and landmark identification from the captured images, I was able to start measuring finger lengths, though I had not considered using finger widths at that stage. The extra research carried out therefore offered further improvements for the prototype and built upon these system foundations.

In order to test the system thoroughly, it is important to use as many hand images as possible. Although I acquired images from twenty-two individuals, using many more would ideally have validated the results of the evaluation further. For someone considering continuing work on this system, I would recommend they start capturing images as early as possible. One option would be to set up the scanner in a busy place and see if images can be obtained from members of the public. The whole idea of biometric authentication has attracted interest among friends, and seeing people getting their hands scanned is bound to raise curiosity amongst passers-by. At around ten seconds to capture an image, people shouldn't have a problem participating. Although potentially embarrassing, this is well worth a try, as it ensures a diverse and comprehensive test data set.

All in all, I am very pleased with the outcome of the project. I have produced a system that successfully identifies all of the enrolled users based on their hand geometry, and rejects images of those who are not enrolled. I am therefore very satisfied with the results and the system performance achieved. Although at times the project has been frustrating, it has been fun and rewarding, and I hope that anyone who continues the work I have started here will enjoy the challenges posed as much as I have.

APPENDIX B

B.1. Computer Hardware Specification

The computer hardware used in the development of the project is as follows:

- AMD Athlon XP1700+ 1.48GHz
- 512MB DDR RAM
- 160GB HDD
- USB 1.1
- Microsoft Windows XP, Service Pack 2

However, a faster laptop was used in the progress meeting presentation in March 2005. The specification for this system is as follows:

- 3.06GHz Mobile Intel Pentium 4
- 512MB DDR RAM
- 60GB HDD
- USB 2.0 Hi-Speed
- Microsoft Windows XP, Service Pack 2

B.2. Scanner Hardware Specification

The capture device used in this project for acquiring hand images is a standard flatbed scanner. At around £50 and available from a range of different shops, the full specification of its capabilities is as follows:

- Canon CanoScan LiDE 35
- 1200 x 2400dpi
- 48-bit input/output
- USB 2.0 Hi-Speed (where supported, otherwise USB 1.1)
- Power supplied via the USB port

In order to facilitate the segmentation process (as discussed in section 3.4.), a black surround was produced to ensure that light would not affect the image captured and also to ensure consistency across the images captured. Constructed from a thick, black cardboard box, the enclosure simply rests on the scanner surface, acting as a wedge holding the scanner lid at a fixed height.

The photographs in Figures B.1 - B.6 below illustrate the scanner and the enclosure produced.

Figure B.1 Scanner and surround, shown separately; notice the gap in the surround through which the hand is placed for scanning
Figure B.2 Scanner lid open, with the surround resting against the lid
Figure B.3 Scanner with the surround placed on top of the scanner surface, lid still open
Figure B.4 The lid holds the surround in place, so no physical attachment to the scanner is necessary
Figure B.5 A side view of the scanner with the surround in place; notice how the surround keeps the lid open at a fixed angle
Figure B.6 Corner view of the scanner with the surround

APPENDIX C

C.1. Example Scans

The following pages provide examples of the scanned images acquired. These printouts are also used in the spoofing-identity experiments described in the Evaluation (section 5.7.6.). As the software is configured so that the scanner captures the image from a specified window (by default 8.5" by 9.6"), the images shown on the following pages have been aligned so that, after printing, they can be placed into the scanner aligned to the A4 paper guides on the actual hardware. The final image shown is the stencil used in section 5.7.6. of the Evaluation. The actual stencil tested is included in this document; if further copies are required, the page can be printed and the green hand image cut out and used.


APPENDIX D

D.1. Width Sample Variation Graphs

As discussed in section 5.4. of the Evaluation chapter (and also in section 4.3.3.2. of Chapter 4), the number of samples chosen when extracting the widths from each finger has a significant impact on the results obtained. The three graphs on the following page (Figures D.1 - D.3) show how using three, nine and twelve samples per finger separate four different users in vector space. Six samples were used in the final implementation, and the graph showing six samples per finger is given in the Evaluation, section 5.4.

D.2. Results Tables

Following the graphs, results tables are provided detailing the outcomes of the extensive tests carried out when evaluating the system. Tables are provided showing the full sets of results for identification and verification, for thresholds of 10 to 120, for all the experiments discussed in Chapter 5:

- Finger length
- Finger width
- Finger length and width combined (used in the final implementation)
- Impact of finger nail length
- Time lapse experiments
- Spoofing identity (by attempting to log in with the images shown in Appendix C above)

Figure D.1 With three samples for each finger there is a clear separation between different users, but only twelve biometric dimensions per user. The green line representing user 'a' comes very close to that of 'd' for some of the middle finger measurements and all of the first finger measurements; only seven of these twelve features are therefore useful in distinguishing user 'a' from 'd', which may not be enough.

Figure D.2 The convergence of the first and second measurements for each finger is more apparent here; measurements nearer the fingertip therefore seem less discriminative than those sampled further down the length of the finger. There are thirty-six measurements per user, although perhaps only twenty-eight are useful: the first two widths sampled for each finger could be discarded.

Figure D.3 Twelve measurements per finger in this example, providing a potential forty-eight features for each enrolled image. The measurements made at the tips of the fingers are too similar to be worth including in the template stored for each user; discarding the first three samples extracted for each finger leaves thirty-nine potentially useful features.

Two Innermost Finger Length Results

Probe images: 70 genuine, 12 impostor. Identification columns: GAR, GRR, FAR, FRR, FMBT*; verification columns: GAR, FRR. Thr. = acceptance threshold; Enr. = number of enrolment images. Each cell holds the pair of values reported in the original table; rows for a single enrolment image report only the first value.

Thr. Enr.  GAR         GRR           FAR         FRR         FMBT*       GAR (ver.)   FRR (ver.)
 10   1    57.1/-      100.0/-       8.5/-       32.9/-      2.1/-       61.4/-       38.6/-
 10   2    78.6/72.9   100.0/100.0   7.3/2.4     12.9/24.3   1.8/1.5     87.1/75.7    12.9/24.3
 10   3    90.0/75.7   100.0/100.0   6.1/1.2     2.9/22.9    1.7/1.3     97.1/77.1    2.9/22.9
 20   1    71.4/-      91.7/-        11.0/-      17.1/-      2.6/-       82.9/-       17.1/-
 20   2    81.4/78.6   91.7/100.0    8.5/3.7     10.0/17.1   3.0/2.9     90.0/81.4    10.0/18.6
 20   3    92.9/85.7   91.7/100.0    7.3/1.2     0.0/12.9    3.0/2.8     100.0/87.1   0.0/12.9
 30   1    77.1/-      83.3/-        13.4/-      10.0/-      4.1/-       88.6/-       11.4/-
 30   2    90.0/85.7   83.3/100.0    9.8/3.7     1.4/10.0    4.6/4.4     98.6/90.0    1.4/10.0
 30   3    92.9/94.3   83.3/100.0    8.5/1.2     0.0/4.3     4.5/4.4     100.0/90.0   0.0/4.3
 40   1    78.6/-      83.3/-        14.6/-      7.1/-       5.7/-       90.0/-       10.0/-
 40   2    91.4/88.6   83.3/100.0    9.8/3.7     0.0/7.1     6.6/6.4     100.0/92.9   0.0/7.1
 40   3    92.9/95.7   83.3/100.0    8.5/1.2     0.0/2.9     6.5/6.3     100.0/97.1   0.0/2.9
 50   1    78.6/-      83.3/-        14.6/-      7.1/-       7.4/-       91.4/-       8.6/-
 50   2    91.4/92.9   83.3/100.0    9.8/4.9     0.0/1.4     8.2/8.0     100.0/97.1   0.0/2.9
 50   3    92.9/97.1   83.3/100.0    8.5/2.4     0.0/0.0     8.0/7.9     100.0/98.6   0.0/1.4
 60   1    80.0/-      75.0/-        17.1/-      4.3/-       8.7/-       92.9/-       7.1/-
 60   2    91.4/92.9   66.7/100.0    12.2/4.9    0.0/1.4     9.8/9.7     100.0/97.1   0.0/2.9
 60   3    92.9/97.1   66.7/100.0    11.0/2.4    0.0/0.0     9.6/9.4     100.0/100.0  0.0/0.0
 70   1    81.4/-      75.0/-        17.1/-      2.9/-       10.8/-      94.3/-       5.7/-
 70   2    91.4/92.9   58.3/91.7     13.4/7.3    0.0/0.0     11.7/11.6   100.0/100.0  0.0/0.0
 70   3    92.9/97.1   58.3/91.7     12.2/3.7    0.0/0.0     11.4/11.4   100.0/100.0  0.0/0.0
 80   1    84.3/-      66.7/-        18.3/-      0.0/-       12.6/-      97.1/-       2.9/-
 80   2    91.4/92.9   58.3/83.3     13.4/8.5    0.0/0.0     13.5/13.1   100.0/100.0  0.0/0.0
 80   3    92.9/97.1   58.3/83.3     12.2/4.9    0.0/0.0     13.3/13.3   100.0/100.0  0.0/0.0
 90   1    84.3/-      58.3/-        19.5/-      0.0/-       14.1/-      97.1/-       2.9/-
 90   2    91.4/92.9   58.3/93.3     13.4/8.5    0.0/0.0     14.6/14.4   100.0/100.0  0.0/0.0
 90   3    92.9/97.1   58.3/83.3     12.2/4.9    0.0/0.0     14.4/14.2   100.0/100.0  0.0/0.0
100   1    84.3/-      58.3/-        19.5/-      0.0/-       15.5/-      97.1/-       2.9/-
100   2    91.4/92.9   58.3/75.0     13.4/9.8    0.0/0.0     15.8/15.5   100.0/100.0  0.0/0.0
100   3    92.9/97.1   58.3/66.7     12.2/7.3    0.0/0.0     15.6/15.4   100.0/100.0  0.0/0.0
110   1    84.3/-      58.3/-        19.5/-      0.0/-       16.8/-      97.1/-       2.9/-
110   2    91.4/92.9   58.3/66.7     13.4/11.0   0.0/0.0     16.9/16.4   100.0/100.0  0.0/0.0
110   3    92.9/97.1   58.3/66.7     12.2/7.3    0.0/0.0     16.9/16.6   100.0/100.0  0.0/0.0
120   1    84.3/-      58.3/-        19.5/-      0.0/-       17.8/-      100.0/-      0.0/-
120   2    91.4/92.9   41.7/66.7     15.9/11.0   0.0/0.0     17.9/17.3   100.0/100.0  0.0/0.0
120   3    92.9/97.1   41.7/66.7     14.6/7.3    0.0/0.0     17.9/17.3   100.0/100.0  0.0/0.0

*FMBT = False Matches Below Threshold

Finger Width Results

Probe images: 70 genuine, 12 impostor (4 fingers, 6 measurements each). Identification columns: GAR, GRR, FAR, FRR, FMBT*; verification columns: GAR, FRR. Thr. = acceptance threshold; Enr. = number of enrolment images. Each cell holds the pair of values reported in the original table; rows for a single enrolment image report only the first value.

Thr. Enr.  GAR         GRR           FAR         FRR         FMBT*       GAR (ver.)   FRR (ver.)
 10   1    10.0/-      100.0/-       0.0/-       90.0/-      0.0/-       10.0/-       90.0/-
 10   2    27.1/20.0   100.0/100.0   0.0/0.0     72.9/80.0   0.0/0.0     27.1/20.0    72.9/80.0
 10   3    38.6/17.1   100.0/100.0   0.0/0.0     61.4/84.3   0.0/0.0     38.6/17.1    61.4/82.9
 20   1    30.0/-      100.0/-       0.0/-       70.0/-      0.0/-       30.0/-       70.0/-
 20   2    55.7/32.9   100.0/100.0   0.0/0.0     44.3/67.1   0.0/0.0     55.7/32.9    44.3/67.1
 20   3    61.4/30.0   100.0/100.0   0.0/0.0     38.6/70.0   0.0/0.0     61.4/30.0    38.6/70.0
 30   1    41.4/-      100.0/-       1.2/-       57.1/-      0.1/-       41.4/-       58.6/-
 30   2    67.1/51.4   100.0/100.0   1.2/0.0     31.4/48.6   0.0/0.0     67.1/51.4    32.9/48.6
 30   3    75.7/50.0   100.0/100.0   1.2/0.0     22.9/50.0   0.0/0.0     75.7/50.0    24.3/50.0
 40   1    57.1/-      100.0/-       3.7/-       38.6/-      0.2/-       57.1/-       42.9/-
 40   2    81.4/64.3   100.0/100.0   2.4/1.2     15.7/34.3   0.1/0.2     82.9/65.7    17.1/34.3
 40   3    82.9/60.0   100.0/100.0   3.7/2.4     12.9/37.1   0.1/0.2     85.7/62.9    14.3/37.1
 50   1    62.9/-      100.0/-       6.1/-       30.0/-      0.7/-       65.7/-       34.3/-
 50   2    84.3/70.0   100.0/100.0   3.7/2.4     11.4/27.1   0.5/0.5     85.7/72.9    14.3/27.1
 50   3    85.7/67.1   100.0/100.0   4.9/3.7     8.6/28.6    0.4/0.2     88.6/70.0    11.4/30.0
 60   1    65.7/-      100.0/-       6.1/-       27.1/-      0.9/-       71.4/-       28.6/-
 60   2    85.7/78.6   100.0/100.0   3.7/3.7     10.0/17.1   0.7/0.8     90.0/82.9    10.0/17.1
 60   3    85.7/71.4   100.0/100.0   4.9/4.9     8.6/22.9    0.6/0.4     91.4/74.3    8.6/25.7
 70   1    71.4/-      83.3/-        9.8/-       20.0/-      1.3/-       80.0/-       20.0/-
 70   2    85.7/80.0   83.3/100.0    7.3/3.7     8.6/15.7    1.3/1.1     90.0/84.3    10.0/15.7
 70   3    85.7/78.6   83.3/100.0    8.5/4.9     7.1/15.7    1.0/0.6     91.4/81.4    8.6/18.6
 80   1    74.3/-      83.3/-        9.8/-       17.1/-      1.9/-       82.9/-       17.1/-
 80   2    88.6/81.4   83.3/100.0    7.3/3.7     5.7/14.3    1.8/1.4     92.9/85.7    7.1/14.3
 80   3    87.1/84.3   83.3/100.0    8.5/4.9     5.7/10.0    1.4/1.0     92.9/87.1    7.1/12.9
 90   1    74.3/-      83.3/-        9.8/-       17.1/-      2.5/-       82.9/-       17.1/-
 90   2    90.0/85.7   66.7/91.7     9.8/4.9     4.3/10.0    2.4/2.2     95.7/90.0    4.3/10.0
 90   3    88.6/85.7   66.7/75.0     11.0/8.5    4.3/8.6     2.0/1.9     95.7/91.4    4.3/8.6
100   1    78.6/-      75.0/-        11.0/-      12.9/-      3.2/-       87.1/-       12.9/-
100   2    90.0/90.0   58.3/66.7     11.0/8.5    4.3/5.7     3.1/3.0     95.7/94.3    4.3/5.7
100   3    88.6/85.7   58.3/66.7     12.2/9.8    4.3/8.6     2.7/2.4     95.7/91.4    4.3/8.6
110   1    81.4/-      66.7/-        13.4/-      8.6/-       4.2/-       90.0/-       10.0/-
110   2    91.4/92.9   50.0/58.3     12.2/9.8    2.9/2.9     4.1/3.7     97.1/97.1    2.9/2.9
110   3    90.0/90.0   50.0/58.3     13.4/11.0   2.9/4.3     3.7/2.9     97.1/95.7    2.9/4.3
120   1    84.3/-      58.3/-        14.6/-      5.7/-       4.6/-       94.3/-       5.7/-
120   2    91.4/92.9   50.0/58.3     12.2/9.8    2.9/2.9     4.6/4.2     97.1/97.1    2.9/2.9
120   3    90.0/91.4   50.0/58.3     13.4/11.0   2.9/2.9     4.3/3.7     97.1/97.1    2.9/2.9

*FMBT = False Matches Below Threshold

Finger Length and Width Combined Results

Probe images: 70 genuine, 12 impostor. Identification columns: GAR, GRR, FAR, FRR, FMBT*; verification columns: GAR, FRR. Thr. = acceptance threshold; Enr. = number of enrolment images. Each cell holds the pair of values reported in the original table; rows for a single enrolment image report only the first value.

Thr. Enr.  GAR         GRR           FAR         FRR         FMBT*       GAR (ver.)   FRR (ver.)
 10   1    4.3/-       100.0/-       0.0/-       95.7/-      0.0/-       4.3/-        95.7/-
 10   2    17.1/11.4   100.0/100.0   0.0/0.0     82.9/88.6   0.0/0.0     17.1/11.4    82.9/88.6
 10   3    30.0/10.0   100.0/100.0   0.0/0.0     70.0/90.0   0.0/0.0     30.0/10.0    70.0/90.0
 20   1    17.1/-      100.0/-       0.0/-       82.9/-      0.0/-       17.1/-       82.9/-
 20   2    38.6/21.4   100.0/100.0   0.0/0.0     61.4/78.6   0.0/0.0     38.6/21.4    61.4/78.6
 20   3    50.0/18.6   100.0/100.0   0.0/0.0     50.0/81.4   0.0/0.0     50.0/18.6    50.0/81.4
 30   1    30.0/-      100.0/-       0.0/-       70.0/-      0.0/-       30.0/-       70.0/-
 30   2    55.7/35.7   100.0/100.0   0.0/0.0     44.3/64.3   0.0/0.0     55.7/35.7    44.3/64.3
 30   3    67.1/32.9   100.0/100.0   0.0/0.0     32.9/67.1   0.0/0.0     67.1/32.9    32.9/67.1
 40   1    44.3/-      100.0/-       0.0/-       55.7/-      0.0/-       44.3/-       55.7/-
 40   2    72.9/50.0   100.0/100.0   0.0/0.0     27.1/50.0   0.0/0.0     72.9/50.0    27.1/50.0
 40   3    81.4/48.6   100.0/100.0   0.0/0.0     18.6/51.4   0.0/0.0     81.4/48.6    18.6/51.4
 50   1    51.4/-      100.0/-       0.0/-       48.6/-      0.0/-       51.4/-       48.6/-
 50   2    81.4/58.6   100.0/100.0   0.0/0.0     18.6/41.4   0.0/0.0     81.4/58.6    18.6/41.4
 50   3    87.1/58.6   100.0/100.0   0.0/0.0     12.9/41.4   0.0/0.0     87.1/58.6    12.9/41.4
 60   1    62.9/-      100.0/-       0.0/-       37.1/-      0.0/-       62.9/-       37.1/-
 60   2    88.6/67.1   100.0/100.0   0.0/0.0     11.4/32.9   0.0/0.1     88.6/67.1    11.4/32.9
 60   3    91.4/62.9   100.0/100.0   0.0/0.0     8.6/37.1    0.1/0.1     91.4/62.9    8.6/37.1
 70   1    68.6/-      100.0/-       0.0/-       31.4/-      0.1/-       68.6/-       31.4/-
 70   2    90.0/71.4   100.0/100.0   0.0/0.0     10.0/28.6   0.1/0.2     90.0/71.4    10.0/28.6
 70   3    91.4/68.6   100.0/100.0   0.0/0.0     8.6/31.4    0.1/0.2     91.4/68.6    8.6/31.4
 80   1    75.7/-      100.0/-       0.0/-       24.3/-      0.2/-       75.7/-       24.3/-
 80   2    92.9/75.7   100.0/100.0   0.0/0.0     7.1/24.3    0.2/0.2     92.9/75.7    7.1/24.3
 80   3    92.9/81.4   100.0/100.0   0.0/0.0     7.1/18.6    0.2/0.2     92.9/81.4    7.1/18.6
 90   1    75.7/-      100.0/-       0.0/-       24.3/-      0.2/-       75.7/-       24.3/-
 90   2    95.7/78.6   100.0/100.0   0.0/0.0     4.3/21.4    0.2/0.2     95.7/78.6    4.3/21.4
 90   3    95.7/91.4   100.0/100.0   0.0/0.0     4.3/8.6     0.2/0.2     95.7/91.4    4.3/8.6
100   1    78.6/-      100.0/-       0.0/-       21.4/-      0.2/-       78.6/-       21.4/-
100   2    95.7/85.7   100.0/100.0   0.0/0.0     4.3/14.3    0.2/0.3     95.7/85.7    4.3/14.3
100   3    95.7/91.4   100.0/100.0   0.0/0.0     4.3/8.6     0.3/0.3     95.7/91.4    4.3/8.6
110   1    82.9/-      100.0/-       0.0/-       17.1/-      0.3/-       82.9/-       17.1/-
110   2    97.1/94.3   100.0/100.0   0.0/0.0     2.9/5.7     0.4/0.4     97.1/94.3    2.9/5.7
110   3    97.1/94.3   100.0/100.0   0.0/0.0     2.9/5.7     0.4/0.4     97.1/94.3    2.9/5.7
120   1    85.7/-      100.0/-       0.0/-       14.3/-      0.4/-       85.7/-       14.3/-
120   2    97.1/95.7   100.0/100.0   0.0/0.0     2.9/4.3     0.4/0.4     97.1/95.7    2.9/4.3
120   3    97.1/95.7   100.0/100.0   0.0/0.0     2.9/4.3     0.4/0.4     97.1/95.7    2.9/4.3

*FMBT = False Matches Below Threshold

Impact of Finger Nail Length Results

Probe images: 10 genuine, 0 impostor. Results shown are for three enrolment images and an acceptance threshold of 120. The SCOREs are the matches of the test image against the 1st, 2nd and 3rd enrolment images stored for the corresponding user; scores above the acceptance threshold of 120 (marked in red in the original) would be rejected.

At threshold 120: GAR 0.0/0.0, GRR 100.0/100.0, FAR 0.0/0.0, FRR 100.0/100.0, FMBT 0.0/0.0; verification GAR 0.0/0.0, FRR 100.0/100.0.

                NORMAL NAIL LENGTH*          EXTRA LONG NAILS
                SCORE 1  SCORE 2  SCORE 3    SCORE 1  SCORE 2  SCORE 3
USER A  TEST1      14        7        8         560      538      580
        TEST2      34       10       29         552      575      580
        TEST3      18       17       13         542      528      569
        TEST4      50       16       37         663      644      684
        TEST5      22       19       25         614      598      629
USER B  TEST1      80       52       23         429      723      571
        TEST2      83       41       36         670     1139      915
        TEST3      96        2       44         691     1225      991
        TEST4      98       10       39         165      377      280
        TEST5      92       37       32         199      453      321

*Images captured on the same day as the enrolment images

Time Lapse Results

Probe images: 24 genuine, 0 impostor. Results shown are for three enrolment images and an acceptance threshold of 120. The SCOREs are the matches of the test image against the 1st, 2nd and 3rd enrolment images stored for the corresponding user; scores above the acceptance threshold of 120 (marked in red in the original) would be rejected.

At threshold 120: GAR 84.0/68.0, GRR 100.0/100.0, FAR 4.0/8.0, FRR 12.0/24.0, FMBT 0.71/0.88; verification GAR 88.0/76.0, FRR 12.0/24.0.

                SAME DAY IMAGES              TIME LAPSE
                SCORE 1  SCORE 2  SCORE 3    SCORE 1  SCORE 2  SCORE 3
USER A  TEST1      14        7        8          43       76       54
        TEST2      34       10       29          63      118       82
        TEST3      18       17       13          58      148       91
        TEST4      50       16       37          45      113       70
        TEST5      22       19       25          57      146       89
USER B  TEST1      80       52       23          19      286      159
        TEST2      83       41       36         131      167       96
        TEST3      96        2       44         201       61       33
        TEST4      98       10       39          63      143       62
        TEST5      92       37       32          93      175       75
USER C  TEST1      56       10       37          46       67       34
        TEST2       -        -        -          49       34       44
        TEST3       -        -        -          30       32       23
        TEST4       -        -        -          13       29       17
        TEST5       -        -        -          15       34       17
USER D  TEST1      32       31       38         159      120      142
        TEST2      75       21       35         333      146      175
        TEST3      78       14       10         191      177      211
        TEST4      80        4       14         118       83      100
USER E  TEST1      33       69       41          35       23       41
        TEST2      25       34       46          29       25       28
        TEST3       -        -        -          18       17       29
        TEST4       -        -        -           9       20       25
        TEST5       -        -        -          34       30       44

Spoofing Identity With Colour Printouts Results

Probe images: 5 colour printouts. Results shown are for three enrolment images and an acceptance threshold of 120. The SCOREs are the matches of the test image against the 1st, 2nd and 3rd enrolment images stored for the corresponding user; scores above the acceptance threshold of 120 (marked in red in the original) would be rejected.

          TYPICAL SCORE OF A GENUINE SCAN    SPOOF SCORE
          SCORE 1  SCORE 2  SCORE 3          SCORE 1  SCORE 2  SCORE 3
USER 1       33       69       41               422      451      479
USER 2       56       10       37               595      567      535
USER 3       43       40       15               277      330      225
USER 4      106       76       36               715      352      413
USER 5       96       77       87               493      311      487

APPENDIX E

E.1. Code Snippets

Figure E.1 The calculate-landmarks algorithm, part of the FingerExtraction class
Figure E.2 The function to measure the finger lengths of all four fingers
Figure E.3 The function to measure the finger widths, part of the FingerExtraction class

Figure E.4 The function to extract the finger textures (reproduced across six screenshots in the original)

E.2. Config. File

Figure E.5 The configuration file. The settings in this file will alter system behaviour considerably

E.3. Screen Dumps of the System

Figure E.6 Assuming the configuration file sets the scanner to be live, when the administrator clicks the login button their hand image is acquired from the scanner. If the scanner is switched off, a stored image of the administrator is used to authenticate the login. This is provided only for testing the system; a real system would not have an offline option
Figure E.7 Once scanned, the hand image is compared with all of those stored in the database. If the match returned is that of 'Admin', access is granted (Figure E.8)
Figure E.8 Admin logged in. To enrol a new user, first enter their username. If the username is blank, an error message is presented (Figure E.17)
Figure E.9 If an image is stored in the photos\ directory, the default photo is updated. The default is three scans for enrolment
Figure E.10 If the scanner is live, the user is asked to ensure their hand is as flat as possible before scanning begins
Figure E.11 The prototype can also work offline; if the scanner is set to off in the config file, this message provides offline enrolment instructions
Figure E.12 For each enrolment image, the captured image is displayed and the user is asked whether it looks suitable. If there is a problem extracting the features, an error is presented (Figure E.18) and the current scan must be redone in order to continue
Figure E.13 Again, the user is asked whether the image looks suitable to enrol; if feature extraction fails, an error is presented (Figure E.18) and the user must re-scan this image to continue the enrolment process
Figure E.14 The final enrolment image (using the default settings). Again, the user must confirm the image looks suitable before feature extraction is attempted
Figure E.15 Assuming the three images are close enough to each other (i.e. similar enough to ensure consistency of the enrolment scans), the user is enrolled on the system; otherwise the error in Figure E.16 is presented
Figure E.16 The enrolment image attempted here is from a different user, so the variation between it and the two images from 'Thomas' is above the specified threshold. The user is asked whether they would like to start again
Figure E.17 If nothing is entered in the username box when attempting to enrol a new user, the error message shown above is presented
Figure E.18 If there is a problem extracting the features, the above message is presented and the current scan must be re-captured
Figure E.19 This generic error is presented to catch any unexpected problems, for example where an image filename does not exist when attempting to enrol a user offline
Figure E.20 The IdentifyUser program is similar in appearance to the EnrolUser program, although only one option is provided: SCAN HAND
Figure E.21 If the scanner is switched to OFF in the config file, the prototype can still work offline. If an image filename is entered in this prompt box, the supplied image is identified against the enrolled users
Figure E.22 If the filename entered does not exist, the user is prompted and has the option to re-enter it
Figure E.23 An example of entering an image that does exist, but is of a user who is not enrolled on the system
Figure E.24 The system responds appropriately, providing a message alerting that the user is not found and access is denied. The message asks the user to retry, re-positioning their hand in case of false rejection
Figure E.25 The filename entered here is a probe image of an enrolled user. N.B. if the scanner were set to ON in the config file this box would not appear; the program would simply acquire an image from the scanner
Figure E.26 The program correctly identifies the image, updates the default photo to that of the user ('Sarah'), and lights up an 'access granted' image
Figure E.27 Again the filename entered corresponds to a probe image of an enrolled user
Figure E.28 The system again correctly identifies the user, the photo is updated and 'access granted' is presented. The image shown right is the command prompt open in the background: the scores of the probe image against the enrolled images stored in the database are printed to the command line during the matching stage. The scores are also written to a text file in the same directory as the image, with filename <imagename_score.txt>. Notice that the score produced for 'Paul Blakey' is by far the lowest of those enrolled in the database. The lowest score returned ('24') is used as the matching score, which is below the default acceptance threshold of 100

APPENDIX F

F.1. Hand Images Used in Texture Extraction Tests

The following seven images were used in the texture extraction tests described in the Conclusion (section 6.2.1.).

USER C
USER B
USER A

APPENDIX G

G.1. Project Schedule

A schedule was drawn up near the beginning of the project, detailing a proposed plan for time management. This was included in the mid-project report, and a copy is provided in section G.3. below (modified only in format, not content). During development of the project the schedule was not followed strictly: certain stages took longer than expected, whereas others did not take as long. For this reason, a revised schedule is provided in section G.4. with corrections made where necessary.

G.2. Progress Log

As recommended by the supervisor, a progress log was set up very near the beginning of the project (using personal web-space). A news script was configured, allowing updates (including aims for subsequent meetings) to be posted to a dedicated website from any computer with internet access. The online log also made demonstrating developments at the supervisor meetings much easier, and it allowed the supervisor to monitor progress if a meeting could not be arranged for a particular week. The website can be found at: http://www.pdg.34sp.com