A Novel Image Fusion Scheme For Robust Multiple Face Recognition With Light-field Camera

R. Raghavendra, Kiran B. Raja, Bian Yang, Christoph Busch
Norwegian Biometric Laboratory, Gjøvik University College, Norway.
The Norwegian Colour and Visual Computing Laboratory, Gjøvik University College, Norway.
raghavendra.ramachandra@hig.no

Abstract: Accurate face recognition in wide-area surveillance applications is a challenging problem because of the multiple poses, non-uniform illumination, low resolution and out-of-focus face images recorded with conventional surveillance cameras (Closed-Circuit Television (CCTV) or Pan-Tilt-Zoom (PTZ)). In this paper, we address the problem of face recognition in wide-area surveillance with a light-field camera. The main advantage of a light-field camera is that it can provide different focus (or depth) images in a single exposure (capture), which is not possible with a conventional 2D camera. In this work, we propose a novel weighted image fusion scheme to combine the different depth (or focus) images rendered by a light-field camera. The proposed image fusion scheme is not only dynamic in handling the number of depth (or focus) images but also adaptive in assigning higher weights to the best-focused image compared to out-of-focus images. Extensive experiments are carried out on our newly acquired face dataset, captured using a Lytro light-field camera, to bring out the merits and demerits of the proposed weighted image fusion scheme for face recognition in wide-area surveillance applications.

I. INTRODUCTION

Face detection and recognition have witnessed rapid growth over the past two decades and form one of the most thoroughly addressed research areas in computer vision. Even after the considerable amount of work carried out to achieve acceptable accuracy, face recognition remains a challenging problem, especially in the presence of varying illumination, multiple poses, acquisition at a distance, etc. Since the performance of a face recognition system strongly depends on the acquisition device (or camera) used to capture the face data, the use of particular cameras (such as CCTV or PTZ) also impacts the overall performance of a face-based biometric system. In general, cameras employed for real-life wide-area surveillance exhibit a distinct perspective and have a fixed focus, which frequently results in out-of-focus face samples and, in turn, degraded performance of the face recognition system.

One possible way to address this problem is to employ a Light-Field Camera (LFC). A light-field camera captures the ray space, which holds rich information such as the color, intensity, position and direction of the incoming rays of light. The light-field camera captures the image by sampling the 4D light field on its sensor in a single photographic exposure, by inserting a microlens array [5], a pin-hole array [4] or masks [18] between the sensor and the main lens. The presence of the microlens array (or array of pin-holes or masks) allows one to measure not just the total amount of light intensity deposited on the sensor, but also the direction of each incoming ray of light. Finally, by re-sorting the measured rays of light to where they would have terminated, one can obtain a number of sharp images focused at different depths [5]. Thus, the light-field camera exhibits interesting features compared to a conventional camera, such as: (1) generating images at different focus (or depth) in one shot;
(2) it is low cost, since it does not require moving the lens to set the focus on an object in the scene; (3) it is portable and hand-held; and (4) it offers real-time exposure. These features of the LFC motivate us to investigate its applicability to wide-area surveillance, which involves identifying multiple faces with different poses at various distances (between 0.5 m and 20 m). We study this by simulating real-life surveillance scenarios.

A number of weighted fusion schemes exist, especially for multimodal biometric systems, in which information from different modalities is combined at four different levels [14]: sensor [17] [11], feature [10] [9], comparison score [12] [20] [15] and decision level [16]. The available work on weighted fusion schemes can be broadly classified into two types. (1) Multimodal weighted fusion: here, the weight assignment is based on the performance (measured in terms of Equal Error Rate (EER) and/or False Acceptance Rate (FAR) and/or False Reject Rate (FRR)) of the individual modalities to be combined, so that a higher weight is assigned to the modality that performs best. (2) Unimodal multi-sensor weighted fusion: here, a single biometric modality acquired with different sensors (for example, face acquired using visible and thermal sensors [11]) or with the same sensor under different acquisition conditions (for example, multispectral acquisition of palmprints [22]) is fused. In most cases, this weighted fusion is carried out at the sensor/image level, and the weight assessment is performed either at the pixel level or at the image level [11] [22]. In pixel-level weight assessment, each pixel is weighted according to its contribution to the overall biometric system performance; the main idea is to use optimization schemes such as genetic algorithms [3] or particle swarm optimization (PSO) [11] to compute the weight for each pixel iteratively until a performance gain is achieved. In image-level weighting schemes, a wavelet transform is used to decompose the images to be fused; the decomposed wavelet coefficients are then analyzed to perform either selection or (usually equal-weighted) fusion, and the selected or fused coefficients are finally used to obtain a fused image [22] [13] via the inverse wavelet transform.

In this work, we propose a novel weight assessment scheme to accurately combine the different focus (or depth) images rendered by the Lytro light-field camera.

Fig. 1. Block diagram of the proposed scheme: light-field data acquisition using the Lytro camera, face detection on each depth image (Depth Image 1 to Depth Image N), the proposed selection and weighted image fusion, and feature extraction and identification.

Fig. 2. Lytro light-field depth images: (a) different depth images (scaled to fit the page), (b) segmented faces using the face detector.

Even though the proposed method belongs to the class of unimodal image fusion (the face modality in our case), it stands out in two respects: (1) since the number of depth (or focus) images rendered by the Lytro light-field camera is not constant, the proposed scheme must be dynamic enough to accommodate this property of the camera; (2) since we address the recognition of multiple faces in a given scene, each subject will have his or her best-focused image in any one of the focus (or depth) images, and hence the proposed method must be adaptive, i.e., it should assign a larger weight to well-focused images than to out-of-focus images. To address this, we propose a novel weighted image fusion scheme structured in two steps: (1) image selection based on wavelet entropy, and (2) a novel weight assessment on each of the selected images before performing the fusion using a weighted sum rule.

In order to evaluate the proposed scheme effectively, we constructed a new multiple-face dataset by simulating real-life surveillance scenarios. To this end, we employed the first available consumer LFC, developed and marketed by Lytro Inc. [1], and designed three different image acquisition protocols (indoor, corridor and outdoor) reflecting real-life surveillance scenarios. The new face dataset comprises 25 subjects, each with 8 enrollment samples acquired in a studio setting using a Canon EOS 550D, and 303 probe samples acquired using the Lytro LFC [1].

The rest of the paper is organized as follows: Section II discusses the proposed method, Section III presents the details of the dataset collection and the results, and Section IV draws the conclusion.

II. THE PROPOSED SCHEME

Figure 1 shows the block diagram of the proposed method, which can be structured in three steps: (1) face detection and extraction, (2) the proposed image selection and weighted fusion scheme, and (3) feature extraction and identification. The following sections describe these steps in detail.

A. Face detection and extraction

The first step involves extracting the face region. In this work, we employ the well-known Viola-Jones face detector [19], considering its robustness and performance in real-life scenarios. We trained the face detector using 2429 face samples and 3000 non-face samples acquired with a conventional camera. Figure 2(b) illustrates the qualitative results of the face detector on the different depth images rendered by the Lytro LFC. Each image acquired using the Lytro LFC results in multiple depth images. Since each of these depth images has at least one region in focus, applying the face detector to any single depth image may not result in accurate face region detection (because of out-of-focus face regions). Hence, in this work, we carry out face detection on each of the depth images and then select the detection result that contains the maximum number of detected face regions. We then use this as a reference image and extract the corresponding face regions from all depth images.
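As a minimal illustration of this detection step, the sketch below runs a Viola-Jones detector on every depth image of one capture and keeps the rendering with the most detections as the reference. It assumes OpenCV's stock pretrained Haar cascade rather than the detector retrained by the authors, and the helper names are hypothetical.

```python
# Sketch only: per-depth-image Viola-Jones detection with OpenCV's pretrained Haar
# cascade (the paper retrains its own detector on 2429 face / 3000 non-face samples).
import cv2

def pick_reference_depth_image(depth_images):
    """depth_images: list of BGR arrays, the refocused renderings of one Lytro capture."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    best_idx, best_boxes = 0, []
    for idx, img in enumerate(depth_images):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(boxes) > len(best_boxes):        # keep the rendering with the most faces
            best_idx, best_boxes = idx, list(boxes)
    return best_idx, best_boxes

def crop_faces_from_all_depths(depth_images, face_boxes):
    """Extract the same face regions (from the reference detection) in every depth image."""
    return [[img[y:y + h, x:x + w] for img in depth_images]
            for (x, y, w, h) in face_boxes]      # result[k][n]: face k at depth n
```

The per-face lists returned by crop_faces_from_all_depths would feed the selection and fusion stage described next.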

B. Proposed image selection and weighted fusion scheme

Fig. 3. Block diagram of the proposed image selection and weighted fusion scheme: the N depth images are decomposed with the DWT, their entropies (E_1, ..., E_N) are computed, images with positive entropy are selected, weights (w_1, ..., w_j) are computed, and the weighted fusion followed by the IDWT yields the fused image.

Fig. 4. Illustration of the proposed image selection and weighted fusion scheme: (a) depth images and their entropy values, (b) selected images, (c) weight computation (Nor, Sor, D and w values), (d) fused image.

Figure 3 shows the block diagram of the proposed image selection and weighted fusion scheme used to combine information from the different focus (or depth) images rendered by the Lytro light-field camera. The proposed fusion scheme can be viewed at two levels: (1) image selection based on the wavelet log-energy entropy measure, and (2) weighted fusion of the selected images, in which the weights are assigned in an adaptive and dynamic manner.

1) Image selection: The proposed image selection is carried out on the focus (or depth) images (i.e., the face images obtained after face detection) corresponding to one subject at a time. Let N be the number of face images corresponding to the different focus (or depth) renderings of the first subject, such that Sub_1 = {I_1(x,y), I_2(x,y), ..., I_N(x,y)}. The first step of the proposed selection method decomposes each face image using the 2D Discrete Wavelet Transform (DWT) [7]. Given a focus image I_1(x,y), the 2D-DWT is performed by two operations [8]: filtering and downsampling with a low-pass (L) and a high-pass (H) filter along both the rows (x) and columns (y) of I_1(x,y). Let W_1 denote the decomposed wavelet coefficients corresponding to the focus image I_1(x,y).

In the next step, we measure the quality (or focus) of each image by computing the entropy of its wavelet coefficients (W_1). We are motivated to use entropy as a measure of focus (or quality) because (1) it provides monotonic quantitative values, i.e., higher values are obtained for well-focused images and lower values for out-of-focus images, so that higher values always correspond to good-quality images and lower values to low-quality images, and (2) it is robust to noise. These properties make the wavelet entropy measure well suited to evaluating the focus (or quality) of an image. Given the wavelet coefficients W_1, the log-energy entropy is obtained as follows [8]:

E_1 = \sum_{i=1}^{K} \log_2\left(W_{1i}^{2}\right)    (1)

where K denotes the number of wavelet coefficients obtained from an image (say I_1(x,y)) and E_1 denotes the log-energy entropy of the corresponding image I_1(x,y). The above procedure is repeated for all N images corresponding to the subject Sub_1 to compute the corresponding entropies E = {E_1, E_2, ..., E_N}.
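The focus measure of Eq. (1) could be computed as in the following sketch, which assumes a single-level 2-D DWT from PyWavelets with a Haar wavelet (the paper does not name the mother wavelet) and adds a small epsilon to guard against log2(0); the function names are illustrative.

```python
# Sketch of Eq. (1): log-energy entropy of the single-level 2-D DWT coefficients
# of one grayscale face crop. Higher values are expected for sharper crops.
import numpy as np
import pywt

def log_energy_entropy(face_crop, wavelet="haar", eps=1e-12):
    cA, (cH, cV, cD) = pywt.dwt2(face_crop.astype(float), wavelet)
    W = np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])   # all K coefficients
    return float(np.sum(np.log2(W ** 2 + eps)))                 # E = sum_i log2(W_i^2)

def entropies_for_subject(face_crops):
    """face_crops: the N depth renderings of one subject's face region (grayscale arrays)."""
    return [log_energy_entropy(f) for f in face_crops]          # E = {E_1, ..., E_N}
```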
Figure 4(a) illustrates this step by showing the different focus images corresponding to the subject Sub_1 and their corresponding entropy values. In the next step, we perform the image selection by choosing all images whose entropy value is greater than 0 (i.e., positive entropy). Figure 4(b) illustrates the selected images, which we denote as I_s = {I_s1, I_s2, ..., I_sn}. Thus, it can be observed from Figure 4(a) and (b) that the log-energy entropy measure [8] successfully provides a quantitative measure of focus.

2) Weighted image fusion: After selecting the images based on the wavelet entropy measure, we propose a new method to compute the weight for each of the selected images. The core idea of the proposed weight assessment scheme is to assign a high weight to an image with a large wavelet entropy value and a low weight to an image with a lower wavelet entropy value, in a dynamic and adaptive fashion.
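Continuing the hypothetical helpers above, the selection step (step 1 of the scheme) amounts to keeping only the renderings with positive entropy:

```python
# Image selection: retain only the depth renderings with positive log-energy entropy.
def select_positive_entropy(face_crops, entropies):
    kept = [(img, e) for img, e in zip(face_crops, entropies) if e > 0]
    selected_images = [img for img, _ in kept]
    selected_entropies = [e for _, e in kept]      # E_sel = {E_s1, ..., E_sn}
    return selected_images, selected_entropies
```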

The proposed weight assessment scheme is detailed in the following steps.

a) Normalizing and sorting: Given the selected images and their corresponding wavelet entropies E_sel = {E_s1, E_s2, ..., E_sn}, where sn denotes the number of selected images, we first normalize the entropy values as

Nor_j = \frac{E_j}{\max(E_{sel})}, \quad j = s1, s2, \ldots, sn    (2)

where Nor_j represents the normalized entropy value of the j-th selected image, E_j its wavelet entropy, and max(E_sel) the maximum wavelet entropy among the selected images. We then sort the normalized values in decreasing order, so that the image with the highest wavelet entropy appears first:

Sor = \mathrm{sort}\{Nor_1, Nor_2, \ldots, Nor_j\}    (3)

The normalized and sorted wavelet entropy values are denoted Sor = {Sor_1, Sor_2, ..., Sor_j}, and the correspondingly sorted images are written as I' = {I'_1, I'_2, ..., I'_j}.

b) Sliding window difference: Next, we compute the difference between consecutive normalized and sorted entropy values (Sor), using a sliding window of length 2 with an overlap of 1. The purpose of this differencing is to analyze the change between consecutive values of Sor, which in turn is used to assign the corresponding weights. The difference D obtained with the sliding window is

D_j = Sor_j - Sor_{j+1}    (4)

c) Weight assessment: The proposed weight assessment scheme dynamically assigns a weight to each of the selected focus images. The assessment is based on the difference value D_j, since it indicates how much information the images share. A low value of D_j implies that the corresponding images (say I'_1 and I'_2) are of equal importance, and hence an equal weight can be assigned, while a higher value of D_j implies that the two images differ and therefore require different weights. In this work, we propose a function that dynamically computes the weight while satisfying these criteria:

F = 0.5 + (0.5 \times D_j)    (5)

w_j = \begin{cases} F \times Max\_Weight, & \text{if } D_j \geq Th \\ Max\_Weight / 2, & \text{otherwise} \end{cases}    (6)

Fig. 5. Response of the proposed weight mapping function.

Figure 5 shows the response of the proposed function (Equation 5) for different values of D_j: if D_j is below the threshold Th, equal weights are assigned; otherwise, different weights are assigned to the two images. Since the images are sorted by decreasing entropy, the proposed method always assigns the highest weight to the image with the largest entropy value. The complete weight assessment scheme is illustrated in Figure 4(c), and the whole procedure is summarized in Algorithm 1. The value Th = 0.2 was chosen after several trials, considering the overall performance of the proposed scheme.
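A minimal sketch of the normalization, sorting and sliding-window differencing of Eqs. (2)-(4), continuing the hypothetical helper names used above:

```python
# Eqs. (2)-(4): normalise the entropies of the selected images by their maximum,
# sort them in decreasing order, and take consecutive (window-2, overlap-1) differences.
import numpy as np

def normalise_sort_diff(selected_images, selected_entropies):
    E = np.asarray(selected_entropies, dtype=float)
    Nor = E / E.max()                                  # Eq. (2)
    order = np.argsort(-Nor)                           # decreasing entropy first
    Sor = Nor[order]                                   # Eq. (3)
    sorted_images = [selected_images[i] for i in order]
    D = Sor[:-1] - Sor[1:]                             # Eq. (4): D_j = Sor_j - Sor_(j+1)
    return sorted_images, Sor, D
```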
Algorithm 1: Proposed algorithm for weight assessment
  Max_Weight ← 1; Th ← 0.2
  if D_j < Th then              % assign equal weights
      w_j ← Max_Weight / 2
      w_{j+1} ← Max_Weight − w_j
      Max_Weight ← w_{j+1}      % update Max_Weight
  else                          % assign different weights
      F ← 0.5 + (0.5 × D_j)
      w_j ← Max_Weight × F
      w_{j+1} ← Max_Weight − w_j
      Max_Weight ← w_{j+1}      % update Max_Weight
  end if

d) Weighted image fusion: After computing the weight for each image, we carry out the image fusion using the weighted SUM rule:

Im_{Fu} = \sum_{j} I'_j \times w_j    (7)

where Im_Fu represents the fused image, I'_j represents the j-th (sorted) image, and w_j represents the computed weight for the j-th image, such that w_1 + w_2 + ... + w_j = 1. Finally, the Inverse Discrete Wavelet Transform (IDWT) is employed to reconstruct the fused image, on which feature extraction and identification are carried out. Figure 4(d) shows the fused image obtained using the proposed image selection and fusion scheme. Figure 9 shows the results of the proposed weighted image fusion method on different subjects: Figure 9(a) shows the images obtained after selection based on positive entropy values, and Figure 9(b) shows the results of the proposed weighted image fusion scheme. It can be observed that the fused images obtained using the proposed scheme appear to have superior visual detail compared with the selected images in Figure 9(a).
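The following sketch puts Algorithm 1 and Eq. (7) together. As suggested by Figure 3, it assumes the weighted sum is taken over the single-level DWT subbands of the sorted images before the IDWT reconstructs the fused face; the wavelet choice ('haar') and the helper names are assumptions, not the authors' exact implementation.

```python
# Sketch of Algorithm 1 (sequential weight assessment) and Eq. (7) (weighted SUM fusion).
# Weights are assigned over the entropy-sorted images and sum to 1 by construction.
import pywt

def assess_weights(D, th=0.2):
    """D: consecutive differences of the normalised, sorted entropies (length sn - 1)."""
    if len(D) == 0:
        return [1.0]                          # a single selected image keeps all the weight
    max_weight, weights = 1.0, [0.0] * (len(D) + 1)
    for j, d in enumerate(D):
        if d < th:                            # similar information: split the remaining weight
            weights[j] = max_weight / 2.0
        else:                                 # dissimilar: favour the sharper image, Eqs. (5)-(6)
            weights[j] = max_weight * (0.5 + 0.5 * d)
        weights[j + 1] = max_weight - weights[j]
        max_weight = weights[j + 1]           # weight mass left for the remaining images
    return weights                            # w_1 + w_2 + ... + w_sn = 1

def fuse_selected_images(sorted_images, weights, wavelet="haar"):
    coeffs = [pywt.dwt2(img.astype(float), wavelet) for img in sorted_images]
    cA = sum(w * c[0] for w, c in zip(weights, coeffs))                       # Eq. (7), per subband
    details = tuple(sum(w * c[1][k] for w, c in zip(weights, coeffs)) for k in range(3))
    return pywt.idwt2((cA, details), wavelet)                                 # fused face image
```

Chaining these sketches per subject (detection, entropy, selection, weighting, fusion) reproduces the pipeline of Figure 3 under the stated assumptions.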

Fig. 6. Example of the enrolled samples corresponding to one subject: (a) enrolled samples, (b) corresponding face images.

Fig. 7. Examples of probe samples: (a) corridor, (b) indoor, (c) outdoor (for simplicity, results are shown on one of the possible depth images).

Fig. 8. Qualitative results of the face detector in the (a) indoor, (b) corridor and (c) outdoor scenarios (for simplicity, results are shown on one of the possible depth images).

Fig. 9. Results of the proposed scheme: (a) after image selection, (b) after weighted image fusion.

C. Feature extraction and identification

After obtaining the fused image, we perform feature extraction using two well-established algorithms from the field of face biometrics, namely Local Binary Patterns (LBP) [6] and Log-Gabor filters [2]. The LBP employed in this work has a radius of 2, while the Log-Gabor filter has 4 scales and 8 orientations; we fixed these values based on experimental trials and in conformity with the literature [10]. We then use the extracted features to identify the subject by employing the Sparse Reconstruction Classifier (SRC) [21]. Here, we carry out l1-minimization via the SPGL1 solver based on spectral gradient projection [21]. In this work, the comparison score corresponds directly to the residual errors obtained using SRC.

III. EXPERIMENTS AND DISCUSSION

This section presents the experimental protocols and the results of the proposed weighted image fusion scheme on our newly collected multiple-face light-field dataset acquired using the Lytro light-field camera [1]. The whole dataset was acquired in our laboratory over a period of 20 days and comprises 25 subjects. The data acquisition process is divided into two parts: enrollment samples and probe samples. The enrollment samples were collected in a controlled-illumination (studio) environment using a Canon EOS 550D DSLR camera. For each subject, we captured 8 samples corresponding to neutral, smile and 6 different poses, as shown in Figure 6(a). The acquired enrollment samples are in lossless 24-bit color JPEG format at the camera's original resolution. In order to carry out the experiments, we perform pre-processing steps to accurately extract the face region: a face detector detects and crops the face region, the crop is re-sized to a fixed size, and Gaussian filtering (σ = 2) is finally applied to remove the noise contained in the high spatial frequencies. Figure 6(b) shows the cropped face images used for the experiments. The enrollment dataset thus consists of 8 samples per subject, resulting in a total of 200 samples.

The probe samples were collected using the Lytro light-field camera in three different scenarios: (1) Indoor: all probe samples are collected in a room with controlled lighting and no sunlight. (2) Corridor: subjects stand close to windows through which sunlight falls on the subject's face, in addition to the normal lighting conditions (fluorescent lamps) already present in the corridor. (3) Outdoor: probe samples are acquired outdoors under naturally varying sunlight. Examples of the probe samples are shown in Figure 7. In all three scenarios, the subjects were allowed to stand at distances between 0.5 m and 20 m, and every probe sample contains 2 to 4 subjects standing at various distances with varying poses. With these settings, we captured 303 probe samples, resulting in 986 face samples from 25 subjects distributed over the three scenarios: indoor (433), outdoor (340) and corridor (213).

A. Results and discussion

This section discusses the results of the proposed image fusion algorithm on our database collected using the Lytro light-field camera. The experiments are carried out on the 303 probe samples collected by simulating real-life surveillance scenarios.
In this work, all results are presented in terms of the rank-1 identification rate, obtained by comparing each probe against the N subjects in the dataset (1:N); a higher identification rate therefore corresponds to better accuracy. To evaluate the face recognition algorithms on our new dataset, we use the gallery samples from the enrollment dataset and the probe samples acquired using the Lytro light-field camera.

Figure 8 shows qualitative results of the face detection algorithm on probe samples acquired using the Lytro light-field camera; for simplicity, the light-field results are illustrated on one of the depth images. For the quantitative evaluation, the face detector was run on every probe sample, yielding a detection rate of 94.8%.

We present results for both the image selection and the weighted image fusion methods discussed in Section II. The first scheme, based on image selection using the entropy measure, is denoted Proposed scheme 1. The second scheme, based on the proposed weighted image fusion, is denoted Proposed scheme 2.
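As a small illustration of this evaluation protocol, a rank-1 identification rate could be computed from the SRC residual scores as in the sketch below (the array names are hypothetical; a lower residual means a better match):

```python
# Rank-1 identification: each probe is assigned the enrolled identity with the
# smallest SRC residual; the rate is the fraction of probes assigned correctly.
import numpy as np

def rank1_identification_rate(residuals, true_ids):
    """residuals: (num_probes, num_subjects) residual errors; true_ids: ground-truth labels."""
    predicted = np.argmin(residuals, axis=1)       # 1:N comparison, best match per probe
    return float(np.mean(predicted == np.asarray(true_ids)))
```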

Fig. 10. Qualitative illustration: (a) best-focus image, (b) proposed weighted image fusion.

Figure 10 shows the qualitative results of the proposed weighted image fusion scheme alongside the best-focused image selected on the basis of the wavelet entropy. It can be observed that the proposed weighted fusion scheme yields better visual quality than the best-focused image, which illustrates the efficacy of the proposed weight assessment scheme.

Table I shows the quantitative performance of Proposed scheme 1 and Proposed scheme 2 when LBP and SRC are employed for feature extraction and classification. From Table I, it can be observed that Proposed scheme 2 outperforms Proposed scheme 1 in all three scenarios. The best result is noted for the indoor scenario, with an identification rate of 58.66%.

TABLE I. QUANTITATIVE PERFORMANCE OF THE PROPOSED IMAGE FUSION SCHEMES: LBP-SRC
Method | Scenario | Identification Rate (%)
Proposed scheme 1 | Indoor |
Proposed scheme 1 | Corridor |
Proposed scheme 1 | Outdoor |
Proposed scheme 2 | Indoor |
Proposed scheme 2 | Corridor |
Proposed scheme 2 | Outdoor |

Table II shows the quantitative performance of Proposed scheme 1 and Proposed scheme 2 when the LG-SRC combination is used for feature extraction and classification. Here, too, Proposed scheme 2 outperforms Proposed scheme 1 in all three scenarios; the best performance is noted for Proposed scheme 2, with an identification rate of 54.93%.

TABLE II. QUANTITATIVE PERFORMANCE OF THE PROPOSED IMAGE FUSION SCHEMES: LG-SRC
Method | Scenario | Identification Rate (%)
Proposed Scheme 1 | Indoor |
Proposed Scheme 1 | Corridor |
Proposed Scheme 1 | Outdoor |
Proposed Scheme 2 | Indoor |
Proposed Scheme 2 | Corridor |
Proposed Scheme 2 | Outdoor |

In order to further improve the overall performance of the system by exploiting the complementary information in the decisions provided by the two algorithms (LBP-SRC and LG-SRC), we carry out multi-algorithm fusion at the decision level, employing the OR rule to combine the decisions from the two algorithms. Table III shows the quantitative performance of the decision-level fusion of LBP-SRC and LG-SRC using the OR rule. The best performance is obtained with Proposed scheme 2 in the corridor scenario, with an identification rate of 75.12%. Thus, from the above experiments, it can be observed that Proposed scheme 2 shows the best performance in all three scenarios, justifying its efficacy.

TABLE III. QUANTITATIVE PERFORMANCE OF THE PROPOSED IMAGE FUSION SCHEMES: DECISION-LEVEL FUSION
Method | Scenario | Identification Rate (%)
Proposed scheme 1 | Indoor |
Proposed scheme 1 | Corridor |
Proposed scheme 1 | Outdoor |
Proposed scheme 2 | Indoor |
Proposed scheme 2 | Corridor |
Proposed scheme 2 | Outdoor |

IV. CONCLUSION

In this paper, we proposed a novel weight assessment scheme for weighted image fusion, combining different depth (or focus) images for accurate face recognition. The proposed weight assessment scheme is not only dynamic but also adaptive in assigning weights to the different depth (or focus) images rendered by the Lytro light-field camera. The proposed scheme was evaluated on a newly collected multiple-face dataset of 25 subjects in three different scenarios, comprising 303 probe samples with 986 face samples captured at distances between 0.5 m and 20 m. Experimental results reveal that Proposed scheme 2 achieves the best performance, with an identification rate of 75.12%.

V. ACKNOWLEDGMENT

This work is funded by the EU 7th Framework Programme (FP7) under the grant agreement for the large-scale integrated project FIDELITY.

REFERENCES

[1] Homepage of Lytro.
[2] D. J. Field. Relations between the statistics of natural images and the response properties of cortical cells. Journal of the Optical Society of America, 4(12).
[3] G. Bebis, A. Gyaourova, S. Singh, and I. Pavlidis. Face recognition by fusing thermal and visible imagery. Image and Vision Computing, 24.
[4] R. Horstmeyer, G. Euliss, R. Athale, and M. Levoy. Flexible multimodal camera using a light field architecture. In International Conference on Computational Photography (ICCP), pages 1-8.
[5] R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan. Light field photography with a hand-held plenoptic camera. Stanford University Computer Science Technical Report CSTR, pages 1-11.
[6] T. Ojala, M. Pietikainen, and T. Maenpaa. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7).
[7] G. Pajares and J. M. de la Cruz. A wavelet-based image fusion tutorial. Pattern Recognition, 37(9).
[8] G. Pajares and J. M. de la Cruz. A wavelet-based image fusion tutorial. Pattern Recognition, 37(9).
[9] R. Raghavendra. PSO based framework for weighted feature level fusion of face and palmprint. In Eighth International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP).
[10] R. Raghavendra, B. Dorizzi, A. Rao, and G. H. Kumar. Designing efficient fusion schemes for multimodal biometric systems using face and palmprint. Pattern Recognition, 44(5).
[11] R. Raghavendra, B. Dorizzi, A. Rao, and G. H. Kumar. Particle swarm optimization based fusion of near infrared and visible images for improved face verification. Pattern Recognition, 44(2).
[12] R. Raghavendra, A. Rao, and G. H. Kumar. Qualitative weight assignment for multimodal biometric fusion. In 7th International Conference on Advances in Pattern Recognition (ICAPR 2009), Kolkata, India, 2009.

[13] R. Raghavendra, A. Rao, and G. H. Kumar. Multisensor biometric evidence fusion of face and palmprint for person authentication using particle swarm optimisation (PSO). International Journal of Biometrics, 2(1):19-33.
[14] A. Ross, K. Nandakumar, and A. K. Jain. Handbook of Multibiometrics. Springer-Verlag.
[15] Y. N. Singh and P. Gupta. Qualitative evaluation of normalization techniques of matching scores in multimodal biometric systems. In Proceedings of the International Conference on Biometrics (ICB).
[16] S. Prabhakar and A. Jain. Decision level fusion in fingerprint verification. Pattern Recognition, 35(4).
[17] S. Singh, A. Gyaourova, G. Bebis, and I. Pavlidis. Infrared and visible image fusion for face recognition. In Proceedings of the SPIE Defense and Security Symposium.
[18] A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin. Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing. ACM Transactions on Graphics, 26(3):1-12.
[19] P. Viola and M. Jones. Robust real-time face detection. International Journal of Computer Vision, 57.
[20] Y. Wang, T. Tan, and A. K. Jain. Combining face and iris biometrics for identity verification. In Proceedings of the 4th International Conference on Audio- and Video-based Biometric Person Authentication (AVBPA), Guildford, UK.
[21] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma. Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2).
[22] Y. Hao, Z. Sun, and T. Tan. Comparative studies on multispectral palm image fusion for biometrics. In Asian Conference on Computer Vision (ACCV), page 1221, 2007.


More information

AN OPTIMIZED APPROACH FOR FAKE CURRENCY DETECTION USING DISCRETE WAVELET TRANSFORM

AN OPTIMIZED APPROACH FOR FAKE CURRENCY DETECTION USING DISCRETE WAVELET TRANSFORM AN OPTIMIZED APPROACH FOR FAKE CURRENCY DETECTION USING DISCRETE WAVELET TRANSFORM T.Manikyala Rao 1, Dr. Ch. Srinivasa Rao 2 Research Scholar, Department of Electronics and Communication Engineering,

More information

Empirical Evidence for Correct Iris Match Score Degradation with Increased Time-Lapse between Gallery and Probe Matches

Empirical Evidence for Correct Iris Match Score Degradation with Increased Time-Lapse between Gallery and Probe Matches Empirical Evidence for Correct Iris Match Score Degradation with Increased Time-Lapse between Gallery and Probe Matches Sarah E. Baker, Kevin W. Bowyer, and Patrick J. Flynn University of Notre Dame {sbaker3,kwb,flynn}@cse.nd.edu

More information

Recognition Of Vehicle Number Plate Using MATLAB

Recognition Of Vehicle Number Plate Using MATLAB Recognition Of Vehicle Number Plate Using MATLAB Mr. Ami Kumar Parida 1, SH Mayuri 2,Pallabi Nayk 3,Nidhi Bharti 4 1Asst. Professor, Gandhi Institute Of Engineering and Technology, Gunupur 234Under Graduate,

More information

Project 4 Results http://www.cs.brown.edu/courses/cs129/results/proj4/jcmace/ http://www.cs.brown.edu/courses/cs129/results/proj4/damoreno/ http://www.cs.brown.edu/courses/csci1290/results/proj4/huag/

More information

Simultaneous Capturing of RGB and Additional Band Images Using Hybrid Color Filter Array

Simultaneous Capturing of RGB and Additional Band Images Using Hybrid Color Filter Array Simultaneous Capturing of RGB and Additional Band Images Using Hybrid Color Filter Array Daisuke Kiku, Yusuke Monno, Masayuki Tanaka, and Masatoshi Okutomi Tokyo Institute of Technology ABSTRACT Extra

More information

A TWO-PART PREDICTIVE CODER FOR MULTITASK SIGNAL COMPRESSION. Scott Deeann Chen and Pierre Moulin

A TWO-PART PREDICTIVE CODER FOR MULTITASK SIGNAL COMPRESSION. Scott Deeann Chen and Pierre Moulin A TWO-PART PREDICTIVE CODER FOR MULTITASK SIGNAL COMPRESSION Scott Deeann Chen and Pierre Moulin University of Illinois at Urbana-Champaign Department of Electrical and Computer Engineering 5 North Mathews

More information

Coded Aperture and Coded Exposure Photography

Coded Aperture and Coded Exposure Photography Coded Aperture and Coded Exposure Photography Martin Wilson University of Cape Town Cape Town, South Africa Email: Martin.Wilson@uct.ac.za Fred Nicolls University of Cape Town Cape Town, South Africa Email:

More information

8.2 IMAGE PROCESSING VERSUS IMAGE ANALYSIS Image processing: The collection of routines and

8.2 IMAGE PROCESSING VERSUS IMAGE ANALYSIS Image processing: The collection of routines and 8.1 INTRODUCTION In this chapter, we will study and discuss some fundamental techniques for image processing and image analysis, with a few examples of routines developed for certain purposes. 8.2 IMAGE

More information

A Novel Method for Enhancing Satellite & Land Survey Images Using Color Filter Array Interpolation Technique (CFA)

A Novel Method for Enhancing Satellite & Land Survey Images Using Color Filter Array Interpolation Technique (CFA) A Novel Method for Enhancing Satellite & Land Survey Images Using Color Filter Array Interpolation Technique (CFA) Suma Chappidi 1, Sandeep Kumar Mekapothula 2 1 PG Scholar, Department of ECE, RISE Krishna

More information

Coded photography , , Computational Photography Fall 2017, Lecture 18

Coded photography , , Computational Photography Fall 2017, Lecture 18 Coded photography http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 18 Course announcements Homework 5 delayed for Tuesday. - You will need cameras

More information

Feature Level Two Dimensional Arrays Based Fusion in the Personal Authentication system using Physiological Biometric traits

Feature Level Two Dimensional Arrays Based Fusion in the Personal Authentication system using Physiological Biometric traits 1 Biological and Applied Sciences Vol.59: e16161074, January-December 2016 http://dx.doi.org/10.1590/1678-4324-2016161074 ISSN 1678-4324 Online Edition BRAZILIAN ARCHIVES OF BIOLOGY AND TECHNOLOGY A N

More information

Wavelet-based Image Splicing Forgery Detection

Wavelet-based Image Splicing Forgery Detection Wavelet-based Image Splicing Forgery Detection 1 Tulsi Thakur M.Tech (CSE) Student, Department of Computer Technology, basiltulsi@gmail.com 2 Dr. Kavita Singh Head & Associate Professor, Department of

More information

A Novel Approach for MRI Image De-noising and Resolution Enhancement

A Novel Approach for MRI Image De-noising and Resolution Enhancement A Novel Approach for MRI Image De-noising and Resolution Enhancement 1 Pravin P. Shetti, 2 Prof. A. P. Patil 1 PG Student, 2 Assistant Professor Department of Electronics Engineering, Dr. J. J. Magdum

More information

Implementation of Adaptive Coded Aperture Imaging using a Digital Micro-Mirror Device for Defocus Deblurring

Implementation of Adaptive Coded Aperture Imaging using a Digital Micro-Mirror Device for Defocus Deblurring Implementation of Adaptive Coded Aperture Imaging using a Digital Micro-Mirror Device for Defocus Deblurring Ashill Chiranjan and Bernardt Duvenhage Defence, Peace, Safety and Security Council for Scientific

More information

INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY

INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY A PATH FOR HORIZING YOUR INNOVATIVE WORK NC-FACE DATABASE FOR FACE AND FACIAL EXPRESSION RECOGNITION DINESH N. SATANGE Department

More information

Classification in Image processing: A Survey

Classification in Image processing: A Survey Classification in Image processing: A Survey Rashmi R V, Sheela Sridhar Department of computer science and Engineering, B.N.M.I.T, Bangalore-560070 Department of computer science and Engineering, B.N.M.I.T,

More information

Outdoor Face Recognition Using Enhanced Near Infrared Imaging

Outdoor Face Recognition Using Enhanced Near Infrared Imaging Outdoor Face Recognition Using Enhanced Near Infrared Imaging Dong Yi, Rong Liu, RuFeng Chu, Rui Wang, Dong Liu, and Stan Z. Li Center for Biometrics and Security Research & National Laboratory of Pattern

More information

Visibility of Uncorrelated Image Noise

Visibility of Uncorrelated Image Noise Visibility of Uncorrelated Image Noise Jiajing Xu a, Reno Bowen b, Jing Wang c, and Joyce Farrell a a Dept. of Electrical Engineering, Stanford University, Stanford, CA. 94305 U.S.A. b Dept. of Psychology,

More information