3D Face Recognition in Biometrics
CHAO LI, ARMANDO BARRETO
Electrical & Computer Engineering Department
Florida International University
West Flagler ST. EAS
USA
{cli007,

Abstract: Biometrics is the area of bioengineering that pursues the characterization of individuals in a population (e.g., a particular person) by means of something that the individual is or produces. Among the different modalities in biometrics, face recognition has been a research focus for the last couple of decades because of its wide range of potential applications and its importance in meeting the security needs of today's world. To date, most of the systems developed are based on 2D face recognition technology, which uses pictures for data processing. With the development of 3D imaging technology, 3D face recognition emerges as an alternative to overcome the difficulties inherent to 2D face recognition, i.e., sensitivity to illumination conditions and to the position of the subject. However, 3D face recognition still needs to tackle the deformation of facial geometry that results from the expression changes of a subject. To deal with this issue, a 3D face recognition framework is proposed in this paper. It is composed of three subsystems: an expression recognition system, an expressional face recognition system and a neutral face recognition system. A system for the recognition of faces with one type of expression (smile) and neutral faces was implemented and tested on a database of 30 subjects. The results proved the feasibility of this framework.

Key-Words: face recognition, biometrics, 2D, 3D, range image, PCA, subspace, SVM, LDA

1 Introduction

Biometrics is a specific area of bioengineering. It pursues the recognition of a person through something the person has, i.e., biological characteristics, or something the person produces, i.e., behavioral characteristics. Examples of the former include fingerprint, iris, retina, palm print, face, DNA, etc. Examples of the latter include voice, handwriting, gait, signature, etc. Biometrics is used for identification or authentication in border control, e-commerce, ATM access control, crowd surveillance, etc. In recent years, biometrics has gained more and more attention for its potential application in anti-terrorism.

Among the different modalities used in biometrics, the face is considered to be the most transparent one: it requires minimum cooperation from the subject. In some application scenarios, such as crowd surveillance, face recognition is probably the only feasible modality to use. Face recognition is also the natural way used by human beings in daily life. Therefore, face recognition has attracted many researchers from different disciplines, such as image processing, pattern recognition, computer vision, and neural networks.

Face recognition scenarios can be classified into the following two:

Face verification ("Am I who I say I am?") is a one-to-one match that compares a query face image against a gallery face image whose identity is being claimed. (The gallery face images are the images that have been stored in the database.)

Face identification ("Who am I?") is a one-to-many matching process that compares a query face image against all the gallery images in a face database to determine the identity of the query face. In the identification task, we assume that through some other method we know that the person is in the database. The identification of the query image is done by locating the image in the database that has the highest similarity with the query image. In this paper, the face identification problem is addressed.
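To make the identification protocol concrete, the following sketch treats identification as a one-to-many nearest-match search over the gallery. The feature vectors, the Euclidean distance and the dictionary layout are illustrative assumptions, not the representation used later in this paper.

```python
import numpy as np

def identify(probe_features: np.ndarray, gallery: dict) -> str:
    """Return the identity of the gallery template most similar to the probe."""
    best_id, best_dist = None, np.inf
    for subject_id, template in gallery.items():
        dist = np.linalg.norm(probe_features - template)  # smaller distance = higher similarity
        if dist < best_dist:
            best_id, best_dist = subject_id, dist
    return best_id

# Toy usage: random vectors stand in for real face features
rng = np.random.default_rng(0)
gallery = {f"subject_{i:02d}": rng.normal(size=32) for i in range(30)}
probe = gallery["subject_07"] + 0.01 * rng.normal(size=32)  # noisy copy of an enrolled face
print(identify(probe, gallery))  # -> subject_07
```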
Most of the face recognition attempts made until recently use 2D intensity images as the data format for processing. Varying levels of success have been achieved in 2D face recognition research; detailed and comprehensive surveys can be found in [1, 2]. Although 2D face recognition has achieved considerable success, certain problems still exist, because the 2D face images depend not only on the face of the subject but also on imaging factors, such as the environmental illumination and the orientation of the subject. These two sources of variability in the face image often make a 2D face recognition system fail.
That is why 3D face recognition is believed to have an advantage over 2D face recognition. With the development of 3D imaging technology, more and more attention has been directed to 3D face recognition. In [3], Bowyer et al. provide a survey of 3D face recognition technology. Most 3D face recognition systems treat the 3D face surface as a rigid surface. In reality, the face surface is deformed by the different expressions of the subject, so systems that treat the face as a rigid surface are prone to fail when dealing with faces displaying expressions. Handling facial expressions has become an important challenge for 3D face recognition systems.

In this paper, we propose an approach to tackle the expression challenge in 3D face recognition. Because the deformation of the face surface is always associated with a specific expression, an integrated expression recognition and face recognition system is proposed. In Section 2, a model of the relationship between expression recognition and face recognition is introduced; based on this model, the framework of integrated expression recognition and face recognition is proposed. Section 3 explains the acquisition of the experimental data used and the preprocessing performed. Section 4 outlines our approach to 3D facial expression recognition. Section 5 explains the process used for 3D face recognition. Section 6 describes the experiments performed and the results obtained. Section 7 presents our discussion and conclusions.

2 Expression Recognition and Face Recognition

From the psychological point of view, it is still not known whether facial expression recognition information aids the recognition of faces by human beings. One of the experiments that supports the existence of a connection between facial expression recognition and face recognition was reported in [4]: the authors found that people are slower in identifying happy and angry faces than they are in identifying faces with a neutral expression.

The proposed framework involves an initial assessment of the expression of an unknown face, and uses that assessment to facilitate its recognition. The incoming 3D range image is processed by an expression recognition system to find the most appropriate expression label for it. The expression labels include the six prototypical facial expressions, which are happiness, sadness, anger, fear, surprise and disgust [5], plus the neutral expression. Therefore, the expression recognition system will assign one of these seven expressions to the incoming face. According to the expression found, a matching face recognition system is then applied. If the expression is recognized as neutral, the incoming 3D range image is passed directly to the neutral expression face recognition system, which matches the features of the probe image against those of the gallery images, which are all neutral, to get the closest match. If the expression found is not neutral, a separate face recognition subsystem is used for each of the six prototypical expressions. In these cases, the system finds the matching face by modeling the variation of the face features between the neutral face and the face with expression. Since recognition through modeling is a more complex process than the direct matching used for neutral faces, this framework is consistent with the experimental findings in [4].
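As a minimal sketch of the proposed routing, the snippet below assumes that an expression classifier and the face recognition modules exist as objects with `predict` and `match` methods; these names are placeholders, not the paper's actual implementation.

```python
def recognize_face(probe_range_image, expression_classifier,
                   neutral_recognizer, expression_recognizers):
    """Route the probe to the face recognition module that matches its expression."""
    expression = expression_classifier.predict(probe_range_image)  # e.g. 'neutral', 'happiness', ...
    if expression == "neutral":
        # Direct matching of probe features against the all-neutral gallery
        return neutral_recognizer.match(probe_range_image)
    # Otherwise, match through a model of the neutral-to-expression feature variation
    return expression_recognizers[expression].match(probe_range_image)
```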
Figure 1 shows a simplified version of this framework. The simplified diagram only deals with the smiling expression, which is the one most commonly displayed by people in public.

Figure 1: Simplified framework of 3D face recognition (incoming faces are labeled by the facial expression recognition module and routed either to the neutral face recognition module or to the smiling face recognition module, which uses a smiling-face feature variation model trained from the 3D face database).

3 Data Acquisition and Preprocessing

To test the idea proposed in this model, a database of 30 subjects was built. With this database, we test the different processing of the two most common expressions, i.e., smiling versus neutral. Each subject participated in two sessions of the data acquisition process, which took place on two different days. In each session, two 3D scans were acquired: one with a neutral expression, the other with a happy (smiling) expression. The 3D scanner used was a Fastscan 3D scanner from Polhemus Inc. [6]. The resulting database contains 60 3D neutral scans and 60 3D smiling scans of the 30 subjects. The left image in Figure 2 shows an example of the 3D scans obtained with this scanner; the right image is the 2.5D range image used in the algorithm, which was obtained by preprocessing as described in [8].

Figure 2: 3D surface (left) and mesh plot of the converted range image (right)
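The preprocessing of [8] is not reproduced here; purely as an illustration of the idea of a 2.5D range image, the sketch below grids the X-Y plane of a 3D point cloud and keeps the frontmost Z value per cell. The grid size and the handling of empty cells are arbitrary choices.

```python
import numpy as np

def point_cloud_to_range_image(points: np.ndarray, grid_size: int = 100) -> np.ndarray:
    """points: (N, 3) array of X, Y, Z samples; returns a grid_size x grid_size range image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Map X and Y onto integer grid indices
    xi = np.clip(((x - x.min()) / (x.max() - x.min() + 1e-9) * (grid_size - 1)).astype(int), 0, grid_size - 1)
    yi = np.clip(((y - y.min()) / (y.max() - y.min() + 1e-9) * (grid_size - 1)).astype(int), 0, grid_size - 1)
    range_image = np.full((grid_size, grid_size), np.nan)
    for row, col, depth in zip(yi, xi, z):
        # Keep the largest Z per cell, i.e. the surface point closest to the scanner
        if np.isnan(range_image[row, col]) or depth > range_image[row, col]:
            range_image[row, col] = depth
    return range_image
```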
4 Expression Recognition

Facial expressions constitute a basic mode of nonverbal communication among people. In [5], Ekman and Friesen proposed six primary emotions, each of which possesses a distinctive content together with a unique facial expression, and which seem to be universal across human ethnicities and cultures. These six emotions are happiness, sadness, fear, disgust, surprise and anger. Together with the neutral expression, they form the seven basic prototypical facial expressions.

Automatic facial expression recognition has gained more and more attention recently. It has various potential applications in more intelligent human-computer interfaces, image compression and synthetic face animation. As in face recognition, most contemporary facial expression recognition systems use two-dimensional images or videos as their data format. Logically, the same 2D shortcomings will hamper 2D expression recognition (i.e., 2D formats are dependent on the pose of the subject and on the illumination of the environment).

In our experiment, we aim to recognize social smiles, which were posed by each subject. Smiling is generated by contraction of the zygomatic major muscle. This muscle lifts the corner of the mouth obliquely upwards and laterally, producing a characteristic smiling expression. The most distinctive features associated with the smile are therefore the bulging of the cheek muscle and the uplift of the corners of the mouth, as shown in Figure 3.

Figure 3: Illustration of features of a smiling face

The following steps are followed to extract six representative features for the smiling expression:

1. An algorithm is developed to obtain the coordinates of five characteristic points A, B, C, D and E in the face range image, as shown in Figure 3. A and D are the extreme points of the base of the nose. B and E are the points defined by the corners of the mouth. C is in the middle of the lower lip.

2. The first feature is the width of the mouth, BE, normalized by the length of AD. Obviously, while smiling the mouth becomes wider. This feature is represented by mw.

3. The second feature is the depth of the mouth (the difference between the Z coordinates of points B and C, and of points E and C), normalized by the height of the nose, to capture the fact that the smiling expression pulls back the mouth. This feature is represented by md.

4. The third feature is the uplift of the corners of the mouth relative to the middle of the lower lip (d1 and d2 in the figure), normalized by the difference of the Y coordinates of points A and B, and of points D and E, respectively. It is represented by lc.

5. The fourth feature is the angle of AB and DE with the central vertical profile, represented by ag (see the sketch following this list).

6. The last two features are extracted from the semicircular areas shown in the figure, which are defined by using AB and DE as diameters. The histograms of the range (Z coordinates) of all the points within these two semicircles are calculated.
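The sketch below illustrates one possible reading of steps 2-5, assuming the five landmarks A-E are already available as (X, Y, Z) coordinates and that the nose height has been measured separately; the exact formulas used in the paper may differ in detail.

```python
import numpy as np

def smile_geometry_features(A, B, C, D, E, nose_height):
    """A, D: extremes of the nose base; B, E: mouth corners; C: middle of the lower lip."""
    A, B, C, D, E = (np.asarray(p, dtype=float) for p in (A, B, C, D, E))
    mw = np.linalg.norm(B - E) / np.linalg.norm(A - D)             # mouth width over nose width
    md = ((C[2] - B[2]) + (C[2] - E[2])) / (2.0 * nose_height)     # mouth pulled back in depth
    # Uplift of each mouth corner relative to the lower lip, normalized per side
    lc = 0.5 * ((B[1] - C[1]) / (A[1] - B[1]) + (E[1] - C[1]) / (D[1] - E[1]))
    # Angle of the A-B and D-E segments with the vertical (central profile) direction
    def angle_with_vertical(p, q):
        dx, dy = q[0] - p[0], q[1] - p[1]
        return np.degrees(np.arctan2(abs(dx), abs(dy)))
    ag = 0.5 * (angle_with_vertical(A, B) + angle_with_vertical(D, E))
    return mw, md, lc, ag
```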
Figure 4: Histograms of the range of the cheeks (left and right) for a neutral face (top row) and a smiling face (bottom row)

Figure 4 shows the histograms for the smiling and the neutral faces of the subject in Figure 3. The two figures in the first row are the histograms of the range values for the left cheek and the right cheek of the neutral face image; the two figures in the second row are the histograms of the range values for the left cheek and the right cheek of the smiling face image. From these figures, we can see that the range histograms of the neutral and smiling expressions are different. The smiling face tends to have large values at the high end of the histogram because of the bulge of the cheek muscle; a neutral face, on the other hand, has large values at the low end of the histogram. Therefore, two features can be obtained from the histogram: one is called the histogram ratio, represented by hr, and the other the histogram maximum, represented by hm:

hr = (h_6 + h_7 + h_8 + h_9 + h_10) / (h_1 + h_2 + h_3 + h_4 + h_5)    (1)

hm = argmax_i h(i)    (2)

In summary, six features, i.e., mw, md, lc, ag, hr and hm, are extracted from each face for the purpose of expression recognition.
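Equations (1) and (2) can be sketched as follows, assuming the range values of one cheek region have been collected into an array and binned into a 10-bin histogram h(1)...h(10); the bin count follows the equations above.

```python
import numpy as np

def cheek_histogram_features(cheek_range_values: np.ndarray):
    """Return (hr, hm) for one semicircular cheek region."""
    h, _ = np.histogram(cheek_range_values, bins=10)
    hr = h[5:].sum() / (h[:5].sum() + 1e-9)   # Eq. (1): high-end mass over low-end mass
    hm = int(np.argmax(h)) + 1                # Eq. (2): 1-based index of the tallest bin
    return hr, hm
```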
After the features have been extracted, expression recognition becomes a general classification problem. Two pattern classification methods are applied to recognize the expression of the incoming faces. The first is a linear discriminant analysis (LDA) classifier, which seeks the best set of features to separate the classes. The other is a support vector machine (SVM); for our work, Libsvm [7] was used to implement a suitable support vector machine.

5 3D Face Recognition

5.1 Neutral Face Recognition

In our earlier research work, we found that the central vertical profile and the contour are both discriminant features for every person [8]. Therefore, for neutral face recognition, the same method as in [9] is used: the results of central vertical profile matching and contour matching are combined, and the combination of the two classifiers improves the overall performance significantly. The final similarity score for the probe image is the product of its ranks under each of the two classifiers (based on the central vertical profile and the contour). The gallery image with the smallest score is chosen as the matching face for the probe image.

5.2 Smiling Face Recognition

For the recognition of smiling faces we have adopted the probabilistic subspace method proposed by Moghaddam et al. [10, 11]. It is an unsupervised technique for visual learning, based on density estimation in high-dimensional spaces created through eigendecomposition. Using the probabilistic subspace method, a multi-class classification problem can be converted into a binary classification problem. In the experiment for smiling face recognition, because of the limited number of subjects (30), the central vertical profile and the contour are not used directly as vectors in a high-dimensional subspace; instead, they are down-sampled to a dimension of 17. The dimension of the difference-in-feature-space is set to 10, which contains approximately 97% of the total variance, and the dimension of the difference-from-feature-space is 7. The results of central vertical profile matching and contour matching are again combined, and here also the combination of the two classifiers improves the performance. The final similarity score for the probe image is the product of its ranks under each of the two classifiers, and the gallery image with the smallest score is chosen as the matching face for the probe image.
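Both recognizers combine the central vertical profile and contour classifiers by multiplying their ranks. A minimal sketch of that fusion rule is shown below; the assumption is that each classifier produces one distance per gallery subject, with smaller values meaning greater similarity.

```python
import numpy as np

def fuse_by_rank_product(profile_distances: np.ndarray, contour_distances: np.ndarray) -> int:
    """Return the index of the gallery subject with the smallest product of ranks."""
    # argsort of argsort turns distances into ranks (0 = best match)
    profile_ranks = np.argsort(np.argsort(profile_distances))
    contour_ranks = np.argsort(np.argsort(contour_distances))
    combined = (profile_ranks + 1) * (contour_ranks + 1)  # product of 1-based ranks
    return int(np.argmin(combined))
```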
6 Experiments and Results

In order to evaluate the performance of the suggested framework, one gallery database and three probe sets were created. The gallery database has 30 neutral faces, one for each subject, recorded in the first data acquisition session. The three probe sets are formed as follows:

Probe set 1: 30 neutral faces acquired in the second session.
Probe set 2: 30 smiling faces acquired in the second session.
Probe set 3: 60 faces (probe set 1 plus probe set 2).

Experiment 1: Testing the expression recognition module

The leave-one-out cross validation method is used to test the expression recognition classifier. Each time, the faces collected from 29 subjects in both data acquisition sessions are used to train the classifier, and the four faces of the remaining subject, collected in both sessions, are used to test it. Two classifiers are used: one is the linear discriminant classifier, the other a support vector machine classifier. The results are shown in Table 1.

Table 1: Expression recognition results

Method                         LDA      SVM
Expression recognition rate    90.8%    92.5%
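The leave-one-subject-out protocol of Experiment 1 can be sketched with scikit-learn stand-ins for the LDA and SVM classifiers (the paper itself used Libsvm [7]); the feature matrix below is random placeholder data in place of the six extracted features.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects = 30
X = rng.normal(size=(n_subjects * 4, 6))       # 4 scans per subject, 6 features (mw, md, lc, ag, hr, hm)
y = np.tile([0, 0, 1, 1], n_subjects)          # 0 = neutral, 1 = smiling (two of each per subject)
groups = np.repeat(np.arange(n_subjects), 4)   # leave all four scans of one subject out at a time

for name, clf in [("LDA", LinearDiscriminantAnalysis()), ("SVM", SVC(kernel="linear"))]:
    scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
    print(name, round(scores.mean(), 3))
```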
Experiment 2: Testing the neutral and smiling recognition modules separately

In the first two sub-experiments, probe faces are fed directly to the neutral face recognition module. In the third sub-experiment, leave-one-out cross validation is used to verify the performance of the smiling face recognition module alone.

2.1 Neutral face recognition: probe set 1. (Neutral face recognition module used.)
2.2 Neutral face recognition: probe set 2. (Neutral face recognition module used.)
2.3 Smiling face recognition: probe set 2. (Smiling face recognition module used.)

Figure 5: Results of Experiment 2 (three sub-experiments)

From Figure 5, it can be seen that when the incoming faces are all neutral, the algorithm that treats all faces as neutral achieves a very high rank-one recognition rate (97%). On the other hand, if the incoming faces are smiling, the neutral face recognition algorithm does not perform well: only a 57% rank-one recognition rate is obtained. In contrast, when the smiling face recognition algorithm is used to deal with smiling faces, the recognition rate can be as high as 80%.

Experiment 3: Testing a practical scenario

These sub-experiments emulate a realistic situation in which a mixture of neutral and smiling faces (probe set 3) must be recognized. Sub-experiment 3.1 investigates the performance obtained if the expression recognition front end is bypassed and the recognition of all the probe faces is attempted with the neutral face recognition module alone. The last two sub-experiments implement the full framework shown in Figure 1: in 3.2 the expression recognition is performed with the linear discriminant classifier, while in 3.3 it is implemented through the support vector machine approach.

3.1 Neutral face recognition module used alone: probe set 3 is used.
3.2 Integrated expression and face recognition: probe set 3 is used. (Linear discriminant classifier for expression recognition.)
3.3 Integrated expression and face recognition: probe set 3 is used. (Support vector machine for expression recognition.)

Figure 6: Results of Experiment 3 (three sub-experiments)

It can be seen in Figure 6 that if the incoming faces include both neutral and smiling faces, the recognition rate can be improved by about 10 percentage points by using the integrated framework proposed here.

7 Discussion and Conclusion

Experiment 1 was aimed at determining the level of performance of the Facial Expression Recognition Module by itself. Using the leave-one-out cross validation approach, 30 different tests were carried out (each using 29 x 2 neutral faces and 29 x 2 smiling faces for training). The average success rate in identifying the expressions of the faces belonging to the subject not used for training was 90.8% with LDA and 92.5% with SVM. This confirms the capability of this module to successfully sort these two types of faces (neutral vs. smiling). Both algorithms were applied to the six facial features obtained from the range images (mw, md, lc, ag, hr and hm); using these features, the actual choice of algorithm used to separate neutral from smiling faces did not seem to be critical.

Experiment 2 was carried out to test one of the basic assumptions behind the proposed framework (Figure 1): a system meant to recognize neutral faces may be successful with faces that are indeed neutral, but may have much less success when dealing with faces displaying an expression, e.g., smiling faces.
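All of the recognition results above are reported as rank-one recognition rates. For reference, a small sketch of that metric (with an assumed probe-by-gallery distance matrix) is:

```python
import numpy as np

def rank_one_rate(distances: np.ndarray, probe_ids, gallery_ids) -> float:
    """distances[i, j]: distance between probe i and gallery subject j (smaller = more similar)."""
    best = np.argmin(distances, axis=1)  # closest gallery entry for each probe
    hits = [gallery_ids[j] == probe_ids[i] for i, j in enumerate(best)]
    return float(np.mean(hits))
```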
This differentiation was confirmed by the high rank-one recognition (97%) achieved by the Neutral Face Recognition Module for neutral faces (probe set 1) in sub-experiment 1, which was in strong contrast with the much lower rank-one recognition rate (57%) achieved by this same module for smiling faces (probe set 2), in sub-experiment 2. On the other hand, in the third sub-experiment we confirmed that a module that has been specifically developed for the identification of individuals from smiling probe images (probe set 2) is clearly more successful in this task (80% rank-one recognition). Finally, Experiment 3 was meant to simulate a more practical scenario, in which the generation of probe images does not control the expression of the
subject. Therefore, for all three sub-experiments in Experiment 3 we used the comprehensive probe set 3, including one neutral range image and one smiling range image from each of the subjects. In the first sub-experiment we observe the kind of results that could be expected when these 60 probe images are processed by a standard Neutral Face Recognition Module alone, which is similar to several of the contemporary approaches used for 3D face recognition. Unfortunately, with a mix of neutral and smiling faces this simple system only achieves a 77% rank-one face recognition rate (much lower than the 97% obtained for probe set 1, made up of just neutral faces, in Experiment 2). This result highlights the need to account for the possibility of a non-neutral expression in 3D face recognition systems. On the other hand, in sub-experiments two and three we run the same mixed set of images (probe set 3) through the complete process depicted in our proposed framework (Figure 1). That is, every incoming image is first sorted by the Facial Expression Recognition Module and accordingly routed to either the Neutral Face Recognition Module or the Smiling Face Recognition Module, where the identity of the subject is estimated. The right-most four columns in Figure 6 show that, whether using the linear discriminant analyzer or the support vector machine for the initial expression sorting, the rank-one face recognition levels achieved by the overall system are higher (87% and 85%, respectively).

In reviewing the results of these experiments, it should be noted that all the experiments involving smiling faces are done using the leave-one-out cross validation method because of the size of the database; therefore, the results displayed are averages, not best cases. For simplicity of implementation, the training samples for the expression recognition system and the smiling face recognition system are the same faces. In a real application, we would select the training samples separately to build the best classifier for expression recognition and the best classifier for the identification of faces with a given type of expression; considerable performance improvement might be achieved in this way.

The work reported in this paper represents an attempt to acknowledge and account for the presence of expression in 3D face images, towards their improved identification. The method introduced here is computationally efficient. Furthermore, it also yields, as a secondary result, the expression found in the faces. Based on these findings, we believe that acknowledging the impact of expression on 3D face recognition and developing systems that account for it, such as the framework introduced here, will be key to future enhancements in the field of 3D automatic face recognition.

8 Acknowledgement

This work was sponsored by NSF grants IIS , CNS , HRD and CNS . Mr. Chao Li is the recipient of an FIU Dissertation Year Research Fellowship.

References:

[1] R. Chellappa, C. Wilson, and S. Sirohey, "Human and Machine Recognition of Faces: A Survey," Proceedings of the IEEE, 83(5), 1995.

[2] W. Zhao, R. Chellappa, and A. Rosenfeld, "Face Recognition: A Literature Survey," ACM Computing Surveys, 2003.

[3] K. Bowyer, K. Chang, and P. Flynn, "A Survey of Approaches to 3D and Multi-Modal 3D+2D Face Recognition," IEEE International Conference on Pattern Recognition.

[4] N. Etcoff and J. Magee, "Categorical Perception of Facial Expressions," Cognition, 44, 1992.

[5] P. Ekman and W.
Friesen, "Constants across Cultures in the Face and Emotion," Journal of Personality and Social Psychology, 17(2), 1971.

[6] Polhemus Inc., Fastscan 3D scanner.

[7] C. Chang and C. Lin, "LIBSVM: A Library for Support Vector Machines."

[8] C. Li and A. Barreto, "Profile-Based 3D Face Registration and Recognition," Lecture Notes in Computer Science.

[9] C. Li, A. Barreto, J. Zhai and C. Chin, "Exploring Face Recognition Using 3D Profiles and Contours," IEEE SoutheastCon, Fort Lauderdale.

[10] B. Moghaddam and A. Pentland, "Probabilistic Visual Learning for Object Detection," International Conference on Computer Vision (ICCV '95), 1995.

[11] B. Moghaddam and A. Pentland, "Probabilistic Visual Learning for Object Representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 1997.