Driver Fatigue Detection System Based on DM3730

The Open Automation and Control Systems Journal, 2015, 7, 1191-1196

Ming Cai 1,2,*, Ye Gu 3, Haixin Sun 3, Jie Qi 3 and Boliang Wang 1

1 School of Information Science and Engineering, Xiamen University, China
2 School of Information Science and Technology, Xiamen University Tan Kah Kee College, China
3 Key Laboratory of Underwater Acoustic Communication and Marine Information Technology (Xiamen University), Ministry of Education, Xiamen, China

Abstract: Driver fatigue is a widespread problem that has attracted considerable attention, and many research groups study driver fatigue detection in order to improve traffic safety. This paper presents a driver fatigue detection system based on the DM3730. The system computes the difference between frames captured under near-infrared illumination, identifies the eyes with the Otsu adaptive threshold segmentation method, and predicts the position of the eyes in neighboring images with a Kalman filter. It then determines the state of fatigue with an improved PERCLOS (Percentage of Eyelid Closure over the Pupil) method. Experimental results show that the system is small and has low power consumption, while meeting the requirements of all-weather, real-time monitoring. The system can be extended to automobiles and to other production processes in which fatigue monitoring is required.

Keywords: Driver fatigue detection, Kalman filter, PERCLOS.

1. INTRODUCTION

The methods of detecting driver fatigue include physiological signal detection [1, 2], vehicle behavioral characteristics detection [3] and computer vision based detection [4-6]. Among these driving fatigue evaluation methods, PERCLOS (Percentage of Eyelid Closure Over the Pupil Over Time), based on computer vision, is recognized as the most effective, automotive and real-time one [7]; it was first proposed at a technical forum of the Federal Highway Administration in April 1999. The PERCLOS method takes the percentage of time during which the eyes are closed as the key indicator for predicting driver fatigue.

Although there are a variety of methods for real-time driver fatigue detection, most of them are limited to theoretical research; some devices have severe limitations, while others still have many problems to be solved. Current research on driver fatigue detection and monitoring focuses on improving real-time performance, robustness, accuracy, cost effectiveness and multi-feature fusion. This paper describes an all-weather driver fatigue detection system that determines the fatigue level of the driver by extracting pupil characteristics. The system, built on TI's DaVinci DM3730 platform, is vision-based, real-time and non-contacting.

*Address correspondence to this author at the School of Information Science and Engineering, Xiamen University, China; Tel: 008615959250539; E-mail: hxsun@xmu.edu.cn

2. SYSTEM OVERVIEW

2.1. Hardware Structure

The system, built on the DM3730 hardware platform, collects the light reflected from the driver's face with an 850 nm infrared camera based on an OV7725 sensor. It then analyzes the eye condition from the difference between inter-frame images, and determines the extent of the driver's fatigue from the red-eye effect produced in the eyes by the infrared light source. The system hardware structure is shown in Fig. (1).

Fig. (1). Hardware structure (infrared camera, video decoder, and DM3730 with SDRAM and FLASH driving the alarm, connected via the video signal, data bus, synchronous signal and I2C lines).
2.2. Software Structure

The software of the system is composed of image processing algorithms: an image difference method that exploits the different brightness of the pupil in adjacent frames, prediction of the pupil's position in the current image from the previous one, and extraction of the eye feature parameters used to compute the discrete PERCLOS value. This series of algorithms determines whether the driver is driving while fatigued. The system software structure is shown in Fig. 2.

Fig. (2). Software structure (the bright-pupil and dark-pupil grayscale images are Gaussian filtered and differenced; the difference image is enhanced, binarized with an adaptive threshold, and cleaned by erosion and dilation; if the pupil is captured the Kalman filter is updated, otherwise it is reset, and its prediction narrows the search in the next image; the eye states feed the PERCLOS calculation module).
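To make the flow of Fig. 2 concrete, the sketch below shows one possible per-frame routine in Python. It is an illustration only, not the DM3730 firmware; every helper name (difference_image, otsu_binarize, open_operation, find_pupils, eyes_closed, the tracker and stats objects) is a hypothetical placeholder for the steps detailed in Section 3.

```python
# Illustrative per-frame routine following Fig. 2 (not the original DM3730 code).
# All helper names are hypothetical stand-ins for the steps described in Section 3.
def process_frame_pair(bright, dark, tracker, stats):
    diff = difference_image(bright, dark)     # Eq. (1): |A - L| after Gaussian filtering
    binary = otsu_binarize(diff)              # Eq. (2): adaptive threshold segmentation
    binary = open_operation(binary)           # erosion then dilation, templates (3)/(4)
    roi = tracker.predicted_region()          # Kalman prediction narrows the search area
    pupils = find_pupils(binary, roi)
    if pupils is None:
        tracker.reset()                       # search the whole image in the next frame
    else:
        tracker.update(pupils)
    stats.record(eyes_closed(pupils))         # feeds the discrete PERCLOS value, Eq. (10)
```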

3. FATIGUE DETECTION ALGORITHM

3.1. Image Acquisition

The image acquisition module obtains real-time images of the driver's face under LED illumination. As shown in Fig. 3, the device set includes near-infrared LED light sources, a light source control device, a housing box, an OV7725 sensor camera, and a camera lens with an infrared band-pass filter.

Fig. (3). Image acquisition device.

The light source consists of inner and outer rings of 850 nm near-infrared LEDs mounted on a circuit board. Eight LEDs in the inner ring, arranged around the camera lens, produce the red-eye effect in the driver's eyes, while the eight LEDs in the outer ring are relatively dispersed so as to avoid the red-eye effect. The brightness of the inner and outer rings is matched by adjusting the rheostats of the lights. The OV7725 sensor camera maintains good image acquisition performance in low illumination and keeps the motion deviation between frames small.

3.2. Eyes Localization

The image acquisition system determines the position of the driver's eyes from two neighboring frames in which the brightness of the eyes differs. The eye locations are obtained through a series of processing steps, including image differencing and image screening. The LEDs in the inner ring flash to acquire bright-pupil images, also called red-eye images, while the LEDs in the outer ring produce normal dark-pupil images. The difference image is obtained with the formula in (1):

C(i, j) = |A(i, j) - L(i, j)|   (1)

where L(i, j) is the dark-pupil image, A(i, j) is the bright-pupil image, and C(i, j) is the difference image, i.e. the absolute value of the pixel-wise difference between the bright and dark images. In Fig. (4), (c) is the difference result of (a) and (b). It can be observed from the difference image that the pupil region is so much brighter than the other regions that the difference between the two images is significant there.

Fig. (4). Image difference operations.

In order to reduce image interference in subsequent processing, the system uses the Otsu method [8]. Based on the characteristics of the grayscale image, this adaptive thresholding method divides the image into two parts, background and foreground, such that the between-class variance of the two parts is maximal after binarization with the threshold. The difference image to be processed contains 256 gray levels (0, 1, ..., 255). Let i be the gray value of a pixel, N_i the number of pixels with that value, and P_i their proportion of the total number of pixels. The whole image is divided into a dark area C1 and a bright area C2 by a threshold t. The between-class variance \sigma^2 is a function of t:

\sigma^2(t) = \omega_1(t)\,\omega_2(t)\,[\mu_1(t) - \mu_2(t)]^2   (2)

where \omega_1(t) = \sum_{i=0}^{t} P_i and \omega_2(t) = \sum_{i=t+1}^{255} P_i are the proportions of pixels in C1 and C2, and \mu_1(t) = \sum_{i=0}^{t} i\,P_i / \omega_1(t) and \mu_2(t) = \sum_{i=t+1}^{255} i\,P_i / \omega_2(t) are their mean gray levels. The best threshold T is chosen so that \sigma^2 in (2) reaches its maximum at t = T.
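The following is a minimal sketch of Eqs. (1) and (2) in Python/NumPy, written to mirror the formulas rather than for speed; it is illustrative and not the DM3730 implementation.

```python
# Minimal sketch of Eqs. (1)-(2): frame differencing followed by Otsu thresholding.
import numpy as np

def difference_image(bright, dark):
    """Eq. (1): C(i, j) = |A(i, j) - L(i, j)| for uint8 bright/dark pupil frames."""
    return np.abs(bright.astype(np.int16) - dark.astype(np.int16)).astype(np.uint8)

def otsu_threshold(image):
    """Eq. (2): pick the threshold t that maximizes the between-class variance."""
    hist = np.bincount(image.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                      # P_i, proportion of each gray level
    best_t, best_sigma = 0, -1.0
    for t in range(255):
        w1, w2 = p[:t + 1].sum(), p[t + 1:].sum()
        if w1 == 0 or w2 == 0:
            continue
        mu1 = (np.arange(t + 1) * p[:t + 1]).sum() / w1
        mu2 = (np.arange(t + 1, 256) * p[t + 1:]).sum() / w2
        sigma = w1 * w2 * (mu1 - mu2) ** 2     # between-class variance
        if sigma > best_sigma:
            best_t, best_sigma = t, sigma
    return best_t

# Usage: diff = difference_image(bright, dark)
#        binary = (diff > otsu_threshold(diff)).astype(np.uint8) * 255
```

In practice, OpenCV's cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU) selects the same threshold in optimized code.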
The segmentation results based on the Otsu adaptive threshold method are shown in Table 1.

Table 1. Segmentation results based on the Otsu method (original image, difference image, selected threshold and binarization result for three sample frames; the thresholds chosen were 13, 58 and 30).

It can be verified from the segmentation results that introducing an adaptive segmentation threshold greatly enhances the system's ability to adapt to the environment.

In order to eliminate image noise after adaptive threshold segmentation, an open operation is applied, which first erodes and then dilates. The erosion uses the 2x2 template in (3):

\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}   (3)

The binary image is eroded with this template; the result is shown in Fig. 5b. The dilation uses the 3x3 template in (4):

\begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}   (4)

The image after the dilation operation is shown in Fig. 5c.

Fig. (5). Open operation of binary image.
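A short sketch of the open operation using OpenCV is given below, with the kernels read off from templates (3) and (4). It is illustrative only; the exact placement of the single zero in the 2x2 erosion template is an assumption recovered from the flattened text.

```python
# Open operation per templates (3) and (4): 2x2 erosion followed by 3x3 dilation.
import cv2
import numpy as np

ERODE_KERNEL = np.array([[1, 1],
                         [1, 0]], dtype=np.uint8)   # template (3), layout assumed
DILATE_KERNEL = np.ones((3, 3), dtype=np.uint8)     # template (4)

def open_operation(binary):
    """Remove small noise blobs from the binarized difference image."""
    eroded = cv2.erode(binary, ERODE_KERNEL, iterations=1)     # result as in Fig. 5b
    dilated = cv2.dilate(eroded, DILATE_KERNEL, iterations=1)  # result as in Fig. 5c
    return dilated
```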

After the open operation, the pupil regions are retained in the image. The two largest connected regions within the search scope limited by the Kalman filter can be regarded as the pupil regions.

The system uses a Kalman filter to predict the eye position in the next frame from the detection result of the current frame, which reduces the amount of image data to be processed and improves accuracy. The operation of the Kalman filter is shown in Fig. 6:

\hat{x}_k^- = A \hat{x}_{k-1} + B u_{k-1}
P_k^- = A P_{k-1} A^T + Q
K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}
\hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-)
P_k = (I - K_k H) P_k^-

Fig. (6). Process of Kalman filter.

In the pupil tracking process, the system state x_k consists of the motion parameters x, y, dx, dy of the eye, where x is the horizontal coordinate of the pupil center, y is the vertical coordinate, dx is the velocity along the x-axis, and dy is the velocity along the y-axis. The observation vector is defined as (x, y)^T. Based on this assumed state, the system parameters are defined as follows.

Transition matrix of the system state (5), with \Delta t the frame interval:

A = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (5)

Observation matrix (6):

H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}   (6)

Noise covariance matrix of the system state (7):

Q = \begin{bmatrix} 0.01 & 0 & 0 & 0 \\ 0 & 0.01 & 0 & 0 \\ 0 & 0 & 0.01 & 0 \\ 0 & 0 & 0 & 0.01 \end{bmatrix}   (7)

Noise covariance matrix of the observation (8):

R = \begin{bmatrix} 0.2845 & 0.0045 \\ 0.0045 & 0.0455 \end{bmatrix}   (8)

Error covariance matrix (9):

P = \begin{bmatrix} 100 & 0 & 0 & 0 \\ 0 & 100 & 0 & 0 \\ 0 & 0 & 100 & 0 \\ 0 & 0 & 0 & 100 \end{bmatrix}   (9)

The pupil tracking steps with the Kalman filter are as follows:

1. Initialize the system state.
2. Try to extract the pupil in the k-th frame within the region predicted from the (k-1)-th frame. If successful, update the system state parameters of the Kalman filter. If it fails, set the search region to the whole image and restart the Kalman filtering process in the next frame.
3. Predict the motion state of the next frame with the Kalman filter, narrowing the extraction range of the feature values.
4. Return to step 2.
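The matrices (5)-(9) map directly onto OpenCV's cv2.KalmanFilter, as the sketch below shows. It is illustrative only; the frame interval dt = 1 (one frame per step) is an assumption, and the reset-on-failure logic of step 2 is left to the caller, as in the procedure above.

```python
# Pupil tracker built from matrices (5)-(9) with OpenCV's Kalman filter.
# Illustrative sketch; dt = 1 frame is an assumption.
import cv2
import numpy as np

def make_pupil_tracker(dt=1.0):
    kf = cv2.KalmanFilter(4, 2)                      # state (x, y, dx, dy), measurement (x, y)
    kf.transitionMatrix = np.array([[1, 0, dt, 0],   # Eq. (5)
                                    [0, 1, 0, dt],
                                    [0, 0, 1,  0],
                                    [0, 0, 0,  1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],   # Eq. (6)
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 0.01       # Eq. (7)
    kf.measurementNoiseCov = np.array([[0.2845, 0.0045],          # Eq. (8)
                                       [0.0045, 0.0455]], np.float32)
    kf.errorCovPost = np.eye(4, dtype=np.float32) * 100.0         # Eq. (9)
    return kf

def track(kf, pupil_center):
    """One step: predict the search region, then correct with the measured center."""
    prediction = kf.predict()                        # step 3: a-priori estimate for this frame
    if pupil_center is not None:                     # step 2: pupil found in predicted region
        kf.correct(np.array(pupil_center, np.float32).reshape(2, 1))
    return prediction[:2].ravel()                    # predicted pupil center (x, y)
```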

The pupil tracking results are shown in Fig. 7, in which the green spots are the positioning trajectory of the pupil and the red frames are the regions in which the pupil is predicted to appear in the next frame. As can be seen from frames 1-30, the Kalman filter works well and accurately predicts the region in which the pupil will appear in the next frame. In frames 30-35, the pupil position deviates strongly; the probable reason is fast movement of the driver's eyes or frame loss. When the system fails to find the pupil in the predicted region, the Kalman filter is reset and the search region is reset to the entire frame. The recapture and prediction behavior is shown in frames 30-40.

Fig. (7). Pupil tracking results.

As the pupil trajectory in Fig. 8 shows, the Kalman filter predicts a region in the next frame in which the pupil can be found in most cases. When the pupil position falls outside the predicted region (as at the 9th and 44th points), the Kalman filter is reset.

Fig. (8). Pupil trajectory of Kalman filter.

As can be seen in Fig. 9, the Kalman filter predicts the correct position of the pupil in the next frame, which not only reduces the interference caused by other regions but also reduces the computation needed for feature extraction. The timeliness and robustness of the system are greatly improved.

Fig. (9). Pupil inspections of consecutive frames.

3.3. Eye Feature Extraction

After extracting the contour of the eye region, characteristic parameters such as the aspect ratio and the area of the eyes can be computed. Analyzing these parameters is a relatively simple, feasible and effective way to identify the eye state: the area and the aspect ratio of the circumscribed rectangle of the region are calculated, and the driver's eyes are determined to be closed when the area is smaller than a threshold value.

3.4. Fatigue Analysis and Monitoring

Among all the non-contact driver fatigue monitoring methods, the PERCLOS criterion is one of the most effective and internationally recognized. There are three evaluation indicators in the PERCLOS method: P70, P80 and EM [9-11]. We apply P80 as the criterion for driver fatigue detection, which considers the driver's eyes closed when more than 80% of the pupil area is covered by the eyelids. The measurement principle of the discrete PERCLOS value is shown in Fig. 10.

Fig. (10). Measurement principle of discrete PERCLOS value.

The discrete PERCLOS value is computed as shown in (10):

PERCLOS = \frac{\sum_{i=1}^{n} t_i}{T} \times 100\%   (10)

where T is the sampling duration, n is the number of eye closures during the sampling duration, and t_i is the duration of the i-th eye closure. The PERCLOS value calculated by (10) represents the degree of fatigue of the driver.
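Below is a minimal sketch of the eye-state decision of Section 3.3 and the discrete PERCLOS value of Eq. (10). It is illustrative only; AREA_THRESHOLD and the per-frame accumulation of closures are hypothetical choices, not values given in the paper.

```python
# Eye-state classification (Section 3.3) and discrete PERCLOS value (Eq. (10)).
# Illustrative sketch; AREA_THRESHOLD and the window handling are hypothetical.
import cv2
import numpy as np

AREA_THRESHOLD = 20          # hypothetical minimum pupil area in pixels

def eyes_closed(binary):
    """Return True if no sufficiently large pupil region is visible in the binary image."""
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    # stats[0] is the background; keep the two largest foreground regions as pupil candidates
    areas = sorted(stats[1:, cv2.CC_STAT_AREA], reverse=True)[:2]
    return len(areas) == 0 or max(areas) < AREA_THRESHOLD

def perclos(closed_flags, frame_time):
    """Eq. (10): fraction of the sampling duration T spent with the eyes closed."""
    closed_time = sum(closed_flags) * frame_time     # sum of t_i over the window
    total_time = len(closed_flags) * frame_time      # sampling duration T
    return closed_time / total_time if total_time else 0.0
```

At the 18 fps capture rate reported in Section 4, frame_time would be 1/18 s.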

4. EXPERIMENTAL RESULTS

The image capture rate of the acquisition system is 18 fps and the image size is 320x240. The test driver simulates two states, normal driving and fatigue driving; in the normal driving state the driver keeps the head straight ahead and occasionally turns to check the rear-view mirror. The experimental results show that the accuracy of the system in detecting driver fatigue exceeds 82%, and its accuracy in locating the eyes exceeds 92%. In addition, the eye-localization accuracy is higher in dark environments than in bright environments, while the required computation is lower.

CONCLUSION

In this paper, we present a driver fatigue detection system based on the DM3730. The system acquires images of the driver's face with the image acquisition device, in which the near-infrared light produces the red-eye effect. A difference image is generated from adjacent frames, and the pupil is segmented with the Otsu adaptive threshold segmentation method. The pupil position is predicted from the neighboring frame by a Kalman filter. In the final step, the discrete PERCLOS value is calculated to detect the driver's fatigue condition. The fatigue detection system works not only in the daytime but also at night. The unified feature extraction method is accurate and insensitive to the environment. Compared with feature extraction based on pattern recognition, the algorithm designed for the operating environment of this system is simple, efficient and accurate, overcoming the shortcomings of pattern recognition and laying a foundation for large-scale deployment.

CONFLICT OF INTEREST

The authors confirm that this article content has no conflict of interest.

ACKNOWLEDGEMENTS

This work is supported by the Natural Science Foundation of Fujian Province (No. 2013J01258), the Key Projects of Fujian Province (No. 2012H1012), the National Natural Science Foundation of China (No. 61107023) and the Ph.D. Programs Foundation of the Ministry of Education of China (No. 20110121120020).

REFERENCES

[1] M. Simon, E. A. Schmidt, W. E. Kincses, M. Fritzsche, A. Bruns, C. Aufmuth, M. Bogdan, W. Rosenstiel, and M. Schrauf, "EEG alpha spindle measures as indicators of driver fatigue under real traffic conditions", Clinical Neurophysiology, vol. 122, pp. 1168-1178, 2011.
[2] M. Patel, S. K. L. Lal, D. Kavanagh, and P. Rossiter, "Applying neural network analysis on heart rate variability data to assess driver fatigue", Expert Systems with Applications, vol. 38, pp. 7235-7242, 2011.
[3] E. Dagan, O. Mano, G. P. Stein, and A. Shashua, "Forward collision warning with a single camera", In: Intelligent Vehicles Symposium, 2004, pp. 37-42.
[4] A. Bhardwaj, P. Aggarwal, R. Kumar, and N. Chandra, "Image filtering techniques used for monitoring driver fatigue: a survey", International Journal of Scientific and Research Publications, vol. 3, pp. 702-705, 2013.
[5] X. Q. Luo, R. Hu, and T. E. Fan, "The driver fatigue monitoring system based on face recognition technology", In: 4th International Conference on Intelligent Control and Information Processing (ICICIP), Beijing, China, 2013, pp. 384-388.
[6] C. H. Zhao, J. Lian, Q. Dang, and C. Tong, "Classification of driver fatigue expressions by combined curvelet features and Gabor features, and random subspace ensembles of support vector machines", Journal of Intelligent and Fuzzy Systems, vol. 26, pp. 91-100, 2014.
[7] M. H. Sigari, "Driver hypo-vigilance detection based on eyelid behavior", In: 7th International Conference on Advances in Pattern Recognition (ICAPR 2009), 2009, pp. 426-429.
[8] T. Pradhan, A. N. Bagaria, and A. Routray, "Measurement of PERCLOS using eigen-eyes", In: 4th International Conference on Intelligent Human Computer Interaction (IHCI), 2012, pp. 1-4.
[9] B. Alshaqaqi, A. S. Baquhaizel, M. El A. Ouis, M. Boumehed, A. Ouamri, and M. Keche, "Driver drowsiness detection system", In: 8th International Workshop on Systems, Signal Processing and their Applications (WoSSPA), 2013, pp. 151-155.
[10] N. Otsu, "A threshold selection method from gray-level histograms", IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, pp. 62-69, 1979.
[11] J. L. Wan, "Research on bias control and optimum operating condition of avalanche photodetector", In: 6th International Conference on Electronic Measurement & Instruments (ICEMI), Taiyuan, China, 2003, pp. 1684-1687.

Received: May 26, 2015    Revised: July 14, 2015    Accepted: August 10, 2015

© Cai et al.; Licensee Bentham Open.
This is an open access article licensed under the terms of the Creative Commons Attribution 4.0 International Public License (https://creativecommons.org/licenses/by/4.0/legalcode), which permits unrestricted, non-commercial use, distribution and reproduction in any medium, provided the work is properly cited.