Towards a Vision-based System Exploring 3D Driver Posture Dynamics for Driver Assistance: Issues and Possibilities


2010 IEEE Intelligent Vehicles Symposium, University of California, San Diego, CA, USA, June 21-24, 2010, TuB1.30

Towards a Vision-based System Exploring 3D Driver Posture Dynamics for Driver Assistance: Issues and Possibilities

Cuong Tran and Mohan M. Trivedi

Abstract: A driver's body posture in 3D contains information potentially related to driver intent, affective state, and distraction. In this paper, we discuss issues and possibilities in developing a vision-based, markerless system to systematically explore the role of 3D driver posture dynamics for driver assistance. At a high level, the proposed system has two main emphases: (i) coordination between a real-world driving testbed and a simulation environment, and (ii) studying the usefulness of driver posture dynamics not only as an individual cue but also in relation to other contextual information (e.g., head dynamics, facial features, and vehicle dynamics). Initial experimental results following these guidelines show the feasibility and promise of extracting and using 3D driver posture dynamics for driver assistance.

I. INTRODUCTION

Looking at the driver to understand driver state (e.g., affective state, distraction) and intent is an important component of driver assistance systems; human error is reported to cause a large portion of roadway accidents [1]. In this paper, we focus on vision-based, markerless systems that observe the driver and analyze driver state and intent. A vision-based approach provides a more natural, non-contact solution than biological or physiological sensors, which require the driver to wear specific devices. It should also be noted that an effective intelligent driver assistance system needs to be human-centric and work in a holistic manner, taking into account different components including sensors for the environment (e.g., looking at roads and other cars) and sensors for the vehicle (e.g.,
looking at steering angle and vehicle speed) besides looking at the driver [18].

In vision-based systems for driver state and intent analysis, much of the related research focuses on features of the head and face. For example, head pose and/or eye gaze were used to predict lane change intent [8, 11, 14], and head movement, eye movement, and facial features were used to monitor driver mental state [3, 22] and to detect fatigue [21, 23]. There were also studies using hand position: for example, hand position was incorporated with head pose for lane change intent prediction [6] and for a driver distraction alert system [17]. In [5], the authors proposed a method for determining whether the driver's or a passenger's hand is in the infotainment area, which also relates to driver distraction.

(The authors are with LISA: Laboratory for Safe and Intelligent Automobiles, University of California at San Diego; {cutran, mtrivedi}@ucsd.edu.)

We see that hand position alone already carries important and useful information for driver assistance. The whole 3D driver posture, which is more informative with torso, head, and arm dynamics, appears to be a very promising cue for driver assistance systems. Therefore, exploring the role of 3D driver posture dynamics for driver assistance in a systematic manner is both useful and needed. In this paper, we discuss several related issues and the possibilities of applying some developed techniques to this task, followed by initial experimental results that show the feasibility and promise of extracting and using 3D posture dynamics for driver support systems.

II. RELATED STUDIES

Driver posture dynamics in 3D is informative and can help in developing better driver support systems. The study in [2] indicated a relation between sitting posture and affective state, e.g., a slumped pose after a failure and an upright pose after a success.
It also pointed out the importance of mood-congruent interaction in smart interactive systems. In [13], a marker-based posture tracking system was used to study the relation between postural stability and the driver's control state. There were also studies using posture information for passive driver assistance: in [19], sitting posture was used to adjust airbag deployment, and in [4], driver posture was studied to build a more comfortable driver cockpit. The role of driver posture dynamics in active driver assistance (e.g., detecting driver intent and state, and then interacting appropriately to improve driver safety and comfort) has not been studied much. Fig. 1 shows some possible ranges of driver posture movement that appear connected to driver state and intent. For example, leaning backward might indicate a relaxed position, while leaning forward indicates concentration. Before performing specific tasks, the driver may also make preparatory posture changes, such as moving the head forward to prepare for a better visual check before a lane change (Sect. IV.A has some real-world driving illustrations). We will discuss several related issues and possibilities toward the goal of systematically exploring the role of 3D posture dynamics in vision-based, active driver assistance. At a high level, the two main emphases in developing our testbeds and approaches are (i) the need for coordination between a real-world driving testbed and a simulation environment, and (ii) that the usefulness of driver posture dynamics should be studied not only as an individual cue but also in a holistic manner with other contextual information.

Fig. 1. Illustration of some possible ranges of driver posture movement during driving.

Fig. 2. Coordination between the real-world and simulation environments.

III. DEVELOPING A VISION-BASED SYSTEM FOR EXPLORING THE ROLE OF 3D POSTURE DYNAMICS IN DRIVER ASSISTANCE

A. Coordination between Real-World Driving Testbed and Simulation Environment

Working with a real-world driving testbed is important and is the ultimate goal. However, a simulation environment provides more flexibility in configuring sensors and designing experimental tasks for deeper analysis, which might be difficult or unsafe to implement in real-world driving. Hence, coordination between real-world driving and the simulation environment is useful, and we take it into account when developing our system. As shown in Fig. 2, observations from real-world driving data can initiate the design of a simulation experiment, which can then be modified and improved to achieve the desired analyses. However, there are always gaps between a simulation environment and the real world, even with today's complex and expensive simulators. For example, the driver's realistic feel for the vehicle dynamics and the surrounding environment will differ, and several random difficulties happen only in real-world situations, such as sudden difficult lighting conditions or highly dynamic backgrounds. Therefore, the usefulness of analyses and findings in the simulation environment should again be verified with real-world driving data.

B. A System for Exploring the Role of 3D Upper Body Pose in Combination with Other Contextual Information

Following the underlying principle of a holistic sensing approach [18], the interaction between different cues is important for an effective driver assistance system. A cue might not seem very useful or relevant when considered separately, but using it in combination with other contextual cues can improve overall system performance.
Fig. 3 shows the flowchart of our proposed system for studying the role of driver posture dynamics in combination with other contextual information. First, the inputs from the different contextual sensors (observing the driver, the environment, and the vehicle state) need to be captured synchronously. Contextual information from the different sensors is then extracted separately. In the next step, the extracted contextual information can be fused in different combinations to analyze driver state and intent. Finally, based on this analysis, the system interacts to assist the driver when needed. Different types of interaction can be used, such as visual interaction (e.g., using an Active Heads-up Display [9]), audio interaction (e.g., a beep sound), or mechanical interaction (e.g., lightly shaking the steering wheel). This paper focuses on the part looking at the driver, so we go into more detail on the related components in the following sections.

Fig. 3. General flowchart of the system using 3D posture dynamics for driver assistance.
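The requirement that sensor inputs be captured synchronously can be made concrete with a small sketch (our own illustration, not the system's actual code; the streams, timestamps, and skew tolerance are assumptions): given one sensor stream as the reference, each frame is paired with the nearest-in-time sample from another stream, and frames with no close-enough match are dropped before fusion.

```python
import bisect

def align_streams(reference_ts, stream_ts, max_skew=0.02):
    """For each reference timestamp (e.g. the body-pose camera), find the
    index of the nearest sample in another sensor stream (head camera,
    vehicle-state data, ...). Returns None where no sample lies within
    max_skew seconds, so downstream fusion can skip unsynchronized frames.
    Both timestamp lists are assumed sorted, in seconds."""
    matches = []
    for t in reference_ts:
        i = bisect.bisect_left(stream_ts, t)
        # The nearest sample is one of the two neighbors of the insertion point.
        best = min((c for c in (i - 1, i) if 0 <= c < len(stream_ts)),
                   key=lambda c: abs(stream_ts[c] - t))
        matches.append(best if abs(stream_ts[best] - t) <= max_skew else None)
    return matches
```

Fused feature vectors would then be built only from reference frames whose match index is not None.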

XMOB (Extremity Movement Observation) for Upper Body Pose Tracking [16]

The skeletal upper body model used in the XMOB tracker is shown in Fig. 4. The lengths of the body parts are considered fixed, so there is only kinematic movement at the joints. The model has four joints: two shoulder joints with 3 degrees of freedom (DOF) each, and two 1-DOF elbow joints. An upper body pose at time t can be represented by the 3D positions of seven inner joints and extremities:

X_t = {P^t_hea, P^t_lha, P^t_rha, P^t_lsh, P^t_rsh, P^t_leb, P^t_reb}

The idea of XMOB is to break the 3D upper body pose tracking problem into two sub-problems: first, track the 3D movements of the extremal parts, i.e., the head and hands {P^t_hea, P^t_lha, P^t_rha}; then, using knowledge of upper body configuration constraints, use those 3D extremity movements to predict the whole upper body pose sequence as an inverse kinematics problem. The underlying motivation is that extremities are easier to track, with less occlusion, than inner body parts such as the elbow and shoulder joints. Moreover, by breaking the high-dimensional search for 3D upper body pose into two sub-problems, the complexity is reduced considerably, allowing real-time performance (XMOB runs at ~15 frames per second on an Intel Core i7 3.0 GHz). On the other hand, since human upper body kinematics is redundant (the same head and hand positions admit many upper body poses), the inverse kinematics problem above is ambiguous. To deal with this ambiguity, XMOB takes a numerical approach that uses the dynamics information, not just the head and hand positions at a single frame. First, at each frame, XMOB determines a set of inner-joint candidates from the head and hand positions, based on geometric constraints between the upper body inner joints and the extremities.
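To make the geometric-constraint step concrete, the following sketch (our own simplified illustration, not the actual XMOB code; function and parameter names are hypothetical) samples 3D elbow candidates for one arm. With fixed limb lengths, the elbow must lie on the circle where the sphere around the shoulder (radius = upper-arm length) intersects the sphere around the hand (radius = forearm length):

```python
import numpy as np

def elbow_candidates(shoulder, hand, l_upper, l_fore, n=16):
    """Sample n elbow candidates on the intersection circle of two spheres:
    center `shoulder` with radius `l_upper`, and center `hand` with radius
    `l_fore`. Returns an (n, 3) array, empty if the hand is unreachable."""
    shoulder, hand = np.asarray(shoulder, float), np.asarray(hand, float)
    d_vec = hand - shoulder
    d = np.linalg.norm(d_vec)
    # Reachability: the spheres intersect only for this range of d.
    if d == 0 or d > l_upper + l_fore or d < abs(l_upper - l_fore):
        return np.empty((0, 3))
    # Distance from the shoulder to the circle's plane, along d_vec.
    a = (l_upper**2 - l_fore**2 + d**2) / (2 * d)
    r = np.sqrt(max(l_upper**2 - a**2, 0.0))   # circle radius
    center = shoulder + a * d_vec / d
    # Orthonormal basis (u, v) spanning the circle's plane.
    axis = d_vec / d
    tmp = np.array([0.0, 1.0, 0.0]) if abs(axis[0]) > 0.9 else np.array([1.0, 0.0, 0.0])
    u = np.cross(axis, tmp)
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    angles = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    return center + r * (np.outer(np.cos(angles), u) + np.outer(np.sin(angles), v))
```

Every candidate satisfies both limb-length constraints exactly; XMOB then resolves among such candidates over time, preferring the pose sequence with minimal total joint displacement.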
Then, by observing the extremity movements over a period of time, XMOB uses an assumption of minimizing total joint displacement to predict the corresponding upper body pose sequence.

Fig. 4. Upper body model used in the XMOB upper body tracker.

Head Pose Tracking with HyHOPE (Hybrid Head Orientation and Position Estimation) [12]

We use HyHOPE, a real-time, robust head pose tracking method from a monocular view [12], to extract head dynamics information. HyHOPE combines static head pose estimation with a real-time 3D model-based tracking system for better tracking performance. From an initial estimate of head position and orientation, the system generates a texture-mapped 3D model of the head from the most recent head image and uses a particle filter to find the best match of this 3D model in each subsequent frame. HyHOPE also uses GPU (Graphics Processing Unit) programming to parallelize computations and achieve real-time performance (~30 frames per second).

Gabor Wavelet Filters for Facial Feature Extraction

A Gabor wavelet filter, consisting of a Gaussian kernel modulated by a sinusoidal plane wave, closely models the response function of simple cells in the primary visual cortex. Gabor filters have been shown to be a good feature extraction method for recognizing the Facial Action Coding System (FACS) [20]. Since Gabor features are good for FACS recognition, and FACS can be considered a set of basic facial movements, they should also be an effective representation of facial dynamics. However, when applied to real-world driving, several issues need to be considered, such as challenging lighting conditions and shadow. Moreover, since facial features can typically be extracted reliably only on a frontal face, we use head pose information from HyHOPE to extract Gabor facial features only when the driver's head looks straight ahead.
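As an illustration of the kind of filter bank involved (a minimal sketch, not the paper's implementation; the sizes, wavelengths, and orientation counts are assumptions), a 2D Gabor kernel is a Gaussian envelope modulated by a cosine wave, and a bank is built by varying the wavelength and orientation:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5, psi=0.0):
    """Real part of a 2D Gabor filter: a Gaussian envelope (scale sigma,
    aspect ratio gamma) times a cosine wave of the given wavelength,
    propagating at angle theta. `size` is the odd kernel side length."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the carrier wave propagates along theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / wavelength + psi)
    return envelope * carrier

def gabor_bank(size=31, wavelengths=(4, 8, 16), n_orient=4):
    """Bank over several spatial frequencies and orientations, of the kind
    applied to a frontal face patch for facial feature extraction."""
    thetas = [k * np.pi / n_orient for k in range(n_orient)]
    return [gabor_kernel(size, lam, th, sigma=0.56 * lam)
            for lam in wavelengths for th in thetas]
```

The facial feature vector is then the set of filter responses obtained by convolving the face patch with each kernel in the bank (e.g., via `scipy.signal.convolve2d`) and pooling the magnitudes.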
Driver State and Intent Analysis Using Different Combinations of Extracted Contextual Information

This analysis step can follow either a rule-based approach (e.g., IF no hands on wheel AND head turned away THEN alert for a serious distraction, as was done in [17]) or a statistical learning approach, such as the Relevance Vector Machine (RVM) method [15], which was shown to be quite effective in analyzing multimodal data from different sensors for driver intent prediction [8, 11]. RVM can produce a sparse representation of the data from a large feature set for classification, and it provides a probabilistic output of class membership (versus a binary output, as in the Support Vector Machine method), which is useful when output ranking is needed.

IV. EXPERIMENT

A. Real-World Driving Testbed LISA-P and Some Motivating Observations

Fig. 5 shows our real-world testbed LISA-P. As discussed in Sect. III.A, we utilize data collected from the LISA-P testbed to determine initial scenarios for the experimental setup in the simulation environment. Fig. 6 illustrates some observations from real-world driving data that we are interested in.
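Returning to the analysis step above, the rule-based option can be sketched as follows (a hypothetical simplification in the spirit of [17]; the threshold and feature names are our own assumptions, not those of the actual system):

```python
def distraction_alert(hands_on_wheel, head_yaw_deg, yaw_limit=30.0):
    """Rule-based check in the spirit of [17]:
    IF no hands on wheel AND head turned away THEN serious distraction."""
    head_turned_away = abs(head_yaw_deg) > yaw_limit
    if not hands_on_wheel and head_turned_away:
        return "serious"   # neither hands nor eyes on the driving task
    if not hands_on_wheel or head_turned_away:
        return "mild"      # a single cue alone is only a weak indicator
    return "none"
```

A statistical learner such as RVM would instead weight these cues (together with posture and facial features) and output a class-membership probability, which is what makes ranking of alerts possible.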

Fig. 5. LISA-P real-world testbed with the positions of three cameras observing the driver (two for the upper body, one for the head and face).

Fig. 6. Some observations from real-world driving data: the driver leans forward before a head turn for a more careful lane change visual check, which is quite common in our freeway driving data (incorporating detection of such body posture movement would indicate a higher probability of lane change intent compared to, e.g., using head turn alone); the driver leans to the right to better hear a passenger in a conversation, which might be an indication of distraction.

Fig. 7. LISA-S simulation environment.

We observe that in highway driving, drivers commonly tend to lean forward before making a head turn, for a more careful lane change visual check. This observation relates to the relaxed versus concentrated sitting poses. By incorporating detection of such posture movement with, e.g., head turn information, we obtain a stronger indication of lane change intent. Sometimes the driver leans to the right to better hear a passenger in a conversation, which might be used as one indication of distraction. In highway driving, we also observed that the driver looks more serious before making a lane change; e.g., a head turn with a smile on the face commonly happens only when the driver is talking to other passengers, not in a lane change. This kind of information could help reduce false alarms when predicting lane change intent from head pose.

B. Driving Simulation Environment LISA-S

Fig. 7 shows our simulation environment, which we call LISA-S. Two cameras are used for 3D driver posture tracking and one camera for head and face tracking, and we also installed a stereo eye-tracking system. The steering wheel is the same size as a real steering wheel and can turn 450 degrees in each direction (900 degrees in total).

Fig. 8. Some visualization samples of Gabor feature extraction from face images in LISA-S data.
We use the open-source TORCS driving simulator, with which we can design the road track and environment scene and control the dynamics of the ego vehicle as well as other vehicles.

C. Some Initial Results

To demonstrate the ability to extract the contextual information of concern, including 3D upper body posture dynamics, head dynamics, and facial features: Fig. 8 shows some visual samples of applying Gabor wavelets with different spatial frequencies and orientations to face images for facial feature extraction. For more reliable feature extraction, we extract facial features only when the face is close to the frontal view (determined from HyHOPE head tracking). Figs. 9-12 show good visual evaluations of 3D XMOB upper body pose tracking and HyHOPE head pose tracking in both real-world driving sequences (from LISA-P) and simulation driving sequences (from LISA-S). Fig. 13 shows quantitative plots (compared with manually annotated ground truth) of a simple analysis using the extracted contextual information of 3D upper body pose and head pose to determine some events of concern. From 3D driver posture dynamics, the driver state is classified as relaxed or concentrated; head dynamics is classified as looking straight, turning to the left, or turning to the right. Moreover, when combining head and 3D posture information, we see examples in which the driver changes from the relaxed to the concentrated state and then makes a head turn, which could be a strong indicator of lane change intent. Regarding the interaction part, Fig. 14 illustrates an example of using an Active Heads-up Display for visual interaction with the driver in the assistance system for keeping hands on the wheel and eyes on the road [17].

V. CONCLUDING REMARKS AND FUTURE WORK

In this paper, we discussed several issues and possibilities in developing a vision-based, markerless system

for systematically exploring the role of 3D driver posture dynamics in active driver assistance. Initial experimental results indicated the feasibility and promise of extracting and using 3D driver posture in combination with other contextual information to analyze driver affective state and intent. Based on these discussions as well as the initial implementation and results, the obvious follow-up work is to design more natural test cases in the LISA-S simulation environment for analyzing the usefulness of 3D driver posture information. Besides an intuitive rule-based approach similar to that in [17], statistical learning methods such as RVM will also be used for analysis and comparison.

ACKNOWLEDGMENT

We thank our colleagues at the CVRR lab for useful discussions and assistance, especially Mr. Anup Doshi, who played a main role in setting up the LISA-S simulation environment.

REFERENCES

[1] "World Report on Road Traffic Injury Prevention: Summary," Technical Report, World Health Organization.
[2] H. I. Ahn, A. Teeters, A. Wang, C. Breazeal, and R. W. Picard, "Stoop to Conquer: Posture and affect interact to influence computer users' persistence," 2nd International Conference on Affective Computing and Intelligent Interaction.
[3] S. Baker, I. Matthews, J. Xiao, R. Gross, T. Ishikawa, and T. Kanade, "Real-time non-rigid driver head tracking for driver mental state estimation," Robotics Institute, Carnegie Mellon University, Tech. Rep., Feb.
[4] R. Brodeur, H. M. Reynolds, K. Rayes, and Y. Cui, "The Initial Position and Postural Attitudes of Driver Occupants, Posture," ERL-TR, Ergonomics Research Laboratory.
[5] S. Y. Cheng and M. M. Trivedi, "Vision-based Infotainment User Determination by Hand Recognition for Driver Assistance," IEEE Transactions on Intelligent Transportation Systems, 2010.
[6] S. Cheng and M. M. Trivedi, "Turn-Intent Analysis Using Body Pose for Intelligent Driver Assistance," IEEE Pervasive Computing, 5(4):28-37, Oct.-Dec.
[7] A. Datta, Y.
Sheikh, and T. Kanade, "Linear Motion Estimation for Systems of Articulated Planes," IEEE Conference on Computer Vision and Pattern Recognition.
[8] A. Doshi and M. M. Trivedi, "On the Roles of Eye Gaze and Head Pose in Predicting Driver's Intent to Change Lanes," IEEE Transactions on Intelligent Transportation Systems, Sept.
[9] A. Doshi, S. Y. Cheng, and M. M. Trivedi, "A Novel, Active Heads-Up Display for Driver Assistance," IEEE Transactions on Systems, Man, and Cybernetics, Part B, Feb.
[10] V. Ferrari, M. Jimenez, and A. Zisserman, "Progressive Search Space Reduction for Human Pose Estimation," IEEE Conference on Computer Vision and Pattern Recognition.
[11] J. McCall, D. Wipf, M. M. Trivedi, and B. Rao, "Lane Change Intent Analysis Using Robust Operators and Sparse Bayesian Learning," IEEE Transactions on Intelligent Transportation Systems, Sept.
[12] E. Murphy-Chutorian and M. M. Trivedi, "HyHOPE: Hybrid Head Orientation and Position Estimation for Vision-based Driver Head Tracking," IEEE Intelligent Vehicles Symposium.
[13] A. Petersen and R. Barrett, "Postural Stability and Vehicle Kinematics During an Evasive Lane Change Manoeuvre: A Driver Training Study," Ergonomics, Vol. 52, Issue 5, May.
[14] L. Tijerina, W. R. Garrott, D. Stoltzfus, and E. Parmer, "Eye glance behavior of van and passenger car drivers during lane change decision phase," Transportation Research Record, vol. 1937.
[15] M. E. Tipping, "Sparse Bayesian learning and the relevance vector machine," Journal of Machine Learning Research, vol. 1, Sep.
[16] C. Tran and M. M. Trivedi, "Introducing 'XMOB': Extremity Movement Observation Framework for Upper Body Pose Tracking in 3D," IEEE International Symposium on Multimedia.
[17] C. Tran and M. M. Trivedi, "Driver Assistance for 'Keeping Hands on the Wheel and Eyes on the Road'," IEEE International Conference on Vehicular Electronics and Safety.
[18] M. M. Trivedi, S.
Cheng, "Holistic Sensing and Active Displays for Intelligent Driver Support Systems," IEEE Computer Magazine, May.
[19] M. M. Trivedi, S. Cheng, E. Childers, and S. Krotosky, "Occupant Posture Analysis with Stereo and Thermal Infrared Video: Algorithms and Experimental Evaluation," IEEE Transactions on Vehicular Technology, Special Issue on In-Vehicle Vision Systems, Vol. 53, Issue 6, November.
[20] M. S. Bartlett, J. R. Movellan, G. C. Littlewort, B. Braathen, M. G. Frank, and T. J. Sejnowski, "Towards automatic recognition of spontaneous facial actions," in P. Ekman (Ed.), What the Face Reveals, 2nd Edition, Oxford University Press.
[21] E. Vural, M. Çetin, A. Erçil, G. Littlewort, M. S. Bartlett, and J. R. Movellan, "Drowsy Driver Detection Through Facial Movement Analysis," IEEE International Conference on Computer Vision, Human Computer Interaction.
[22] Y. Zhu and K. Fujimura, "Head pose estimation for driver monitoring," IEEE Intelligent Vehicles Symposium.
[23] Z. Zhu and Q. Ji, "Real Time and Non-intrusive Driver Fatigue Monitoring," IEEE International Conference on Intelligent Transportation Systems.

Fig. 9. Visual evaluation of HyHOPE head pose tracking (top) and 3D XMOB upper body tracking superimposed on the input image (bottom) in some real driving sequences from LISA-P.

Fig. 10. Visual evaluation of XMOB upper body tracking in 3D from a driving sequence in LISA-S. White blobs are 3D voxels reconstructed from two-view skin segmentation of the head and hands; colored lines are the estimated 3D upper body pose.

Fig. 11. Visual evaluation of HyHOPE head pose tracking from sequences in the LISA-S simulation environment.

Fig. 12. Visual evaluation of 3D XMOB upper body tracking superimposed on input images. Top: subject 1, view 2. Bottom: subject 2, view 1 (LISA-S environment).

Fig. 13. Quantitative plots comparing extracted results from XMOB and HyHOPE with the manually annotated ground truth.

Fig. 14. Illustrative example of using an Active Heads-up Display for visual interaction with the driver in the assistance system for keeping hands on the wheel and eyes on the road [17].


More information

NTU Robot PAL 2009 Team Report

NTU Robot PAL 2009 Team Report NTU Robot PAL 2009 Team Report Chieh-Chih Wang, Shao-Chen Wang, Hsiao-Chieh Yen, and Chun-Hua Chang The Robot Perception and Learning Laboratory Department of Computer Science and Information Engineering

More information

Summary of robot visual servo system

Summary of robot visual servo system Abstract Summary of robot visual servo system Xu Liu, Lingwen Tang School of Mechanical engineering, Southwest Petroleum University, Chengdu 610000, China In this paper, the survey of robot visual servoing

More information

Face Detection System on Ada boost Algorithm Using Haar Classifiers

Face Detection System on Ada boost Algorithm Using Haar Classifiers Vol.2, Issue.6, Nov-Dec. 2012 pp-3996-4000 ISSN: 2249-6645 Face Detection System on Ada boost Algorithm Using Haar Classifiers M. Gopi Krishna, A. Srinivasulu, Prof (Dr.) T.K.Basak 1, 2 Department of Electronics

More information

Visual Search using Principal Component Analysis

Visual Search using Principal Component Analysis Visual Search using Principal Component Analysis Project Report Umesh Rajashekar EE381K - Multidimensional Digital Signal Processing FALL 2000 The University of Texas at Austin Abstract The development

More information

Hand & Upper Body Based Hybrid Gesture Recognition

Hand & Upper Body Based Hybrid Gesture Recognition Hand & Upper Body Based Hybrid Gesture Prerna Sharma #1, Naman Sharma *2 # Research Scholor, G. B. P. U. A. & T. Pantnagar, India * Ideal Institue of Technology, Ghaziabad, India Abstract Communication

More information

Super resolution with Epitomes

Super resolution with Epitomes Super resolution with Epitomes Aaron Brown University of Wisconsin Madison, WI Abstract Techniques exist for aligning and stitching photos of a scene and for interpolating image data to generate higher

More information

Environmental Sound Recognition using MP-based Features

Environmental Sound Recognition using MP-based Features Environmental Sound Recognition using MP-based Features Selina Chu, Shri Narayanan *, and C.-C. Jay Kuo * Speech Analysis and Interpretation Lab Signal & Image Processing Institute Department of Computer

More information

Vehicle Detection Using Imaging Technologies and its Applications under Varying Environments: A Review

Vehicle Detection Using Imaging Technologies and its Applications under Varying Environments: A Review Proceedings of the 2 nd World Congress on Civil, Structural, and Environmental Engineering (CSEE 17) Barcelona, Spain April 2 4, 2017 Paper No. ICTE 110 ISSN: 2371-5294 DOI: 10.11159/icte17.110 Vehicle

More information

Classification of Road Images for Lane Detection

Classification of Road Images for Lane Detection Classification of Road Images for Lane Detection Mingyu Kim minkyu89@stanford.edu Insun Jang insunj@stanford.edu Eunmo Yang eyang89@stanford.edu 1. Introduction In the research on autonomous car, it is

More information

DRIVER FATIGUE DETECTION USING IMAGE PROCESSING AND ACCIDENT PREVENTION

DRIVER FATIGUE DETECTION USING IMAGE PROCESSING AND ACCIDENT PREVENTION Volume 116 No. 11 2017, 91-99 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu doi: 10.12732/ijpam.v116i11.10 ijpam.eu DRIVER FATIGUE DETECTION USING IMAGE

More information

COMPARATIVE PERFORMANCE ANALYSIS OF HAND GESTURE RECOGNITION TECHNIQUES

COMPARATIVE PERFORMANCE ANALYSIS OF HAND GESTURE RECOGNITION TECHNIQUES International Journal of Advanced Research in Engineering and Technology (IJARET) Volume 9, Issue 3, May - June 2018, pp. 177 185, Article ID: IJARET_09_03_023 Available online at http://www.iaeme.com/ijaret/issues.asp?jtype=ijaret&vtype=9&itype=3

More information

Platform-Based Design of Augmented Cognition Systems. Latosha Marshall & Colby Raley ENSE623 Fall 2004

Platform-Based Design of Augmented Cognition Systems. Latosha Marshall & Colby Raley ENSE623 Fall 2004 Platform-Based Design of Augmented Cognition Systems Latosha Marshall & Colby Raley ENSE623 Fall 2004 Design & implementation of Augmented Cognition systems: Modular design can make it possible Platform-based

More information

Real Time Video Analysis using Smart Phone Camera for Stroboscopic Image

Real Time Video Analysis using Smart Phone Camera for Stroboscopic Image Real Time Video Analysis using Smart Phone Camera for Stroboscopic Image Somnath Mukherjee, Kritikal Solutions Pvt. Ltd. (India); Soumyajit Ganguly, International Institute of Information Technology (India)

More information

WB2306 The Human Controller

WB2306 The Human Controller Simulation WB2306 The Human Controller Class 1. General Introduction Adapt the device to the human, not the human to the device! Teacher: David ABBINK Assistant professor at Delft Haptics Lab (www.delfthapticslab.nl)

More information

A Proposal for Security Oversight at Automated Teller Machine System

A Proposal for Security Oversight at Automated Teller Machine System International Journal of Engineering Research and Development e-issn: 2278-067X, p-issn: 2278-800X, www.ijerd.com Volume 10, Issue 6 (June 2014), PP.18-25 A Proposal for Security Oversight at Automated

More information

SIMULATION-BASED MODEL CONTROL USING STATIC HAND GESTURES IN MATLAB

SIMULATION-BASED MODEL CONTROL USING STATIC HAND GESTURES IN MATLAB SIMULATION-BASED MODEL CONTROL USING STATIC HAND GESTURES IN MATLAB S. Kajan, J. Goga Institute of Robotics and Cybernetics, Faculty of Electrical Engineering and Information Technology, Slovak University

More information

Assessments of Grade Crossing Warning and Signalization Devices Driving Simulator Study

Assessments of Grade Crossing Warning and Signalization Devices Driving Simulator Study Assessments of Grade Crossing Warning and Signalization Devices Driving Simulator Study Petr Bouchner, Stanislav Novotný, Roman Piekník, Ondřej Sýkora Abstract Behavior of road users on railway crossings

More information

3D display is imperfect, the contents stereoscopic video are not compatible, and viewing of the limitations of the environment make people feel

3D display is imperfect, the contents stereoscopic video are not compatible, and viewing of the limitations of the environment make people feel 3rd International Conference on Multimedia Technology ICMT 2013) Evaluation of visual comfort for stereoscopic video based on region segmentation Shigang Wang Xiaoyu Wang Yuanzhi Lv Abstract In order to

More information

INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY

INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY A PATH FOR HORIZING YOUR INNOVATIVE WORK SMILE DETECTION WITH IMPROVED MISDETECTION RATE AND REDUCED FALSE ALARM RATE VRUSHALI

More information

Scanned Image Segmentation and Detection Using MSER Algorithm

Scanned Image Segmentation and Detection Using MSER Algorithm Scanned Image Segmentation and Detection Using MSER Algorithm P.Sajithira 1, P.Nobelaskitta 1, Saranya.E 1, Madhu Mitha.M 1, Raja S 2 PG Students, Dept. of ECE, Sri Shakthi Institute of, Coimbatore, India

More information

Real-time Adaptive Robot Motion Planning in Unknown and Unpredictable Environments

Real-time Adaptive Robot Motion Planning in Unknown and Unpredictable Environments Real-time Adaptive Robot Motion Planning in Unknown and Unpredictable Environments IMI Lab, Dept. of Computer Science University of North Carolina Charlotte Outline Problem and Context Basic RAMP Framework

More information

A Multimodal Approach for Dynamic Event Capture of Vehicles and Pedestrians

A Multimodal Approach for Dynamic Event Capture of Vehicles and Pedestrians A Multimodal Approach for Dynamic Event Capture of Vehicles and Pedestrians Jeffrey Ploetner Computer Vision and Robotics Research Laboratory (CVRR) University of California, San Diego La Jolla, CA 9293,

More information

List of Publications for Thesis

List of Publications for Thesis List of Publications for Thesis Felix Juefei-Xu CyLab Biometrics Center, Electrical and Computer Engineering Carnegie Mellon University, Pittsburgh, PA 15213, USA felixu@cmu.edu 1. Journal Publications

More information

A Multimodal Framework for Vehicle and Traffic Flow Analysis

A Multimodal Framework for Vehicle and Traffic Flow Analysis Proceedings of the IEEE ITSC 26 26 IEEE Intelligent Transportation Systems Conference Toronto, Canada, September 17-2, 26 WB3.1 A Multimodal Framework for Vehicle and Traffic Flow Analysis Jeffrey Ploetner

More information

Intelligent driving TH« TNO I Innovation for live

Intelligent driving TH« TNO I Innovation for live Intelligent driving TNO I Innovation for live TH«Intelligent Transport Systems have become an integral part of the world. In addition to the current ITS systems, intelligent vehicles can make a significant

More information

Activity monitoring and summarization for an intelligent meeting room

Activity monitoring and summarization for an intelligent meeting room IEEE Workshop on Human Motion, Austin, Texas, December 2000 Activity monitoring and summarization for an intelligent meeting room Ivana Mikic, Kohsia Huang, Mohan Trivedi Computer Vision and Robotics Research

More information

Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine)

Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine) Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine) Presentation Working in a virtual world Interaction principles Interaction examples Why VR in the First Place? Direct perception

More information

Research on Hand Gesture Recognition Using Convolutional Neural Network

Research on Hand Gesture Recognition Using Convolutional Neural Network Research on Hand Gesture Recognition Using Convolutional Neural Network Tian Zhaoyang a, Cheng Lee Lung b a Department of Electronic Engineering, City University of Hong Kong, Hong Kong, China E-mail address:

More information

Current Technologies in Vehicular Communications

Current Technologies in Vehicular Communications Current Technologies in Vehicular Communications George Dimitrakopoulos George Bravos Current Technologies in Vehicular Communications George Dimitrakopoulos Department of Informatics and Telematics Harokopio

More information

Perception platform and fusion modules results. Angelos Amditis - ICCS and Lali Ghosh - DEL interactive final event

Perception platform and fusion modules results. Angelos Amditis - ICCS and Lali Ghosh - DEL interactive final event Perception platform and fusion modules results Angelos Amditis - ICCS and Lali Ghosh - DEL interactive final event 20 th -21 st November 2013 Agenda Introduction Environment Perception in Intelligent Transport

More information

Classification for Motion Game Based on EEG Sensing

Classification for Motion Game Based on EEG Sensing Classification for Motion Game Based on EEG Sensing Ran WEI 1,3,4, Xing-Hua ZHANG 1,4, Xin DANG 2,3,4,a and Guo-Hui LI 3 1 School of Electronics and Information Engineering, Tianjin Polytechnic University,

More information

Evaluation of Connected Vehicle Technology for Concept Proposal Using V2X Testbed

Evaluation of Connected Vehicle Technology for Concept Proposal Using V2X Testbed AUTOMOTIVE Evaluation of Connected Vehicle Technology for Concept Proposal Using V2X Testbed Yoshiaki HAYASHI*, Izumi MEMEZAWA, Takuji KANTOU, Shingo OHASHI, and Koichi TAKAYAMA ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

More information

Recognition Of Vehicle Number Plate Using MATLAB

Recognition Of Vehicle Number Plate Using MATLAB Recognition Of Vehicle Number Plate Using MATLAB Mr. Ami Kumar Parida 1, SH Mayuri 2,Pallabi Nayk 3,Nidhi Bharti 4 1Asst. Professor, Gandhi Institute Of Engineering and Technology, Gunupur 234Under Graduate,

More information

Driver Assistance System Based on Video Image Processing for Emergency Case in Tunnel

Driver Assistance System Based on Video Image Processing for Emergency Case in Tunnel American Journal of Networks and Communications 2015; 4(1): 5-9 Published online March 12, 2015 (http://www.sciencepublishinggroup.com/j/ajnc) doi: 10.11648/j.ajnc.20150401.12 ISSN: 2326-893X (Print);

More information

Invited talk IET-Renault Workshop Autonomous Vehicles: From theory to full scale applications Novotel Paris Les Halles, June 18 th 2015

Invited talk IET-Renault Workshop Autonomous Vehicles: From theory to full scale applications Novotel Paris Les Halles, June 18 th 2015 Risk assessment & Decision-making for safe Vehicle Navigation under Uncertainty Christian LAUGIER, First class Research Director at Inria http://emotion.inrialpes.fr/laugier Contributions from Mathias

More information

An Automated Face Reader for Fatigue Detection

An Automated Face Reader for Fatigue Detection An Automated Face Reader for Fatigue Detection Haisong Gu Dept. of Computer Science University of Nevada Reno Haisonggu@ieee.org Qiang Ji Dept. of ECSE Rensselaer Polytechnic Institute qji@ecse.rpi.edu

More information

A SURVEY ON GESTURE RECOGNITION TECHNOLOGY

A SURVEY ON GESTURE RECOGNITION TECHNOLOGY A SURVEY ON GESTURE RECOGNITION TECHNOLOGY Deeba Kazim 1, Mohd Faisal 2 1 MCA Student, Integral University, Lucknow (India) 2 Assistant Professor, Integral University, Lucknow (india) ABSTRACT Gesture

More information

Multimodal Face Recognition using Hybrid Correlation Filters

Multimodal Face Recognition using Hybrid Correlation Filters Multimodal Face Recognition using Hybrid Correlation Filters Anamika Dubey, Abhishek Sharma Electrical Engineering Department, Indian Institute of Technology Roorkee, India {ana.iitr, abhisharayiya}@gmail.com

More information

An Un-awarely Collected Real World Face Database: The ISL-Door Face Database

An Un-awarely Collected Real World Face Database: The ISL-Door Face Database An Un-awarely Collected Real World Face Database: The ISL-Door Face Database Hazım Kemal Ekenel, Rainer Stiefelhagen Interactive Systems Labs (ISL), Universität Karlsruhe (TH), Am Fasanengarten 5, 76131

More information

interactive IP: Perception platform and modules

interactive IP: Perception platform and modules interactive IP: Perception platform and modules Angelos Amditis, ICCS 19 th ITS-WC-SIS76: Advanced integrated safety applications based on enhanced perception, active interventions and new advanced sensors

More information

Distributed Vision System: A Perceptual Information Infrastructure for Robot Navigation

Distributed Vision System: A Perceptual Information Infrastructure for Robot Navigation Distributed Vision System: A Perceptual Information Infrastructure for Robot Navigation Hiroshi Ishiguro Department of Information Science, Kyoto University Sakyo-ku, Kyoto 606-01, Japan E-mail: ishiguro@kuis.kyoto-u.ac.jp

More information

An Information Fusion Method for Vehicle Positioning System

An Information Fusion Method for Vehicle Positioning System An Information Fusion Method for Vehicle Positioning System Yi Yan, Che-Cheng Chang and Wun-Sheng Yao Abstract Vehicle positioning techniques have a broad application in advanced driver assistant system

More information

Gaze Fixations and Dynamics for Behavior Modeling and Prediction of On-road Driving Maneuvers

Gaze Fixations and Dynamics for Behavior Modeling and Prediction of On-road Driving Maneuvers Gaze Fixations and Dynamics for Behavior Modeling and Prediction of On-road Driving Maneuvers Sujitha Martin and Mohan M. Trivedi Abstract From driver assistance in manual mode to takeover requests in

More information

Looking at the Driver/Rider in Autonomous Vehicles to Predict Take-Over Readiness

Looking at the Driver/Rider in Autonomous Vehicles to Predict Take-Over Readiness 1 Looking at the Driver/Rider in Autonomous Vehicles to Predict Take-Over Readiness Nachiket Deo, and Mohan M. Trivedi, Fellow, IEEE arxiv:1811.06047v1 [cs.cv] 14 Nov 2018 Abstract Continuous estimation

More information

3D Interaction using Hand Motion Tracking. Srinath Sridhar Antti Oulasvirta

3D Interaction using Hand Motion Tracking. Srinath Sridhar Antti Oulasvirta 3D Interaction using Hand Motion Tracking Srinath Sridhar Antti Oulasvirta EIT ICT Labs Smart Spaces Summer School 05-June-2013 Speaker Srinath Sridhar PhD Student Supervised by Prof. Dr. Christian Theobalt

More information

Perception. Read: AIMA Chapter 24 & Chapter HW#8 due today. Vision

Perception. Read: AIMA Chapter 24 & Chapter HW#8 due today. Vision 11-25-2013 Perception Vision Read: AIMA Chapter 24 & Chapter 25.3 HW#8 due today visual aural haptic & tactile vestibular (balance: equilibrium, acceleration, and orientation wrt gravity) olfactory taste

More information

ADAS Development using Advanced Real-Time All-in-the-Loop Simulators. Roberto De Vecchi VI-grade Enrico Busto - AddFor

ADAS Development using Advanced Real-Time All-in-the-Loop Simulators. Roberto De Vecchi VI-grade Enrico Busto - AddFor ADAS Development using Advanced Real-Time All-in-the-Loop Simulators Roberto De Vecchi VI-grade Enrico Busto - AddFor The Scenario The introduction of ADAS and AV has created completely new challenges

More information

LabVIEW based Intelligent Frontal & Non- Frontal Face Recognition System

LabVIEW based Intelligent Frontal & Non- Frontal Face Recognition System LabVIEW based Intelligent Frontal & Non- Frontal Face Recognition System Muralindran Mariappan, Manimehala Nadarajan, and Karthigayan Muthukaruppan Abstract Face identification and tracking has taken a

More information

Markerless 3D Gesture-based Interaction for Handheld Augmented Reality Interfaces

Markerless 3D Gesture-based Interaction for Handheld Augmented Reality Interfaces Markerless 3D Gesture-based Interaction for Handheld Augmented Reality Interfaces Huidong Bai The HIT Lab NZ, University of Canterbury, Christchurch, 8041 New Zealand huidong.bai@pg.canterbury.ac.nz Lei

More information

FOCAL LENGTH CHANGE COMPENSATION FOR MONOCULAR SLAM

FOCAL LENGTH CHANGE COMPENSATION FOR MONOCULAR SLAM FOCAL LENGTH CHANGE COMPENSATION FOR MONOCULAR SLAM Takafumi Taketomi Nara Institute of Science and Technology, Japan Janne Heikkilä University of Oulu, Finland ABSTRACT In this paper, we propose a method

More information

CS295-1 Final Project : AIBO

CS295-1 Final Project : AIBO CS295-1 Final Project : AIBO Mert Akdere, Ethan F. Leland December 20, 2005 Abstract This document is the final report for our CS295-1 Sensor Data Management Course Final Project: Project AIBO. The main

More information

Active Safety Systems Development and Driver behavior Modeling: A Literature Survey

Active Safety Systems Development and Driver behavior Modeling: A Literature Survey Advance in Electronic and Electric Engineering. ISSN 2231-1297, Volume 3, Number 9 (2013) pp. 1153-1166 Research India Publications http://www.ripublication.com/aeee.htm Active Safety Systems Development

More information

Advanced Techniques for Mobile Robotics Location-Based Activity Recognition

Advanced Techniques for Mobile Robotics Location-Based Activity Recognition Advanced Techniques for Mobile Robotics Location-Based Activity Recognition Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz Activity Recognition Based on L. Liao, D. J. Patterson, D. Fox,

More information

International Journal of Informative & Futuristic Research ISSN (Online):

International Journal of Informative & Futuristic Research ISSN (Online): Reviewed Paper Volume 2 Issue 4 December 2014 International Journal of Informative & Futuristic Research ISSN (Online): 2347-1697 A Survey On Simultaneous Localization And Mapping Paper ID IJIFR/ V2/ E4/

More information

Human-Computer Intelligent Interaction: A Survey

Human-Computer Intelligent Interaction: A Survey Human-Computer Intelligent Interaction: A Survey Michael Lew 1, Erwin M. Bakker 1, Nicu Sebe 2, and Thomas S. Huang 3 1 LIACS Media Lab, Leiden University, The Netherlands 2 ISIS Group, University of Amsterdam,

More information

Technologies that will make a difference for Canadian Law Enforcement

Technologies that will make a difference for Canadian Law Enforcement The Future Of Public Safety In Smart Cities Technologies that will make a difference for Canadian Law Enforcement The car is several meters away, with only the passenger s side visible to the naked eye,

More information

GESTURE BASED HUMAN MULTI-ROBOT INTERACTION. Gerard Canal, Cecilio Angulo, and Sergio Escalera

GESTURE BASED HUMAN MULTI-ROBOT INTERACTION. Gerard Canal, Cecilio Angulo, and Sergio Escalera GESTURE BASED HUMAN MULTI-ROBOT INTERACTION Gerard Canal, Cecilio Angulo, and Sergio Escalera Gesture based Human Multi-Robot Interaction Gerard Canal Camprodon 2/27 Introduction Nowadays robots are able

More information

Road Boundary Estimation in Construction Sites Michael Darms, Matthias Komar, Dirk Waldbauer, Stefan Lüke

Road Boundary Estimation in Construction Sites Michael Darms, Matthias Komar, Dirk Waldbauer, Stefan Lüke Road Boundary Estimation in Construction Sites Michael Darms, Matthias Komar, Dirk Waldbauer, Stefan Lüke Lanes in Construction Sites Roadway is often bounded by elevated objects (e.g. guidance walls)

More information

Background Pixel Classification for Motion Detection in Video Image Sequences

Background Pixel Classification for Motion Detection in Video Image Sequences Background Pixel Classification for Motion Detection in Video Image Sequences P. Gil-Jiménez, S. Maldonado-Bascón, R. Gil-Pita, and H. Gómez-Moreno Dpto. de Teoría de la señal y Comunicaciones. Universidad

More information

Main Subject Detection of Image by Cropping Specific Sharp Area

Main Subject Detection of Image by Cropping Specific Sharp Area Main Subject Detection of Image by Cropping Specific Sharp Area FOTIOS C. VAIOULIS 1, MARIOS S. POULOS 1, GEORGE D. BOKOS 1 and NIKOLAOS ALEXANDRIS 2 Department of Archives and Library Science Ionian University

More information

Detection and Tracking of the Vanishing Point on a Horizon for Automotive Applications

Detection and Tracking of the Vanishing Point on a Horizon for Automotive Applications Detection and Tracking of the Vanishing Point on a Horizon for Automotive Applications Young-Woo Seo and Ragunathan (Raj) Rajkumar GM-CMU Autonomous Driving Collaborative Research Lab Carnegie Mellon University

More information

A Review over Different Blur Detection Techniques in Image Processing

A Review over Different Blur Detection Techniques in Image Processing A Review over Different Blur Detection Techniques in Image Processing 1 Anupama Sharma, 2 Devarshi Shukla 1 E.C.E student, 2 H.O.D, Department of electronics communication engineering, LR College of engineering

More information

Face Tracking using Camshift in Head Gesture Recognition System

Face Tracking using Camshift in Head Gesture Recognition System Face Tracking using Camshift in Head Gesture Recognition System Er. Rushikesh T. Bankar 1, Dr. Suresh S. Salankar 2 1 Department of Electronics Engineering, G H Raisoni College of Engineering, Nagpur,

More information

Development of Gaze Detection Technology toward Driver's State Estimation

Development of Gaze Detection Technology toward Driver's State Estimation Development of Gaze Detection Technology toward Driver's State Estimation Naoyuki OKADA Akira SUGIE Itsuki HAMAUE Minoru FUJIOKA Susumu YAMAMOTO Abstract In recent years, the development of advanced safety

More information

Early Take-Over Preparation in Stereoscopic 3D

Early Take-Over Preparation in Stereoscopic 3D Adjunct Proceedings of the 10th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI 18), September 23 25, 2018, Toronto, Canada. Early Take-Over

More information

Real-Time Face Detection and Tracking for High Resolution Smart Camera System

Real-Time Face Detection and Tracking for High Resolution Smart Camera System Digital Image Computing Techniques and Applications Real-Time Face Detection and Tracking for High Resolution Smart Camera System Y. M. Mustafah a,b, T. Shan a, A. W. Azman a,b, A. Bigdeli a, B. C. Lovell

More information

Stabilize humanoid robot teleoperated by a RGB-D sensor

Stabilize humanoid robot teleoperated by a RGB-D sensor Stabilize humanoid robot teleoperated by a RGB-D sensor Andrea Bisson, Andrea Busatto, Stefano Michieletto, and Emanuele Menegatti Intelligent Autonomous Systems Lab (IAS-Lab) Department of Information

More information

3D Face Recognition System in Time Critical Security Applications

3D Face Recognition System in Time Critical Security Applications Middle-East Journal of Scientific Research 25 (7): 1619-1623, 2017 ISSN 1990-9233 IDOSI Publications, 2017 DOI: 10.5829/idosi.mejsr.2017.1619.1623 3D Face Recognition System in Time Critical Security Applications

More information

Watermarking-based Image Authentication with Recovery Capability using Halftoning and IWT

Watermarking-based Image Authentication with Recovery Capability using Halftoning and IWT Watermarking-based Image Authentication with Recovery Capability using Halftoning and IWT Luis Rosales-Roldan, Manuel Cedillo-Hernández, Mariko Nakano-Miyatake, Héctor Pérez-Meana Postgraduate Section,

More information

Stereo-based Hand Gesture Tracking and Recognition in Immersive Stereoscopic Displays. Habib Abi-Rached Thursday 17 February 2005.

Stereo-based Hand Gesture Tracking and Recognition in Immersive Stereoscopic Displays. Habib Abi-Rached Thursday 17 February 2005. Stereo-based Hand Gesture Tracking and Recognition in Immersive Stereoscopic Displays Habib Abi-Rached Thursday 17 February 2005. Objective Mission: Facilitate communication: Bandwidth. Intuitiveness.

More information

Controlling vehicle functions with natural body language

Controlling vehicle functions with natural body language Controlling vehicle functions with natural body language Dr. Alexander van Laack 1, Oliver Kirsch 2, Gert-Dieter Tuzar 3, Judy Blessing 4 Design Experience Europe, Visteon Innovation & Technology GmbH

More information

Ant? Bird? Dog? Human -SURE

Ant? Bird? Dog? Human -SURE ECE 172A: Intelligent Systems: Introduction Week 1 (October 1, 2007): Course Introduction and Announcements Intelligent Robots as Intelligent Systems A systems perspective of Intelligent Robots and capabilities

More information