Chapter 30 Vision for Driver Assistance: Looking at People in a Vehicle


Cuong Tran and Mohan Manubhai Trivedi

Abstract An important real-life application domain of computer vision techniques for looking at people is the development of Intelligent Driver Assistance Systems (IDASs). By analyzing information from both looking in and looking out of the vehicle, such systems can actively prevent vehicular accidents and improve driver safety as well as the driving experience. Toward such goals, systems that look at the people in a vehicle (i.e. driver and passengers) to understand their intent, behavior, and states are needed. This is a challenging task which typically requires high reliability, accuracy, and efficient performance. Challenges also come from the dynamic background and varying lighting conditions in driving scenes. However, looking at people in a vehicle also has its own characteristics which can be exploited to simplify the problem, such as people typically sitting in a fixed position and their activities being highly related to the driving context. In this chapter, we give a concise overview of related research studies to see how their approaches were developed to fit the specific requirements and characteristics of looking at people in a vehicle. From a historical point of view, we first discuss studies looking at the head, eyes, and facial landmarks, and then studies looking at the body, hands, and feet. Despite much active research and many published papers, developing accurate, reliable, and efficient approaches for looking at people in real-world driving scenarios is still an open problem. To this end, we also discuss some remaining issues for future development in the area.

30.1 Introduction and Motivation

Automobiles were at the core of transforming the lives of individuals and nations during the 20th century. However, despite their many benefits, motor vehicles pose a considerable safety risk.
A study by the World Health Organization mentions that annually, over 1.2 million fatalities and over 20 million serious injuries occur worldwide [28].

C. Tran (✉) · M.M. Trivedi
Laboratory for Intelligent and Safe Automobiles (LISA), University of California at San Diego, San Diego, CA 92037, USA
cutran@ucsd.edu

M.M. Trivedi
mtrivedi@ucsd.edu

T.B. Moeslund et al. (eds.), Visual Analysis of Humans, DOI / _30, Springer-Verlag London Limited

Fig. 30.1 Looking-in and looking-out of a vehicle [36]

Most roadway accidents are caused by driver error. A 2006 study sponsored by the US Department of Transportation's National Highway Traffic Safety Administration concluded that driver inattention contributes to nearly 80 percent of crashes and 65 percent of near crashes. Therefore, in today's vehicles, embedded computing systems are increasingly used to make them safer as well as more reliable, comfortable, and enjoyable to drive.

In vehicle-based safety systems, it is more desirable to prevent an accident (active safety) than to reduce the severity of injuries (passive safety). However, active-safety systems also pose more difficult and challenging problems. To be effective, such technologies must be human-centric and work in a holistic manner [34, 36]. As illustrated in Fig. 30.1, information from looking inside a vehicle (i.e. at the driver and passengers), looking outside at the environment (e.g. at roads and other cars), as well as from vehicle sensors (e.g. measuring steering angle and speed) needs to be taken into account.

In this chapter, we focus on the task of looking at people inside a vehicle (i.e. driver and passengers) to understand their intent, behavior, and states. This task is inherently challenging due to the dynamic driving scene background and varying lighting conditions. Moreover, it also demands high reliability, accuracy, and efficient performance (e.g. real-time performance for safety-related applications). Obviously, the fundamental computer vision and machine learning techniques for looking at people, which were covered in previous chapters, are the foundation for techniques looking at people inside a vehicle. However, human activity in a vehicle also has its own characteristics which can be exploited to improve system performance, such as people typically sitting in a fixed position and their activities being highly related to the driving context (e.g. most driver foot movements are related to pedal press activity).

In the following sections, we provide a concise overview of several selected research studies, focusing on how computer vision techniques are developed to fit the requirements and characteristics of systems looking at people in a vehicle. We start in Sect. 30.2 with a discussion of some criteria for categorizing existing approaches, such as their objective (e.g. to monitor driver fatigue or to analyze driver intent) or the cueing information which is used (e.g. looking at head, eyes, or feet). Initially,

research studies in this area focused more on cues related to the driver's head, like head pose, eye gaze, and facial landmarks, which are needed to determine driver attention and fatigue state [3, 13, 14, 18, 20, 26, 30, 38]. Some selected approaches of this kind are covered in Sect. 30.3. More recently, besides these traditional cues, other parts of the body like hand movement, foot movement, or the whole upper body posture have also been shown to be important for understanding people's intent and behavior in a vehicle [6, 7, 19, 31, 33, 35]. We discuss some selected approaches in this category in Sect. 30.4. Despite much active research, developing accurate, reliable, and efficient approaches for looking in a vehicle, as well as combining them with looking-out information for a holistic human-centered Intelligent Driver Assistance System (IDAS), are still open problems. Section 30.5 is a discussion of some open issues for future development in the area, and finally we have some concluding remarks in Sect. 30.6.

30.2 Overview of Selected Studies

There are several ways to categorize related studies in the area, depending on the specific purpose. Figure 30.2 shows the basic steps of a common computer vision system looking at people. We see that approaches may use different types of input (e.g. monocular camera, stereo camera, camera with active infrared illuminators), extract different types of intermediate features, and aim to analyze different types of driver behavior or state. Besides these functional criteria, we can also categorize the approaches based on the fundamental techniques underlying their implementation at each step. With the goal of providing an overview of several selected research studies, we put them into a summary table (Table 30.1) with the following important elements associated with these approaches.

Objective: What is the final goal of that study (e.g. to monitor driver fatigue, detect driver distraction, or recognize driver turn intent)?

Sensor input: Which type of sensor input is used (e.g. monocular, stereo, or thermal camera)?

Fig. 30.2 Basic components of a system looking at people in a vehicle
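The pipeline of Fig. 30.2 (sensor input, intermediate feature extraction, behavior/state analysis) can be caricatured as two pluggable stages. Everything in this sketch, including the stage names and the PERCLOS-style threshold, is an illustrative assumption and not an API or value from the chapter:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of the Fig. 30.2 pipeline: sensor frame ->
# intermediate features -> driver behavior/state. Stage names and the
# 0.15 eye-closure threshold are illustrative assumptions only.
@dataclass
class LookingInPipeline:
    extract_features: Callable[[dict], dict]   # e.g. head pose, eye closure
    analyze_state: Callable[[dict], str]       # e.g. "alert" / "drowsy"

    def process(self, frame: dict) -> str:
        return self.analyze_state(self.extract_features(frame))

# Toy stages: pass through an eye-closure measure and threshold it.
pipe = LookingInPipeline(
    extract_features=lambda frame: {"eye_closure": frame["perclos"]},
    analyze_state=lambda f: "drowsy" if f["eye_closure"] > 0.15 else "alert",
)
print(pipe.process({"perclos": 0.3}))   # drowsy
```

Swapping either stage (e.g. a different sensor front-end or a different state classifier) corresponds to moving between rows of Table 30.1 below.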

Table 30.1 Overview of selected studies for looking at people in a vehicle. Each entry lists: study; objective; sensor input; monitored body parts; methodology and experimental evaluation.

Grace et al. '98 [14]: Drowsiness detection for truck drivers; two PERCLOS [14] cameras; eyes. Use illuminated eye detection and PERCLOS measurement. In-vehicle experiment.

Smith et al. '03 [30]: Determination of driver visual attention; monocular; head, eyes, and face features. Use appearance-based head and face feature tracking. Model driver visual attention with FSMs. In-vehicle experiment.

Ishikawa et al. '04 [18]: Driver gaze tracking; monocular; eyes. Use an active appearance model to track the whole face, then detect the iris with template matching and estimate eye gaze. In-vehicle and simulation.

Ji et al. '04 [20]: Driver fatigue monitoring and prediction; two cameras with active infrared illuminators; head, eyes, facial landmarks. Combine illumination-based and appearance-based techniques for eye detection. Fuse different information from head pose and eyes in a probabilistic fatigue model. Simulation experiment.

Trivedi et al. '04 [35]: Occupant posture analysis; stereo and thermal cameras; body posture. Use head tracking to infer sitting posture. In-vehicle experiment.

Fletcher et al. '05 [13]: Driver awareness monitoring; commercial eye tracker; eye gaze. Develop a road sign recognition algorithm. Use epipolar geometry to correlate eye gaze with the road scene for awareness monitoring. In-vehicle experiment.

Veeraraghavan et al. '05 [37]: Unsafe driver activity detection; monocular; face and hands. Use motion of skin regions and a Bayesian classifier to detect some unsafe activities (e.g. drinking, using a cellphone). Simulation experiment.

Bergasa et al. '06 [3]: Driver vigilance monitoring; one camera with illuminator; head and eyes. Use PERCLOS, nodding frequency, and blink frequency in a fuzzy inference system to compute vigilance level. In-vehicle experiment.

Cheng and Trivedi '06 [6]: Turn intent analysis; multi-modal sensors and marker-based motion capture; head and hands. Use sparse Bayesian learning to classify turn intent and evaluate with different feature vector combinations. In-vehicle experiment.

Cheng et al. '06 [5]: Driver hand grasp and turn analysis; color and thermal cameras; head and hands. Use optical flow head tracking and an HMM-based activity classifier. In-vehicle experiment.

Ito and Kanade '08 [19]: Prediction of 9 driver operations; monocular; body. Track 6 marker points on shoulders, elbows, and wrists. Use discriminant analysis to learn Gaussian operation models and then a Bayesian classifier. Simulation experiment.

Doshi and Trivedi '09 [11]: Driver lane change intent analysis; monocular; head, eyes. Use a Relevance Vector Machine for lane change prediction (optical flow based head motion, manually labeled eye gaze). In-vehicle and simulation.

Tran and Trivedi '09 [31]: Driver distraction monitoring; 3 cameras; head, hands. Combine tracked head pose and hand position using a rule-based approach. In-vehicle experiment.

Murphy-Chutorian and Trivedi '10 [26]: Real-time 3D head pose tracking; monocular; head. Hybrid method combining static pose estimation with an appearance-based particle filter 3D head tracking algorithm. In-vehicle experiment.

Wu and Trivedi '10 [38]: Eye gaze tracking and blink recognition; monocular; eyes. Use two interactive particle filters to simultaneously track eyes and detect blinks. In-vehicle and lab experiment.

Cheng and Trivedi '10 [7]: Driver and passenger hand determination; monocular camera with illuminator; hands. Use a HOG feature descriptor and SVM classifier. In-vehicle experiment.

Monitored body parts: Which type of cueing feature is extracted (e.g. information about head pose, eye gaze, body posture, or foot movement)?

Methodology and algorithm: Which underlying techniques were used?

Experiment and evaluation: How was the proposed approach evaluated? Was it actually evaluated in a real-world driving scenario or in an indoor simulation?
In the next sections, we review several selected methods, focusing on how computer vision techniques were developed to fit the requirements and characteristics of systems looking at people in a vehicle. Based on the type of cueing information, we discuss these approaches in two main categories: approaches

looking at the driver's head, face, and facial landmarks (Sect. 30.3) and approaches looking at the driver's body, hands, and feet (Sect. 30.4).

30.3 Looking at Driver Head, Face, and Facial Landmarks

Initial research studies looking at the driver focused more on cues related to the head, like head pose, eye gaze, and facial landmarks. These kinds of cueing features were shown to be important in determining driver attention and cognitive state (e.g. fatigue) [3, 13-15, 20]. Some example studies in this category are approaches for monitoring and prediction of driver fatigue, driver head pose tracking for monitoring driver awareness, and eye tracking and blink recognition.

30.3.1 Monitoring and Prediction of Driver Fatigue

The National Highway Traffic Safety Administration (NHTSA) [27] has reported drowsy driving as an important cause of fatal on-road crashes and injuries in the U.S. Therefore, developing systems that actively monitor a driver's level of vigilance and alert the driver to unsafe driving conditions is desirable for accident prevention. Different approaches have been used to tackle the problem, such as assessing the vigilance capacity of an operator before the work is performed [9], assessing the driver's state using sensors mounted on the driver to measure heart rate and brain activity [39], or using information from vehicle-embedded sensors (e.g. steering wheel movements, acceleration and braking profiles) [2]. Computer vision techniques looking at the driver provide another, non-intrusive approach to the problem. Research studies have shown that measures such as PERCLOS, introduced by Grace et al. [14], are highly correlated with fatigue state and can be used to monitor driver fatigue. Other head and face related features like eye blink frequency, eye movement, nodding frequency, and facial expression have also been used for driver fatigue and vigilance analysis [3, 20].

We will take a look at a representative approach proposed by Ji et al. [20] for real-time monitoring and prediction of driver fatigue. In order to achieve the robustness required for in-vehicle applications, different cues including eyelid movement, gaze movement, head movement, and facial expression were extracted and fused in a Bayesian network for human fatigue modeling and prediction. Two CCD cameras with active infrared illuminators were used. For eye detection and tracking, the bright pupil technique was combined with an appearance-based technique using an SVM classifier to improve robustness. The eye detection and tracking information was then also utilized in their algorithms for tracking head pose with a Kalman filter and for tracking facial landmarks around the mouth and eye regions using Gabor features. Validation of the eye detection and tracking part, as well as of the extracted fatigue parameters and score, was provided and showed some good results (e.g.

a …% false-alarm rate and a 4.2% misdetection rate). However, it seems that the proposed approach was only evaluated with data from an indoor environment. Therefore, how this approach works in real-world driving scenarios, with their challenges, is still an open question.

30.3.2 Eye Localization, Tracking, and Blink Pattern Recognition

Focusing on the task of robustly extracting visual cue information, a former member of our team, Wu et al., proposed an appearance-based approach using monocular camera input for eye tracking and blink pattern recognition [38]. For better accuracy and robustness, a binary tree is used to model the statistical structure of the object's feature space. This is a kind of global-to-local representation in which each subtree explains more detailed information than its parent tree (useful for representing objects with high-order substructures, like eye images). After the eyes are automatically located, a particle filter-based approach is used to simultaneously track eyes and detect blinks. Two interactive particle filters are used, one for the open eye and one for the closed eye. The posterior probabilities learned by the particle filters are used to determine which particle filter gives the correct tracks. That particle filter is then labeled as the primary one and used to reinitialize the other. Both the blink detection rate and the eye tracking accuracy were evaluated and showed good results in various scenarios, including indoor and in-vehicle data sequences as well as the FRGC (Face Recognition Grand Challenge) benchmark data for evaluation of tracking accuracy.

Also focusing on a robust eye gaze tracking system, Ishikawa et al. [18] proposed tracking the whole face with AAMs for more reliable extraction of eye regions and head pose. Based on the extracted eye regions, a template matching method is used to detect the iris, which is then used for eye gaze estimation. This approach was evaluated and showed promising results with a few subjects for both indoor and in-vehicle video sequences.

30.3.3 Tracking Driver Head Pose

Head pose is also a strong indicator of a driver's field of view and current focus of attention, and it is typically less noisy than eye gaze. Driver head-motion estimation has also been used along with video-based lane detection and vehicle CAN-bus (Controller Area Network) data to predict the driver's intent to change lanes in advance of the actual movement of the vehicle [22]. Related work in head pose estimation can be roughly categorized into static head pose estimation methods, which estimate head pose directly from the current still image; tracking methods, which recover the global pose change of the head from the observed movement between video frames; and hybrid methods. A detailed survey of head pose estimation

and tracking approaches can be found in [25]. Up to now, computational head pose estimation remains a challenging vision problem, and there are no solutions that are both inexpensive and widely available.

In [26], a former member of our team, Murphy-Chutorian et al., proposed an integrated approach using monocular camera input for real-time driver head pose tracking in 3D. In order to overcome the difficulties inherent in varying lighting conditions in a moving car, a static head pose estimator using support vector regressors (SVRs) was combined with an appearance-based particle filter for 3D head model tracking in an augmented reality environment. For initial head pose estimation with SVRs, the Local Gradient Orientation (LGO) histogram, which is robust to minor deviations in region alignment and lighting, was used. The LGO histogram of a scale-normalized facial region is a 3D histogram of size M x N x O, in which the first two dimensions correspond to the vertical and horizontal positions in the image and the third to the gradient orientation. Based on the initial head pose estimate, an appearance-based particle filter in an augmented reality environment, i.e. a virtual environment that mimics the view space of a real camera, is used to track the driver's head in 3D. Using an initial estimate of the head position and orientation, the system generates a texture-mapped 3D model of the head from the most recent video image and places it into the environment. A particle filter approach is then used to match the view from each subsequent video frame. Though this operation is computationally expensive, it was highly optimized for graphics processing units (GPUs) in the proposed implementation to achieve real-time performance (tracking the head at 30 frames per second).
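As a rough illustration of the kind of descriptor involved, an LGO-style histogram over spatial cells and gradient orientations might be sketched as follows. This is a minimal sketch, not the authors' implementation; the cell grid (M = N = 4) and bin count (O = 8) are arbitrary assumptions:

```python
import numpy as np

def lgo_histogram(patch, cells=(4, 4), bins=8):
    """Local-Gradient-Orientation-style histogram of an image patch:
    an M x N x O array over vertical cell, horizontal cell, and
    gradient orientation (magnitude-weighted). Illustrative sketch only."""
    gy, gx = np.gradient(patch.astype(float))      # image gradients
    mag = np.hypot(gx, gy)
    ori = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned orientation in [0, pi)
    M, N = cells
    h, w = patch.shape
    cell_h, cell_w = h // M, w // N
    bin_idx = np.minimum((ori / np.pi * bins).astype(int), bins - 1)
    hist = np.zeros((M, N, bins))
    for i in range(M):
        for j in range(N):
            sl = (slice(i * cell_h, (i + 1) * cell_h),
                  slice(j * cell_w, (j + 1) * cell_w))
            # accumulate gradient magnitude into the orientation bins of this cell
            np.add.at(hist[i, j], bin_idx[sl].ravel(), mag[sl].ravel())
    # L2-normalize the whole descriptor for some robustness to lighting changes
    return hist / (np.linalg.norm(hist) + 1e-9)

desc = lgo_histogram(np.random.default_rng(1).random((64, 64)))
print(desc.shape)   # (4, 4, 8)
```

A descriptor of this family feeds naturally into per-pose support vector regressors, one regressor per pose angle.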
Evaluation of this approach showed good results in real-world driving situations with drivers of varying ages, races, and sexes, spanning daytime and nighttime conditions.

30.4 Looking at Driver Body, Hands, and Feet

Besides cues from the head, eyes, and facial features, information from other parts of the driver's body, like hand movement, foot movement, or the whole upper body posture, also provides important information. Recently, there have been more research studies making use of such cues for better understanding of driver intent and behavior [6, 7, 31, 33].

30.4.1 Looking at Hands

Looking at the driver's hands is needed since the hands are an important factor in controlling the vehicle. However, they have not been studied much in the area of looking inside a vehicle. In [6], a sparse Bayesian classifier taking into account both hand position and head pose was developed for lane change intent prediction. Hand position was also used in a system assisting the driver in keeping hands on the wheel and eyes on the road [31]. A rule-based approach with state machines was used to combine hand

Fig. 30.3 System for keeping hands on the wheel and eyes on the road [31]

position and head pose in monitoring driver distraction (Fig. 30.3). In [7], a former member of our team, Cheng et al., proposed a novel real-time computer vision system that robustly discriminates which of the front-row seat occupants is accessing the infotainment controls. Knowing who the user is (driver, passenger, or no one) can alleviate driver distraction and maximize the passenger infotainment experience (e.g. the infotainment system should only provide its fancy options, which can be distracting, to the passenger but not the driver). The algorithm uses a modified histogram-of-oriented-gradients (HOG) feature descriptor to represent the image area over the infotainment controls, and an SVM with median filtering over time to classify each image into one of the three classes, with a 96% average correct classification rate. This rate was achieved over a wide range of illumination conditions, human subjects, and times of day.

30.4.2 Modeling and Prediction of Driver Foot Behavior

Besides the hands, the driver's feet also have an important role in controlling the vehicle. In addition to information from embedded pedal sensors, the visual foot movement before and after a pedal press can provide valuable information for a better semantic understanding of driver behavior, state, and style. It can also be used to gain a time advantage by predicting a pedal press before it actually happens, which is very important for providing proper assistance to the driver in time-critical (e.g. safety-related) situations. However, there have been very few research studies analyzing driver foot information. Mulder et al. introduced a haptic gas pedal feedback system for car-following [23] in which the gas pedal position was used to improve the system performance. A former member of our team, McCall et al. [22], developed a brake assistance system which took into account both the driver's intent to brake (from pedal positions and camera-based foot position) and the need to brake given the current situation.

Recently, our team has examined an approach for driver foot behavior analysis using monocular foot camera input. The underlying idea is motivated by the fact that driver foot movement is highly related to pedal press activity. After tracking the foot movement with an optical flow based tracking method, a 7-state HMM for describing foot behavior was specifically designed for driving scenarios (Fig. 30.4). The elements of this driver foot behavior HMM are as follows.

Fig. 30.4 Foot behavior HMM state model with 7 states

Hidden states: We have 7 states {s_1, s_2, s_3, s_4, s_5, s_6, s_7}, namely Neutral, BrkEngage, AccEngage, TowardsBrk, TowardsAcc, ReleaseBrk, and ReleaseAcc. The state at time t is denoted by the random variable q_t.

Observation: The observation at time t is denoted by the random variable O_t, which has 6 components, O_t = {p_x, p_y, v_x, v_y, B, A}, where {p_x, p_y, v_x, v_y} are the current estimated position and velocity of the driver's foot, and {B, A} are obtained from vehicle CAN information and indicate whether the brake and accelerator are currently engaged.

Observation probability distributions: In our HMM, we assume a Gaussian output probability distribution P(O_t | q_t = s_i) = N(μ_i, σ_i).

Transition matrix: A = {a_ij} is a 7 x 7 state transition matrix, where a_ij is the probability of making a transition from state s_i to s_j: a_ij = P(q_{t+1} = s_j | q_t = s_i).

Initial state distribution: We assume a uniform distribution over the initial states.

Utilizing reliable information from the vehicle CAN data, an automatic data labeling procedure was developed for training and evaluation of the HMM. The HMM parameters Λ, including the Gaussian observation probability distributions and the transition matrix, are learned using the Baum-Welch algorithm. The meaning of these estimated foot behavior states also connects directly to the prediction of actual pedal presses (i.e. when the foot is in the state TowardsBrk or TowardsAcc, we can predict a corresponding brake or acceleration press in the near future). This approach was evaluated with data from a real-world driving testbed

Fig. 30.5 Vehicle testbed configuration for the foot analysis experiment

Fig. 30.6 Tracked trajectories of a brake (red) and an acceleration (blue). The labeled points show the outputs of the HMM-based foot behavior analysis

(Fig. 30.5). An experimental data collection paradigm was designed to approximate stop-and-go traffic, in which the driver accelerates or brakes depending on whether a stop or go cue is shown. Figure 30.6 visualizes the outputs of the approach

for a brake and an acceleration example. Over all 15 experimental runs with 128 trials (a stop or go cue shown) per run, a major part (75%) of the pedal presses could be predicted with 95% accuracy at 133 ms prior to the actual pedal press. Regarding the misapplication cases (i.e. subjects were cued to hit a specific pedal but instead applied the wrong pedal), all of them were predicted correctly 200 ms on average before the actual press, which is actually earlier than for general pedal press prediction. This indicates the potential of using the proposed approach to predict and mitigate pedal errors, a problem of recent interest to the automotive safety community [16].

30.4.3 Analyzing Driver Posture for Driver Assistance

The whole body posture is another source of cueing information that should be explored more for looking at people inside a vehicle. Figure 30.7 shows some possible ranges of driver posture movement which might have a connection to driver state and intention. For example, leaning backward might indicate a relaxed position, while leaning forward indicates concentration. A driver may also change posture in preparation for a specific task, such as moving the head forward for a better visual check before a lane change.

Fig. 30.7 Illustration of some possible ranges of driver posture movement during driving

In [19], Ito and Kanade used six marker points on the shoulders, elbows, and wrists to predict nine driver operations toward different destinations, including navigation, A/C, left vent, right vent, gear box, console box, passenger seat, glove compartment, and rear-view mirror. Their approach was evaluated with different subjects in driving simulation, with high prediction accuracy (90%) and a low false positive rate (1.4%). This approach, however, requires putting markers on the driver. In [8], Datta et al. developed a markerless approach, in which a tracking system for articulated planes was applied to track 2D driver body pose on these same simulation data. Though this approach automates the tracking part, it still requires a manual initialization of the tracking model.

Besides looking at the driver, looking at occupant posture is also important. In [35], our team investigated the basic feasibility of using stereo and thermal long-wavelength infrared video for occupant position and posture analysis, which is a key requirement in designing smart airbag systems. In this investigation, our suggestion was to use head tracking information, which is easier to track, instead of more detailed

occupant posture analysis, for robust smart airbag deployment. However, for potential applications beyond smart airbags, such as driver attentiveness analysis and human-machine interfaces inside the car, we need to look at the more detailed body posture of the driver and occupants.

Fig. 30.8 Elbow joint prediction. (Left) Generate elbow candidates at each frame. (Right) Over a temporal segment, select the sequence of elbow joints that minimizes the joint displacement. By adding 2 pseudo nodes s and t with zero-weighted edges, this can be represented as a shortest path problem

Our team has developed a computational approach for upper body tracking using the 3D movement of the extremities (head and hands) [32]. This approach tracks a 3D skeletal upper body model which is determined by a set of upper body joint and end point positions. To achieve robustness and real-time performance, this approach first tracks the 3D movements of the extremities, i.e. the head and hands. Then, using human upper body configuration constraints, the movements of the extremities are used to predict the whole 3D upper body motion, including the inner joints. Since the head and hand regions are typically well defined and undergo less occlusion, their tracking is more reliable and enables more robust upper body pose determination. Moreover, by breaking the high-dimensional search for the upper body pose into two steps, the complexity is reduced considerably. The downside is that we need to deal with the ambiguity in the inverse kinematics of the upper body, i.e. there can be various upper body poses corresponding to the same head and hand positions. However, this issue is reduced in driving scenarios, since the driver typically sits in a fixed position.
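To make the kinematic constraint at the heart of this two-step scheme concrete: with fixed upper- and lower-arm lengths, the feasible elbow positions for a given shoulder S and hand H lie on the circle where two spheres intersect. The following toy sketch (illustrative positions and arm lengths, not the chapter's implementation) enumerates such candidates:

```python
import numpy as np

# Toy sketch of the elbow-candidate geometry: feasible elbow positions are
# at distance `upper` from shoulder S and `lower` from hand H, i.e. on the
# circle formed by the intersection of two spheres. All values illustrative.
def elbow_candidates(S, H, upper, lower, n=8):
    S, H = np.asarray(S, float), np.asarray(H, float)
    d = np.linalg.norm(H - S)
    assert abs(upper - lower) < d < upper + lower, "arm cannot reach this pose"
    u = (H - S) / d                               # axis from shoulder to hand
    a = (upper**2 - lower**2 + d**2) / (2 * d)    # distance S -> circle center
    r = np.sqrt(upper**2 - a**2)                  # circle radius
    c = S + a * u
    # build an orthonormal basis (v, w) of the plane containing the circle
    v = np.cross(u, [0.0, 0.0, 1.0])
    if np.linalg.norm(v) < 1e-8:                  # u was parallel to z
        v = np.cross(u, [0.0, 1.0, 0.0])
    v /= np.linalg.norm(v)
    w = np.cross(u, v)
    thetas = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    return np.array([c + r * (np.cos(t) * v + np.sin(t) * w) for t in thetas])

cands = elbow_candidates(S=[0, 0, 0], H=[0.4, 0.0, -0.2],
                         upper=0.30, lower=0.28, n=8)
# every candidate respects the upper-arm length constraint
print(np.allclose(np.linalg.norm(cands, axis=1), 0.30))   # True
```

Quantizing this circle into a handful of candidates per frame is what turns the per-frame ambiguity into a small discrete choice that temporal constraints can then resolve.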
To deal with this ambiguity, temporal inverse kinematics, based on observing the dynamics of the extremities, is used instead of just inverse kinematics constraints at each single frame. Figure 30.8 briefly describes this idea with a numerical method to predict elbow joint sequences. Since the lengths of the upper arm and lower arm are fixed, the possible elbow joint positions for a known shoulder joint position S and hand position H lie on a circle. At each frame, the range of possible elbow joints (the mentioned circle) is determined and then quantized into several elbow candidates based on a distance

14 610 C. Tran and M.M. Trivedi Fig Superimposed results of 3D driver body pose tracking using extremities movement threshold between candidates (Fig. 30.8(left)). For a whole temporal segmentation, the selected sequence is the one that minimize the total elbow joint displacement. As shown in Fig. 30.8(right), this selection can be represented as a shortest path problem. Due to the layer structure of the constructed graph, a dynamic programming technique can be used to solve this shortest path problem in linear time complexity O(n) where n is the number of frames in the sequence. This approach was validated and showed good results with various subjects in both indoor and in vehicle environments. Figure 30.9 shows some example results of the 3D driver body pose tracking superimposed on input images for visual evaluation Open Issues for Future Research Some related research studies have shown promising results. However, the development of accurate, reliable, and efficient approaches to looking at people in a vehicle for real-world driver assistance systems is still in its infancy. In this section, we will discuss some of the main issues that we think should be addressed for the future development in the area. Coordination between real-world and simulation testbeds: Simulation environments have the advantage of more flexibility in configuring sensors and designing experiment tasks for deeper analysis, which might be difficult and unsafe for implementing in real-world driving. However, the ultimate goal is to develop systems that work for real vehicle and there are always gaps between simulation environment and real world. Therefore in general a coordination between realworld driving and simulation environment is useful and should be considered in the development process. 
Looking at driver body at multiple levels: To achieve robustness and accuracy, a potential trend is to combine cues at multiple body levels, since the human body is a harmonious whole and behavior and states are generally expressed at different body levels simultaneously. However, cueing information from different body parts has different characteristics and typically requires different extraction approaches. How to develop efficient systems that look at the driver body at multiple levels is therefore still an open question.

Investigating the role of features extracted from different body parts: Depending on the behavior and/or cognitive state of interest, features from some body parts may be useful, while others may not be, or may even be distracting. Moreover,

for efficiency, only useful feature cues should be extracted. In [11], Doshi et al. from our team conducted a comparative study on the roles of head pose and eye gaze for driver lane-change intent analysis. The results indicated that head pose, which is typically less noisy and easier to track than eye gaze, is actually the better feature for lane-change intent prediction. In general, a way to systematically conduct similar investigations for different feature cues and analysis tasks is desirable.

Combining looking-in and looking-out: Some research studies have combined the outputs of looking-in and looking-out analysis for different assistance systems, such as driver intent analysis [7, 10, 21], intelligent brake assistance [22], traffic sign awareness [13], and driver distraction [17]. In [29], Pugeault and Bowden showed that information from a looking-out camera can be used to predict some driver actions, including steering left or right and pressing the accelerator, brake, or clutch. This implies that contextual information from looking-out is also important to looking-in analysis of driver behavior and states. In general, both looking-in and looking-out information will be needed in developing efficient human-centered driver assistance systems [34, 36].

Interacting with the driver when needed: Generally, IDASs need the ability to provide feedback to the user when needed (e.g. to alert the driver in critical situations). However, this feedback must be introduced carefully to ensure that it does not confuse or distract the driver, thereby undermining its intended purpose. Interdisciplinary investigation of the effects of different feedback mechanisms, including visual, audio, and/or haptic feedback, is needed [1].

Learning individual driver models vs. generic driver models: It has been noted that individual drivers may act and respond in different ways under various conditions [4, 12, 24].
Therefore, it might be difficult to learn generic driver models that work well for all drivers. To achieve better performance, adapting assistance systems to individual drivers based on their style and preferences is needed. Murphey et al. [24] used the pedal press profile to classify driver styles (i.e. calm, normal, and aggressive) and showed the correlation between these styles and fuel consumption. In [12], our team also studied some measures of driving style and their correlation with the predictability and responsiveness of the driver. The results indicated that aggressive drivers are more predictable than non-aggressive drivers, while non-aggressive drivers are more receptive to feedback from driver assistance systems.

Conclusion

Looking at people in a vehicle to understand their behavior and state is an important area which plays a significant role in developing human-centered Intelligent Driver Assistance Systems. The task is challenging due to the high demands on reliability and efficiency as well as the inherent computer vision difficulties of dynamic backgrounds and varying lighting conditions. In this chapter, we provided a concise overview of several selected research studies looking at different body parts, ranging

from coarse body to the more detailed levels of feet, hands, head, eyes, and facial landmarks. To overcome the inherent challenges and achieve the required performance, some high-level lessons learned from those studies are as follows.

- Design techniques specific to in-vehicle applications, exploiting characteristics such as the driver typically sitting in a fixed position or driver foot movement being highly related to pedal press actions.
- Integrate cueing information from different body parts.
- Consider the trade-offs between cues that can be extracted more reliably and cues that seem useful but are hard to extract.
- Make use of both dynamic information (body motion) and static information (body appearance).
- Make use of different input modalities (e.g. color cameras and thermal infrared cameras).

Despite much active research, more effort is still needed to turn these high-level ideas into accurate, reliable, and efficient approaches for looking at people in a vehicle, and to actually improve the lives of drivers around the world.

Further Reading

Interested readers may consult the following references for a broad overview of research topic trends and research groups in the area of intelligent transportation systems.

Li, L., Li, X., Cheng, C., Chen, C., Ke, G., Zeng, D., Scherer, W.T.: Research collaboration and ITS topic evolution: 10 years at T-ITS. IEEE Trans. Intell. Transp. Syst. (June 2010)

Li, L., Li, X., Li, Z., Zeng, D., Scherer, W.T.: A bibliographic analysis of the IEEE transactions on intelligent transportation systems literature. IEEE Trans. Intell. Transp. Syst. (October 2010)

Acknowledgements

We thank our sponsors, the U.C. Discovery Program and the National Science Foundation, as well as industry sponsors including Nissan, Volkswagen Electronic Research Laboratory, and Mercedes.
We also thank former and current colleagues from our Laboratory for Intelligent and Safe Automobiles (LISA) for their cooperation, assistance, and contributions: Dr. Kohsia Huang, Dr. Joel McCall, Dr. Tarak Gandhi, Dr. Sangho Park, Dr. Shinko Cheng, Dr. Steve Krotosky, Dr. Junwen Wu, Dr. Erik Murphy-Chutorian, Dr. Brendan Morris, Dr. Anup Doshi, Mr. Sayanan Sivaraman, Mr. Ashish Tawari, and Mr. Ofer Achler.

References

1. Adell, E., Várhelyi, A.: Development of HMI components for a driver assistance system for safe speed and safe distance. In: The 13th World Congress and Exhibition on Intelligent Transport Systems and Services, ExCel London, United Kingdom (2006) [611]

2. Artaud, P., Planque, S., Lavergne, C., Cara, H., de Lepine, P., Tarriere, C., Gueguen, B.: An onboard system for detecting lapses of alertness in car driving. In: The 14th Int. Conf. Enhanced Safety of Vehicles (1994) [602]
3. Bergasa, L.M., Nuevo, J., Sotelo, M.A., Barea, R., Lopez, M.E.: Real-time system for monitoring driver vigilance. IEEE Trans. Intell. Transp. Syst. 7(1), (2006) [599, 600, 602]
4. Burnham, G.O., Seo, J., Bekey, G.A.: Identification of human driver models in car following. IEEE Trans. Autom. Control 19(6), (1974) [611]
5. Cheng, S.Y., Park, S., Trivedi, M.M.: Multiperspective and multimodal video arrays for 3D body tracking and activity analysis. Comput. Vis. Image Underst. (Special Issue on Advances in Vision Algorithms and Systems Beyond the Visible Spectrum) 106(2-3), (2007) [601]
6. Cheng, S.Y., Trivedi, M.M.: Turn-intent analysis using body pose for intelligent driver assistance. IEEE Pervasive Comput. 5(4), (2006) [599, 600, 604]
7. Cheng, S.Y., Trivedi, M.M.: Vision-based infotainment user determination by hand recognition for driver assistance. IEEE Trans. Intell. Transp. Syst. 11(3), (2010) [599, 601, 604, 605, 611]
8. Datta, A., Sheikh, Y., Kanade, T.: Linear motion estimation for systems of articulated planes. In: IEEE Conference on Computer Vision and Pattern Recognition (2008) [608]
9. Dinges, D., Mallis, M.: Managing fatigue by drowsiness detection: Can technological promises be realized? In: Hartley, L. (ed.) Managing Fatigue in Transportation. Elsevier, Oxford (1998) [602]
10. Doshi, A., Trivedi, M.M.: Investigating the relationships between gaze patterns, dynamic vehicle surround analysis, and driver intentions. In: IEEE Intelligent Vehicles Symposium (2009) [611]
11. Doshi, A., Trivedi, M.M.: On the roles of eye gaze and head pose in predicting driver's intent to change lanes. IEEE Trans. Intell. Transp. Syst. 10(3), (2009) [601, 611]
12.
Doshi, A., Trivedi, M.M.: Examining the impact of driving style on the predictability and responsiveness of the driver: Real-world and simulator analysis. In: IEEE Intelligent Vehicles Symposium (2010) [611]
13. Fletcher, L., Loy, G., Barnes, N., Zelinsky, A.: Correlating driver gaze with the road scene for driver assistance systems. Robot. Auton. Syst. 52(1), (2005) [599, 600, 602, 611]
14. Grace, R., Byrne, V.E., Bierman, D.M., Legrand, J.M., Davis, R.K., Staszewski, J.J., Carnahan, B.: A drowsy driver detection system for heavy vehicles. In: Proceedings of the 17th AIAA/IEEE/SAE Digital Avionics Systems Conference (DASC) (1998) [599, 600, 602]
15. Hammoud, R., Wilhelm, A., Malawey, P., Witt, G.: Efficient real-time algorithms for eye state and head pose tracking in advanced driver support systems. In: IEEE Conference on Computer Vision and Pattern Recognition (2005) [602]
16. Healey, J.R., Carty, S.S.: Driver error found in some Toyota acceleration cases. USA Today (2010) [608]
17. Huang, K.S., Trivedi, M.M., Gandhi, T.: Driver's view and vehicle surround estimation using omnidirectional video stream. In: IEEE Intelligent Vehicles Symposium (2003) [611]
18. Ishikawa, T., Baker, S., Matthews, I., Kanade, T.: Passive driver gaze tracking with active appearance models. In: The 11th World Congress on Intelligent Transportation Systems (2004) [599, 600, 603]
19. Ito, T., Kanade, T.: Predicting driver operations inside vehicles. In: IEEE International Conference on Automatic Face and Gesture Recognition (2008) [599, 601, 608]
20. Ji, Q., Zhu, Z., Lan, P.: Real time non-intrusive monitoring and prediction of driver fatigue. IEEE Trans. Veh. Technol. 53(4), (2004) [599, 600, 602]
21. McCall, J., Wipf, D., Trivedi, M.M., Rao, B.: Lane change intent analysis using robust operators and sparse Bayesian learning. IEEE Trans. Intell. Transp. Syst. 8(3), (2007) [611]
22. McCall, J.C., Trivedi, M.M.: Driver behavior and situation aware brake assistance for intelligent vehicles.
Proc. IEEE 95(2), (2007) [603,605,611]

23. Mulder, M., Pauwelussen, J.J.A., van Paassen, M.M., Mulder, M., Abbink, D.A.: Active deceleration support in car following. IEEE Trans. Syst. Man Cybern., Part A, Syst. Hum. 40(6), (2010) [605]
24. Murphey, Y.L., Milton, R., Kiliaris, L.: Driver's style classification using jerk analysis. In: IEEE Workshop on Computational Intelligence in Vehicles and Vehicular Systems (2009) [611]
25. Murphy-Chutorian, E., Trivedi, M.M.: Head pose estimation in computer vision: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 31(4), (2009) [604]
26. Murphy-Chutorian, E., Trivedi, M.M.: Head pose estimation and augmented reality tracking: An integrated system and evaluation for monitoring driver awareness. IEEE Trans. Intell. Transp. Syst. 11(2), (2010) [599, 601, 604]
27. NHTSA: Traffic safety facts 2006: A compilation of motor vehicle crash data from the Fatality Analysis Reporting System and the General Estimates System. Nat. Center Stat. Anal., US Dept. Transp., Washington, DC (2006) [602]
28. Peden, M., Scurfield, R., Sleet, D., Mohan, D., Hyder, A.A., Jarawan, E., Mathers, C.: World report on road traffic injury prevention: Summary. World Health Organization, Geneva, Switzerland (2004) [597]
29. Pugeault, N., Bowden, R.: Learning pre-attentive driving behaviour from holistic visual features. In: The 11th European Conference on Computer Vision (2010) [611]
30. Smith, P., Shah, M., Lobo, N.V.: Determining driver visual attention with one camera. IEEE Trans. Intell. Transp. Syst. 4(4), (2003) [599]
31. Tran, C., Trivedi, M.M.: Driver assistance for keeping hands on the wheel and eyes on the road. In: IEEE International Conference on Vehicular Electronics and Safety (2009) [599, 601, 604, 605]
32. Tran, C., Trivedi, M.M.: Introducing XMOB: Extremity movement observation framework for upper body pose tracking in 3D. In: IEEE International Symposium on Multimedia (2009) [609]
33.
Tran, C., Trivedi, M.M.: Towards a vision-based system exploring 3D driver posture dynamics for driver assistance: Issues and possibilities. In: IEEE Intelligent Vehicles Symposium (2010) [599, 604]
34. Trivedi, M.M., Cheng, S.Y.: Holistic sensing and active displays for intelligent driver support systems. IEEE Comput. 40(5), (2007) [598, 611]
35. Trivedi, M.M., Cheng, S.Y., Childers, E., Krotosky, S.: Occupant posture analysis with stereo and thermal infrared video: Algorithms and experimental evaluation. IEEE Trans. Veh. Technol. (Special Issue on In-Vehicle Vision Systems) 53(6), (2004) [599, 600, 608]
36. Trivedi, M.M., Gandhi, T., McCall, J.: Looking-in and looking-out of a vehicle: Computer-vision-based enhanced vehicle safety. IEEE Trans. Intell. Transp. Syst. 8(1), (2007) [598, 611]
37. Veeraraghavan, H., Atev, S., Bird, N., Schrater, P., Papanikolopoulos, N.: Driver activity monitoring through supervised and unsupervised learning. In: IEEE Conference on Intelligent Transportation Systems (2005) [600]
38. Wu, J., Trivedi, M.M.: An eye localization, tracking and blink pattern recognition system: Algorithm and evaluation. ACM Trans. Multimedia Comput. Commun. Appl. 6(2) (2010) [599, 601, 603]
39. Yamamoto, K., Higuchi, S.: Development of a drowsiness warning system. J. Soc. Automot. Eng. Jpn. (1992) [602]


Iris Recognition using Histogram Analysis Iris Recognition using Histogram Analysis Robert W. Ives, Anthony J. Guidry and Delores M. Etter Electrical Engineering Department, U.S. Naval Academy Annapolis, MD 21402-5025 Abstract- Iris recognition

More information

Stereo-based Hand Gesture Tracking and Recognition in Immersive Stereoscopic Displays. Habib Abi-Rached Thursday 17 February 2005.

Stereo-based Hand Gesture Tracking and Recognition in Immersive Stereoscopic Displays. Habib Abi-Rached Thursday 17 February 2005. Stereo-based Hand Gesture Tracking and Recognition in Immersive Stereoscopic Displays Habib Abi-Rached Thursday 17 February 2005. Objective Mission: Facilitate communication: Bandwidth. Intuitiveness.

More information

HAVEit Highly Automated Vehicles for Intelligent Transport

HAVEit Highly Automated Vehicles for Intelligent Transport HAVEit Highly Automated Vehicles for Intelligent Transport Holger Zeng Project Manager CONTINENTAL AUTOMOTIVE HAVEit General Information Project full title: Highly Automated Vehicles for Intelligent Transport

More information

S.P.Q.R. Legged Team Report from RoboCup 2003

S.P.Q.R. Legged Team Report from RoboCup 2003 S.P.Q.R. Legged Team Report from RoboCup 2003 L. Iocchi and D. Nardi Dipartimento di Informatica e Sistemistica Universitá di Roma La Sapienza Via Salaria 113-00198 Roma, Italy {iocchi,nardi}@dis.uniroma1.it,

More information

Ant? Bird? Dog? Human -SURE

Ant? Bird? Dog? Human -SURE ECE 172A: Intelligent Systems: Introduction Week 1 (October 1, 2007): Course Introduction and Announcements Intelligent Robots as Intelligent Systems A systems perspective of Intelligent Robots and capabilities

More information

Driver Assistance System Based on Video Image Processing for Emergency Case in Tunnel

Driver Assistance System Based on Video Image Processing for Emergency Case in Tunnel American Journal of Networks and Communications 2015; 4(1): 5-9 Published online March 12, 2015 (http://www.sciencepublishinggroup.com/j/ajnc) doi: 10.11648/j.ajnc.20150401.12 ISSN: 2326-893X (Print);

More information

Human Autonomous Vehicles Interactions: An Interdisciplinary Approach

Human Autonomous Vehicles Interactions: An Interdisciplinary Approach Human Autonomous Vehicles Interactions: An Interdisciplinary Approach X. Jessie Yang xijyang@umich.edu Dawn Tilbury tilbury@umich.edu Anuj K. Pradhan Transportation Research Institute anujkp@umich.edu

More information

Platform-Based Design of Augmented Cognition Systems. Latosha Marshall & Colby Raley ENSE623 Fall 2004

Platform-Based Design of Augmented Cognition Systems. Latosha Marshall & Colby Raley ENSE623 Fall 2004 Platform-Based Design of Augmented Cognition Systems Latosha Marshall & Colby Raley ENSE623 Fall 2004 Design & implementation of Augmented Cognition systems: Modular design can make it possible Platform-based

More information

SIS63-Building the Future-Advanced Integrated Safety Applications: interactive Perception platform and fusion modules results

SIS63-Building the Future-Advanced Integrated Safety Applications: interactive Perception platform and fusion modules results SIS63-Building the Future-Advanced Integrated Safety Applications: interactive Perception platform and fusion modules results Angelos Amditis (ICCS) and Lali Ghosh (DEL) 18 th October 2013 20 th ITS World

More information

DRIVER FATIGUE DETECTION USING IMAGE PROCESSING AND ACCIDENT PREVENTION

DRIVER FATIGUE DETECTION USING IMAGE PROCESSING AND ACCIDENT PREVENTION Volume 116 No. 11 2017, 91-99 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu doi: 10.12732/ijpam.v116i11.10 ijpam.eu DRIVER FATIGUE DETECTION USING IMAGE

More information

Improvement of Accuracy in Remote Gaze Detection for User Wearing Eyeglasses Using Relative Position Between Centers of Pupil and Corneal Sphere

Improvement of Accuracy in Remote Gaze Detection for User Wearing Eyeglasses Using Relative Position Between Centers of Pupil and Corneal Sphere Improvement of Accuracy in Remote Gaze Detection for User Wearing Eyeglasses Using Relative Position Between Centers of Pupil and Corneal Sphere Kiyotaka Fukumoto (&), Takumi Tsuzuki, and Yoshinobu Ebisawa

More information

Drowsy Driver Detection System

Drowsy Driver Detection System Drowsy Driver Detection System Abstract Driver drowsiness is one of the major causes of serious traffic accidents, which makes this an area of great socioeconomic concern. Continuous monitoring of drivers'

More information

GESTURE BASED HUMAN MULTI-ROBOT INTERACTION. Gerard Canal, Cecilio Angulo, and Sergio Escalera

GESTURE BASED HUMAN MULTI-ROBOT INTERACTION. Gerard Canal, Cecilio Angulo, and Sergio Escalera GESTURE BASED HUMAN MULTI-ROBOT INTERACTION Gerard Canal, Cecilio Angulo, and Sergio Escalera Gesture based Human Multi-Robot Interaction Gerard Canal Camprodon 2/27 Introduction Nowadays robots are able

More information

P1.4. Light has to go where it is needed: Future Light Based Driver Assistance Systems

P1.4. Light has to go where it is needed: Future Light Based Driver Assistance Systems Light has to go where it is needed: Future Light Based Driver Assistance Systems Thomas Könning¹, Christian Amsel¹, Ingo Hoffmann² ¹ Hella KGaA Hueck & Co., Lippstadt, Germany ² Hella-Aglaia Mobile Vision

More information

Face Registration Using Wearable Active Vision Systems for Augmented Memory

Face Registration Using Wearable Active Vision Systems for Augmented Memory DICTA2002: Digital Image Computing Techniques and Applications, 21 22 January 2002, Melbourne, Australia 1 Face Registration Using Wearable Active Vision Systems for Augmented Memory Takekazu Kato Takeshi

More information

23270: AUGMENTED REALITY FOR NAVIGATION AND INFORMATIONAL ADAS. Sergii Bykov Technical Lead Machine Learning 12 Oct 2017

23270: AUGMENTED REALITY FOR NAVIGATION AND INFORMATIONAL ADAS. Sergii Bykov Technical Lead Machine Learning 12 Oct 2017 23270: AUGMENTED REALITY FOR NAVIGATION AND INFORMATIONAL ADAS Sergii Bykov Technical Lead Machine Learning 12 Oct 2017 Product Vision Company Introduction Apostera GmbH with headquarter in Munich, was

More information

Development of Hybrid Image Sensor for Pedestrian Detection

Development of Hybrid Image Sensor for Pedestrian Detection AUTOMOTIVE Development of Hybrid Image Sensor for Pedestrian Detection Hiroaki Saito*, Kenichi HatanaKa and toshikatsu HayaSaKi To reduce traffic accidents and serious injuries at intersections, development

More information

Traffic Control for a Swarm of Robots: Avoiding Group Conflicts

Traffic Control for a Swarm of Robots: Avoiding Group Conflicts Traffic Control for a Swarm of Robots: Avoiding Group Conflicts Leandro Soriano Marcolino and Luiz Chaimowicz Abstract A very common problem in the navigation of robotic swarms is when groups of robots

More information

Face Detection System on Ada boost Algorithm Using Haar Classifiers

Face Detection System on Ada boost Algorithm Using Haar Classifiers Vol.2, Issue.6, Nov-Dec. 2012 pp-3996-4000 ISSN: 2249-6645 Face Detection System on Ada boost Algorithm Using Haar Classifiers M. Gopi Krishna, A. Srinivasulu, Prof (Dr.) T.K.Basak 1, 2 Department of Electronics

More information

A Real Time Static & Dynamic Hand Gesture Recognition System

A Real Time Static & Dynamic Hand Gesture Recognition System International Journal of Engineering Inventions e-issn: 2278-7461, p-issn: 2319-6491 Volume 4, Issue 12 [Aug. 2015] PP: 93-98 A Real Time Static & Dynamic Hand Gesture Recognition System N. Subhash Chandra

More information

THE World Health Organization reports that more than

THE World Health Organization reports that more than IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS 1 Toward Privacy-Protecting Safety Systems for Naturalistic Driving Videos Sujitha Martin, Student Member, IEEE, Ashish Tawari, Student Member, IEEE,

More information

Effective Collision Avoidance System Using Modified Kalman Filter

Effective Collision Avoidance System Using Modified Kalman Filter Effective Collision Avoidance System Using Modified Kalman Filter Dnyaneshwar V. Avatirak, S. L. Nalbalwar & N. S. Jadhav DBATU Lonere E-mail : dvavatirak@dbatu.ac.in, nalbalwar_sanjayan@yahoo.com, nsjadhav@dbatu.ac.in

More information

DRIVER BEHAVIOR ANALYSIS USING NON-INVASIVE SENSORS

DRIVER BEHAVIOR ANALYSIS USING NON-INVASIVE SENSORS DRIVER BEHAVIOR ANALYSIS USING NON-INVASIVE SENSORS 1 L. Nikitha, 2 J.Kiranmai, 3 B.Vidhyalakshmi 1,2,3 Department of Electronics and Communication, SCSVMV University, Kanchipuram, (India) ABSTRACT Detecting

More information

Research on Hand Gesture Recognition Using Convolutional Neural Network

Research on Hand Gesture Recognition Using Convolutional Neural Network Research on Hand Gesture Recognition Using Convolutional Neural Network Tian Zhaoyang a, Cheng Lee Lung b a Department of Electronic Engineering, City University of Hong Kong, Hong Kong, China E-mail address:

More information

Looking at the Driver/Rider in Autonomous Vehicles to Predict Take-Over Readiness

Looking at the Driver/Rider in Autonomous Vehicles to Predict Take-Over Readiness 1 Looking at the Driver/Rider in Autonomous Vehicles to Predict Take-Over Readiness Nachiket Deo, and Mohan M. Trivedi, Fellow, IEEE arxiv:1811.06047v1 [cs.cv] 14 Nov 2018 Abstract Continuous estimation

More information

Hand & Upper Body Based Hybrid Gesture Recognition

Hand & Upper Body Based Hybrid Gesture Recognition Hand & Upper Body Based Hybrid Gesture Prerna Sharma #1, Naman Sharma *2 # Research Scholor, G. B. P. U. A. & T. Pantnagar, India * Ideal Institue of Technology, Ghaziabad, India Abstract Communication

More information

Real Time Video Analysis using Smart Phone Camera for Stroboscopic Image

Real Time Video Analysis using Smart Phone Camera for Stroboscopic Image Real Time Video Analysis using Smart Phone Camera for Stroboscopic Image Somnath Mukherjee, Kritikal Solutions Pvt. Ltd. (India); Soumyajit Ganguly, International Institute of Information Technology (India)

More information

BODILY NON-VERBAL INTERACTION WITH VIRTUAL CHARACTERS

BODILY NON-VERBAL INTERACTION WITH VIRTUAL CHARACTERS KEER2010, PARIS MARCH 2-4 2010 INTERNATIONAL CONFERENCE ON KANSEI ENGINEERING AND EMOTION RESEARCH 2010 BODILY NON-VERBAL INTERACTION WITH VIRTUAL CHARACTERS Marco GILLIES *a a Department of Computing,

More information

Research on visual physiological characteristics via virtual driving platform

Research on visual physiological characteristics via virtual driving platform Special Issue Article Research on visual physiological characteristics via virtual driving platform Advances in Mechanical Engineering 2018, Vol. 10(1) 1 10 Ó The Author(s) 2018 DOI: 10.1177/1687814017717664

More information

Directional Driver Hazard Advisory System. Benjamin Moore and Vasil Pendavinji ECE 445 Project Proposal Spring 2017 Team: 24 TA: Yuchen He

Directional Driver Hazard Advisory System. Benjamin Moore and Vasil Pendavinji ECE 445 Project Proposal Spring 2017 Team: 24 TA: Yuchen He Directional Driver Hazard Advisory System Benjamin Moore and Vasil Pendavinji ECE 445 Project Proposal Spring 2017 Team: 24 TA: Yuchen He 1 Table of Contents 1 Introduction... 3 1.1 Objective... 3 1.2

More information

A Training Based Approach for Vehicle Plate Recognition (VPR)

A Training Based Approach for Vehicle Plate Recognition (VPR) A Training Based Approach for Vehicle Plate Recognition (VPR) Laveena Agarwal 1, Vinish Kumar 2, Dwaipayan Dey 3 1 Department of Computer Science & Engineering, Sanskar College of Engineering &Technology,

More information

A Study on the control Method of 3-Dimensional Space Application using KINECT System Jong-wook Kang, Dong-jun Seo, and Dong-seok Jung,

A Study on the control Method of 3-Dimensional Space Application using KINECT System Jong-wook Kang, Dong-jun Seo, and Dong-seok Jung, IJCSNS International Journal of Computer Science and Network Security, VOL.11 No.9, September 2011 55 A Study on the control Method of 3-Dimensional Space Application using KINECT System Jong-wook Kang,

More information

Evaluation of High Intensity Discharge Automotive Forward Lighting

Evaluation of High Intensity Discharge Automotive Forward Lighting Evaluation of High Intensity Discharge Automotive Forward Lighting John van Derlofske, John D. Bullough, Claudia M. Hunter Rensselaer Polytechnic Institute, USA Abstract An experimental field investigation

More information

THE EFFECTS OF PC-BASED TRAINING ON NOVICE DRIVERS RISK AWARENESS IN A DRIVING SIMULATOR

THE EFFECTS OF PC-BASED TRAINING ON NOVICE DRIVERS RISK AWARENESS IN A DRIVING SIMULATOR THE EFFECTS OF PC-BASED TRAINING ON NOVICE DRIVERS RISK AWARENESS IN A DRIVING SIMULATOR Anuj K. Pradhan 1, Donald L. Fisher 1, Alexander Pollatsek 2 1 Department of Mechanical and Industrial Engineering

More information

Bluetooth Low Energy Sensing Technology for Proximity Construction Applications

Bluetooth Low Energy Sensing Technology for Proximity Construction Applications Bluetooth Low Energy Sensing Technology for Proximity Construction Applications JeeWoong Park School of Civil and Environmental Engineering, Georgia Institute of Technology, 790 Atlantic Dr. N.W., Atlanta,

More information

INDOOR USER ZONING AND TRACKING IN PASSIVE INFRARED SENSING SYSTEMS. Gianluca Monaci, Ashish Pandharipande

INDOOR USER ZONING AND TRACKING IN PASSIVE INFRARED SENSING SYSTEMS. Gianluca Monaci, Ashish Pandharipande 20th European Signal Processing Conference (EUSIPCO 2012) Bucharest, Romania, August 27-31, 2012 INDOOR USER ZONING AND TRACKING IN PASSIVE INFRARED SENSING SYSTEMS Gianluca Monaci, Ashish Pandharipande

More information

Automatic Licenses Plate Recognition System

Automatic Licenses Plate Recognition System Automatic Licenses Plate Recognition System Garima R. Yadav Dept. of Electronics & Comm. Engineering Marathwada Institute of Technology, Aurangabad (Maharashtra), India yadavgarima08@gmail.com Prof. H.K.

More information

MOBILITY RESEARCH NEEDS FROM THE GOVERNMENT PERSPECTIVE

MOBILITY RESEARCH NEEDS FROM THE GOVERNMENT PERSPECTIVE MOBILITY RESEARCH NEEDS FROM THE GOVERNMENT PERSPECTIVE First Annual 2018 National Mobility Summit of US DOT University Transportation Centers (UTC) April 12, 2018 Washington, DC Research Areas Cooperative

More information

Applying Vision to Intelligent Human-Computer Interaction

Applying Vision to Intelligent Human-Computer Interaction Applying Vision to Intelligent Human-Computer Interaction Guangqi Ye Department of Computer Science The Johns Hopkins University Baltimore, MD 21218 October 21, 2005 1 Vision for Natural HCI Advantages

More information

Gesture Recognition with Real World Environment using Kinect: A Review

Gesture Recognition with Real World Environment using Kinect: A Review Gesture Recognition with Real World Environment using Kinect: A Review Prakash S. Sawai 1, Prof. V. K. Shandilya 2 P.G. Student, Department of Computer Science & Engineering, Sipna COET, Amravati, Maharashtra,

More information

ADAS Development using Advanced Real-Time All-in-the-Loop Simulators. Roberto De Vecchi VI-grade Enrico Busto - AddFor

ADAS Development using Advanced Real-Time All-in-the-Loop Simulators. Roberto De Vecchi VI-grade Enrico Busto - AddFor ADAS Development using Advanced Real-Time All-in-the-Loop Simulators Roberto De Vecchi VI-grade Enrico Busto - AddFor The Scenario The introduction of ADAS and AV has created completely new challenges

More information

ENHANCED HUMAN-AGENT INTERACTION: AUGMENTING INTERACTION MODELS WITH EMBODIED AGENTS BY SERAFIN BENTO. MASTER OF SCIENCE in INFORMATION SYSTEMS

ENHANCED HUMAN-AGENT INTERACTION: AUGMENTING INTERACTION MODELS WITH EMBODIED AGENTS BY SERAFIN BENTO. MASTER OF SCIENCE in INFORMATION SYSTEMS BY SERAFIN BENTO MASTER OF SCIENCE in INFORMATION SYSTEMS Edmonton, Alberta September, 2015 ABSTRACT The popularity of software agents demands for more comprehensive HAI design processes. The outcome of

More information

Using Driving Simulator for Advance Placement of Guide Sign Design for Exits along Highways

Using Driving Simulator for Advance Placement of Guide Sign Design for Exits along Highways Using Driving Simulator for Advance Placement of Guide Sign Design for Exits along Highways Fengxiang Qiao, Xiaoyue Liu, and Lei Yu Department of Transportation Studies Texas Southern University 3100 Cleburne

More information

Vision Based Intelligent Traffic Analysis System for Accident Detection and Reporting System

Vision Based Intelligent Traffic Analysis System for Accident Detection and Reporting System Vision Based Intelligent Traffic Analysis System for Accident Detection and Reporting System 1 Gayathri Elumalai, 2 O.S.P.Mathanki, 3 S.Swetha 1, 2, 3 III Year, Student, Department of CSE, Panimalar Institute

More information

On Generalizing Driver Gaze Zone Estimation using Convolutional Neural Networks

On Generalizing Driver Gaze Zone Estimation using Convolutional Neural Networks 2017 IEEE Intelligent Vehicles Symposium (IV) June 11-14, 2017, Redondo Beach, CA, USA On Generalizing Driver Gaze Zone Estimation using Convolutional Neural Networks Sourabh Vora, Akshay Rangesh and Mohan

More information

Multi-robot Formation Control Based on Leader-follower Method

Multi-robot Formation Control Based on Leader-follower Method Journal of Computers Vol. 29 No. 2, 2018, pp. 233-240 doi:10.3966/199115992018042902022 Multi-robot Formation Control Based on Leader-follower Method Xibao Wu 1*, Wenbai Chen 1, Fangfang Ji 1, Jixing Ye

More information