Driver Assistance for "Keeping Hands on the Wheel and Eyes on the Road"


ICVES 2009

Cuong Tran and Mohan Manubhai Trivedi
Laboratory for Intelligent and Safe Automobiles (LISA)
University of California at San Diego, La Jolla, CA, USA

Abstract: Motivated by a basic tip for safe driving, "Keeping Hands on the Wheel and Eyes on the Road," this paper introduces a vision-based system for driver activity analysis that observes the 3D movement of the driver's head and hands in multiview video. From the results of upper body and head pose tracking, semantic descriptions of driver activities are extracted in two steps. First, we determine the basic activities of each upper body part (e.g., two, one, or no hands on the steering wheel; head looking left, straight, or right). These basic activities are then combined in a fusion step to extract higher-level semantic descriptions of driver activity (e.g., whether the driver is following the above safety tip). Our experimental evaluation with real-world street driving shows the promise of applying the proposed system both for post analysis of captured driving data and for real-time driver assistance.

Index Terms: Driver activity analysis, upper body pose analysis, intelligent vehicles, active safety.

I. INTRODUCTION

Computer vision and machine learning technologies play an important role in today's vehicles and are increasingly used to improve both safety and comfort. To be effective, however, such technologies need to be human centric and need to work in a holistic manner that takes into account the different components of the system: the driver (e.g., observing the driver to recognize activity and attention state), the vehicle (e.g., monitoring speed, steering angle, and braking), and the vehicle surround (e.g., watching the road and other cars to understand the surrounding situation) [14], [15].
Among these components, this paper considers looking at the driver, a very important part of driver assistance systems: a large portion of accidents is caused by human errors such as driver inattention or cognitive overload [1]. We propose a vision-based system for driver activity analysis that observes the movement of the upper body extremities, i.e., the head and hands. The motivation is that, from a computer vision viewpoint, the extremal parts suffer less occlusion and can be tracked more robustly than other upper body parts. In the driving scene, where the types of upper body motion are quite restricted compared to the general case, we can also use knowledge of an upper body model to predict the whole upper body motion from extremity movement as an inverse kinematics problem. We have developed an approach for doing so, which we call the XMOB (extremity Movement OBservation) upper body pose tracker [12].

From a driver assistance viewpoint, even the coarse movement of head and hand blobs exposes several important cues about driver state and activity. For example, hand position can be used to determine how many hands are currently on the steering wheel, an indicator of readiness to control the vehicle. Hand blob movement may indicate whether the driver is at rest or performing some action (which may or may not relate to the driving task). Since the driver typically sits in a fixed position, head blob tracking roughly reveals the sitting posture, which is important, e.g., for smart airbag deployment [13]. In this paper, we also incorporate head pose estimation, a more detailed level of information; it is worthwhile because head pose is a strong cue for determining the driver's focus of attention. Our experiments with real-world street driving scenes show the potential of the proposed system for vision-based analysis of driver activity.
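The idea of recovering inner joints from extremity positions can be illustrated with a small inverse kinematics sketch. This is not the actual XMOB implementation; the positions, limb lengths, and tie-breaking rule below are hypothetical. Geometrically, the elbow must lie on the circle where two spheres intersect: one of upper-arm radius about the shoulder, one of forearm radius about the hand. Among those candidates we keep the one closest to the previous estimate, in the spirit of a minimum-joint-displacement rule.

```python
import numpy as np

def predict_elbow(shoulder, hand, l_upper, l_fore, elbow_prev):
    """Predict an elbow position from tracked shoulder and hand positions.

    The elbow lies on the circle where the sphere of radius l_upper
    around the shoulder meets the sphere of radius l_fore around the
    hand; among those candidates we keep the one closest to the
    previous elbow estimate (a 'minimize joint displacement' rule).
    """
    s, h, ep = (np.asarray(p, float) for p in (shoulder, hand, elbow_prev))
    u = h - s
    d = np.linalg.norm(u)
    if not abs(l_upper - l_fore) <= d <= l_upper + l_fore:
        raise ValueError("hand position unreachable with these limb lengths")
    u /= d
    a = (l_upper ** 2 - l_fore ** 2 + d ** 2) / (2 * d)  # centre offset along u
    r = np.sqrt(max(l_upper ** 2 - a ** 2, 0.0))         # radius of the circle
    c = s + a * u                                        # centre of the circle
    w = ep - c
    w -= np.dot(w, u) * u            # project previous elbow onto circle plane
    n = np.linalg.norm(w)
    if n < 1e-9:                     # degenerate: pick any direction in plane
        w = np.cross(u, [0.0, 0.0, 1.0])
        if np.linalg.norm(w) < 1e-9:
            w = np.cross(u, [0.0, 1.0, 0.0])
        n = np.linalg.norm(w)
    return c + (r / n) * w
```

Because upper body kinematics is redundant, every point on the circle is a valid elbow; the temporal tie-break is what makes the prediction stable across frames.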
Although the system currently works in offline mode, post-analyzing saved driving scene data, it holds strong promise for real-time driver assistance such as active safety systems (which predict critical situations and alert the driver if needed), since several major parts of the system already run in real time. On the other hand, post processing of saved data is itself an important task. In our lab we have built several vehicle testbeds that capture rich, contextual, realistic data of the environment, vehicle, and driver, which can later be used to study driver behavior and to develop better analysis algorithms. Since the amount of such data is far too large to process manually, post processing tools are needed to automatically extract semantic descriptions from it.

The remainder of the paper is organized as follows: Section II reviews related research. Section III describes the details of the proposed system for driver activity analysis. Section IV provides the experimental evaluation, and Section V closes with concluding remarks and discussion.

II. RELATED RESEARCH STUDIES

Several research studies have addressed driver activity analysis based on tracking different upper body parts. Park and Trivedi [9] proposed a hierarchical framework for driver activity analysis at multiple levels: from the low level of individual body-part pose, to the middle level of single body-part actions, to the high level of driver interaction with the vehicle. However, that paper only proposed the idea of such a framework; no implementation was presented. Cheng et al. [3] proposed using multi-perspective, multi-spectral (thermal infrared and color) cameras combined with vehicle dynamics data (e.g., steering angle, speed) for turn analysis and hand grasp analysis. Although using different camera types to improve robustness is sound in theory, a thermal camera seems a poor choice here, since in many real driving situations heat from the car engine or from sunshine can make thermal images very noisy.

In this paper we propose a system for driver activity analysis that observes the movement of the upper body extremities (head and hands) in multiview color cameras. At a high level, this system follows ideas quite similar to the hierarchical framework proposed in [9]; however, it is the first such system with a real implementation and evaluation. The XMOB upper body tracker is motivated by previous research on the inverse kinematics of arms; e.g., Soechting and Flanders, researchers in neurophysiology, found that the desired position of the hand roughly determines the arm posture [10]. Compared to other upper body pose tracking methods (e.g., [4], [6]), XMOB only needs image evidence to track the extremal parts (head and hands), which are the easiest parts to track, with less occlusion. From the 3D movements of the head and hand blobs, XMOB then predicts the whole upper body motion as an inverse kinematics problem.
By breaking the very large search problem of upper body pose estimation into two subproblems (tracking the extremal parts first, then predicting the most likely corresponding sequence of the remaining inner joints), the complexity is also reduced enough to achieve real-time performance. In our proposed system for driver activity tracking, we also incorporate head pose estimation, both because it reveals important information about the driver's focus of attention and because real-time, robust head pose tracking systems such as HyHOPE (Hybrid Head Orientation and Position Estimation) [8] have been developed. We chose head pose rather than a more detailed cue such as eye gaze because eye gaze tracking systems are typically less robust than head pose tracking, and it has been shown that for some tasks, such as lane change prediction, head pose is a better cue than eye gaze [5]. The next section goes into the details of the proposed vision-based system for driver activity analysis.

III. THE PROPOSED SYSTEM FOR DRIVER ACTIVITY ANALYSIS BY OBSERVING HEAD AND HAND MOVEMENTS

The flowchart of the proposed system is shown in Fig. 1. From synchronized multiview video input, the upper body pose and the head pose are tracked by XMOB and HyHOPE, respectively, in two separate streams. The pose estimation results are then synchronized and used to extract semantic descriptions of driver activities in two steps. First, we determine the basic activities of each individual upper body part, such as whether the head is looking left, straight, or right, or whether a hand is at rest or moving (currently we use the 4 sets of basic activities shown in Fig. 1.C). A fusion step then uses these basic activities as input to produce a higher-level semantic description of driver activity. Fig. 1.D shows the 6 types of semantic driver activity descriptions (events) that we currently detect.

Fig. 1. Flowchart of the proposed system for driver activity analysis

A. XMOB Upper Body Tracker

The skeletal upper body model used in the XMOB upper body tracker is shown in Fig. 2, with the head (hea), left and right hands (lha, rha), left and right shoulders (lsh, rsh), and left and right elbows (leb, reb). The lengths of the body parts are considered fixed, which means there is only kinematic movement at the joints. There are 4 joints in the model: two shoulder joints with 3 degrees of freedom (DOF) each, and two 1-DOF elbow joints. An upper body pose at time t can be represented by the 3D positions of the inner joints and extremities:

X_t = {P_t^hea, P_t^lha, P_t^rha, P_t^lsh, P_t^rsh, P_t^leb, P_t^reb}

As mentioned above, the idea is to break the estimation of X_t into two subproblems: first estimate the subset of extremities {P_t^hea, P_t^lha, P_t^rha}, which are the easiest parts to track, with less occlusion; then, by observing the sequence of these extremity movements over time, predict the corresponding sequence of the remaining subset {P_t^lsh, P_t^rsh, P_t^leb, P_t^reb} as an inverse kinematics problem.

Fig. 2. Upper body model used in the XMOB upper body tracker

The main ideas of the XMOB upper body tracker [12] are as follows. For head and hand tracking, XMOB uses a semi-supervised procedure, in which the user starts by moving only their extremities, to robustly segment the skin color of a particular user from the background colors of a particular scene. By learning a specific color clustering model for each case, we achieve better performance than a general model for arbitrary human skin color (e.g., [7]) and background colors. For the prediction of the corresponding sequence of inner joint positions {P_t^lsh, P_t^rsh, P_t^leb, P_t^reb} from the head and hand movement {P_t^hea, P_t^lha, P_t^rha}: since upper body kinematics is redundant, there can be many upper body poses with the same head and hand positions. There is therefore no exact solution in every case; instead we seek the most likely solution, one that is correct in many cases. XMOB takes a numerical approach. Using geometric constraints between the upper body joints and extremities, at each frame it determines a set of hypotheses for the possible inner joint locations. Then, observing the extremity movements over a period of time, XMOB uses the heuristic rules of minimizing joint displacement and preserving left-right symmetry to predict the corresponding sequence of inner joint positions.

B. HyHOPE Head Pose Tracking

We use the HyHOPE implementation of [8] for head pose tracking. The main idea of HyHOPE is to improve performance by combining static head pose estimation with a real-time 3D model-based tracking system.
From an initial estimate of head position and orientation, the system generates a texture-mapped 3D model of the head from the most recent head image and uses a particle filter to find the best match of this 3D model in each subsequent frame. HyHOPE also exploits GPU (graphics processing unit) programming to run in real time.

C. Extracting basic activities of each upper body part

There are 4 sets of basic activities, as shown in Fig. 1.C. These basic activities are extracted from the upper body pose and head pose tracking results by simple thresholding. For set 1, we roughly mark a 3D region for the steering wheel and determine whether a hand position is inside that region (hand on wheel) or not. For set 2, a distance threshold (10 cm in our experiments) between consecutive hand positions determines whether a hand is at rest or in motion. For set 3, since we assume the driver sits in a fixed position, the sitting pose is determined by a distance threshold (20 cm) between the reference 3D head position and the current 3D head position, projected on the direction of the car length. For set 4, a threshold on the head pan angle (20° in our experiments) determines whether the driver is looking left, straight, or right.

D. Combining basic activities of each upper body part for higher level of semantic description

We classify the relation between basic activities as separate, sequential, or concurrent, based on their time gap as well as the percentage of overlap between them (illustrated in Fig. 5).

Fig. 5. Three types of relation between basic activities

As shown in Fig. 1.D, the fusion of basic activities into a higher-level semantic description works in a rule-based manner, in which the AND operator represents a concurrent relation and the THEN operator represents a sequential relation. These rules can be implemented by 2 types of state machine: a 2-state machine for rules with no THEN operator and a 3-state machine for rules with one THEN operator.
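The per-part thresholding of Section III.C can be sketched as follows. The thresholds are the ones stated above; the wheel-region box coordinates, the frame conventions, and the sign of the pan angle are assumptions made for illustration, not values from the paper.

```python
import numpy as np

# Thresholds from Section III.C; region box and axes are assumed.
HAND_MOVE_THRESH = 0.10   # m between consecutive hand positions -> "move"
SIT_POSE_THRESH  = 0.20   # m head displacement along car-length axis
HEAD_PAN_THRESH  = 20.0   # degrees of pan -> "left" / "right"

WHEEL_MIN = np.array([-0.25, 0.3, -0.2])  # hypothetical 3D box marking
WHEEL_MAX = np.array([ 0.25, 0.7,  0.2])  # the steering wheel region

def hands_on_wheel(hand_positions):
    """Set 1: count hands whose 3D position falls inside the wheel region."""
    return sum(bool(np.all(p >= WHEEL_MIN) and np.all(p <= WHEEL_MAX))
               for p in hand_positions)

def hand_state(prev_pos, cur_pos):
    """Set 2: 'rest' vs 'move' from displacement between consecutive frames."""
    moved = np.linalg.norm(np.asarray(cur_pos) - np.asarray(prev_pos))
    return "move" if moved > HAND_MOVE_THRESH else "rest"

def sitting_pose(ref_head, cur_head, car_axis):
    """Set 3: head displacement projected on the car-length direction."""
    axis = np.asarray(car_axis) / np.linalg.norm(car_axis)
    d = np.dot(np.asarray(cur_head) - np.asarray(ref_head), axis)
    return "normal" if abs(d) < SIT_POSE_THRESH else "leaning"

def head_direction(pan_deg):
    """Set 4: looking left / straight / right from the head pan angle
    (positive pan taken as left, an assumed convention)."""
    if pan_deg > HEAD_PAN_THRESH:
        return "left"
    if pan_deg < -HEAD_PAN_THRESH:
        return "right"
    return "straight"
```

Each function maps one tracked quantity to one basic-activity label, so the four sets of Fig. 1.C can be produced frame by frame from the synchronized XMOB and HyHOPE outputs.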
Fig. 6 and Fig. 7 show examples of these two types of state machine; in both, the machine automatically returns to the start state s_0 after an event is detected.

Fig. 6. Example of a 3-state state machine for rule 2 in Fig. 1.D: from start state s_0, "Hand on wheel AND Head look left" leads to s_1, then "Hand on wheel AND Hand move" leads to s_2, where "Turn left" is detected

Fig. 7. Example of a 2-state state machine for rule 1 in Fig. 1.D: from start state s_0, "Head look straight AND Hand on wheel AND Hand rest" leads to s_1, where "Normal going forward" is detected
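The AND/THEN fusion can be sketched as a tiny state machine. This is a hypothetical reconstruction, not the paper's implementation: each stage is a set of basic activities that must hold concurrently (AND), and advancing between stages models THEN; the activity labels below follow the examples of Figs. 6 and 7.

```python
class RuleStateMachine:
    """Rule fusion sketch: one stage gives the 2-state machine of Fig. 7,
    two stages give the 3-state machine of Fig. 6."""

    def __init__(self, name, stages):
        self.name = name      # event reported when the last stage fires
        self.stages = stages  # list of sets of required basic activities
        self.state = 0        # index of the stage we are waiting for

    def step(self, active):
        """Feed the set of basic activities active in the current frame;
        return the event name when the final stage is reached."""
        if self.stages[self.state] <= active:  # all required activities hold
            self.state += 1
            if self.state == len(self.stages):
                self.state = 0                 # automatically return to start
                return self.name
        return None

# Rule 2 of Fig. 1.D ("Turn left") as a 3-state machine:
turn_left = RuleStateMachine("Turn left", [
    {"hand_on_wheel", "head_look_left"},   # s0 -> s1
    {"hand_on_wheel", "hand_move"},        # s1 -> s2 (THEN)
])

# Rule 1 of Fig. 1.D ("Normal going forward") as a 2-state machine:
normal_forward = RuleStateMachine("Normal going forward", [
    {"head_look_straight", "hand_on_wheel", "hand_rest"},
])
```

Calling `step` once per synchronized frame with the currently active basic activities yields the detected events, with each machine resetting to s_0 after it fires.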

IV. EXPERIMENTAL EVALUATION

All of the data used here was collected with the LISA-P testbed, which is centered on the computing, power, and mounting resources in the LISA-P that allow frame-rate collection of data simultaneously from multiple sources. For this experiment, we used 2 color cameras for upper body pose tracking and 1 color camera for head pose tracking (Fig. 8).

Fig. 8. Camera setup for the experiment in the LISA-P testbed

Real driving data of different drivers was captured and then analyzed by the proposed system. Fig. 9 shows some results for visual evaluation of HyHOPE head pose tracking under a large range of head rotation, different lighting conditions, and some occlusions. Fig. 10 shows results of XMOB upper body tracking during several different driving activities. Regarding runtime performance, we ran the analysis on a Pentium(R) D CPU at 2.8 GHz: HyHOPE head pose tracking ran at around 15 fps (frames per second), and XMOB upper body tracking at around 6 fps. Fig. 11 shows the results of extracting the basic activities of each upper body part (Section III.C) from the above pose tracking results. Compared against the manually annotated ground truth, the proposed system captures these basic activities quite well. Fig. 12 is a sample result of combining the basic activities of each upper body part into a higher-level semantic description (Section III.D).

V. CONCLUDING REMARKS

We have proposed a vision-based system for driver activity analysis that observes the 3D movement of the upper body extremities, including the head and hands. The system was evaluated on real-world street driving scenes and showed promise both for real-time active safety systems and for analysis systems that post process driving data captured in rich, contextual, realistic situations. As a future direction, this driver activity analysis should be incorporated with the other components, looking at the vehicle and the surrounding environment, to form a holistic sensing system for intelligent driver support.

ACKNOWLEDGMENT

We are thankful to our colleagues at the CVRR lab for useful discussions and assistance. We especially acknowledge Dr. Erik Murphy-Chutorian for his contribution in developing the HyHOPE system. The first author also thanks the Vietnam Education Foundation (VEF) for its sponsorship.

REFERENCES

[1] World report on road traffic injury prevention: Summary. Technical report, World Health Organization.
[2] S. Y. Cheng and M. M. Trivedi, "Turn-Intent Analysis Using Body Pose for Intelligent Driver Assistance," IEEE Pervasive Computing, 2006.
[3] S. Y. Cheng, S. Park, and M. M. Trivedi, "Multi-spectral and Multi-perspective Video Arrays for Driver Body Tracking and Activity Analysis," CVIU.
[4] A. Datta, Y. Sheikh, and T. Kanade, "Linear Motion Estimation for Systems of Articulated Planes," IEEE Conference on Computer Vision and Pattern Recognition.
[5] A. Doshi and M. M. Trivedi, "On the Roles of Eye Gaze and Head Pose in Predicting Driver's Intent to Change Lanes," IEEE Trans. on Intelligent Transportation Systems.
[6] V. Ferrari, M. Jiménez, and A. Zisserman, "Progressive Search Space Reduction for Human Pose Estimation," IEEE CVPR.
[7] G. Gomez and E. Morales, "Automatic Feature Construction and a Simple Rule Induction Algorithm for Skin Detection," ICML Workshop on Machine Learning in Computer Vision.
[8] E. Murphy-Chutorian and M. M. Trivedi, "HyHOPE: Hybrid Head Orientation and Position Estimation for Vision-based Driver Head Tracking," IEEE Intelligent Vehicles Symposium.
[9] S. Park and M. M. Trivedi, "Driver Activity Analysis for Intelligent Vehicles: Issues and Development Framework," IEEE Intelligent Vehicles Symposium.
[10] J. Soechting and M. Flanders, "Errors in pointing are due to approximations in sensorimotor transformations," Journal of Neurophysiology.
[11] C. Tran and M. M. Trivedi, "Human Body Modeling and Tracking Using Volumetric Representation: Selected Recent Studies and Possibilities for Extensions," ACM/IEEE ICDSC, September 2008.
[12] C. Tran and M. M. Trivedi, "Introducing XMOB: Extremity Movement Observation Framework for Upper Body Pose Tracking in 3D," IEEE International Symposium on Multimedia.
[13] M. M. Trivedi, S. Y. Cheng, E. Childers, and S. Krotosky, "Occupant Posture Analysis with Stereo and Thermal Infrared Video: Algorithms and Experimental Evaluation," IEEE Trans. on Vehicular Technology.
[14] M. M. Trivedi and S. Y. Cheng, "Holistic Sensing and Active Displays for Intelligent Driver Support Systems," IEEE Computer Magazine.
[15] M. M. Trivedi, T. Gandhi, and J. McCall, "Looking-In and Looking-Out of a Vehicle: Computer-Vision-Based Enhanced Vehicle Safety," IEEE Trans. on Intelligent Transportation Systems.

Fig. 9. Visual evaluation of HyHOPE head pose tracking with a large range of head rotation, changes in lighting conditions, and some occlusions

Fig. 10. Visual evaluation of XMOB upper body tracking in several different driving activities. Top: upper body pose tracking results in 3D. Bottom: 3D pose tracking results superimposed on the image

Fig. 11. Basic activity extraction results. Top: head (right, straight, or left). Middle: number of hands on wheel. Bottom: hand motion (rest or move)

Fig. 12. Combining basic activities of each upper body part into a higher-level semantic description: result of alert type 1 detection (rule 4 in Fig. 1.D: Head look straight AND No hand on wheel AND Hand rest)


More information

VICs: A Modular Vision-Based HCI Framework

VICs: A Modular Vision-Based HCI Framework VICs: A Modular Vision-Based HCI Framework The Visual Interaction Cues Project Guangqi Ye, Jason Corso Darius Burschka, & Greg Hager CIRL, 1 Today, I ll be presenting work that is part of an ongoing project

More information

Development of Gaze Detection Technology toward Driver's State Estimation

Development of Gaze Detection Technology toward Driver's State Estimation Development of Gaze Detection Technology toward Driver's State Estimation Naoyuki OKADA Akira SUGIE Itsuki HAMAUE Minoru FUJIOKA Susumu YAMAMOTO Abstract In recent years, the development of advanced safety

More information

A software video stabilization system for automotive oriented applications

A software video stabilization system for automotive oriented applications A software video stabilization system for automotive oriented applications A. Broggi, P. Grisleri Dipartimento di Ingegneria dellinformazione Universita degli studi di Parma 43100 Parma, Italy Email: {broggi,

More information

CS295-1 Final Project : AIBO

CS295-1 Final Project : AIBO CS295-1 Final Project : AIBO Mert Akdere, Ethan F. Leland December 20, 2005 Abstract This document is the final report for our CS295-1 Sensor Data Management Course Final Project: Project AIBO. The main

More information

Real-Time Face Detection and Tracking for High Resolution Smart Camera System

Real-Time Face Detection and Tracking for High Resolution Smart Camera System Digital Image Computing Techniques and Applications Real-Time Face Detection and Tracking for High Resolution Smart Camera System Y. M. Mustafah a,b, T. Shan a, A. W. Azman a,b, A. Bigdeli a, B. C. Lovell

More information

Gesture recognition based on arm tracking for human-robot interaction

Gesture recognition based on arm tracking for human-robot interaction Gesture recognition based on arm tracking for human-robot interaction Markos Sigalas, Haris Baltzakis and Panos Trahanias Institute of Computer Science Foundation for Research and Technology - Hellas (FORTH)

More information

A SURVEY ON GESTURE RECOGNITION TECHNOLOGY

A SURVEY ON GESTURE RECOGNITION TECHNOLOGY A SURVEY ON GESTURE RECOGNITION TECHNOLOGY Deeba Kazim 1, Mohd Faisal 2 1 MCA Student, Integral University, Lucknow (India) 2 Assistant Professor, Integral University, Lucknow (india) ABSTRACT Gesture

More information

Project Overview Mapping Technology Assessment for Connected Vehicle Highway Network Applications

Project Overview Mapping Technology Assessment for Connected Vehicle Highway Network Applications Project Overview Mapping Technology Assessment for Connected Vehicle Highway Network Applications AASHTO GIS-T Symposium April 2012 Table Of Contents Connected Vehicle Program Goals Mapping Technology

More information

A Multimodal Approach for Dynamic Event Capture of Vehicles and Pedestrians

A Multimodal Approach for Dynamic Event Capture of Vehicles and Pedestrians A Multimodal Approach for Dynamic Event Capture of Vehicles and Pedestrians Jeffrey Ploetner Computer Vision and Robotics Research Laboratory (CVRR) University of California, San Diego La Jolla, CA 9293,

More information

Assessments of Grade Crossing Warning and Signalization Devices Driving Simulator Study

Assessments of Grade Crossing Warning and Signalization Devices Driving Simulator Study Assessments of Grade Crossing Warning and Signalization Devices Driving Simulator Study Petr Bouchner, Stanislav Novotný, Roman Piekník, Ondřej Sýkora Abstract Behavior of road users on railway crossings

More information

Sujitha C. Martin. Contact Information Education Ph.D., Electrical and Computer Engineering Fall 2016

Sujitha C. Martin. Contact Information   Education Ph.D., Electrical and Computer Engineering Fall 2016 Sujitha C. Martin Contact Information Email: Website: scmartin@ucsd.edu http://cvrr.ucsd.edu/scmartin/ Education Ph.D., Electrical and Computer Engineering Fall 2016 University of California, San Diego,

More information

Immersive Interaction Group

Immersive Interaction Group Immersive Interaction Group EPFL is one of the two Swiss Federal Institutes of Technology. With the status of a national school since 1969, the young engineering school has grown in many dimensions, to

More information

Scanned Image Segmentation and Detection Using MSER Algorithm

Scanned Image Segmentation and Detection Using MSER Algorithm Scanned Image Segmentation and Detection Using MSER Algorithm P.Sajithira 1, P.Nobelaskitta 1, Saranya.E 1, Madhu Mitha.M 1, Raja S 2 PG Students, Dept. of ECE, Sri Shakthi Institute of, Coimbatore, India

More information

Humanoid Robotics (TIF 160)

Humanoid Robotics (TIF 160) Humanoid Robotics (TIF 160) Lecture 1, 20100831 Introduction and motivation to humanoid robotics What will you learn? (Aims) Basic facts about humanoid robots Kinematics (and dynamics) of humanoid robots

More information

Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine)

Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine) Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine) Presentation Working in a virtual world Interaction principles Interaction examples Why VR in the First Place? Direct perception

More information

Young Children s Fall Prevention based on Computer Vision Recognition

Young Children s Fall Prevention based on Computer Vision Recognition Young Children s Fall Prevention based on Computer Vision Recognition Hana Na, Sheng Feng Qin, David Wright School of Engineering and Design Brunel University Uxbridge, Middlesex, UB8 3PH UNITED KINGDOM

More information

Kinect Interface for UC-win/Road: Application to Tele-operation of Small Robots

Kinect Interface for UC-win/Road: Application to Tele-operation of Small Robots Kinect Interface for UC-win/Road: Application to Tele-operation of Small Robots Hafid NINISS Forum8 - Robot Development Team Abstract: The purpose of this work is to develop a man-machine interface for

More information

Real Time Video Analysis using Smart Phone Camera for Stroboscopic Image

Real Time Video Analysis using Smart Phone Camera for Stroboscopic Image Real Time Video Analysis using Smart Phone Camera for Stroboscopic Image Somnath Mukherjee, Kritikal Solutions Pvt. Ltd. (India); Soumyajit Ganguly, International Institute of Information Technology (India)

More information

Personal Driving Diary: Constructing a Video Archive of Everyday Driving Events

Personal Driving Diary: Constructing a Video Archive of Everyday Driving Events Proceedings of IEEE Workshop on Applications of Computer Vision (WACV), Kona, Hawaii, January 2011 Personal Driving Diary: Constructing a Video Archive of Everyday Driving Events M. S. Ryoo, Jae-Yeong

More information

DRIVER FATIGUE DETECTION USING IMAGE PROCESSING AND ACCIDENT PREVENTION

DRIVER FATIGUE DETECTION USING IMAGE PROCESSING AND ACCIDENT PREVENTION Volume 116 No. 11 2017, 91-99 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu doi: 10.12732/ijpam.v116i11.10 ijpam.eu DRIVER FATIGUE DETECTION USING IMAGE

More information

Ant? Bird? Dog? Human -SURE

Ant? Bird? Dog? Human -SURE ECE 172A: Intelligent Systems: Introduction Week 1 (October 1, 2007): Course Introduction and Announcements Intelligent Robots as Intelligent Systems A systems perspective of Intelligent Robots and capabilities

More information

Stereo-based Hand Gesture Tracking and Recognition in Immersive Stereoscopic Displays. Habib Abi-Rached Thursday 17 February 2005.

Stereo-based Hand Gesture Tracking and Recognition in Immersive Stereoscopic Displays. Habib Abi-Rached Thursday 17 February 2005. Stereo-based Hand Gesture Tracking and Recognition in Immersive Stereoscopic Displays Habib Abi-Rached Thursday 17 February 2005. Objective Mission: Facilitate communication: Bandwidth. Intuitiveness.

More information

GPU Computing for Cognitive Robotics

GPU Computing for Cognitive Robotics GPU Computing for Cognitive Robotics Martin Peniak, Davide Marocco, Angelo Cangelosi GPU Technology Conference, San Jose, California, 25 March, 2014 Acknowledgements This study was financed by: EU Integrating

More information

Face Registration Using Wearable Active Vision Systems for Augmented Memory

Face Registration Using Wearable Active Vision Systems for Augmented Memory DICTA2002: Digital Image Computing Techniques and Applications, 21 22 January 2002, Melbourne, Australia 1 Face Registration Using Wearable Active Vision Systems for Augmented Memory Takekazu Kato Takeshi

More information

SIS63-Building the Future-Advanced Integrated Safety Applications: interactive Perception platform and fusion modules results

SIS63-Building the Future-Advanced Integrated Safety Applications: interactive Perception platform and fusion modules results SIS63-Building the Future-Advanced Integrated Safety Applications: interactive Perception platform and fusion modules results Angelos Amditis (ICCS) and Lali Ghosh (DEL) 18 th October 2013 20 th ITS World

More information

A Gesture Oriented Android Multi Touch Interaction Scheme of Car. Feilong Xu

A Gesture Oriented Android Multi Touch Interaction Scheme of Car. Feilong Xu 3rd International Conference on Management, Education, Information and Control (MEICI 2015) A Gesture Oriented Android Multi Touch Interaction Scheme of Car Feilong Xu 1 Institute of Information Technology,

More information

Quality Measure of Multicamera Image for Geometric Distortion

Quality Measure of Multicamera Image for Geometric Distortion Quality Measure of Multicamera for Geometric Distortion Mahesh G. Chinchole 1, Prof. Sanjeev.N.Jain 2 M.E. II nd Year student 1, Professor 2, Department of Electronics Engineering, SSVPSBSD College of

More information

Stabilize humanoid robot teleoperated by a RGB-D sensor

Stabilize humanoid robot teleoperated by a RGB-D sensor Stabilize humanoid robot teleoperated by a RGB-D sensor Andrea Bisson, Andrea Busatto, Stefano Michieletto, and Emanuele Menegatti Intelligent Autonomous Systems Lab (IAS-Lab) Department of Information

More information

Driver Assistance Systems (DAS)

Driver Assistance Systems (DAS) Driver Assistance Systems (DAS) Short Overview László Czúni University of Pannonia What is DAS? DAS: electronic systems helping the driving of a vehicle ADAS (advanced DAS): the collection of systems and

More information

Today I t n d ro ucti tion to computer vision Course overview Course requirements

Today I t n d ro ucti tion to computer vision Course overview Course requirements COMP 776: Computer Vision Today Introduction ti to computer vision i Course overview Course requirements The goal of computer vision To extract t meaning from pixels What we see What a computer sees Source:

More information

Introduction to Video Forgery Detection: Part I

Introduction to Video Forgery Detection: Part I Introduction to Video Forgery Detection: Part I Detecting Forgery From Static-Scene Video Based on Inconsistency in Noise Level Functions IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL. 5,

More information

Perception. Read: AIMA Chapter 24 & Chapter HW#8 due today. Vision

Perception. Read: AIMA Chapter 24 & Chapter HW#8 due today. Vision 11-25-2013 Perception Vision Read: AIMA Chapter 24 & Chapter 25.3 HW#8 due today visual aural haptic & tactile vestibular (balance: equilibrium, acceleration, and orientation wrt gravity) olfactory taste

More information

Choosing the Optimum Mix of Sensors for Driver Assistance and Autonomous Vehicles

Choosing the Optimum Mix of Sensors for Driver Assistance and Autonomous Vehicles Choosing the Optimum Mix of Sensors for Driver Assistance and Autonomous Vehicles Ali Osman Ors May 2, 2017 Copyright 2017 NXP Semiconductors 1 Sensing Technology Comparison Rating: H = High, M=Medium,

More information

OBJECTIVE OF THE BOOK ORGANIZATION OF THE BOOK

OBJECTIVE OF THE BOOK ORGANIZATION OF THE BOOK xv Preface Advancement in technology leads to wide spread use of mounting cameras to capture video imagery. Such surveillance cameras are predominant in commercial institutions through recording the cameras

More information

Intelligent Technology for More Advanced Autonomous Driving

Intelligent Technology for More Advanced Autonomous Driving FEATURED ARTICLES Autonomous Driving Technology for Connected Cars Intelligent Technology for More Advanced Autonomous Driving Autonomous driving is recognized as an important technology for dealing with

More information

SAfety VEhicles using adaptive Interface Technology (SAVE-IT): A Program Overview

SAfety VEhicles using adaptive Interface Technology (SAVE-IT): A Program Overview SAfety VEhicles using adaptive Interface Technology (SAVE-IT): A Program Overview SAVE-IT David W. Eby,, PhD University of Michigan Transportation Research Institute International Distracted Driving Conference

More information

Research Seminar. Stefano CARRINO fr.ch

Research Seminar. Stefano CARRINO  fr.ch Research Seminar Stefano CARRINO stefano.carrino@hefr.ch http://aramis.project.eia- fr.ch 26.03.2010 - based interaction Characterization Recognition Typical approach Design challenges, advantages, drawbacks

More information

Real-time Adaptive Robot Motion Planning in Unknown and Unpredictable Environments

Real-time Adaptive Robot Motion Planning in Unknown and Unpredictable Environments Real-time Adaptive Robot Motion Planning in Unknown and Unpredictable Environments IMI Lab, Dept. of Computer Science University of North Carolina Charlotte Outline Problem and Context Basic RAMP Framework

More information

Chinese civilization has accumulated

Chinese civilization has accumulated Color Restoration and Image Retrieval for Dunhuang Fresco Preservation Xiangyang Li, Dongming Lu, and Yunhe Pan Zhejiang University, China Chinese civilization has accumulated many heritage sites over

More information

Rapid Development System for Humanoid Vision-based Behaviors with Real-Virtual Common Interface

Rapid Development System for Humanoid Vision-based Behaviors with Real-Virtual Common Interface Rapid Development System for Humanoid Vision-based Behaviors with Real-Virtual Common Interface Kei Okada 1, Yasuyuki Kino 1, Fumio Kanehiro 2, Yasuo Kuniyoshi 1, Masayuki Inaba 1, Hirochika Inoue 1 1

More information

PROGRESS ON THE SIMULATOR AND EYE-TRACKER FOR ASSESSMENT OF PVFR ROUTES AND SNI OPERATIONS FOR ROTORCRAFT

PROGRESS ON THE SIMULATOR AND EYE-TRACKER FOR ASSESSMENT OF PVFR ROUTES AND SNI OPERATIONS FOR ROTORCRAFT PROGRESS ON THE SIMULATOR AND EYE-TRACKER FOR ASSESSMENT OF PVFR ROUTES AND SNI OPERATIONS FOR ROTORCRAFT 1 Rudolph P. Darken, 1 Joseph A. Sullivan, and 2 Jeffrey Mulligan 1 Naval Postgraduate School,

More information

Novel machine interface for scaled telesurgery

Novel machine interface for scaled telesurgery Novel machine interface for scaled telesurgery S. Clanton, D. Wang, Y. Matsuoka, D. Shelton, G. Stetten SPIE Medical Imaging, vol. 5367, pp. 697-704. San Diego, Feb. 2004. A Novel Machine Interface for

More information

Telling What-Is-What in Video. Gerard Medioni

Telling What-Is-What in Video. Gerard Medioni Telling What-Is-What in Video Gerard Medioni medioni@usc.edu 1 Tracking Essential problem Establishes correspondences between elements in successive frames Basic problem easy 2 Many issues One target (pursuit)

More information

Energy Consumption and Latency Analysis for Wireless Multimedia Sensor Networks

Energy Consumption and Latency Analysis for Wireless Multimedia Sensor Networks Energy Consumption and Latency Analysis for Wireless Multimedia Sensor Networks Alvaro Pinto, Zhe Zhang, Xin Dong, Senem Velipasalar, M. Can Vuran, M. Cenk Gursoy Electrical Engineering Department, University

More information

Markerless 3D Gesture-based Interaction for Handheld Augmented Reality Interfaces

Markerless 3D Gesture-based Interaction for Handheld Augmented Reality Interfaces Markerless 3D Gesture-based Interaction for Handheld Augmented Reality Interfaces Huidong Bai The HIT Lab NZ, University of Canterbury, Christchurch, 8041 New Zealand huidong.bai@pg.canterbury.ac.nz Lei

More information

Mission-focused Interaction and Visualization for Cyber-Awareness!

Mission-focused Interaction and Visualization for Cyber-Awareness! Mission-focused Interaction and Visualization for Cyber-Awareness! ARO MURI on Cyber Situation Awareness Year Two Review Meeting Tobias Höllerer Four Eyes Laboratory (Imaging, Interaction, and Innovative

More information

Applying Vision to Intelligent Human-Computer Interaction

Applying Vision to Intelligent Human-Computer Interaction Applying Vision to Intelligent Human-Computer Interaction Guangqi Ye Department of Computer Science The Johns Hopkins University Baltimore, MD 21218 October 21, 2005 1 Vision for Natural HCI Advantages

More information

Humanoid Robotics (TIF 160)

Humanoid Robotics (TIF 160) Humanoid Robotics (TIF 160) Lecture 1, 20090901 Introduction and motivation to humanoid robotics What will you learn? (Aims) Basic facts about humanoid robots Kinematics (and dynamics) of humanoid robots

More information

Computer Vision-based Mathematics Learning Enhancement. for Children with Visual Impairments

Computer Vision-based Mathematics Learning Enhancement. for Children with Visual Impairments Computer Vision-based Mathematics Learning Enhancement for Children with Visual Impairments Chenyang Zhang 1, Mohsin Shabbir 1, Despina Stylianou 2, and Yingli Tian 1 1 Department of Electrical Engineering,

More information

Improving the Safety and Efficiency of Roadway Maintenance Phase II: Developing a Vision Guidance System for the Robotic Roadway Message Painter

Improving the Safety and Efficiency of Roadway Maintenance Phase II: Developing a Vision Guidance System for the Robotic Roadway Message Painter Improving the Safety and Efficiency of Roadway Maintenance Phase II: Developing a Vision Guidance System for the Robotic Roadway Message Painter Final Report Prepared by: Ryan G. Rosandich Department of

More information

Improved SIFT Matching for Image Pairs with a Scale Difference

Improved SIFT Matching for Image Pairs with a Scale Difference Improved SIFT Matching for Image Pairs with a Scale Difference Y. Bastanlar, A. Temizel and Y. Yardımcı Informatics Institute, Middle East Technical University, Ankara, 06531, Turkey Published in IET Electronics,

More information

Image Processing and Particle Analysis for Road Traffic Detection

Image Processing and Particle Analysis for Road Traffic Detection Image Processing and Particle Analysis for Road Traffic Detection ABSTRACT Aditya Kamath Manipal Institute of Technology Manipal, India This article presents a system developed using graphic programming

More information

Using Line and Ellipse Features for Rectification of Broadcast Hockey Video

Using Line and Ellipse Features for Rectification of Broadcast Hockey Video Using Line and Ellipse Features for Rectification of Broadcast Hockey Video Ankur Gupta, James J. Little, Robert J. Woodham Laboratory for Computational Intelligence (LCI) The University of British Columbia

More information

PHYSICAL ROBOTS PROGRAMMING BY IMITATION USING VIRTUAL ROBOT PROTOTYPES

PHYSICAL ROBOTS PROGRAMMING BY IMITATION USING VIRTUAL ROBOT PROTOTYPES Bulletin of the Transilvania University of Braşov Series I: Engineering Sciences Vol. 6 (55) No. 2-2013 PHYSICAL ROBOTS PROGRAMMING BY IMITATION USING VIRTUAL ROBOT PROTOTYPES A. FRATU 1 M. FRATU 2 Abstract:

More information

Classification for Motion Game Based on EEG Sensing

Classification for Motion Game Based on EEG Sensing Classification for Motion Game Based on EEG Sensing Ran WEI 1,3,4, Xing-Hua ZHANG 1,4, Xin DANG 2,3,4,a and Guo-Hui LI 3 1 School of Electronics and Information Engineering, Tianjin Polytechnic University,

More information

interactive IP: Perception platform and modules

interactive IP: Perception platform and modules interactive IP: Perception platform and modules Angelos Amditis, ICCS 19 th ITS-WC-SIS76: Advanced integrated safety applications based on enhanced perception, active interventions and new advanced sensors

More information

Dorothy Monekosso. Paolo Remagnino Yoshinori Kuno. Editors. Intelligent Environments. Methods, Algorithms and Applications.

Dorothy Monekosso. Paolo Remagnino Yoshinori Kuno. Editors. Intelligent Environments. Methods, Algorithms and Applications. Dorothy Monekosso. Paolo Remagnino Yoshinori Kuno Editors Intelligent Environments Methods, Algorithms and Applications ~ Springer Contents Preface............................................................

More information

A Mathematical model for the determination of distance of an object in a 2D image

A Mathematical model for the determination of distance of an object in a 2D image A Mathematical model for the determination of distance of an object in a 2D image Deepu R 1, Murali S 2,Vikram Raju 3 Maharaja Institute of Technology Mysore, Karnataka, India rdeepusingh@mitmysore.in

More information

Driving Simulators for Commercial Truck Drivers - Humans in the Loop

Driving Simulators for Commercial Truck Drivers - Humans in the Loop University of Iowa Iowa Research Online Driving Assessment Conference 2005 Driving Assessment Conference Jun 29th, 12:00 AM Driving Simulators for Commercial Truck Drivers - Humans in the Loop Talleah

More information

Behaviour-Based Control. IAR Lecture 5 Barbara Webb

Behaviour-Based Control. IAR Lecture 5 Barbara Webb Behaviour-Based Control IAR Lecture 5 Barbara Webb Traditional sense-plan-act approach suggests a vertical (serial) task decomposition Sensors Actuators perception modelling planning task execution motor

More information

YUMI IWASHITA

YUMI IWASHITA YUMI IWASHITA yumi@ieee.org http://robotics.ait.kyushu-u.ac.jp/~yumi/index-e.html RESEARCH INTERESTS Computer vision for robotics applications, such as motion capture system using multiple cameras and

More information

A Real-World Experiments Setup for Investigations of the Problem of Visual Landmarks Selection for Mobile Robots

A Real-World Experiments Setup for Investigations of the Problem of Visual Landmarks Selection for Mobile Robots Applied Mathematical Sciences, Vol. 6, 2012, no. 96, 4767-4771 A Real-World Experiments Setup for Investigations of the Problem of Visual Landmarks Selection for Mobile Robots Anna Gorbenko Department

More information