Visuomotor strategies using driving simulators in virtual and pre-recorded environment

I. Giannopulu 1, R.J.V. Bertin 1, Z. Kapoula 2, R. Brémond 1, S. Espié 1

1 Laboratoire Central des Ponts et Chaussées, 58 bd Lefèvre, 75015 Paris, France
irini.giannopulu@inrets.fr, rene.bertin@inrets.fr, roland.bremond@lcpc.fr, stephane.espie@inrets.fr
2 IRIS-CNRS, Hôpital G. Pompidou, 20 rue Leblanc, 75015 Paris
zoi.kapoula@egp.aphp.fr

Abstract - Drivers fix their gaze where they plan to go. Steering is considered to be based on optic flow, the pattern of visual motion at the moving eye. Different studies have analyzed the impact of visuomotor strategies on steering but, to our knowledge, few studies have compared visual strategies in similar urban video-projected environments. The aim of this study is to compare subjects' visuomotor strategies in similar video-projected environments in a fixed-base driving simulator. Experienced car drivers were exposed to two visual environments: a real urban traffic scenario pre-recorded on video, and a 3D simulation of the same scene. The visual environment represents a district in the centre of Paris, between the Louvre and the Opéra. The subjects' visual strategies were recorded with a binocular eye-tracking system (EyeLink II). Visuomotor strategies are assumed to depend on the degree of similarity between the two environments. The results indicate that eye movements differ between the pre-recorded and the virtual environment. Integration of information in the saccade buffer and visual attention control may explain these results.

Index Terms - Binocular eye movement recording, driving simulator, visual environment, visuomotor strategy.

I. INTRODUCTION

Steering a car is a complex cognitive task that requires the integration of various kinds of information: essentially visual, auditory, proprioceptive, motor, and spatial. The aim of the present study is to examine visuomotor strategies when people are exposed to two similar visual environments.

The effect of visual strategies on driving has been studied from various viewpoints [1]-[5]. However, their role in the car driving task is still poorly understood [5]. In on-road driving, the driver directs his or her gaze where he or she intends to go [6]. Studies show that with changes in driving parameters, e.g. an increase in speed, gaze becomes more tightly linked to the driver's intended goal [7], [2], [5], [3]. Visuomotor strategy is therefore thought to depend on optic flow properties [8]-[10]. Using two contrasting visual environments, e.g. the traffic of the same road presented in a real situation or in pictures, it has been shown [11] that eye movements were significantly different; when the experimental conditions are similar, this result is reversed. Drivers' visual strategies have been investigated at real and virtual intersections using an HMD in a driving simulator [12]: drivers tended to make longer left-right glances in virtual reality than in reality, but there was no difference in the number of glances between the two experimental situations. Using a monocular tracking system, researchers [13] have recently reported that eye fixation times on traffic signs, e.g. information signs or road markings, were quite comparable between watching a video and actual driving. However, to our knowledge, binocular recordings of eye movements have never been compared between pre-recorded and virtual environments.
The above data suggest that the similarity of visuomotor strategies could depend on the similarity between the visual environments. To examine this hypothesis, drivers aged 25 to 35 years were exposed to two visual environments in a fixed-base simulator: the first was a pre-recorded scene of a real urban traffic scenario; the second was a 3D simulation of the same scene. Their visuomotor strategies were recorded with a binocular eye-tracking system (EyeLink II).

II. METHOD

A. Subjects

Twenty-six experienced car drivers (13 men and 13 women) were examined. Their average age was 30 years (SD 3 years). All had held a French driving licence for 5 years or more and drove at least five times a week. None of the subjects had more than 10 hours of driving experience on a simulator. All had normal or corrected-to-normal vision and no history of neurological disease. Four additional subjects participated but were excluded because of technical problems (one subject) or simulator sickness (three subjects). Before the experiment, all subjects signed an informed consent form. The study was approved by the local ethics committee and conformed to the Declaration of Helsinki.

B. Simulator

The INRETS (MSIS-SIM 2) driving simulator, a fixed-base simulator, was used for this experiment. It consisted of a vehicle (a Citroën Xantia) with fully functional pedals, speedometer and manual gearshift, and three large flat screens. The screens stimulated 150 deg of the subject's visual field (120 deg horizontally and 37.5 deg vertically). Three IRIS BarcoGraphics 808s projectors, one per screen, were used; each projector ran at 900 x 1600 pixels with a 90 Hz refresh rate. Two visual environments, corresponding to the two visual conditions, were used. A computer (AMD Athlon 64 X2 Dual-Core Processor 5400+, 2.81 GHz, 3.00 GB RAM) controlled both environments.

D. Visual conditions

Two visual environments were compared: the first was a real urban traffic scenario pre-recorded on video; the second was the virtual representation of the same scene. Both scenes represented a district in the centre of Paris between the Louvre and the Opéra (the 1st arrondissement). The scene began at the Place des Pyramides. The route followed the Rue des Pyramides to the intersection with the Rue Saint-Honoré, turned right towards the Place du Palais-Royal, continued down the Rue de Rohan, turned right again onto the Rue de Rivoli, and continued straight ahead back to the Place des Pyramides. In both conditions the vehicle of the simulator was in its normal driving position and the same procedure was used. The subjects were fitted with the eye tracker and installed in the driver's position, in a passive driving situation. They were informed that scenes of a Parisian district would be presented on the screens. To encourage an active search for visual information, they were asked to use the steering wheel of the simulator as they would in a real driving situation.

E. Procedure for a trial

When the subject entered the simulator, only the computer's desktop background was projected on the three screens. A typical trial proceeded as follows. The subject's back and shoulders were in contact with the seat back of the vehicle, and the subject was asked to orient head and gaze straight ahead. As soon as head and gaze were correctly positioned and the subject declared being ready, the visual environment was presented on the screens. The presentation of the visual environment (pre-recorded or virtual) was the beginning of the experiment. The subject, correctly installed in the simulator, was positioned in the middle of the street and had a direct view of the traffic scene. The experimenter asked the subject to use the steering wheel of the simulator as in a real situation. The subject's visuomotor strategies were recorded over the whole trajectory. After that, a new trial was started. Each trial lasted around 3 minutes; the inter-trial interval was around 20 seconds. In each visual condition, each subject performed four trials. Two standard 2D calibrations of the eye tracker took place: the first at the beginning of the trial, the second after the first four trials. The order of the trials was randomly assigned to each subject. Each subject completed eight valid trials.

F. Eye movement recording

Eye movements were recorded with the EyeLink II, a video-based system set to acquire eye position at 250 Hz. The apparatus consists of video cameras mounted on a headband.

G. Experiment organization

Prior to the visual exposure in the fixed-base driving simulator, a clinical visual and oculomotor examination was performed on each subject. This examination comprised measures of visual acuity, heterophoria and vergence movements. In addition, part of the French version of the neuropsychological battery VOSP (Visual Object and Space Perception [14]) was used to evaluate the subjects' spatial perception of objects. At the end of the visual exposure, the subjects were asked to verbalize their sensations and impressions and to reproduce the traveled trajectory graphically, using a paper-and-pencil test.

H. Dependent variables and analysis

The dependent variable was the eye movement.
After standard calibration, the conjugate eye position, defined as the sum of the left- and right-eye positions divided by two, was computed. Four components of eye movement were analyzed: fixation duration, number of fixations, saccade latency and saccade amplitude (see Results). All statistical analyses were performed with the R 2.2.1 software [15].

III. RESULTS

The results are presented in three sections. The first reports the visual, oculomotor and neuropsychological examinations; the second presents the comparison of eye movements between the two visual environments; the third describes the subjects' verbalizations and graphic reproductions of the traveled trajectory.

A. Visual and neuropsychological examinations

The visual and oculomotor examination showed that all subjects had a normal visual acuity profile (10/10 for the right eye and 9/10 for the left eye). The subjects had no obvious vergence or heterophoria problems. Likewise, the neuropsychological test showed that the subjects had no difficulty detecting objects (20/20), identifying objects independently of the degree of perceptual modification of their form (19/20), or perceiving their position in space (23/30 for silhouettes and 8/20 for progressive silhouettes).

B. Comparison between visual environments

Under the hypothesis that visual strategies depend on the degree of similarity between the environments, eye movements were expected to differ between the pre-recorded and the virtual environment. The comparison of eye movements between the pre-recorded and virtual environments was carried out in terms of a) fixation duration (in ms); b) number of fixations; c) saccade latency (in ms); and d) saccade amplitude (in deg). Within each visual condition, the distribution of each of these dependent variables across individuals approximates a J-shaped distribution; the same J-shaped distribution is preserved when the data are pooled within each visual condition. Given such distribution shapes, the median was chosen as the central index of each variable. To compare fixation duration, number of fixations, saccade latency and saccade amplitude between the two visual conditions, the differences between medians were computed intra-individually. The statistical comparisons were conducted with the Wilcoxon matched-pairs signed-ranks test.
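As an illustration of this analysis pipeline, the following is a minimal R sketch under stated assumptions (the paper only reports that R 2.2.1 was used): the object and column names (samples, left_x, right_x, fix_prerec, fix_virtual) are hypothetical placeholders rather than the study's actual data structures, and the values are randomly generated stand-ins, not the recorded data.

# Minimal R sketch of the reported analysis (hypothetical data layout).

# 1) Conjugate eye position: left- and right-eye positions summed and divided by two.
#    'samples' stands in for one 250 Hz recording; the column names are assumptions.
samples <- data.frame(left_x = rnorm(1000), right_x = rnorm(1000))   # placeholder data
samples$conj_x <- (samples$left_x + samples$right_x) / 2

# 2) Per-subject medians of a dependent variable (here fixation duration, in ms);
#    the median is used because the distributions are J-shaped.
n_subj <- 26
fix_prerec  <- replicate(n_subj, rexp(400, rate = 1/267), simplify = FALSE)   # placeholder data
fix_virtual <- replicate(n_subj, rexp(400, rate = 1/258), simplify = FALSE)   # placeholder data
med_prerec  <- sapply(fix_prerec,  median)
med_virtual <- sapply(fix_virtual, median)

# 3) Intra-individual comparison of the medians with the
#    Wilcoxon matched-pairs signed-ranks test (paired Wilcoxon test).
wilcox.test(med_prerec, med_virtual, paired = TRUE, alternative = "two.sided")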

These comparisons show that:

a) The median fixation duration is longer in the pre-recorded than in the virtual environment (T = 15, n = 26, two-tailed p < .0005). The group median fixation duration was 267 ms in the pre-recorded scene and 258 ms in the virtual scene.

Fig. 1 - Differences between median fixation durations.

b) The median number of fixations is higher in the pre-recorded than in the virtual environment (T = 78, n = 26, two-tailed p < .01). The group median number of fixations was 429.5 in the pre-recorded scene and 425.2 in the virtual scene.

Fig. 2 - Differences between median numbers of fixations.

c) The median latency of the first saccade after the start of the video is longer in the pre-recorded than in the virtual environment (T = 1, n = 25, two-tailed p < .0005). The group median saccade latency was 307 ms in the pre-recorded scene and 291 ms in the virtual scene.

Fig. 3 - Differences between saccade latencies.

d) The median saccade amplitude is larger in the pre-recorded than in the virtual environment (T = 6, n = 25, two-tailed p < .0005). The group median amplitude was 5.1 deg in the pre-recorded scene and 4.2 deg in the virtual scene.

Fig. 4 - Differences between median saccade amplitudes.

C. Subjects' verbalizations and graphic representations of the traveled trajectory

Twenty-four of the twenty-six subjects declared that they felt completely immersed in both visual environments. Twenty of them reported that immersion was better in the pre-recorded environment. The subjects also found the two environments rather different: they declared that the pre-recorded environment was more dynamic, gave more sensations, and prompted action better than the virtual one. All subjects were able to represent the traveled trajectory graphically, and they gave several details concerning the road, the buildings, the traffic lights, the pedestrians and the cars. As the subjects declared, most of the details they remembered came from the pre-recorded environment.

IV. DISCUSSION

To our knowledge, this is the first study to examine eye movements with a binocular apparatus in pre-recorded and virtual urban environments. Urban environments are rich in visual information and require constant, continuous processing. Consistent with our hypothesis, the present study indicates that eye movements differ between the pre-recorded and the virtual environment. In particular, fixation durations were shorter in the virtual environment than in the pre-recorded environment, and the number of fixations was higher in the pre-recorded environment. In addition, the latency of the first saccade was longer in the pre-recorded environment, and saccade amplitude was larger in the pre-recorded than in the virtual environment. It may seem contradictory that both the frequency and the duration of fixations are higher in the pre-recorded environment. This observation suggests that eye movements such as pursuit and OKN were also involved; these should be analysed in another study.

These results are not consistent with previous data reporting that drivers' visuomotor strategies do not differ between virtual and real environments [12], or between pre-recorded and actual environments [13]. The differences between those studies and ours can be explained from a methodological viewpoint. We used an urban scenario, which is extremely rich in information, and a binocular eye tracker to record the drivers' eye movements. The above studies not only used partly rural or entirely rural areas, which are obviously less rich in information, but also recorded the drivers' visuomotor strategies with an HMD or a monocular eye-tracking system. In addition, we carried out a preliminary clinical screening of our subjects. This screening showed that the differences between the two conditions can be explained neither by oculomotor nor by neuropsychological problems.

Technical limitations could also influence our data. In the pre-recorded condition there are limits due to the resolution of the cameras; in the virtual condition there are limits due to the capacities of the computer. In both situations there are limits owing to the resolution of the projection. However, these limits cannot fully explain the results.

The higher number of fixations in the pre-recorded environment may reflect the subjects' need to renew their perception, as is classically the case in real situations [8]. The difference in fixation duration between the two environments may be explained by the subjects being more attentive in the pre-recorded environment than in the virtual one, because it is real. The larger saccade amplitude and latency in the pre-recorded than in the virtual environment are consistent with these interpretations. To better understand this hypothesis, it should be recalled that eye movements are strongly linked to visual attention: the more eye movements, the greater the attention [16]. As the pre-recorded environment is richer in information than the virtual one, making more fixations and saccades implies greater attentional control. In other words, exposure to the pre-recorded environment requires more cerebral activity than exposure to the virtual environment. This is also corroborated by the subjects' declaration that the details they were able to recall during the graphic representation of the scene came from the pre-recorded scene. These declarations help in understanding the results.

Information recall is a genuine memory process which, in our case, requires storage in visuo-spatial working memory. Different explanations, such as integration in a saccade memory buffer, or the fact that perception is renewed with each fixation and saccade, are consistent with this hypothesis. Neuroimaging data also suggest the existence of common cortical areas for attention and memory processes [17]. Our results underline the ecological validity of the simulator. To validate these interpretations, additional studies are required with normal and clinical groups of drivers.

V. ACKNOWLEDGMENT

The first author (I.G.) wishes to thank Claude Perrot, Fabrice Vienne, Jacky Robouant and Pierre Gauriat for their technical assistance, and the volunteer subjects for their participation. This study was supported by the European project Humanist.

REFERENCES

[1] A.S. Cohen and H. Studach, "Eye movements while driving cars around curves," Perceptual and Motor Skills, 44, 683-689, 1977.
[2] M.E. Land, J. Horwood, and T.S. Cornwell, "Fast driving reduces eye movement amplitude and frequency," Investigative Ophthalmology & Visual Science, 35, 2033, 1994.
[3] S.D. Rogers and E.E. Kadar, "The role of experience in high speed curve navigation," Mahwah, NJ: Lawrence Erlbaum Associates, 1998, pp. 113-116.
[4] S.D. Rogers and E.E. Kadar, "The role of eye movements in the perceptual control of car driving," Paris: EDK, 1999, pp. 110-114.
[5] S.D. Rogers, E.E. Kadar, and A. Costall, "An inverse relationship between gaze distribution and driving speed," Mahwah, NJ: Lawrence Erlbaum Associates, 2001, pp. 234-237.
[6] S.D. Rogers, E.E. Kadar, and A. Costall, "Drivers' gaze patterns in braking from three different approaches to a crash barrier," Ecological Psychology, 17, 39-53, 2005.
[7] D.A. Gordon, "Perceptual basis of vehicular guidance," Public Roads, 34, 53-68, 1966.
[8] J.J. Gibson, The Ecological Approach to Visual Perception, Hillsdale, NJ: Lawrence Erlbaum Associates, 1979.
[9] D.N. Lee, "Guiding movement by coupling taus," Ecological Psychology, 10, 221-250, 1998.
[10] B.R. Fajen and W.H. Warren, "Go with the flow," Trends in Cognitive Sciences, 4, 449-450, 2000.
[11] A.S. Cohen, "Car drivers' pattern of eye fixations on the road and in the laboratory," Perceptual and Motor Skills, 52, 515-522, 1981.
[12] P.C. Burns and D. Saluäär, "Intersection between driving in reality and virtual reality," in Proc. Driving Simulation Conference (DSC), Paris, 1999.
[13] M.H. Martens and M. Fox, "Does road familiarity change eye fixations? A comparison between watching a video and real driving," Transportation Research Part F, 10, 33-47, 2007.
[14] M. James and E.K. Warrington, Visual Object and Space Perception (VOSP), French version by I. Giannopulu, Paris: EAP, 1997.
[15] R Development Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria, 2005.
[16] J.M. Henderson and A. Hollingworth, "Eye movements during scene viewing: An overview," Amsterdam: Elsevier Science, 1998, pp. 269-293.
[17] C.C. Ruff, A. Kristjánsson, and J. Driver, "Readout from iconic memory and selective spatial attention involve similar neural processes," Psychological Science, 18, 901-909, 2007.