Optical See-Through Head-Up Displays' Effect on Depth Judgments of Real-World Objects


Missie Smith (1), Nadejda Doutcheva (2), Joseph L. Gabbard (3), Gary Burnett (4)
Human Factors Research Group, University of Nottingham
(1) mis16@vt.edu, (2) nldoutch@vt.edu, (3) jgabbard@vt.edu, (4) Gary.Burnett@nottingham.ac.uk

IEEE Virtual Reality Conference 2015, 23-27 March, Arles, France. 978-1-4799-1727-3/15/$31.00 ©2015 IEEE

ABSTRACT

Recent research indicates that users consistently underestimate depth judgments to Augmented Reality (AR) graphics when viewed through optical see-through displays. However, to our knowledge, little work has examined how AR graphics may affect depth judgments of real-world objects that have been overlaid or annotated with AR graphics. This study presents a preliminary analysis of whether AR graphics have directional effects on users' depth perception of real-world objects, as might be experienced in vehicle driving scenarios (e.g., as viewed via an optical see-through head-up display, or HUD). Twenty-four participants were asked to judge the depth of a physical pedestrian proxy figure moving towards them at a constant rate of 1 meter/second. Participants were shown an initial target location that varied in distance from 11 to 20 m and were then asked to press a button to indicate when the moving target was perceived to be at the previously specified target location. Each participant experienced three different display conditions: no AR visual display (control), a conformal AR graphic overlaid on the pedestrian via a HUD, and the same graphic presented on a tablet physically located on the pedestrian. Participants completed 10 trials (one for each target distance between 11 and 20 m inclusive) per display condition, for a total of 30 trials per participant. The judged distance from the correct location was recorded, and after each trial, participants' confidence in determining the correct distance was captured. Across all conditions, participants underestimated the distance of the physical object, consistent with existing literature. Greater variability was observed in the accuracy of distance judgments under the AR HUD condition relative to the other two display conditions. In addition, participant confidence levels were considerably lower in the AR HUD condition.

Keywords: augmented reality, depth perception, driving.

Index Terms: H.5 [Information Interfaces and Presentation]: H.5.1 Multimedia Information Systems - Artificial, Augmented, and Virtual Realities; H.5.2 User Interfaces - Ergonomics, Evaluation/Methodology, Screen Design, Style Guides

1 INTRODUCTION

People are often distracted while driving, especially when attending to secondary tasks (e.g., engaging with GPS navigation) and tertiary tasks (e.g., manipulating entertainment controls, using a cell phone). In the US, there is ample evidence that the role of distraction in accidents is increasing [1]. As such, there is growing interest in designing in-vehicle AR HUD-based technologies to minimize visual distraction by integrating critical primary and secondary task information into the forward-looking field of view [2]. Some examples of AR driving applications include safety warning systems [3, 4], lane marking for low visibility [5], GPS services [6-8], and social awareness [9]. While head-up AR graphics offer opportunities for improved driver safety by, for example, decreasing eyes-off-road time, there is a need to more deeply research, identify, and design for the effects that augmenting graphics have on driver perception and workload.
While an AR HUD may help co-locate a visual warning cue (e.g., a pedestrian hazard warning) with a real hazard (i.e., the pedestrian), drivers still must visually attend to and process two types of visual information: real and virtual. Further work is needed to quantify driver workload while attending to multiple visual information channels, even when the added information is intended to be useful, as with redundant encoding and collocated or conformal graphics [10]. This paper presents a study in which we begin to examine this relationship with respect to depth perception of real-world objects.

Safe driving necessitates accurate egocentric distance judgments (e.g., stopping time, braking distance, and distance to other vehicles, pedestrians, and unexpected hazards). In vehicles, AR graphics are typically intended to cue and enhance real-world objects. As auto manufacturers move towards increasingly integrated AR HUDs, a better understanding of the effect of these graphics on depth perception at various distances is needed. To our knowledge, there is no research quantifying how drivers perceive real-world objects that are augmented via optical see-through HUDs, though it has been shown that the depth of the AR graphics themselves is generally underestimated [11]. In addition, most studies have examined relatively short distances of a few meters, which involve a different weighting of depth cues than typical driving situations, where important objects are located at considerably greater distances. Others have examined AR HUD designs for driving via VR simulation [12]; however, some perceptual phenomena, such as accommodation and thus depth perception at long distances, cannot generally be reproduced in these settings.

This work proposes an alternative approach that employs an AR HUD and realistic distances to help bridge the gap in understanding how augmented reality graphics affect perception of real-world objects. While this study does not involve actual driving, or even simulated driving, it does quantify how drivers' depth perception may change when AR graphics are present. Further, the method affords direct measurement of the effects of AR graphics on depth perception, as opposed to indirectly inferring the effect on perception via braking distance or time to stop, as is typical in driving simulator studies.

2 RELATED WORK

Previous studies have explored AR depth perception with a mixture of results. Many studies indicate a tendency for individuals to underestimate the distance of a virtual object to themselves [13].

Swan's study comparing virtual reality and augmented reality depth judgments using the directed-walking technique indicated that while VR viewing conditions lead individuals to underestimate distance to virtual objects, the same effect is not present in AR scenarios. Instead, there was no underestimation, and depth judgments were more accurate [13]. Recent studies indicate that there is a distance at which the quality of our AR depth perception shifts from underestimation to overestimation of distance. Though the factors contributing to the location of the shift are not fully understood, Swan et al. [2] found this shift to occur at approximately 23 m. In addition, that study showed that participant task errors tended to increase with increasing distance [2]: the farther away a participant thought the object was, the less accurate they were, and the longer it took them to provide a response. This phenomenon is cause for concern in driving, where decision-making is time-pressured, slow reactions can have negative results, and distances between the driver and AR graphics may vary across this underestimate/overestimate boundary.

Other research shows an interesting trend in which the location of the study, indoors or outdoors, may affect the direction of error. While people tend to underestimate depth indoors, performing the same experiment outside leads to overestimation of depth [14, 15]. The experiment presented here was performed indoors, though the vast majority of driving applications will occur outdoors. With our methods and apparatus in place, our intention is to follow up with an outdoor study to determine whether a similar switch occurs. Given the difficulty of running experiments outdoors, and the minute-to-minute variability in outdoor lighting, we felt it was prudent to start our work indoors.

While there has been a focus on depth perception of the AR graphics themselves, there has been less research into depth judgment comparisons between AR graphics and real-world objects. Jerome & Witmer [16] found that participants were better at judging the distance to a real-world object than to an AR graphic. However, in that work, participants did not view both the AR graphic and the corresponding real-world object concurrently.

3 METHOD

3.1 Participants

This study involved 14 male and 10 female participants. Each participant was required to have normal or corrected-to-normal vision. Additionally, the participants were all over 18 years of age (mean age = 32), and each had a driver's license and driving experience.

3.2 Equipment

A cardboard cutout of an adult male pedestrian was attached to a wheeled cart that could be pushed towards the participant as if a pedestrian were walking forward (Figure 1). To assist the researchers in manually pushing the cart, a metronome and tape measure were used to ensure a constant rate of consistently sized steps, resulting in a constant pedestrian speed across all tasks. At all times during the task, the researcher pushing the cart was fully hidden by the cardboard cutout. Attached to the cart was a digital single-lens reflex (DSLR) camera with a high shutter speed, positioned to capture the cart's position along the tape measure on demand. A flashlight was placed next to the camera to provide adequate lighting and better picture quality given the fast shutter speed.
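As a side note on how a metronome can hold the pedestrian's approach at a constant 1 m/s, the following is a minimal sketch of the arithmetic involved; the 0.5 m step length is an assumed example value, not one reported by the authors.

```python
# Hypothetical helper: metronome tempo needed for a target push speed.
# The step length below is an assumed example, not a value from the paper.

def metronome_bpm(target_speed_m_s: float, step_length_m: float) -> float:
    """Beats per minute so that one step per beat yields the target speed."""
    return target_speed_m_s / step_length_m * 60.0

if __name__ == "__main__":
    # e.g., 0.5 m steps taken at 120 BPM give a 1 m/s approach rate.
    print(metronome_bpm(target_speed_m_s=1.0, step_length_m=0.5))  # -> 120.0
```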
A tablet computer was fastened to the torso of the pedestrian to support our three display conditions, as described in Section 4. Participants sat at a table with their chin resting on a chin rest to ensure that their gaze was always centered on the tablet affixed to the pedestrian. A rudimentary HUD was placed in front of the participant in all scenarios, though it was only turned on in one of the three. The HUD had a focal depth of approximately two meters beyond the participant's eyes. Each participant was given a remote trigger synched to the camera that captured a picture when the button was pressed. This gave the participant full control over when the distance was measured and removed some of the inherent human error in measuring.

Figure 1: Side view of experimental setup with participant sitting in chair on the left.

Figure 2: Chin rest and view of pedestrian at beginning of experiment.

We removed as many visual cues as possible so that the experiment would not be skewed by unexpected, nearby depth cues. Because of this, the room selected for the study was extremely large, with black curtains along all walls and all other furniture removed. Unfortunately, we were not able to remove the horizon cue created by the contrast between the floor and background color (Figure 2). However, while piloting, we determined that the amount of perceived vertical movement of the pedestrian/horizon intersection was minimal, ranging from the pedestrian's mid-thigh to just below the knee.

4 PROCEDURE

Each participant experienced three different experimental conditions: no AR visual display (control), an AR graphic perceptually overlaid on the pedestrian via the HUD, and the same graphic physically located on the pedestrian-mounted tablet. The three visual display conditions were counterbalanced across the 24 participants. The control condition was designed to yield baseline depth judgments in the absence of any other AR-based visual channels. The HUD display condition was constructed to be representative of many emerging automotive HUDs, with a fixed focal depth of 2 m. The tablet condition was designed to emulate what we expect to be available in select forthcoming AR displays, namely dynamic accommodation, where individual AR graphics will be rendered at arbitrary focal depths, thus allowing designers not only to position conformal graphics in the correct location within the 2D view plane, but also at the correct depth.

Figure 3: Experimental conditions.

Participants looked through the large screen used for the AR display in all three conditions, though only the AR HUD condition actually presented a graphic on the screen. The tablet was used in all three conditions, but in slightly different ways. In the tablet condition, our AR stimuli were displayed using the tablet's graphics and display capabilities, such that the AR graphics were rendered at the same focal depth as the moving real-world object/pedestrian. In the control and HUD display conditions, the tablet (in its powered-off state) was used simply as a visual cue for participants to direct their attention (and keep them from attending to other cues). In the HUD condition, while we rendered our visual stimuli via the HUD, we carefully positioned the AR stimuli in space so that they were perceived to be located within the tablet footprint (also directing participants' attention to that same area of the pedestrian). For the HUD condition, we also animated the stimuli so that they grew larger as the pedestrian and AR graphics approached the participant (Figure 4). The graphic essentially emulated true conformal AR imagery, thus attempting to maintain the height-in-the-visual-field depth cue that is present in the tablet condition.

Figure 4: Changing HUD display size with tail as the pedestrian and tablet approach a user.

There were ten different distances examined, ranging from 11 to 20 m inclusive, which were randomly ordered for each participant. We chose this range since it represents typical distances to hazards in common driving scenarios. At 1 m/s, the 11-20 m distances equate to about a 2-second time headway between a driver's vehicle and a hazard vehicle travelling between 2 and 5 mph slower than the driver (suggesting that the driver must take some action quickly). It is within these 2 seconds that most drivers have to make critical depth judgments (e.g., brake hard, swerve, or simply slow down).

Prior to each of the three display conditions, and to diminish learning effects and possible confusion, each participant was allowed up to three practice tasks before starting the recorded tasks. These practice tasks allowed participants to become comfortable with using the remote trigger and to better understand the structure of the experiment.

For each trial, the pedestrian started at a distance of 25 m directly in front of the participant. Next, a researcher walked to a pre-specified distance along a straight line between the pedestrian's starting point and the participant. The researcher then paused and asked the participant to note the target location. The purpose of using a different person to mark the target location was to keep participants from using size as a cue. After all researchers were out of sight, a researcher began to push the pedestrian towards the participant at a rate of one meter per second. When the participants believed that the pedestrian had reached the target location, they pressed the remote DSLR camera trigger. Thus, each participant was exposed to ten different target distances under each of three display conditions, for a total of 30 trials. After each trial, they were asked how confident they were (on a scale of 1-10) that they had correctly identified the distance.

While the primary task was to assess pedestrian distance and respond at specific target locations, the secondary AR task was designed to meet two specific criteria:
1. Require only the most basic visual perception and no cognitive processing (in the automotive domain, this would equate to a simple conformal indicator as opposed to, say, a text message); and

2. Demand participants' visual attention, so that participants would not simply ignore the AR cue and perform the task based on real-world pedestrian cues alone.

The visual search task employed a square graphic with a short tail that traced around the outside of the screen, much like the classic Snake, Blockade, and Surround computer games. Because the short line moved, it required more visual attention than a still graphic would. As mentioned above, in the HUD display condition, the graphic dynamically scaled (larger) at a constant rate and appeared conformal with the tablet attached to the pedestrian. In the control display condition, participants were instructed to visually attend to the blank tablet screen only. In the HUD and tablet conditions, participants were instructed to visually attend to the AR graphics rather than just the pedestrian. Asking participants to attend to the AR graphics alone while trying to judge the depth of the pedestrian was purposeful and, based on post-task interviews, resulted in participants visually attending to both the AR graphics and the pedestrian (as opposed to just the pedestrian, which is what would likely have happened in the absence of any concrete instruction). In the HUD display condition, participants accommodated back and forth between 2 m and the pedestrian's distance at that point in time. In the tablet condition, post-task interviews suggest that participants shifted their foveal attention back and forth between the tablet AR stimuli and the edges of the pedestrian figure. Interestingly, even though the tablet and pedestrian focal depths were identical, participants still sought additional visual cues to perform the primary depth judgment task. In a driving setting, perhaps this accommodative switching and saccade/fixate pattern is a reasonable human response, since a driver would naturally attend to the AR cue first and then immediately switch to the real-world hazard of interest. Ultimately, the AR community would benefit from experimental tasks that require very tight visual and cognitive integration of both virtual and real-world visual cues. The less integrated the cues are, the more likely participants are to simply switch their accommodation between the real-world object and the AR graphics, a practice that is so common among AR users that they are most likely unaware it is occurring.
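For readers who want to reproduce the scaling behaviour described above, the following is a minimal geometric sketch, not the authors' implementation: a graphic drawn on a fixed 2 m focal plane appears conformal with the tablet when its drawn size preserves the tablet's visual angle, which by similar triangles means scaling by focal_depth / pedestrian_distance. The function name and the 0.25 m tablet height are illustrative assumptions; the study itself approximated this relationship by growing the graphic at a constant rate as the pedestrian approached.

```python
# Hypothetical sketch of conformal scaling on a fixed-focal-depth HUD.
# Assumes a simple similar-triangles model; the function name and the
# 0.25 m tablet height are illustrative, not values from the paper.

HUD_FOCAL_DEPTH_M = 2.0   # focal depth of the HUD image plane (reported in the paper)
TABLET_HEIGHT_M = 0.25    # assumed physical height of the pedestrian-mounted tablet

def conformal_size_on_hud(pedestrian_distance_m: float,
                          object_height_m: float = TABLET_HEIGHT_M,
                          focal_depth_m: float = HUD_FOCAL_DEPTH_M) -> float:
    """Height (in meters on the HUD focal plane) at which the graphic must be
    drawn so that it subtends the same visual angle as the real object."""
    return object_height_m * focal_depth_m / pedestrian_distance_m

if __name__ == "__main__":
    # As the pedestrian closes from 20 m to 11 m at 1 m/s, a truly conformal
    # graphic roughly doubles in size on the 2 m HUD plane.
    for d in range(20, 10, -1):
        print(f"distance {d:2d} m -> HUD graphic height {conformal_size_on_hud(d) * 100:.1f} cm")
```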

5 RESULTS

Analysis was performed on the distance from the target location to the location determined by participants, calculated from the image taken with the remote trigger. The target location minus the location specified by the participant via the remote trigger was the metric for depth perception offset (DPO):

Target Location - Participant-Specified Location = DPO

For example, if the target location was 15 m from the participant and they pressed the trigger when the pedestrian was at 17 m, the resulting value was -2 m. This value indicates that they thought the pedestrian was 2 m closer than it actually was and corresponds to an underestimation of depth. Any negative value corresponds to underestimation; a value of 0 would indicate that the participant identified exactly the target location.

Out of 720 possible data points, 10 values were lost (due to inability to read distance markings in images), for a total of N = 710 data points. Of these, 13 outliers fell outside the fences formed by subtracting 1.5 x the inter-quartile range (IQR) from the 1st quartile and adding it to the 3rd quartile (IQR = 1.728). Outlier values less than -3.976 or greater than 2.934 were transformed to the corresponding fence value (-3.976 or 2.934).

The variances were not equal, with the HUD condition producing the highest variance (HUD stdev = 1.406, control stdev = 1.193, tablet stdev = 1.158). This indicates that the presence of the HUD graphic may result in greater variation in responses. There was no significant difference in overall means across the conditions (HUD mean = -0.716, control mean = -0.441, tablet mean = -0.526).

A second analysis was performed to understand the impact of the three scenarios on the confidence of each participant. There were 16 outliers, 13 of which were associated with the HUD condition and 3 with the tablet condition. Because of the strict upper and lower bounds, scores of 1 or 2 (indicating extremely low confidence) fell outside the range of non-outlier values defined by the quartiles +/- 1.5 x IQR. Like the DPO data, these scores were transformed to the next highest non-outlier value. As with the DPO data, the reported confidence levels also had unequal variances, with the HUD condition showing the highest standard deviation (HUD stdev = 1.786, control stdev = 1.434, tablet stdev = 1.490). Again, the HUD scenario is coupled with greater variation in responses than the other two conditions.

Figure 5: Mean confidence level by distance.

Table 1: Mean confidence level by distance (m) and display type.

Distance   Control   HUD     Tablet
11         7.417     5.833   7.208
12         7.304     5.583   6.792
13         7.348     5.5     7.042
14         7.042     5.833   6.625
15         7.25      5.458   6.818
16         7.739     5.792   6.583
17         7.208     5.458   7.087
18         7.083     6.136   6.5
19         7.292     6.217   6.792
20         7.708     6.391   6.958

The ANOVA indicated that the mean reported confidence levels were significantly different (F(2, 68) = 8.199, p < 0.0001). This indicates that under the HUD condition, participants were less confident in their ability to correctly judge the distance, likely due to the continual need to focus on the HUD image and then back on the pedestrian. Most available HUDs do not have a variable focal depth, which means that drivers would continually need to make these accommodative adjustments while using AR HUDs in vehicles; this indicates a need to further refine AR HUD technology to remove the problems this causes. Overall, the use of a HUD significantly affects both the depth perception offset of individuals from the target location and the confidence level of participants.
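As a concrete illustration of the analysis described above, the sketch below computes DPO and applies the 1.5 x IQR outlier transformation. It is a minimal reconstruction assuming per-trial target and judged distances are available as arrays; the variable names and example values are ours, not the authors'.

```python
import numpy as np

# Minimal sketch of the DPO computation and 1.5*IQR outlier handling
# described above. The example distances (in meters) are made up.

target_m = np.array([15.0, 12.0, 18.0, 20.0])   # instructed target locations
judged_m = np.array([17.0, 12.5, 19.0, 24.5])   # pedestrian position at button press

# Depth perception offset: negative values mean the pedestrian was judged
# to have reached the target while still farther away, i.e., its distance
# was underestimated.
dpo = target_m - judged_m

# Clamp outliers to the 1.5*IQR fences around the 1st and 3rd quartiles,
# mirroring the transformation applied to the 13 DPO outliers in the study.
q1, q3 = np.percentile(dpo, [25, 75])
iqr = q3 - q1
low_fence, high_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
dpo_clamped = np.clip(dpo, low_fence, high_fence)

print("DPO:", dpo, "clamped:", dpo_clamped)
```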
6 CONCLUSIONS AND FUTURE WORK

Overall, the AR HUD condition resulted in greater variability in both distance judgment accuracy and confidence level. On the whole, participants tended to underestimate the distance of the physical object, supporting the findings of previous studies. While the applications of AR HUDs in vehicles are still being explored, one possible application is to highlight pedestrians or other hazards in the road. It is imperative to understand how graphic overlays affect perception and, therefore, how they could also affect driving ability. In addition, the fact that the AR HUD is correlated with lower confidence levels, and therefore more uncertainty, suggests the need to better refine the technology before implementing it in vehicles. Drivers need to be able to confidently make decisions, and anything that reduces their confidence could threaten the safety of both drivers and pedestrians in the vicinity.

There is still a great deal of research to be done to fully understand the impact of AR HUDs on the perception of real-world objects. Future studies could explore a wider variety of distances while continuing to minimize the depth cues available to participants. In addition, there may be interesting findings when changing speeds, lighting conditions, or the contrast of the AR graphic. Finally, this study used a relatively short focal depth relative to the distance to the pedestrian; future studies should test different focal depths.

REFERENCES

[1] F. A. Wilson and J. P. Stimpson, "Trends in fatalities from distracted driving in the United States, 1999 to 2008," American Journal of Public Health, vol. 100, 2010.
[2] J. E. Swan, A. Jones, E. Kolstad, M. A. Livingston, and H. S. Smallman, "Egocentric depth judgments in optical, see-through augmented reality," IEEE Transactions on Visualization and Computer Graphics, vol. 13, pp. 429-442, 2007.
[3] M. Tonnis, C. Lange, and G. Klinker, "Visual longitudinal and lateral driving assistance in the head-up display of cars," 2007, pp. 91-94.
[4] H. Kim, X. Wu, and J. L. Gabbard, "Exploring head-up augmented reality interfaces for crash warning systems," presented at Automotive UI 2013, Eindhoven, Netherlands, 2013.
[5] V. Charissis, S. Papanastasiou, L. Mackenzie, and S. Arafat, "Evaluation of collision avoidance prototype head-up display interface for older drivers," in Human-Computer Interaction. Towards Mobile and Intelligent Interaction Environments, Springer, 2011, pp. 367-375.
[6] Z. Medenica, A. L. Kun, T. Paek, and O. Palinko, "Augmented reality vs. street views: a driving simulator study comparing two emerging navigation aids," 2011, pp. 265-274.
[7] S. Kim and A. K. Dey, "Simulated augmented reality windshield display as a cognitive mapping aid for elder driver navigation," 2009, pp. 133-142.
[8] A. Doshi, S. Y. Cheng, and M. M. Trivedi, "A novel active heads-up display for driver assistance," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 39, pp. 85-93, 2009.
[9] R. Schroeter, A. Rakotonirainy, and M. Foth, "The social car: new interactive vehicular applications derived from social media and urban informatics," 2012, pp. 107-110.
[10] G. K. Edgar, "Accommodation, cognition, and virtual image displays: A review of the literature," Displays, vol. 28, pp. 45-59, 2007.
[11] G. P. Hirshberg, "System for aiding a driver's depth perception," Google Patents, 1990.
[12] A. Kemeny and F. Panerai, "Evaluating perception in driving simulation experiments," Trends in Cognitive Sciences, vol. 7, pp. 31-37, 2003.
[13] J. A. Jones, J. E. Swan II, G. Singh, E. Kolstad, and S. R. Ellis, "The effects of virtual reality, augmented reality, and motion parallax on egocentric depth perception," in Proceedings of the 5th Symposium on Applied Perception in Graphics and Visualization, 2008, pp. 9-14.
[14] M. A. Livingston, Z. Ai, J. E. Swan, and H. S. Smallman, "Indoor vs. outdoor depth perception for mobile augmented reality," in IEEE Virtual Reality Conference (VR 2009), 2009, pp. 55-62.
[15] A. Dey, A. Cunningham, and C. Sandor, "Evaluating depth perception of photorealistic mixed reality visualizations for occluded objects in outdoor environments," in Proceedings of the 17th ACM Symposium on Virtual Reality Software and Technology, 2010, pp. 211-218.
[16] C. Jerome and B. Witmer, "The perception and estimation of egocentric distance in real and augmented reality environments," in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 2005, pp. 2249-2252.