Distance Perception derived from Optic Flow (Wahrnehmung von zurückgelegten Distanzen auf der Basis Optischer Flussfelder)


Distance Perception derived from Optic Flow (Wahrnehmung von zurückgelegten Distanzen auf der Basis Optischer Flussfelder)

Inaugural dissertation for the degree of Doctor of Natural Sciences, Faculty of Biology, Ruhr-Universität Bochum, prepared at the Department of General Zoology and Neurobiology (Lehrstuhl für Allgemeine Zoologie und Neurobiologie).

Submitted by Harald Frenz, from Bochum.

Date of the oral examination:

Contents

1 General Introduction
   1.1 General Introduction
   1.2 Optic flow fields and their information about ego-motion
   1.3 Biological relevance of optic flow fields for locomotion
   1.4 The role of optic flow for path integration in humans
   1.5 The goal of this thesis

2 General Methods
   2.1 Apparatus
   2.2 Subjects
   2.3 General Procedure
      Variation of the experimental set-up
   2.4 The used virtual environments
      Textured ground plane
      Dot plane 1
      Dot plane 2
      The virtual horizontal lines
   2.5 Data analysis

3 Distance estimation from optic flow
   Introduction
   Methods
   Results
      Do human subjects possess an abstract distance gauge derived from optic flow?
      The influence of different depth cues on distance estimation
      The role of translation velocity and simulation duration for distance estimation
   Discussion

4 Perceived metric of the visual space
   Introduction
   Methods
      Procedure
      Participants
      Parameters
   Results
   Discussion
   Conclusion

5 Over- vs. underestimation
   Introduction
   Methods
   Results
      Reproduction of the Redlick et al. (2001) experiments
      The influence of simulation duration and simulated translation velocity on distance estimation
   Discussion
   Conclusion

6 Different ways of distance indication
   Introduction
   Motion simulation - distance reproduction - distance indication (interval)
      Methods
      Results
   Distance indication in terms of eye heights
      Methods
      Results
   Indication of travel distances by actively walking
      Methods
      Results
   Discussion
   Conclusion

7 Stereoscopic depth information
   Introduction
   Methods
      Procedure
      Experimental set-up with the single screen
      Experimental set-up in the CAVE
   Results
      The single screen experiments
      The CAVE experiments
      Comparison between the two experimental set-ups
   Discussion
   Conclusion

8 General discussion
   Abstract distance gauge derived from optic flow
   Travel distances were underestimated in virtual environments
   Sources of the underestimation
   Indication of the distances
   Additional depth information
   The perceived metric of visual space
   Representation of the space
   Conclusion

9 Conclusions

Summary
   Perceived distances derived from optic flow
   Underestimation of the travelled distances
   Metric of the visual space in virtual environments
   Different ways to indicate the travelled distances
   Stereoscopic presentation of the virtual scene

Zusammenfassung (German summary)
   Distanzwahrnehmung auf der Basis optischer Flussfelder
   Unterschätzung der zurückgelegten Distanzen
   Die Metrik des visuellen Raums in virtuellen Umgebungen
   Verschiedene Arten, die zurückgelegte Distanz anzuzeigen
   Stereoskopische Präsentation der virtuellen Umgebung

Bibliography

Publication list
Danksagung (Acknowledgements)
Curriculum vitae

Chapter 1

General Introduction

1.1 General Introduction

All organisms have to interact with their environment in some way to survive and to reproduce. While plants are mostly limited to growth processes and the motion of smaller parts, animals can actively move around and explore the environment. To do so, animals as well as humans have to be able to react to their surroundings, e.g. to avoid collisions or to interact with other animals. For this it is crucial to have information about the structure of the environment. This information can be auditory, olfactory or visual. It is received via different sensors and then transmitted to the brain. The brain analyses this information and thereupon generates and controls appropriate reactions to it. In addition to this information about the external world, the brain also has to take the actual movements of the individual itself into account. This means the brain also has to consider the heading direction and speed of the self-motion. For the analysis of self-motion the brain can use different sources of information: the position and orientation of the limbs are indicated by proprioception, and the orientation and acceleration of the head are signalled by the vestibular sense. But the most important information for navigation in higher mammals and humans is provided by the visual system. The structure of the light reflected from surfaces onto the retina changes over time as the observer moves around. This temporal change of object images is called optic flow (Gibson, 1950) and provides rich information about the movement of the observer. In humans, optic flow is used to control the upright stance (Lee, 1980; Bronstein & Buckwell, 1997) and walking speed (Prokop et al., 1997). The heading direction of self-motions (Warren et al., 1988; Warren & Hannon, 1990; van den Berg & Brenner, 1994a) or the time to collision with objects in the environment (Lee, 1980) can also be

determined on the basis of optic flow fields. The amount of optic flow (Φ) a moving observer perceives depends on different parameters, for example the translation velocity V and the distance Z between objects and observer (Φ = V/Z). Hence, flow fields are ambiguous with respect to travel distances, as one needs to know or make assumptions about Z to calculate the translation velocity from optic flow. Recent research showed that it is possible to discriminate the travel distances of two visually simulated self-motions (Bremmer & Lappe, 1999). In this study the authors used the phenomenon of vection (relative motion of the environment with respect to a stationary observer, evoking the perception of observer motion) to visually simulate two ego-motions sequentially. The human subjects had to indicate which motion simulation covered the larger distance. The subjects could perform this discrimination correctly if they assumed that the environment did not change between the two motions, that is, that the distribution of all Z remained constant. Even when the two motions were simulated in different directions (first motion forward, second motion backward, or vice versa), travel distances could be accurately discriminated. There are two main possibilities for how subjects could discriminate the distances. The first follows the hypothesis that the subjects use the image motion of the environment on the retina, that is, they use Φ directly; this could be described as a 2-D hypothesis. The second possibility assumes a percept of the ego-motion: the subject derives the simulated translational ego-velocity (V) from the optic flow and integrates it over time (3-D hypothesis). Frenz et al. (2003) performed experiments in which the optic flow field was altered by varying the translation velocity and the layout of the environment (that is, the distribution of all Z changed).
If the environment changed unnoticed by the subjects between the two ego-motion simulations, the subjects made predictable errors in distance discrimination: they attributed the whole change in the optic flow field to a change of the translation velocity, assuming that the distribution of all Z remained constant. If the subjects noticed the altered environmental structure, they could separate the change in the flow field caused by the altered environment from the change caused by the altered translation velocity. These results support the 3-D hypothesis. My thesis addresses the question of whether human subjects possess a distance measure in some arbitrary unit derived from visual motion. To this end, I visually simulated ego-motion using the phenomenon of vection and asked human subjects to indicate the perceived travel distances in various ways. To build up a correct distance measure, the subjects have to calibrate the optic flow on the basis of the environmental depth

information and integrate the velocity of the ego-motion over time. But can human subjects also indicate the perceived travel distances in a stationary environment? The measured distance then has to be transferred to the environment in terms of a virtual distance. Another aspect of my work was to investigate in which geometry the visual space of virtual environments is perceived. The perceived metric of the visual space has so far only been measured in real-world experiments, and the presentation of virtual scenes might have an influence on distance estimation. To gauge the visual space, I instructed subjects to adjust the size of a virtual ground interval to match a fixed one.

1.2 Optic flow fields and their information about ego-motion

The first stage in the process of visual perception is the eyes, which receive light reflected from surfaces in the environment. Because surfaces are constructed of texture elements, they do not reflect light uniformly. This means that from one point of observation our eyes receive a uniquely structured optic array. As the observer moves through the environment, the optic array changes over time in a lawful transformation. Gibson (1950) called this temporal change optic flow (at the level of the retina it is called retinal flow). There are different types of flow fields, depending on the motion the observer performs and on whether eye movements occur. In the present experiments I simulated all ego-motions in the forward direction without eye movement simulation. I will therefore limit the description of flow fields as far as possible to this case. If gaze direction and the direction of translational locomotion are the same, a radial flow field occurs (see figure 1.1). The images of environmental objects move radially outwards from the focus of expansion, the location in a flow field where the optical velocity is zero.
Each vector in the flow field indicates the optical speed (length of the vector) and the direction of optical motion of an object relative to the observer. The flow field is represented two-dimensionally on the retina and can be analysed by the observer. Flow fields contain different kinds of information about the observer's self-motion. One is the time that will elapse until an object reaches the eyes. This time-to-contact can be obtained from the rate of expansion of the object's image on the retina. Lee (1980) designated this time by the symbol τ(t), with τ(t) = r(t)/v(t), where r(t) is the size of the object's image on the retina and v(t) the rate at which this image expands. It is therefore not necessary to know the real size of an object or its distance to know when it will reach the observer.

Fig. 1.1: Radial flow field. Gaze and motion point in the same direction (indicated by the black dot). In this example no eye rotations are included. The vectors symbolise the optical velocity and movement direction of environmental elements relative to the observer. In this case the environment is a ground plane.

The second kind of information about ego-motion included in flow fields is the heading direction. In the example in figure 1.1 the heading direction is simply the focus of expansion. Gibson suggested that the focus of expansion could be used to directly detect the heading direction, because the distribution and shape of the optic flow vectors are determined solely by this direction. When the observer rotates the eyes during motion, the perception of one's own movement direction becomes more complicated. While translational motion of an observer produces a radial flow pattern, pure rotation of the head produces a translational pattern of flow. In this case the direction and magnitude of the flow vectors are completely determined by the eye movements and independent of the distance to objects in the environment. The focus of expansion then no longer marks the heading direction but results from the vector summation of the translational and rotational components of the motion. Humans as well as animals are able to detect the heading direction solely on the basis of optic flow fields even in this condition (Warren et al., 1988). One kind of information that is not directly included in flow fields is the travel distance of a visually simulated motion. The image velocity Φ in the optic flow depends on the

observer velocity V and the distance Z of environmental objects to the observer:

Φ = V/Z. (1.1)

Travel distance might be estimated by measuring the observer's virtual speed V and the duration of the motion. But judging the observer's speed from the optic flow speed alone is not possible, because the distance Z of the environmental elements also has to be known to solve equation 1.1. Optic flow fields are thus ambiguous with respect to travel distance, as identical optic flow fields can be produced by scaling the observer speed and the distances to environmental objects by the same factor.

1.3 Biological relevance of optic flow fields for locomotion

As described in section 1.2, optic flow fields provide rich information about the self-motion of an observer. But the question arises whether animals really use this information to control their locomotion. The literature has mostly investigated the influence of optic flow fields on the behaviour of insects. Srinivasan et al. (1996) investigated which cues derived from optic flow a flying honeybee can use to control its flight. They showed that honeybees fly in the middle between two obstacles by balancing the image speed of the obstacles on the two retinae. This behaviour ensures the maximum distance to both obstacles, because optical velocity increases with decreasing distance to an object (see equation 1.1). The speed of flight is also reduced in very narrow passages of a tunnel; in this case the average image velocity is kept constant. This mechanism of constant retinal image velocity ensures that flight speed is decreased when bees land on a horizontal surface. The same strategy is used to control the bees' walking velocity (Schöne, 1996). After honeybees have found a food source, they inform other bees about the direction and distance to the food source by waggle dances. In a study by Esch et al. (2001) the bees flew through a narrow tunnel (8 cm wide) to a food source at a distance of 11 m.
The mean waggle dance to inform the other bees about the flown distance lasted 358 ms. For other bees not flying through the tunnel but over an open field, a waggle dance of 358 ms indicated a distance of 72 m to the food source. The authors of this study concluded that the visually driven odometer misread the flown distance because the narrow tunnel increased the bees' retinal flow. In another set of experiments, Esch and Burns trained honeybees to fly to a food source

at a distance of 70 m from the hive (Esch & Burns, 1995; Esch & Burns, 1996). If the food source was positioned at ground level, the foraging honeybees correctly indicated this distance to the other bees. When the food source was lifted with a balloon up to 90 m above the ground, the flown distance increased from 70 m at ground level to 114 m. But the bees indicated flown distances of only 25 m. In these experiments the optic flow was reduced because the distance to environmental objects increased. The results of these three studies (Esch & Burns, 1995; Esch & Burns, 1996; Esch et al., 2001) showed that the indicated distances strongly depended on the environment between hive and food source. Under normal conditions navigational errors are avoided, as the recruited bees fly in the same direction (and therefore through the same environment) as the first foragers and hence receive the same retinal flow information. Additionally, the results showed that honeybees indicate the amount of perceived retinal flow rather than the total flown distance. The role of optic flow fields for desert ants is still under debate. In the study of Ronacher & Wehner (1995), desert ants were trained to walk through a tunnel to a food source at a distance of 10 m. Beneath the floor of the perspex tunnel, an endless band with a black-and-white grating was visible to the ants. After training, the ants were put into a second, 20 m long perspex tunnel. The endless band in this tunnel could be moved at different velocities in or against the ants' walking direction to vary the optic flow field the ants perceived. When the grating on the floor was not moved in the test tunnel, the ants walked 9.5 m before they started their food search. When the grating was moved in the walking direction of the ants, the ants started their food search after they had walked more than 10 m, depending on the velocity of the grating's movement.
Grating movement against the ants' walking direction reduced the walking distance, likewise depending on the velocity of the grating's movement. These results showed that, besides kinesthetic cues, the ants use the perceived optic flow to estimate the distance travelled. Experiments with mammals were performed with the Mongolian gerbil: Sun et al. (1992) trained the animals to walk along tunnels of different lengths to collect a food reward at the end of the tunnel. Behind the front wall where the reward was given, the authors placed a monitor and presented a white circle that could shrink or expand in size. Sun et al. (1992) measured the walking velocity as the gerbils approached the food. They found that with a shrinking circle on the monitor the gerbils reduced their walking velocity much more slowly than with an expanding circle. This showed that the Mongolian gerbils used the time-to-contact to control their deceleration as they approached the target.
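The time-to-contact strategy described above can be illustrated with a short numerical sketch. This is not code from the thesis (the analyses there were done in MatLab); it is a Python illustration of Lee's τ(t) = r(t)/v(t), and the object size, distance and approach speed used below are arbitrary example values.

```python
# Time-to-contact (tau) from image expansion alone (Lee, 1980):
# tau(t) = r(t) / v(t), where r(t) is the image size on the retina
# and v(t) = dr/dt its rate of expansion. The object size S, speed V
# and distance Z0 below are arbitrary example values.

def image_size(S, Z):
    """Angular image size (small-angle approximation) of an object
    of real size S at distance Z."""
    return S / Z

def time_to_contact(r, v):
    """Lee's tau: image size divided by its rate of expansion."""
    return r / v

S = 0.5      # real object size in m (example value)
V = 2.0      # approach speed in m/s (example value)
Z0 = 10.0    # current distance in m (example value)
dt = 0.001   # time step for a numerical derivative

r_now = image_size(S, Z0)
r_next = image_size(S, Z0 - V * dt)  # the distance shrinks as we approach
v_now = (r_next - r_now) / dt        # rate of image expansion

tau = time_to_contact(r_now, v_now)
print(round(tau, 2))  # close to the true time to contact Z0 / V = 5 s
```

Note that neither the real size S nor the distance Z0 appears in the final division r/v: the observer can read the time to contact directly off the expanding image, just as described for the gerbils.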

1.4 The role of optic flow for path integration in humans

The term path integration, or dead reckoning, describes the ability to update one's own position relative to the starting point of a locomotion on the basis of signals generated during this locomotion. These signals can be derived either from external references, e.g. landmarks, or from self-generated information about the locomotion, e.g. proprioceptive, vestibular or visual signals (optic flow). In the following I will first summarise the results of different experiments concerning path integration on the basis of vestibular and proprioceptive information. Afterwards I describe some studies on path integration on the basis of optic flow. Humans' ability to reproduce the distances of translational self-motions on the basis of vestibular information was investigated by Berthoz et al. (1995). Blindfolded human subjects were passively displaced on a four-wheel robot over certain distances using different velocity profiles, e.g. sinusoidal or triangular. After the motion ended, the subjects' task was to actively reproduce the perceived travel distance by steering the robot with a joystick. The subjects reproduced the travelled distances very accurately. Their strategy was to copy the velocity profile of the first displacement. Perceived rotations on a fixed spot also revealed a high correlation between passive rotation and active reproduction (Israel et al., 1996). The same results were obtained when subjects were rotated on a fixed spot, between two linear displacements, or on a circular trajectory (Ivanenko et al., 1997a). Although the angular displacement was overestimated, passive rotation and active reproduction were highly correlated.
Even during more complex 2-D motion (rotation in place, during a linear motion, or on a semicircular trajectory), the orientation of the subjects' heads could be indicated quite accurately, although the perceived motion trajectory was only indicated correctly if the orientation of the body was coherent with the heading direction of the motion (Ivanenko et al., 1997b). Loomis et al. (1993) investigated the ability of blind and blindfolded humans to walk directly back to a starting point after walking along two straight segments with one turn in between (triangle completion task). The results revealed that subjects were sensitive to manipulations of the travel distances and turning angles, but small turning angles and travel distances were overestimated, whereas large turning angles and outbound distances were underestimated. Similar path integration tasks were also performed using only visual information about the motion. Bremmer & Lappe (1999) instructed subjects to indicate which of two successively presented, visually simulated ego-motions covered a greater

distance. The distance discrimination was very accurate even when the ego-motions were simulated in different directions (forward/backward). In another experiment the subjects had to actively reproduce the distance of a reference motion simulation. To solve this task the participants could control the velocity of the visually simulated self-motion with a joystick. As in the experiments of Berthoz et al. (1995), the subjects' strategy was to reproduce the velocity profile of the reference motion, leading to accurate distance reproduction. Angles of visually simulated rotations could also be discriminated (Bremmer & Lappe, unpublished data). When subjects were instructed to indicate the trajectory of a visually simulated 2-D motion, they correctly perceived the ego-rotations, but they perceived their orientation as fixed to the heading direction or fixed in space (Bertin et al., 2000). Triangle completion can also be performed with only visual information, although performance is somewhat less accurate than in tasks with vestibular and proprioceptive information about the motion (Peruch, 1997). Here, the final turning angle back to the starting point was more strongly underestimated than in the experiments of Loomis et al. (1993). Kearns et al. (2002) performed triangle completion tasks with purely visual, purely vestibular/proprioceptive, and combined information. The results indicated that triangle completion can be performed with visual information alone, but human subjects seem to rely more on body senses if both kinds of information are available.

1.5 The goal of this thesis

The role of optic flow in navigation has already been described for some animals. These animals use information provided by optic flow to determine the heading direction and distances of exploratory trips and the time-to-contact with objects, and to adjust flight or walking velocity.
In human navigation, optic flow fields become more and more important, as proprioceptive cues about self-motion are reduced when using any kind of motor-driven vehicle. Human subjects can determine the heading direction of self-motions on the basis of optic flow alone. But it has only rarely been investigated whether humans possess an abstract distance gauge derived from flow fields, and which depth information of the flow fields can be used to develop a distance gauge in the absence of other information about the environment. The literature has only described comparisons between the distances of two visually simulated self-motions. The ability to develop a distance gauge in some abstract unit derived from optic flow fields has not been investigated. My thesis follows these questions using psychophysical experiments. In the first part I will present the results of experiments that investigate the ability of

developing an abstract distance gauge. I will show that human subjects possess such a distance gauge but make systematic errors in judging the travelled distances. The following chapters examine the source of this error. In chapter 4 I present experiments in which I measured the geometry of the perceived visual space in virtual environments as one possible source of the error in distance estimation. Furthermore, I will show that the experimental time course (chapter 5), the way human subjects indicate the perceived travel distance (chapter 6), and different depth information (chapter 7) do not reduce the error in distance estimation. In the general discussion the results will be discussed with respect to recent findings.
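The ambiguity expressed in equation 1.1 can be made concrete with a small numerical sketch. This is an illustrative Python snippet, not code from the thesis, and all numerical values are arbitrary examples: scaling observer speed and object distance by the same factor leaves the flow unchanged, and an odometer that simply integrates image speed, as the honeybee experiments in section 1.3 suggest, misreads travel distance when the environment changes.

```python
# Ambiguity of optic flow with respect to travel distance (eq. 1.1):
# the image velocity of an environmental point is phi = V / Z, so
# scaling observer speed V and object distance Z by the same factor
# yields an identical flow field. All values are arbitrary examples.

def flow_speed(V, Z):
    """Image speed of a point at distance Z for observer speed V."""
    return V / Z

# Two different self-motions that produce the same retinal flow:
phi_a = flow_speed(V=2.0, Z=4.0)  # slow motion, near environment
phi_b = flow_speed(V=4.0, Z=8.0)  # fast motion, far environment
print(phi_a == phi_b)             # True: flow alone cannot separate them

# An odometer that integrates image speed over time misreads the
# travel distance when the environment changes, e.g. in a narrow
# tunnel (small Z), as in the honeybee experiments:
duration = 5.0  # seconds of simulated motion
odometer_open_field = flow_speed(V=2.0, Z=4.0) * duration
odometer_tunnel = flow_speed(V=2.0, Z=0.5) * duration
print(odometer_tunnel / odometer_open_field)  # 8.0: same true distance,
                                              # much larger integrated flow
```

This is exactly why a correct distance gauge requires calibrating the flow against the environmental depth structure before integrating over time.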

Chapter 2

General Methods

2.1 Apparatus

All experiments were performed in a dark room that was lit only by the luminance of the virtual environment (see section 2.4). I generated the virtual environments on a Silicon Graphics Indigo2 workstation with a resolution of 1280 x 1024 pixels. The subjects were seated in front of a 120 x 120 cm back-projection screen (Dataframe, type CINEPLEX) at a distance of 0.6 m, with their eyes at the same height as the virtual horizon. The resulting field of view was 90 x 90 deg of visual angle. I back-projected the stimuli onto the screen using a three-colour CRT video projector (Electrohome ECP 4100, resolution 1280 x 1024 pixels). Because no head fixation was used, subjects were instructed to avoid head movements during the experiments. Neither head nor eye movements were recorded. The participants viewed the scene binocularly.

2.2 Subjects

Altogether eight subjects (six males and two females) participated in the experiments. All were members of the Institute of General Zoology and Neurobiology of the Ruhr-Universität Bochum. Three subjects had never before participated in psychophysical experiments. The subjects' ages ranged between 24 and 31 years. All had normal or corrected-to-normal vision. No payment was granted for participation in the experiments.

2.3 General Procedure

Before the experiments started, I presented the subjects with some test trials. This was done to ensure that the subjects understood the task and to allow them to get used to

the virtual scene. Only subjects who experienced the impression of vection during the motion simulation participated in the experiments (all subjects reported this impression). No feedback about performance was given. I simulated the travel distances with different combinations of optical velocity and duration. Because the travel distances differed between the experiments, the parameters used are listed later in the descriptions of the individual experiments. The velocity profile was always rectangular, which means that the motion simulation started instantaneously at the chosen velocity and ended abruptly after the determined duration. Each trial started with the presentation of the static scene for 300 ms (see figure 2.1). After this time a translational motion was simulated with variable duration and velocity. Simulated gaze and heading were always in the forward direction. The environment was again presented statically for 300 ms before two virtual horizontal lines appeared on the screen. The reference line appeared in front of the observer at a virtual distance of 4 m. The second line was introduced 3 m in front of the observer's virtual position in the scene. This line could be moved towards or away from the observer's position by moving a computer mouse. Both lines were placed on the virtual ground plane. The subjects' task was to indicate the travelled distance of the motion sequence in terms of a ground interval, with the reference line nearer to the observer's position than the adjustable one. The subjects indicated their decision by adjusting the line and then pressing a mouse button to confirm it.

Variation of the experimental set-up

In some experiments the described set-up was slightly varied. For clarity these variations will be described in detail in the corresponding chapters; however, a general overview is given here. The metric of the visual space (chapter 4) was measured without any motion simulation.
In these experiments the subjects had to adjust the size of one depth interval on the ground plane to a second one, which was fixed in size. The experiments described in chapter 5 investigated the role of the time course in distance perception and indication. There, I first presented the subjects with a visual movement target in a static scene. Afterwards the participants indicated the perceived distance to the movement goal in terms of an ego-motion simulation: the subjects indicated the moment at which they thought the simulated ego-motion had brought them to the position of the movement goal in the virtual scene. In chapter 6 the perceived travel distance was not indicated with the virtual ground

Fig. 2.1: The temporal sequence of the stimuli (start of trial: static scene, 0.3 s; motion simulation, 1-3 s; static scene, 0.3 s; distance indication: end of trial). In this example the motion is simulated on the textured ground plane. The white arrows symbolise the ego-motion simulation of the observer. The environment is first presented for 300 ms; afterwards the ego-motion is simulated for 1-3 seconds. Before the two horizontal lines are added to the static environment, the scene is again presented without motion and without the virtual lines for 300 ms.

interval, but either by verbally reporting the travelled distance in terms of eye heights, by walking the same distance in the real world, or by reproducing the travelled distance with a second, actively controlled self-motion simulation.

2.4 The used virtual environments

As already mentioned in the general introduction (see section 1.2), the structure of the environment is one parameter that determines the optic flow field. In the following experiments I used three virtual ground planes with different depth cues (see table 2.1). The visibility range was restricted to 30 m in front of the observer; at this distance the ground plane was simply truncated. The simulated eye height was 2.6 m above the virtual ground plane.
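The viewing geometry of the set-up (viewing distance 0.6 m, simulated eye height 2.6 m, eyes level with the virtual horizon) determines where a ground-plane point appears on the screen. The following Python sketch uses simple similar triangles; it is an illustration of the geometry, not code from the thesis.

```python
# Screen position of a point on the virtual ground plane, from the
# viewing geometry of the set-up: a point at virtual distance Z lies
# VIEW_DIST * EYE_HEIGHT / Z metres below the horizon on the screen
# (pinhole model, eyes level with the virtual horizon).

VIEW_DIST = 0.6    # distance from eyes to screen in m
EYE_HEIGHT = 2.6   # simulated eye height above the ground plane in m

def screen_drop(Z):
    """Vertical distance (m) below the horizon at which a ground
    point at virtual distance Z appears on the screen."""
    return VIEW_DIST * EYE_HEIGHT / Z

# The reference line at 4 m virtual distance:
print(round(screen_drop(4.0), 3))   # 0.39 m below the horizon

# The 30 m visibility limit lies close to the horizon:
print(round(screen_drop(30.0), 3))  # 0.052 m below the horizon
```

The compression visible in these numbers (the entire range from 4 m to 30 m spans only about 34 cm of screen) is a direct consequence of perspective projection of a ground plane.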

Textured ground plane

I mapped a texture (Iris Performer type "gravel") onto a 400 m x 400 m virtual ground plane (figure 2.2A). To make the stimulus appear more realistic, a blue sky (RGB code: 0.1, 0.3, 0.7) was added to the environment. The mean luminance of the scene was 3.1 cd/m². The scene was presented at a frame rate of 36 Hz. Before each trial the starting position of the movement sequence was randomly varied by up to 10 m to either side of the ground plane origin (the midpoint of the plane) to prevent recognition of texture elements across successive trials. The textured ground plane provided ample static depth cues, contained in the gradients of texture density and texture element size towards the horizon (Cutting, 1997). It also provided dynamic depth cues during the motion simulation, most notably motion parallax and the change in size of texture elements as they approached the observer. Further information about the depth structure and travel distance was provided by the trajectories of the ground plane elements.

Dot plane 1

This ground plane consisted of 3300 white light points on a black background (figure 2.2B). The light points were first set on a grid, every 6 m within a distance of 30 m to each side of the observer's starting position and every 2 m within 52 m in front of the observer. The positions were then jittered by up to 5 m forwards or backwards and to either side. I limited the lifetime of the light points so that the observer could not obtain information about the travel distance from the trajectories of the light points: in each frame, each point could vanish with a probability of 10% and reappear at a random position in the scene. With a frame rate of 72 Hz, the mean lifetime of each dot was 139 ms. On average, 970 light points were therefore visible on the screen. The mean luminance was 2.0 cd/m².
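The reported mean lifetime follows directly from the per-frame disappearance probability: with a constant 10% chance of vanishing per frame, the lifetime is geometrically distributed with a mean of 1/0.1 = 10 frames. A small Python check (illustrative, not thesis code):

```python
# Expected dot lifetime for dot plane 1: each frame a dot vanishes
# with probability p = 0.1, so its lifetime in frames follows a
# geometric distribution with mean 1/p.

P_VANISH = 0.1   # per-frame disappearance probability
FRAME_RATE = 72  # frames per second

mean_frames = 1 / P_VANISH                  # 10 frames on average
mean_lifetime_ms = mean_frames / FRAME_RATE * 1000
print(round(mean_lifetime_ms))              # 139 ms, as reported
```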
During the motion simulation the size and luminance of the points remained constant, eliminating size change as a distance cue. Dynamic depth cues were provided by motion parallax. In the static scene, the gradient of texture density towards the horizon still served as a depth cue. The frame rate was 72 Hz.

Dot plane 2

This ground plane consisted of 150 white light points on a black background (figure 2.2C). The points were evenly distributed over the lower part of the screen. During the movement simulation the dots moved as if they lay on a ground plane, i.e. they obeyed the pattern of motion parallax. Without motion simulation this ground plane provided

no information about the distance between the observer and the light points or about the structure of the environment. The lifetime of the light points was limited in the same way as described for dot plane 1. The mean luminance was 0.6 cd/m².

Fig. 2.2: Screenshots of the static environments. A: textured ground plane; B: dot plane 1; C: dot plane 2. For a detailed description see text.

Table 2.1: The different depth cues contained in the three environments. A plus marks the presence of a cue, a minus its absence. Density gradient refers to the increase of texture density towards the horizon. Change of size is the looming of objects as they approach the observer. Motion parallax is the scaling of the visual velocity of an object with its distance from the observer. Trajectories means that objects can be tracked as they cross the screen.

                        density gradient   change in size   motion parallax   trajectories
textured ground plane          +                  +                 +               +
dot plane 1                    +                  -                 +               -
dot plane 2                    -                  -                 +               -

The virtual horizontal lines

The virtual lines spread 40 m to both sides of the observer's virtual position and had a thickness of 2 pixels. The luminance and size of the lines remained constant regardless of their distance to the observer's virtual position. The subjects controlled the movement of the adjustable line by moving a computer

mouse. I used the vertical co-ordinate of the invisible mouse pointer position on the screen (ranging from 0 to 1024 pixels) and calculated the corresponding virtual position in the simulated environment (ranging from 0 to 30 m). Moving the mouse pointer by one pixel therefore shifted the line by 2.93 cm on the virtual ground plane (3000 cm / 1024 pixels).

2.5 Data analysis

I programmed all analyses in MatLab (The MathWorks, Inc.). For each subject and experimental condition, the workstation stored the simulated travel distances, the simulated self-motion velocities and durations, and the distances between each line and the observer's virtual position in files. I then sorted the data by experimental condition and plotted them, both with my own MatLab functions. The statistical functions used (ANOVA, linear regression, correlation coefficients) are standard MatLab functions.
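The pixel-to-position mapping can be sketched as follows (an illustrative Python translation of the linear mapping described above; the function and constant names are mine, not from the stimulus software):

```python
# Linear mapping from the vertical mouse-pointer co-ordinate (in pixels) to a
# line position on the virtual ground plane (in metres), as described in the
# text: 1024 pixels cover 30 m, so one pixel corresponds to ~2.93 cm.
SCREEN_PIXELS = 1024      # vertical mouse range in pixels
VIRTUAL_RANGE_M = 30.0    # covered ground-plane range in metres

def pixel_to_metres(y_pixel: float) -> float:
    """Map a vertical pixel co-ordinate (0..1024) to a position in metres."""
    return y_pixel * VIRTUAL_RANGE_M / SCREEN_PIXELS

# Resolution of the adjustment in centimetres per pixel (3000 cm / 1024 px).
cm_per_pixel = 100.0 * VIRTUAL_RANGE_M / SCREEN_PIXELS
```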

Chapter 3

Distance estimation from optic flow

3.1 Introduction

As already described in section 1.2, optic flow fields are ambiguous with respect to travel distance unless information about the structure of the environment is available. Bremmer & Lappe (1999) showed that human subjects were able to discriminate and reproduce the travelled distances of two visually simulated ego-motions. To perform this task correctly, the subjects had to assume that the structure of the environment remained constant between the two motion sequences. If the structure changed unnoticed by the subjects, a predictable error was observed. Frenz et al. (2003) investigated whether human subjects integrate image motion (2-D motion) over time for distance estimation, or instead integrate the perceived ego-motion (3-D motion). In these experiments, the subjects had to indicate which of two visually simulated ego-motions covered the larger distance. Between the two self-motion simulations, either the eye height above the virtual ground plane, the viewing range, or the viewing angle onto the scene was changed. These changes altered the optic flow field but not the translation velocity of the simulated ego-motion (see section 1.2); thus, they did not affect the travelled distances. Frenz et al. (2003) found that the subjects could separate the influence of environmental changes on the flow field from the influence of an altered translation velocity, provided they noticed the change of the environment. This means that the subjects used the perceived self-motion rather than the image motion to estimate the travelled distance. In the experiments mentioned above, subjects always had to compare two distances derived from optic flow simulations. The interesting question of whether human subjects possess an abstract distance gauge and are able to indicate the travel distance in a static environment, however, remained open.
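The scale ambiguity mentioned at the beginning of this section can be illustrated with a minimal sketch (Python; the geometry and parameter values are my own illustration, not the stimulus code): scaling eye height, translation speed, and scene geometry by the same factor leaves the image sequence of a ground point unchanged, so the flow field specifies travel distance only in units of eye height.

```python
import math

def depression_angle(h, d):
    """Angle below the horizon at which a ground point at distance d is seen
    by an observer at eye height h."""
    return math.atan2(h, d)

def flow_sequence(eye_height, speed, point_distance, dt=0.1, steps=10):
    """Depression angles of one ground point during forward translation."""
    return [depression_angle(eye_height, point_distance - speed * dt * k)
            for k in range(steps)]

# Doubling eye height, speed, and scene distance together produces an
# identical image sequence, although the travelled distance doubles:
# the optic flow alone cannot distinguish the two simulated ego-motions.
a = flow_sequence(eye_height=1.7, speed=2.0, point_distance=10.0)
b = flow_sequence(eye_height=3.4, speed=4.0, point_distance=20.0)
```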
In the experiments of this chapter, I visually simulated ego-motion and asked the participants to indicate the distance travelled

in terms of a ground interval in a static scene. To perform the task correctly, subjects had to develop a distance gauge in some abstract unit derived from the optic flow field and transfer it onto the static scene.

3.2 Methods

I used the experimental set-up described in the general methods (see chapter 2). Each trial started with a visually simulated ego-motion sequence. The velocity of the self-motion was 1, 1.5, 2.5 or 3 m/s, and the duration of the simulation 1.5, 2, 2.5 or 3 s. Accordingly, the travelled distances varied between 1.5 m and 9 m. Four travel distances (3, 3.75, 4.5 and 7.5 m) were simulated with two different combinations of self-motion velocity and simulation duration. The virtual distances, translation velocities and simulation durations are listed in table 3.1. I presented each of the 16 conditions (4 velocities x 4 durations) ten times in a pseudorandomised order.

Table 3.1: Virtual travel distances, translation velocities and durations of the self-motion simulations (distance = velocity x duration).

velocity [m/s]   duration [s]   distance [m]
1                1.5            1.5
1                2              2
1                2.5            2.5
1                3              3
1.5              1.5            2.25
1.5              2              3
1.5              2.5            3.75
1.5              3              4.5
2.5              1.5            3.75
2.5              2              5
2.5              2.5            6.25
2.5              3              7.5
3                1.5            4.5
3                2              6
3                2.5            7.5
3                3              9

At the end of the motion simulation, I added two virtual horizontal white lines to the

scene. One line (the reference) was always presented 4 m in front of the observer's virtual position. The second line appeared 3 m in front of the observer and was adjustable by moving the computer mouse. Both lines were positioned on the virtual ground level. The time course of the stimulus presentation is illustrated in figure 3.1. The subjects' task was to indicate the virtually travelled distance in terms of a ground interval. The reference line always had to be nearer to the observer's position than the adjustable line; the subjects therefore had to move the adjustable line in every trial. The experiments were performed on the textured ground plane, dot plane 1 and dot plane 2 (see section 2.4 in the general methods). As mentioned in section 2.4, the light points of dot plane 2 were evenly distributed over the lower part of the screen; in a static scene they formed a vertical plane without any depth information. Because of this lack of static depth information, distance estimation in terms of a ground interval would have been impossible. Therefore, dot plane 2 continuously simulated forward motion during the adjustment of the virtual ground interval. This ongoing motion provided dynamic depth information in terms of motion parallax. The simulated velocity of this motion was the same as in the reference motion.

Fig. 3.1: The temporal sequence of the stimuli. In this example, the motion is simulated on the textured ground plane. The white arrows symbolise the simulated ego-motion of the observer. The environment is first presented for 300 ms. Afterwards, ego-motion is simulated for 1-3 seconds. The static scene is then presented again for 300 ms before the two horizontal lines appear in the static environment.
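The 16 velocity/duration combinations of table 3.1 can be enumerated programmatically (an illustrative sketch, not the experiment code; the variable names are mine):

```python
# The 16 stimulus conditions (4 velocities x 4 durations) described in the
# text; the travelled distance is simply velocity * duration.
from collections import Counter

velocities = [1.0, 1.5, 2.5, 3.0]   # m/s
durations = [1.5, 2.0, 2.5, 3.0]    # s

conditions = [(v, t, round(v * t, 2)) for v in velocities for t in durations]
distance_counts = Counter(d for _, _, d in conditions)

# Exactly four distances (3, 3.75, 4.5 and 7.5 m) arise from two different
# velocity/duration combinations each -- the conditions used to test whether
# subjects rely on velocity or duration alone.
duplicated = sorted(d for d, n in distance_counts.items() if n == 2)
```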
To investigate the influence of the motion simulation during interval adjustment on distance estimation, I performed

control experiment 1. In this control experiment, the textured ground plane served as the virtual environment, and forward motion was simulated during the indication of the travelled distance. Each virtual environment was tested separately in a block of 160 trials. One block lasted approximately 20 minutes. I first ran the experiment on the textured ground plane, then on dot plane 1, followed by the experiment on dot plane 2. To investigate whether the subjects showed a practice effect, I afterwards repeated the experiment on the textured ground plane. In this control experiment 2, each condition was tested only five times to reduce the duration of the experiment. Five subjects (24-30 years of age, 3 male and 2 female) participated in the experiments, including the author. Two subjects (ps and jl) had never before participated in psychophysical experiments.

3.3 Results

Do human subjects possess an abstract distance gauge derived from optic flow?

In figures 3.2-3.4 I plotted the sizes of the adjusted ground intervals as a function of the simulated travel distances. If the subjects possessed an abstract distance gauge, the sizes of the ground intervals should be positively correlated with the simulated travel distances. The results of the corresponding correlation analysis are listed in table 3.2. The correlation coefficients for the single subjects varied between 0.94 and 0.96 for the textured ground plane, between 0.73 and 0.98 for dot plane 1, and between 0.75 and 0.98 for dot plane 2. The pooled data of all subjects also revealed a high correlation between the indicated and simulated travel distances (ρ = ). These high correlations indicate that the subjects indeed possess an abstract distance gauge: with increasing simulated distance, the subjects also indicated larger travel distances. This analysis, however, says nothing about the accuracy of the distance estimates. Therefore, I fitted linear regressions to the data points.
If the subjects perceived and indicated the travel distances correctly, a fitted linear regression should have a slope of 1 and an offset of 0 (dashed line in the result figures). The offsets of the fitted regressions reflect constant errors in distance estimation; since they carry no information about the relationship between simulated and indicated distances, they were left out of further analysis. The linear regressions described the data well: the r² values were higher than 0.5 for all fits, with p-values below . The slopes of the regression lines were

below 1 regardless of the environment the data were obtained with (see table 3.2). The results pooled over all subjects for each tested distance simulation (5 subjects x 16 conditions x 10 repetitions) also showed an underestimation of the travelled distance (figure 3.7). This means that the subjects possess a distance gauge, but that this gauge is inaccurate: the subjects underestimated the simulated travel distances by about 50% on the textured ground plane, 30% on dot plane 1 and 25% on dot plane 2. The influence of the virtual environment on distance estimation is described in the next section.

The influence of different depth cues on distance estimation

As described in the general methods (see chapter 2), the three virtual environments provided different depth information. To test whether the different kinds of depth information influenced the subjects' distance perception, I compared the slopes of the linear regressions fitted to the pooled data of all subjects for the three environments (figure 3.7). I normalised the data by subtracting the intercepts of the regressions from all data points, so that the regression lines passed through the origin of the co-ordinate system, and then calculated a two-way ANOVA. The slopes of the linear regressions obtained with the three environments differed significantly (p < 0.05). Figure 3.5 shows the single subjects' results when motion was simulated during the indication of the travel distance on the textured ground plane (control experiment 1). The slopes of the fitted linear regressions varied between 0.43 and 1 among the subjects. Four of the five participating subjects furthermore showed an improvement in distance estimation compared to the first experiment on the textured ground plane.
The slope of the linear regression fitted to the pooled data of all subjects was 0.72. It was not significantly different from the slope obtained with the ongoing motion simulation on dot plane 2 (two-way ANOVA, p = 0.119), but it was significantly different from the slope of the experiment on the textured ground plane without ongoing motion simulation during the adjustment of the ground interval (two-way ANOVA, p < 0.05). Thus, indicating travel distances in dynamic scenes seemed to improve the ability to estimate travel distances. Figure 3.6 shows the results of the repeated experiment on the textured ground plane (control experiment 2) for each subject. In contrast to the first experiment on the textured ground plane, the slopes of the linear regressions were now higher. They varied

Fig. 3.2: Results of the experiments performed on the textured ground plane. The size of the adjusted interval (i.e. the indicated travel distance) is plotted as a function of the simulated travel distance. Each circle shows the mean over ten trials; the error bars indicate the standard deviation. Four travel distances (3, 3.75, 4.5, and 7.5 m) were simulated with two different combinations of translation velocity and duration. Red circles indicate data obtained with higher translation velocities (1.5, 2.5, 3, and 3 m/s) and shorter simulation durations (2, 1.5, 1.5, and 2.5 s) than the corresponding blue circles (1, 1.5, 1.5, and 2.5 m/s and 3, 2.5, 3, and 3 s, respectively). The solid line corresponds to the fitted regression. The dashed line indicates hypothetical data of exact distance estimation without error.

Fig. 3.3: Results obtained for self-motion estimation on dot plane 1. The ordinate is the size of the virtual ground interval, the abscissa the simulated travel distance. Same conventions as fig. 3.2.

Fig. 3.4: Results of the experiments performed on dot plane 2. The size of the virtual ground interval is plotted as a function of the simulated travel distance. Same conventions as fig. 3.2.

between 0.73 and 0.94 among the subjects and were higher than in the first experiment for all five subjects. For the linear regression fitted to the pooled data of all subjects I calculated a slope of . This slope was significantly different from that obtained in the first experiment on the textured ground plane (two-way ANOVA, p < 0.05). This increase of the slope indicates that the differences between the slopes of the linear regressions fitted to the data from the three environments were caused by a practice effect. Once subjects were used to this kind of experiment, the error was reduced to about 25%.

The role of translation velocity and simulation duration for distance estimation

To test whether the distance estimates were biased towards translation velocity or simulation duration, I simulated four distances (3, 3.75, 4.5 and 7.5 m) with two different combinations of translation velocity and motion duration. In the result figures, red circles symbolise distances simulated with higher translation velocities and shorter simulation durations than those indicated by the corresponding blue circles. If the subjects based their distance indication on duration or velocity alone, the same simulated distance should result in different distance estimates depending on the parameter combination used. If this was not the case, the participants must have indicated the perceived distance based on an integral of the perceived translation velocity over time. The result figures of the single subjects (figures 3.2-3.4) as well as those of the pooled data of all subjects (figure 3.7) clearly show that the same travel distances were indicated with the same ground interval sizes regardless of the translation velocity and simulation duration used. These results show that the participants based their travel distance estimates on an integral of the perceived translation velocity over time, and not solely on duration or velocity.
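The logic of this control can be made explicit with a small sketch (Python; the three read-out strategies are hypothetical alternatives considered here, not the subjects' actual mechanism):

```python
# Each of the four critical distances was simulated with two different
# (velocity, duration) pairs, taken from the caption of fig. 3.2.
pairs = {
    3.00: [(1.5, 2.0), (1.0, 3.0)],
    3.75: [(2.5, 1.5), (1.5, 2.5)],
    4.50: [(3.0, 1.5), (1.5, 3.0)],
    7.50: [(3.0, 2.5), (2.5, 3.0)],
}

# Predicted estimates under two read-out strategies.
integral = {d: [v * t for v, t in vt] for d, vt in pairs.items()}        # v*t
velocity_only = {d: [v for v, _ in vt] for d, vt in pairs.items()}        # v

# Only the integral of velocity over time predicts identical estimates for
# both members of each pair, which is the pattern observed in the data.
integral_consistent = all(a == b for a, b in integral.values())
velocity_consistent = all(a == b for a, b in velocity_only.values())
```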

Fig. 3.5: Results of the five participating subjects. The experiment was performed on the textured ground plane with distance indication in a dynamic scene. For a detailed description of the figure see fig. 3.2.

Fig. 3.6: Single subjects' results for distance estimation on the textured ground plane (control experiment 2). The indicated travel distances in terms of a ground interval are plotted as a function of the simulated travel distance. Each circle marks the mean over five trials. Same conventions as fig. 3.2.

Fig. 3.7: Comparison of the pooled results of all subjects across the virtual environments. The mean size of the adjusted interval over all subjects is plotted as a function of the simulated motion distance. The left panel shows the results of the experiment on the textured ground plane, the middle and right panels the results on dot plane 1 and dot plane 2, respectively. Each circle corresponds to the mean over all subjects; the error bars indicate the standard deviation among subjects. Red circles indicate results of self-motions simulated with high translation velocities and short simulation durations. Blue circles correspond to means of the same simulated travel distances obtained with lower translation velocities and longer simulation durations. The solid line is the fitted regression; the dashed line indicates exact responses without errors.

Fig. 3.8: Pooled data over all subjects obtained in the control experiments. The left panel shows the mean results of all subjects when the experiment on the textured ground plane was simply repeated. The right panel shows the mean results of all subjects when the textured ground plane simulated forward motion during the adjustment of the ground interval. Same conventions as fig. 3.2.

Table 3.2: Parameters of the linear regressions fitted to the data obtained in the three tested environments. The correlation coefficients (ρ) between the simulated and indicated distances, and the slopes and r² values of the linear regressions, are listed for each subject and each environment. "All subjects" gives the results of the linear regression fitted to the data points of all participating subjects.

              textured ground plane     dot plane 1          dot plane 2
subject        ρ    slope    r²         ρ    slope    r²     ρ    slope    r²
ha
hf
jl
kg
ps
All subjects

Table 3.3: Parameters of the linear regressions fitted to the data obtained in the control experiments. The correlation coefficients (ρ) between the simulated and indicated distances, and the slopes and r² values of the linear regressions, are listed for each subject. "All subjects" gives the results of the linear regression fitted to the data points of all participating subjects.

              textured ground plane (dynamic)     textured ground plane (control)
subject        ρ    slope    r²                    ρ    slope    r²
ha
hf
jl
kg
ps
All subjects

3.4 Discussion

Bremmer & Lappe (1999) showed that human subjects were able to discriminate two visually simulated ego-motions with respect to the travelled distances. Subjects could solve that task by integrating the perceived translation velocity over time (assuming that the whole optical velocity was based on translational movement) and comparing the two results. In my experiments, subjects had to indicate the perceived travel distances in a static environment. This means that the participants must have had an

internal representation of the travel distance in order to indicate it correctly in a static environment. Hence, if the traversed distance of the self-motion was doubled, subjects should also have doubled the size of the virtual ground interval, and a linear correlation between the indicated and the simulated travel distance would be expected. The single subject data as well as the pooled data of all subjects showed that the indicated travel distance was highly and linearly correlated with the simulated travel distance. This points to an internal representation of travel distance in terms of an abstract distance gauge. The same virtual travel distances were gauged with the same sizes of the ground interval regardless of the simulation durations and translation velocities. This confirms that the subjects really used a distance gauge: had they based their judgement on the translation velocity or the simulation duration alone, different interval sizes for the same travel distances would be expected. Furthermore, if the subjects had made no errors in judging the distances, the linear regressions fitted to the data should have a slope of 1.0. Depending on the virtual environment, the slopes of the linear regressions varied between 0.51 and . Thus, the subjects indicated shorter distances than the simulated ones: they underestimated the simulated travel distance by about 25% to 50%, depending on the virtual environment. My control experiments furthermore showed that the differences between the slopes of the fitted linear regressions obtained with the three environments were based on a practice effect. Repeating the experiment on the textured ground plane increased the slope of the linear regression compared to the first experiment, even without motion simulation during the distance indication. The subjects seem to have developed a strategy for sizing the ground interval once a given distance had been simulated.
Even without feedback about their performance, practice in this kind of experiment improved distance estimation to an error between 21% (control experiment 2) and 28% (control experiment 1). In the following text I will summarise the error in distance estimation obtained in the present experiments with a value of 27%, the rounded mean error of the experiments on dot plane 1 and dot plane 2 and of control experiments 1 and 2 (the error obtained in the first experiment on the textured ground plane is left out of the mean error calculation because of the practice effect). But even with practised participants, the travel distances were still underestimated. Underestimation of travel distances has been described in the literature before: Loomis et al. (1993) performed experiments in which blindfolded or blind human subjects had to walk a previously traversed distance ranging from 2 m to 10 m. Their results showed that the walked distance was linearly correlated with the traversed distance. However, with increasing walking distance the subjects underestimated this distance

by about 16%. In a second experiment, Loomis et al. investigated the ability of human subjects to perform triangle completion tasks with only vestibular and proprioceptive information about the self-motion. The subjects first had to walk along a travel path including one or two turns. Afterwards they were instructed either to walk directly back to the starting point of their movement or to retrace the travelled path back to the starting point. Again, the subjects walked back shorter distances than necessary to reach the starting point. Peruch et al. (1997) investigated human performance in triangle completion tasks with only visual information (optic flow) about the self-motion. Besides an underestimation of the turning angle between the two travel legs, Peruch et al. also found an underestimation of the visually simulated homing distance. Kearns et al. (2002) combined the approaches of Loomis et al. (1993) and Peruch et al. (1997): they instructed subjects to perform triangle completion tasks with only visual, only vestibular/proprioceptive, or both kinds of information. If only visual information about the self-motion was provided, the distance back to the starting point of the movement was increasingly underestimated with increasing correct distance. If vestibular/proprioceptive information was provided, a slight overestimation of the movement distance to the starting point was observed, regardless of the presence or absence of visual information. In a further study, Redlick et al. (2001) used a different experimental set-up to investigate human distance estimation: they first presented a virtual goal of the self-motion and afterwards visually simulated the ego-motion. The subjects were instructed to press a button when they thought they had reached the virtual position of the previously seen movement goal. In this study, an overestimation of the travelled distance was observed:
the subjects indicated already at shorter distances that they had reached the virtual movement goal. Taken together, travel distances reported in the literature were mostly underestimated when the reference travel distance was perceived dynamically (e.g. by walking or through visual motion simulation). In the study of Redlick et al. (2001), in contrast, subjects perceived the reference travel distance in a static scene and overestimated the virtual travel distance. Hence, a possible explanation for the under- or overestimation of travelled distances could be the way in which the distances were presented to the subjects. In chapter 5 I will describe experiments in which I presented the virtual travel distance in a static scene and afterwards visually simulated the self-motion. Another possible explanation for the error in travel distance estimation is the perceived metric of the visual space in virtual environments. The physical environment is structured in a Euclidean co-ordinate system, but the visual space is not. Most researchers have described the visual space as a Riemannian space of constant curvature (e.g.

36 CHAPTER 3. DISTANCE ESTIMATION FROM OPTIC FLOW 31 Luneburg (1950); von Helmholtz (1896); Indow (1991)). These researchers based this hypothesis on experiments performed in the real world in comparison to experiments performed with the help of photos. Yet, the metric of visual space has not been measured in virtual environments. We thus do not know, which metric the visual space in my experiments had. If the subjects perceived the visual space in a non-euclidean co-ordinate system, it is possible that the physical size of the adjusted ground interval diers from the actual size that the subjects perceived. To test this, I performed experiments in which I gauged the visual space in virtual environments. I will describe these experiments in chapter 4.

Chapter 4

Perceived metric of the visual space in virtual environments

4.1 Introduction

In chapter 3 I showed that travelled distances derived from optic flow were underestimated: with increasing simulated travel distances, the subjects increasingly underestimated these distances. The experiments of chapter 3 also revealed that the indicated distances were linearly correlated with the simulated distances. One possible reason for this underestimation is the metric of the visual space, i.e. how we visually perceive our environment (the physical space). The discrepancy between the physical and the visual space becomes obvious when we look, for example, along a railroad track: the two rails seem to converge with increasing distance, although in physical space they are of course parallel. Several groups have investigated the metric of the perceived visual space under standardised conditions. These experiments were performed in natural environments (in the real world). One of the first researchers to investigate the metric of the perceived visual space was Hermann von Helmholtz: he instructed his subjects to arrange light points so that they appeared to lie on a horizontal line (von Helmholtz, 1896). The results showed that the light points were in fact placed on curves in physical space. The curvature of these so-called horopter curves depended on the distance in depth to the observer: at a certain subject-dependent distance the horopter is straight, while nearer distances yielded concave and farther distances convex horopters. Luneburg (1950) claimed that the visual space can be described, in terms of psychometric distance functions, as a space of constant curvature. He based his theory on so-called "alley" experiments in which

two rows of light points (e.g. street lights) had to be arranged either equidistant between the two sides of the street ("distance alley") or equidistant on either side of the street ("parallel alley") in a frameless environment. In both tasks the seemingly parallel rows were curved in physical space, and distances were increasingly underestimated. The perceived visual space was also found to be curved under full-cue conditions (Koenderink et al., 2002). For a good overview of experiments investigating the metric of the perceived visual space see Indow (1991). All these studies indicate that the metric of the perceived visual space could indeed have influenced the distance judgements in my experiments. An even stronger hint that this might have been the case comes from a study by Beusmans (1998). In this study, subjects had to match the 3-D sizes of two collinear depth intervals. Two markers were fixed on the ground plane along a straight line in front of the observer, while a third marker, the one nearest to the observer's position, could be moved towards or away from the observer by a system of pulleys. Two depth intervals were therefore visible to the subjects. The 3-D size of the fixed interval as well as the distance to the fixed interval were varied. Beusmans could show that the interval size was increasingly underestimated with increasing interval size and increasing distance to the observer. The distortion of the perceived visual space found in this study could therefore be suitable to explain my results. However, there is a major difference between the study of Beusmans (1998) and mine: his experiments were performed in natural scenes, while mine were performed in virtual environments. How, then, is visual space perceived in virtual environments? This question is crucial for explaining the observed underestimation of the travelled distances in my experiments.
The only related study (Hecht et al., 1999) compared the perception of the corner angles of buildings with the perception of the same corner angles in pictures of the same objects. In both experiments, an increasing underestimation of the corner angles with increasing distance to the observer's position was described. This shows that there is also a distortion of the perceived visual space in virtual environments. However, this study is not comparable to the spatial processes required in my experiments and thus cannot answer the question of whether the underestimation in my experiments was caused by a mis-perception of the visual space. Taken together, the experiments described above show that human subjects perceive the visual space to be curved. The curvature depends on the distance between the object and the observer's position: with increasing distance to the observer, the physical space was increasingly perceived as curved and distances were increasingly underestimated. Thus, a possible explanation for the observed underestimation in my first experiments (see chapter 3) could be the curvature of the visual space; the subjects may have indicated distances in physical space other than those they perceived in visual space. Until now, no literature exists on the metric of the visual space in virtual environments comparable to those I used in my first experiments. To investigate the structure of the perceived visual space in virtual environments, I created a virtual environment and surveyed the visual space with two depth intervals varying in size and in distance to the observer. The procedure of the experiment was similar to that of Beusmans (1998) described above. By comparing the measured metric of the perceived visual space with the results of my previous study (chapter 3), I will be able to answer the question of whether a distortion of the metric can account for the underestimation of travel distances derived from optic flow fields. Moreover, the similarity to the study of Beusmans (1998) additionally allows me to directly compare the metric of the perceived visual space in virtual environments (my experiments) and under natural conditions (Beusmans, 1998).

4.2 Methods

Procedure

For the presentation of the stimuli I used the experimental set-up described in section 2.1. At the beginning of each trial, I presented the static virtual ground plane (textured ground plane; see chapter 2 for a description) together with three horizontal white lines placed on ground level onto the fronto-parallel screen. I placed the three horizontal lines at different distances to the observer's virtual position, so that they formed two depth intervals on the virtual ground plane. Figure 4.1 shows screenshots of the virtual environments.
To survey the visual space, I shifted the two ground intervals in depth: the second line could be presented at four different distances from the observer's virtual position on the ground plane. Additionally, I also varied the sizes of the ground intervals. The exact distances between the observer and the second line, together with the interval sizes used, are given below in the Parameters section. The subjects' task was to match the size of the adjustable depth interval on the ground plane to the fixed one. To this end, one of the horizontal lines, either the nearest line (experiment 1) or the furthest line (experiment 2), could be moved towards or away from the observer's virtual position on the ground plane by moving a computer mouse. The subjects indicated their decision with a mouse-button press after adjusting the ground interval. Before the next trial started, the observer's virtual position was randomly shifted to either side within 10 m to avoid recognition of the same texture elements.

Fig. 4.1: Screenshots of the visual scenes. The left image shows a scene from experiment 1 (far interval fixed in size), the right one from experiment 2 (near interval fixed in size). The moveable line is coloured red for illustration; in the experiments all three horizontal lines were white.

Participants

Four subjects (age 24-30), including myself, voluntarily participated in these experiments. All had normal or corrected-to-normal vision. No payment was granted for participation, and the subjects received no feedback about their performance in the experiment.

Parameters

In the first experiment, the near interval had to be matched in size to the far interval. I varied the size in depth of the fixed interval and its distance to the observer (the distance to the second line). The observer's virtual distance to the fixed interval was either 4, 5, 6 or 7 m; that is, the second interval was shifted in depth. The size of the interval ranged from 0.47 to 5.05 m (see table 4.1). I presented each of the 16 conditions ten times in a pseudo-randomised order. In the second experiment, the size of the far interval had to be matched to the size of the near interval. The distance to the second line was again either 4, 5, 6 or 7 m.

Table 4.1: Distances of the observer's virtual position on the ground plane to the second line and the sizes of the fixed ground intervals. Columns: distance to second line [m] | size of fixed interval [m] | distance to third line [m] | size of fixed interval [m].

The sizes of the near interval in metres were the same as those listed in table 4.1. This means I presented the correct positions of the first line from experiment 1 together with the second line as the fixed ground interval.

4.3 Results

Figure 4.2 shows the single-subject results of experiment 1, and figure 4.3 the single-subject results obtained in experiment 2. Each circle in the result figures represents the median index (the ratio of the indicated interval size to the size of the fixed interval) over ten trials. The indices are plotted as a function of the size of the fixed ground interval; the error bars show the 30th and 70th percentiles. Note that I changed the scaling of the y-axis in figure 4.3 for subject ms. Same colours of data points indicate same distances between the observer's position and the position of the second horizontal line on the virtual ground plane (red: 4 m, black: 5 m, green: 6 m, blue: 7 m).

Fig. 4.2: The ratios of the indicated and fixed interval sizes (medians) are plotted for each subject as a function of the fixed interval sizes in metres. In the present experiment the far interval was fixed in size and the subjects had to adjust the near interval. Colour coded are the distances to the second line (red: 4 m, black: 5 m, green: 6 m, blue: 7 m). The error bars are the 30th and 70th percentiles.

If the subjects made no errors in interval adjustment, the indices should be 1 regardless of the size of the fixed interval and the distance to it. The results of experiment 1 (see figure 4.2) show that the indices decreased for all subjects with increasing size of the ground interval in depth. This means the subjects indicated smaller near intervals than the simulated far intervals and therefore underestimated the size of the far interval.
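The index and its error bars are straightforward to compute. Below is a minimal sketch of this computation; the trial values are invented for illustration and are not the thesis data.

```python
# Sketch: the plotted index is indicated/fixed interval size; each data
# point is the median over ten trials, with 30th/70th-percentile error
# bars. The adjusted sizes below are made-up example values.
def percentile(values, p):
    """Linear-interpolation percentile on sorted data (0 <= p <= 100)."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

fixed_size = 5.05  # size of the fixed far interval [m]
adjusted = [3.0, 2.6, 2.9, 3.1, 2.7, 2.8, 3.2, 2.5, 3.0, 2.9]  # ten trials [m]

indices = [a / fixed_size for a in adjusted]
median_index = percentile(indices, 50)
p30, p70 = percentile(indices, 30), percentile(indices, 70)

# An index below 1 means the adjusted near interval was smaller than the
# fixed far interval, i.e. the far interval's size was underestimated.
print(median_index < 1)  # True for these values
```

For these example trials the median index is well below 1, matching the pattern of underestimation described above.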

Fig. 4.3: In this experiment the near interval was fixed in size and the far virtual ground interval had to be adjusted. Same conventions as in figure 4.2.

The distance between the observer and the fixed interval also influenced the size of the adjusted near interval: the indices were higher for the same interval sizes when the interval was simulated at a greater distance from the observer's virtual position. At first sight this is an astonishing result, as I would have expected the interval size estimation to deteriorate with increasing position in depth. The explanation is quite simple. The subjects indicated smaller ground interval sizes than simulated; in screen coordinates this means the indicated interval sizes were biased towards uniformity. Because of the perspective presentation, uniformity in screen coordinates leads to larger errors in ground interval adjustment when the interval is presented near the observer's virtual position on the ground plane than when it is presented at greater distances.

Fig. 4.4: The ratios of the indicated and fixed interval sizes (medians) are plotted as a function of the size of the fixed interval in metres. The left panel shows the condition with the far interval fixed in size, the right panel the condition with the near interval fixed in size. Colour coded are the different distances of the observer to the second line (red: 4 m, black: 5 m, green: 6 m, blue: 7 m). The error bars indicate the 30th and 70th percentiles.

To test whether or not the subjects really equalised the interval sizes in screen coordinates, I compared the data (10 iterations x 16 conditions x 4 subjects) with the data expected if the subjects had adjusted equal interval sizes in screen coordinates. A Mann-Whitney U test revealed that the data were significantly different (p < 0.01) from the data expected for equalised ground intervals in screen coordinates. This means the subjects judged the size of the virtual intervals on the ground plane but showed a tendency towards uniformity in screen coordinates. The left panel of figure 4.4 shows the pooled results of all subjects for experiment 1. For this figure I pooled all data from all subjects (4 subjects, 10 iterations per condition) and again plotted the median indices as a function of the fixed interval sizes. The pooled data showed the same pattern of results as the single-subject data: decreasing indices with increasing interval sizes, and better estimation of the same interval sizes for intervals at greater distances than for nearer intervals. An analysis of variance (two-way ANOVA) revealed that both parameters, the size of the fixed interval (p < 0.05) and the virtual distance of the observer to the fixed interval (p < 0.05), as well as their interaction (p < 0.05), had a significant influence on the perceived size of the fixed virtual ground interval.
Taken together, these results showed that distances were increasingly underestimated with increasing position in depth in the virtual environment. Figure 4.3 shows the single-subject results obtained in experiment 2. For all subjects the indices increased with increasing size of the fixed near interval. This means the subjects adjusted larger far intervals than the simulated size of the near interval and therefore overestimated the size of the near interval. Or, the other way round, they underestimated the size of the virtual far ground interval. A two-way ANOVA showed that the size of the fixed interval, the distance between the observer's virtual position and the second line, and the interaction of both had a significant influence on the subjects' interval judgements (always p < 0.05). The variation of the distance to the second line had the same influence on interval size estimation as in experiment 1: with increasing distance to the second interval, the size estimation for the same interval sizes became more accurate. Again I calculated a Mann-Whitney U test to check whether the subjects equalised the interval sizes in screen coordinates. The test showed that the data were significantly different from hypothetical data of equal interval sizes in screen coordinates (p < 0.01). Figure 4.4 (right panel) shows the pooled data of all subjects obtained in experiment 2. For the pooled data, too, the index increased with increasing interval size and decreasing distance to the far interval. The influence of the fixed interval size, the distance to the far interval, and their interaction was significant (two-way ANOVA, p < 0.05). Experiment 2 thus also showed that interval sizes were increasingly underestimated with increasing size in depth in the virtual environment. As described above, distances were increasingly underestimated with increasing size of the ground interval: the subjects perceived visual space as increasingly compressed with increasing depth. To quantify this compression I used the distance between the observer's virtual position and the adjustable line. The subjects positioned the adjustable line on the ground plane so that the variable interval had the same size as the fixed interval.
In figure 4.5 I plotted these subjective distances for every subject as a function of the physical distances (black dots). The physical distances are the intrinsic distances between the observer and the adjustable line on the virtual ground plane. In the literature, different psychometric depth functions have been used to describe this type of data. One is the Luneburg function of the form f(r) = a·e^(δ·r); another is the Gilinsky function of the form f(r) = a·r/(a + r), where r is the distance to the observer and a is the subject-dependent distance of the straight horopter. Both types of function have previously been used to describe the metric of the perceived visual space (see for example Hecht et al. (1999)). I fitted these two functions, together with a power function, to the data points. The fit of the Luneburg function (red line in figure 4.5) produced the smallest residuals and therefore described the data best. The data revealed an a of 5.53 and a δ of
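Fits of this kind reduce to minimising the residual sum of squares over the function's parameters. As a minimal sketch, the Gilinsky form can be fitted by a brute-force least-squares search; the data below are synthetic (generated with a = 10), not the thesis measurements, and the search grid is an arbitrary choice.

```python
# Sketch: least-squares fit of the Gilinsky depth function
# f(r) = a*r / (a + r) by brute-force search over the horopter distance a.
# Synthetic, noiseless data generated with a = 10 for illustration.
def gilinsky(r, a):
    return a * r / (a + r)

data = [(r, gilinsky(r, 10.0)) for r in (4, 5, 6, 7, 9, 11, 13)]

def rss(a):
    """Residual sum of squares for a candidate horopter distance a."""
    return sum((s - gilinsky(r, a)) ** 2 for r, s in data)

# search a over 1.0 .. 29.9 in steps of 0.1
a_hat = min((k / 10 for k in range(10, 300)), key=rss)
print(a_hat)  # recovers the generating value, 10.0
```

In practice one would use a proper nonlinear optimiser and compare the residuals of the competing functions, as done for the Luneburg, Gilinsky, and power fits above.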

Fig. 4.5: The subjective distance is plotted as a function of the physical distance. Black dots indicate the raw data of the four subjects from both experiments. The solid red line is the fitted Luneburg function f(r) = a·e^(δ·r) (see text for description).

4.4 Discussion

I performed the present experiments to investigate how human subjects perceive the metric of visual space in virtual environments, and whether the perceived metric can explain the previously observed underestimation of travel distances of visually simulated self-motions (see chapter 3). The present experiments clearly show that human subjects perceived visual space as increasingly compressed with increasing depth of the ground interval. The error in distance judgement increased with interval size and distance to the observer, reaching 46 % in experiment 1 and 42 % in experiment 2 for an interval size of 5.05 m at a distance of 7 m from the observer. These errors correspond to those found in real-world experiments. In the real-world experiments of Beusmans (1998), the subjects stood in an open field and had to match the size of a fixed ground interval at a distance of up to 20 m with a second ground interval nearer to the subject's position. The arrangement of the two ground intervals in the Beusmans (1998) experiments was the same as in my present experiments, except for the markers that formed the intervals (orange triangles in the Beusmans (1998) experiments). Beusmans (1998) also found increasing underestimation of the interval size with increasing interval size and position in depth: the error for interval sizes of 12 m at a distance of 20 m from the subject's position was about 35 % and thus smaller than in my experiments. The difference in the underestimation of interval sizes between the present experiments and those performed by Beusmans (1998) can be explained by the additional depth cues available in the real world (the size gradient of objects towards the horizon). Similarly, the subjects in the Beusmans experiments were also allowed to perform small head movements and therefore obtained additional depth information in terms of motion parallax between the interval markers. Taken together, the present experiments show that the metric of visual space in virtual environments is comparable to that found in real-world experiments. The second aspect of the present experiments was to investigate whether the metric of the visual space can explain the underestimation of travel distances found in the experiments described in chapter 3. In those experiments I instructed the subjects to indicate the perceived travel distances of visually simulated self-motions in terms of virtual ground intervals. The indicated ground interval sizes were highly and linearly correlated with the simulated travel distances, but the subjects underestimated the travelled distances. The metric of the visual space, illustrated in figure 4.5, shows that in a static scene the interval sizes are also underestimated, but not linearly correlated with the simulated sizes of the reference intervals.
The indicated interval sizes are best described by a psychometric function of the form f(r) = a·e^(δ·r). The self-motions in chapter 3 were simulated covering distances ranging from 3 m to 9 m. With a physical distance of 4 m to the virtual ground interval, the position of the adjustable line varied between 7 m and 13 m. In this range, the psychometric function (see figure 4.5, red line) is not linear. Hence, the metric of the perceived visual space in static scenes cannot explain the underestimation found in chapter 3, as the fitted Luneburg function would predict a non-linear relationship between the simulated travel distances and the adjusted interval sizes.

4.5 Conclusion

With the present experiments I investigated the metric of the perceived visual space in virtual environments. I was able to show that there is a non-linear relationship between the subjective and physical distances in virtual environments, which corresponds to the metric of the perceived visual space in the real world. But although this non-linear relationship describes the underestimation of distances with increasing size and depth, it cannot directly explain the linearly increasing underestimation of travel distances found in the previous chapter (see chapter 3). I therefore conclude that the observed underestimation of distances of simulated self-motions is not based on a misperception of the visual space. Further hypotheses to explain the observed underestimation of virtually travelled distances are errors in adjusting the virtual ground intervals, or a misperception of the self-motion simulation. The investigation of these two hypotheses is described in the next two chapters.

Chapter 5

Over- vs underestimation of the traversed distance

5.1 Introduction

In chapter 3 I showed that human subjects can estimate travel distances derived from optic flow, but that they underestimate the travel distances by about 27 %. Redlick et al. (2001) also investigated the ability of humans to estimate the traversed distances of visually simulated self-motions. In contrast to my experiments, they first visually presented the goal of the movement in a static scene with a head-mounted display and afterwards visually simulated a forward motion along a virtual corridor. The subjects' task was to press a button when they thought they had reached the location of the previously seen movement goal. Redlick and co-workers found an overestimation of the traversed distances: the subjects pressed the button too early and therefore indicated shorter distances than the distances previously presented in the static scene. The error in distance estimation was about 40 %. The Redlick study as well as my own experiments showed that human subjects are able to estimate travel distances, but that the estimates are incorrect: in both studies an error was observed. The discrepancy between the results of the Redlick et al. study and my own experiments lies in the direction of the error: in the experiments performed by Redlick and co-workers the travel distances were overestimated, whereas in my experiments the travel distances were underestimated. This difference in the direction of the observed error can be based on different factors: the experimental set-up (screen presentation vs. head-mounted display), the presentation of the virtual distance that had to be judged (optic flow vs. static distance), and the lengths of the virtual distances and the simulated translation velocities. The possible influence of these factors on distance estimation is described in the following. Redlick et al. presented the stimuli in their study with a head-mounted display with an 84° x 65° field of view. The head-mounted display was head-tracked; that is, the view of the virtual environment changed according to the position and orientation of the head-mounted display in space. The subjects could thus obtain additional depth information in terms of motion parallax if they rotated their heads. In my experiments the subjects sat in front of a 90° x 90° projection screen. The orientation and position of the subject's head were not tracked, and the view of the virtual environment was constantly simulated in the forward direction during the experiments. Self-induced depth information from changes in head position or orientation was therefore excluded. The lack of this kind of depth information could lead to different percepts of the simulated self-motion and explain the discrepancy between the Redlick et al. experiments and my own. To investigate the influence of this additional depth information, I reproduced the experiments performed by Redlick et al. using my experimental set-up without self-induced depth information (experiment A). Redlick et al. presented the travel distances that had to be judged in a static scene; the indication of the virtual distance was performed afterwards in a dynamic scene by visual simulation of self-motion. In my experiments described in chapter 3, the time course was the other way round: I presented the virtual distances that had to be estimated in a dynamic scene in terms of optic flow, and the subjects indicated the perceived travel distances afterwards in terms of a virtual ground interval in a static or a dynamic scene. In the literature, underestimation of travel distances has been described (Loomis et al., 1993; Peruch, 1997; Kearns et al., 2002). In these studies, the distances the subjects had to judge were perceived dynamically, e.g.
by actively walking a certain distance or by perceiving a visual self-motion simulation (see the discussion of the previous chapter, section 3.4). It is unclear whether the presentation of the reference distance in a static or dynamic scene influences the indicated travel distance. Another possible explanation for the difference between the Redlick et al. results and my own, described in section 3.7, lies in the parameters of the simulated self-motion. Because the self-motions in the Redlick et al. study were simulated with a fixed translation velocity or acceleration, the only way for the subjects to vary the distance travelled during the self-motion was to vary the duration of the self-motion simulation. If the subjects pressed the button too early, they indicated shorter distances than the distances presented in the static scene and therefore showed an overestimation of the travelled distances. Redlick and co-workers simulated movement goals up to a distance of 32 m in front of the observer's virtual position. With the lowest used translation

velocity of 0.4 m/s, the duration of the motion simulation for a correct distance indication was 80 s. The maximum travel distance I used in my first experiments was 9 m, and the duration of the motion simulation for a correct answer never exceeded 3 s. In a second experiment (B) I therefore investigated the influence of the presentation of the reference distances and of the translation velocity together with the simulation duration on distance estimation. To this end, I used the experimental time course described in the study of Redlick and co-workers and simulated the self-motion with the translation velocities and durations of my first experiments, described in chapter 3.

5.2 Methods

I performed the experiments with the experimental set-up described in the general methods (see chapter 2). At the beginning of each trial, a virtual white horizontal line was presented for 2 s on the ground plane in front of the observer's position in the virtual environment. This virtual line was the self-motion goal. The distance between the observer's virtual position on the ground plane and the motion goal ranged from 4 to 32 m in experiment A and from 3 to 9 m in experiment B. After 2 s the horizontal line vanished and a self-motion was visually simulated in the forward direction. In experiment A the velocity ranged from 0.4 to 6.4 m/s, and in experiment B from 1 to 3 m/s. For a detailed description of the movement-goal distances and the simulated self-motion velocities see table 5.1. The time course of the stimulus presentation is illustrated in figure 5.1. I used dot plane 1 as the virtual environment in both experiments. Because of the limited lifetime of its light points, dot plane 1 provided depth information in a static scene but prevented the strategy of tracking oncoming objects near the presented movement goal.
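The relation between goal distance, translation velocity, and simulation duration underlying table 5.1 is simply t = s/v. A minimal sketch checking the extreme cases quoted above (the pairing of the 9 m goal with the 3 m/s speed in experiment B is my assumption, made so that the 3 s bound holds):

```python
# Sketch: duration of the self-motion simulation for a correctly
# indicated distance, t = s / v.
def duration(distance_m, velocity_mps):
    return distance_m / velocity_mps

# Experiment A, farthest goal at the slowest speed (Redlick et al. range):
t_a = duration(32.0, 0.4)
# Experiment B, farthest goal at the fastest speed (assumed pairing):
t_b = duration(9.0, 3.0)
print(t_a, t_b)
```

This makes explicit how the two experiments differ by more than an order of magnitude in the simulation durations required for a correct response.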
In both experiments the subjects had to press a mouse button when they thought they had reached, via the self-motion simulation, the virtual location previously occupied by the movement goal. I presented each condition 5 times in experiment A and 10 times in experiment B. The same four subjects participated in both experiments.

5.3 Results

Reproduction of the Redlick et al. (2001) experiments

Figure 5.2 shows the results of experiment A. The y-axis shows the distance traversed by the self-motion up to the moment the subjects indicated that they had reached the motion goal. The

Table 5.1: Virtual distances to the movement goal (s), translation velocities (v), and durations (t) for a correctly indicated distance, as simulated in the two experiments. Experiment A is the reproduction of the experiment performed by Redlick et al. (2001); in experiment B I used the virtual distances and translation velocities of my first experiments described in chapter 3. Columns (for each experiment): s [m] | v [m/s] | t [s].

x-axis represents the simulated target distance presented in the static scene. The velocities of the self-motion simulation are colour coded (0.4 m/s: blue; 0.8 m/s: green; 1.6 m/s: red; 3.2 m/s: cyan; 6.4 m/s: magenta). In the single-subject result figures, each circle shows the mean over five trials; error bars represent the standard deviation. I fitted linear regressions to the data points for each subject and translation velocity. If the subjects correctly perceived and indicated the travelled distances, the linear regression would have a slope of 1 (dashed line in figure 5.2). Note that, in contrast to the result figures of my first experiments in chapter 3, a slope smaller than 1 here indicates an overestimation of the travelled distance: the subjects perceived the shorter distance covered by the motion simulation as equal in size to the greater distance of the motion goal presented in the static environment. Slopes of the linear regressions larger than 1 indicate underestimation of the simulated travel distance derived from optic flow.

Fig. 5.1: The time course of the stimulus presentation: presentation of the movement goal (2 s), dark screen (0.3 s), then self-motion simulation until the end of the trial. The white horizontal line (motion goal) is presented for 2 s at a certain distance in front of the observer's virtual position. After this time the screen turns black for 300 ms. The virtual environment then reappears and a forward-directed self-motion is simulated. The motion simulation is indicated by white arrows, which are not presented during the experiment.

I listed the parameters of the calculated slopes and the correlation coefficients in table 5.2. The correlation coefficients were above 0.93 for three of the four subjects and showed that the subjects were able to transfer a distance perceived in a static scene to an estimate of travel distance based on visual self-motion information. The exception was subject ps, for whom I found correlation coefficients between 0.53 and 0.87, depending on the translation velocity of the self-motion. Compared to the correlation coefficients of the other subjects, the indicated traversed distances were more weakly correlated with the presented static distances. The slopes of the linear regressions
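The slope-based classification described above can be sketched with an ordinary least-squares fit. The data below are synthetic (a hypothetical subject who consistently stops after 60 % of the target distance), not the thesis measurements.

```python
# Sketch: ordinary least-squares slope of traversed vs. target distance.
# In this experiment, slope < 1 means overestimation of the travelled
# distance (button pressed too early); slope > 1 means underestimation.
def slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

targets = [4.0, 8.0, 16.0, 32.0]        # static goal distances [m]
indicated = [0.6 * t for t in targets]  # subject stops too early

b = slope(targets, indicated)
print(round(b, 3))  # 0.6: slope < 1, i.e. overestimation
```

With real data, one such regression is fitted per subject and translation velocity, and the slope and correlation coefficient are tabulated as in table 5.2.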

Chapter 9. Conclusions. 9.1 Summary Perceived distances derived from optic ow

Chapter 9. Conclusions. 9.1 Summary Perceived distances derived from optic ow Chapter 9 Conclusions 9.1 Summary For successful navigation it is essential to be aware of one's own movement direction as well as of the distance travelled. When we walk around in our daily life, we get

More information

Self-motion perception from expanding and contracting optical flows overlapped with binocular disparity

Self-motion perception from expanding and contracting optical flows overlapped with binocular disparity Vision Research 45 (25) 397 42 Rapid Communication Self-motion perception from expanding and contracting optical flows overlapped with binocular disparity Hiroyuki Ito *, Ikuko Shibata Department of Visual

More information

the ecological approach to vision - evolution & development

the ecological approach to vision - evolution & development PS36: Perception and Action (L.3) Driving a vehicle: control of heading, collision avoidance, braking Johannes M. Zanker the ecological approach to vision: from insects to humans standing up on your feet,

More information

Factors affecting curved versus straight path heading perception

Factors affecting curved versus straight path heading perception Perception & Psychophysics 2006, 68 (2), 184-193 Factors affecting curved versus straight path heading perception CONSTANCE S. ROYDEN, JAMES M. CAHILL, and DANIEL M. CONTI College of the Holy Cross, Worcester,

More information

Perceived depth is enhanced with parallax scanning

Perceived depth is enhanced with parallax scanning Perceived Depth is Enhanced with Parallax Scanning March 1, 1999 Dennis Proffitt & Tom Banton Department of Psychology University of Virginia Perceived depth is enhanced with parallax scanning Background

More information

Discriminating direction of motion trajectories from angular speed and background information

Discriminating direction of motion trajectories from angular speed and background information Atten Percept Psychophys (2013) 75:1570 1582 DOI 10.3758/s13414-013-0488-z Discriminating direction of motion trajectories from angular speed and background information Zheng Bian & Myron L. Braunstein

More information

Takeharu Seno 1,3,4, Akiyoshi Kitaoka 2, Stephen Palmisano 5 1

Takeharu Seno 1,3,4, Akiyoshi Kitaoka 2, Stephen Palmisano 5 1 Perception, 13, volume 42, pages 11 1 doi:1.168/p711 SHORT AND SWEET Vection induced by illusory motion in a stationary image Takeharu Seno 1,3,4, Akiyoshi Kitaoka 2, Stephen Palmisano 1 Institute for

More information

IOC, Vector sum, and squaring: three different motion effects or one?

IOC, Vector sum, and squaring: three different motion effects or one? Vision Research 41 (2001) 965 972 www.elsevier.com/locate/visres IOC, Vector sum, and squaring: three different motion effects or one? L. Bowns * School of Psychology, Uni ersity of Nottingham, Uni ersity

More information

AGING AND STEERING CONTROL UNDER REDUCED VISIBILITY CONDITIONS. Wichita State University, Wichita, Kansas, USA

AGING AND STEERING CONTROL UNDER REDUCED VISIBILITY CONDITIONS. Wichita State University, Wichita, Kansas, USA AGING AND STEERING CONTROL UNDER REDUCED VISIBILITY CONDITIONS Bobby Nguyen 1, Yan Zhuo 2, & Rui Ni 1 1 Wichita State University, Wichita, Kansas, USA 2 Institute of Biophysics, Chinese Academy of Sciences,

More information

Human Vision. Human Vision - Perception

Human Vision. Human Vision - Perception 1 Human Vision SPATIAL ORIENTATION IN FLIGHT 2 Limitations of the Senses Visual Sense Nonvisual Senses SPATIAL ORIENTATION IN FLIGHT 3 Limitations of the Senses Visual Sense Nonvisual Senses Sluggish source

More information

Vision V Perceiving Movement

Vision V Perceiving Movement Vision V Perceiving Movement Overview of Topics Chapter 8 in Goldstein (chp. 9 in 7th ed.) Movement is tied up with all other aspects of vision (colour, depth, shape perception...) Differentiating self-motion

More information

Distance perception from motion parallax and ground contact. Rui Ni and Myron L. Braunstein. University of California, Irvine, California

Distance perception from motion parallax and ground contact. Rui Ni and Myron L. Braunstein. University of California, Irvine, California Distance perception 1 Distance perception from motion parallax and ground contact Rui Ni and Myron L. Braunstein University of California, Irvine, California George J. Andersen University of California,

More information

Human Vision and Human-Computer Interaction. Much content from Jeff Johnson, UI Wizards, Inc.

Human Vision and Human-Computer Interaction. Much content from Jeff Johnson, UI Wizards, Inc. Human Vision and Human-Computer Interaction Much content from Jeff Johnson, UI Wizards, Inc. are these guidelines grounded in perceptual psychology and how can we apply them intelligently? Mach bands:

More information

Vision V Perceiving Movement
Overview of Topics: Chapter 8 in Goldstein (chp. 9 in 7th ed.). Movement is tied up with all other aspects of vision (colour, depth, shape perception...) Differentiating self-motion

Chapter 73. Two-Stroke Apparent Motion. George Mather
The Effect: One hundred years ago, the Gestalt psychologist Max Wertheimer published the first detailed study of the apparent visual movement seen when

Real-Time Analog VLSI Sensors for 2-D Direction of Motion
Rainer A. Deutschmann, Charles M. Higgins and Christof Koch, Technische Universität München and California Institute of Technology, Pasadena,

B.A. II Psychology Paper A: MOVEMENT PERCEPTION. Dr. Neelam Rathee, Department of Psychology, G.C.G.-11, Chandigarh
The Perception of Movement: Where is it going? Biological Functions of Motion Perception

Effects of Visual-Vestibular Interactions on Navigation Tasks in Virtual Environments
Date of Report: September 1st, 2016. Fellow: Heather Panic. Advisors: James R. Lackner and Paul DiZio. Institution: Brandeis

Image Characteristics and Their Effect on Driving Simulator Validity
University of Iowa, Iowa Research Online, Driving Assessment Conference 2001, Aug 16th. Hamish Jamson

Module 2. Lecture-1. Understanding basic principles of perception including depth and its representation.
Initially let us take the reference of Gestalt law in order to have an understanding of the basic

Apparent depth with motion aftereffect and head movement
Perception, 1994, volume 23, pages 1241-1248. Hiroshi Ono, Hiroyasu Ujike, Centre for Vision Research and Department of Psychology, York University,

VISUAL VESTIBULAR INTERACTIONS FOR SELF MOTION ESTIMATION
Butler J, Smith S T, Beykirch K, Bülthoff H H. Max Planck Institute for Biological Cybernetics, Tübingen, Germany; University College

Travelling through Space and Time. Johannes M. Zanker
http://www.pc.rhul.ac.uk/staff/j.zanker/ps1061/l4/ps1061_4.htm 05/02/2015. PS1061 Sensation & Perception #4. Learning Outcomes: at the end of this

PSYCHOLOGICAL SCIENCE. Research Report
RETINAL FLOW IS SUFFICIENT FOR STEERING DURING OBSERVER ROTATION. Brown University. Abstract: How do people control locomotion while their eyes are simultaneously rotating? A previous study

Vestibular cues and virtual environments: choosing the magnitude of the vestibular cue
Laurence Harris, Michael Jenkin, Daniel C. Zikovitz. Departments of Psychology, Computer Science, and Biology

Path completion after haptic exploration without vision: Implications for haptic spatial representations
Perception & Psychophysics 1999, 61 (2), 220-235. ROBERTA L. KLATZKY, Carnegie Mellon University,

MOTION PARALLAX AND ABSOLUTE DISTANCE. Steven H. Ferris
Naval Submarine Medical Research Laboratory, Naval Submarine Medical Center, Report Number 673. Bureau of Medicine and Surgery, Navy Department Research

Psychophysics of night vision device halo
University of Wollongong, Research Online, Faculty of Health and Behavioural Sciences - Papers (Archive), Faculty of Science, Medicine and Health, 2009. Robert S Allison

The Shape-Weight Illusion
Mirela Kahrimanovic, Wouter M. Bergmann Tiest, and Astrid M.L. Kappers, Universiteit Utrecht, Helmholtz Institute, Padualaan 8, 3584 CH Utrecht, The Netherlands. {m.kahrimanovic,w.m.bergmanntiest,a.m.l.kappers}@uu.nl

Haptic control in a virtual environment
Gerard de Ruig (0555781), Lourens Visscher (0554498), Lydia van Well (0566644). September 10, 2010. Introduction: With modern technological advancements it is entirely

The Persistence of Vision in Spatio-Temporal Illusory Contours formed by Dynamically-Changing LED Arrays
Damian Gordon and David Vernon, Department of Computer Science, Maynooth College, Ireland. ABSTRACT

3D Space Perception (aka Depth Perception)
The flat retinal image problem: How do we reconstruct 3D space from a 2D image? What information is available to support this process? Interaction

2 Study of an embarked vibro-impact system: experimental analysis
This chapter presents and discusses the experimental part of the thesis. Two test rigs were built at the Dynamics and Vibrations laboratory

Modulating motion-induced blindness with depth ordering and surface completion
Vision Research 42 (2002) 2731-2735. www.elsevier.com/locate/visres. Erich W. Graf, Wendy J. Adams, Martin Lages, Department

Chapter 7: Motion Perception
Computation of Visual Motion. Eye Movements. Using Motion Information. The Man Who Couldn't See Motion. How would you build a

Experiments on the locus of induced motion
Perception & Psychophysics 1977, Vol. 21 (2), 157-161. JOHN N. BASSILI, Scarborough College, University of Toronto, West Hill, Ontario M1C 1A4, Canada, and JAMES

COPYRIGHTED MATERIAL. Overview
In normal experience, our eyes are constantly in motion, roving over and around objects and through ever-changing environments. Through this constant scanning, we build up experience data, which is manipulated

Tuning Self-Motion Perception in Virtual Reality with Visual Illusions
Gerd Bruder, Student Member, IEEE, Frank Steinicke, Member,

COPYRIGHTED MATERIAL OVERVIEW 1
In normal experience, our eyes are constantly in motion, roving over and around objects and through ever-changing environments. Through this constant scanning, we build up experiential data,

Magnification rate of objects in a perspective image to fit to our perception
Japanese Psychological Research 2008, Volume 50, No. 3, 117-127. doi: 10.1111/j.1468-5884.2008.00368.x

Discrimination of Virtual Haptic Textures Rendered with Different Update Rates
Seungmoon Choi and Hong Z. Tan, Haptic Interface Research Laboratory, Purdue University, 465 Northwestern Avenue, West Lafayette,

The shape of luminance increments at the intersection alters the magnitude of the scintillating grid illusion
Kun Qian, Yuki Yamada, Takahiro Kawabe, Kayo Miura. Graduate School of Human-Environment

First-order structure induces the 3-D curvature contrast effect
Vision Research 41 (2001) 3829-3835. www.elsevier.com/locate/visres. Susan F. te Pas, Astrid M.L. Kappers, Psychonomics, Helmholtz

Tone-in-noise detection: Observed discrepancies in spectral integration
Nicolas Le Goff, Technische Universiteit Eindhoven, P.O. Box 513, NL-5600 MB Eindhoven, The Netherlands; Armin Kohlrausch and

Vision and navigation in bees and birds and applications to robotics. Mandyam Srinivasan
Queensland Brain Institute and Institute of Electrical and Electronic Engineering, University of Queensland, and ARC

Visual Effects of Light. Prof. Grega Bizjak, PhD, Laboratory of Lighting and Photometry, Faculty of Electrical Engineering, University of Ljubljana
Light is life: if the sun turned off, life on earth would

IV: Visual Organization and Interpretation
Describe Gestalt psychologists' understanding of perceptual organization, and explain how figure-ground and grouping principles contribute to our perceptions. Explain

Optimizing color reproduction of natural images
S.N. Yendrikhovskij, F.J.J. Blommaert, H. de Ridder, IPO, Center for Research on User-System Interaction, Eindhoven, The Netherlands. Abstract: The paper elaborates

NAVIGATIONAL CONTROL EFFECT ON REPRESENTING VIRTUAL ENVIRONMENTS
Xianjun Sam Zheng, George W. McConkie, and Benjamin Schaeffer, Beckman Institute, University of Illinois at Urbana-Champaign. This present

Spatial Judgments from Different Vantage Points: A Different Perspective
Erik Prytz, Mark Scerbo and Rebecca Kennedy. The self-archived postprint version of this journal article is available at Linköping

Visual Effects of Light
Prof. Grega Bizjak, PhD, Laboratory of Lighting and Photometry, Faculty of Electrical Engineering, University of Ljubljana. Light is life: if the sun turned off, life on earth would

Developing Frogger Player Intelligence Using NEAT and a Score Driven Fitness Function
Davis Ancona and Jake Weiner. Abstract: In this report, we examine the plausibility of implementing a NEAT-based solution

Inventory of Supplemental Information
Current Biology, Volume 20. Supplemental Information: Great Bowerbirds Create Theaters with Forced Perspective When Seen by Their Audience. John A. Endler, Lorna C. Endler, and Natalie R. Doerr

Sensing self motion
Key points: why robots need self-sensing; sensors for proprioception in biological systems and in robot systems; position sensing; velocity and acceleration sensing; force sensing; vision-based

Learning relative directions between landmarks in a desktop virtual environment
Spatial Cognition and Computation 1: 131-144, 1999. 2000 Kluwer Academic Publishers. Printed in the Netherlands. WILLIAM

A Vestibular Sensation: Probabilistic Approaches to Spatial Perception (II). Presented by Shunan Zhang
Vestibular Responses in Dorsal Visual Stream and Their Role in Heading Perception. Recent experiments

Concentric Spatial Maps for Neural Network Based Navigation
Gerald Chao and Michael G. Dyer, Computer Science Department, University of California, Los Angeles, California 90095, U.S.A. gerald@cs.ucla.edu,

Enclosure size and the use of local and global geometric cues for reorientation
Psychon Bull Rev (2012) 19:270-276. DOI 10.3758/s13423-011-0195-5. BRIEF REPORT. Bradley R. Sturz, Martha R. Forloines, Kent

Lab 4 Projectile Motion
What You Need To Know: [Figure 1: Linear Motion Equations]. The Physics: So far in lab you've dealt with an object moving horizontally or an

Thin Lenses. OpenStax
OpenStax-CNX module: m58530. This work is produced by OpenStax-CNX and licensed under the Creative Commons Attribution License 4.0. By the end of this section, you will be able to:

Static and Moving Patterns
Lyn Bartram, IAT 814, week 7, 18.10.2007. Pattern learning: People who work with visualizations must learn the skill of seeing patterns in data. In terms of making visualizations

Quintic Hardware Tutorial Camera Set-Up
All Quintic Live High-Speed cameras are specifically designed to meet a wide range of needs including coaching, performance analysis and research. Quintic LIVE

Feeding human senses through Immersion
Virtual Reality. 1. How many human senses? 2. Overview of key human senses. 3. Sensory stimulation through Immersion. 4. Conclusion.

Thinking About Psychology: The Science of Mind and Behavior 2e. Charles T. Blair-Broeker, Randal M. Ernst
Sensation and Perception chapter, Module 9: Perception. While sensation is the process by

Human heading judgments in the presence of moving objects
Perception & Psychophysics 1996, 58 (6), 836-856. CONSTANCE S. ROYDEN and ELLEN C. HILDRETH, Wellesley College, Wellesley, Massachusetts. When moving

Lecture 4: Colour
The physical description of colour: Colour vision is a very complicated biological and psychological phenomenon. It can be described in many different ways, including by physics, by subjective

The Perception of Optical Flow in Driving Simulators
University of Iowa, Iowa Research Online, Driving Assessment Conference 2009, Jun 23rd. Zhishuai Yin, Northeastern

Three stimuli for visual motion perception compared
Perception & Psychophysics 1982, 32 (1), 1-6. HANS WALLACH, Swarthmore College, Swarthmore, Pennsylvania; ANN O'LEARY, Stanford University, Stanford, California

Extra-retinal and Retinal Amplitude and Phase Errors During Filehne Illusion and Path Perception
Tom C.A. Freeman, Martin S. Banks and James A. Crowell. School of Optometry, University of

Pursuit compensation during self-motion
Perception, 2001, volume 30, pages 1465-1488. DOI:10.1068/p3271. James A Crowell, Department of Psychology, Townshend Hall, Ohio State University, 1885 Neil Avenue,

Low-Frequency Transient Visual Oscillations in the Fly
Kate Denning, Biophysics Laboratory, UCSD, Spring 2004. ABSTRACT: Low-frequency oscillations were observed near the H1 cell in the fly. Using coherence

Line of Sight Method for Tracker Calibration in Projection-Based VR Systems
Marek Czernuszenko, Daniel Sandin, Thomas DeFanti, {marek|dan|tom}@evl.uic.edu, Electronic Visualization Laboratory (EVL)

The Haptic Perception of Spatial Orientations studied with an Haptic Display
Gabriel Baud-Bovy, Faculty of Psychology, UHSR University, Milan, Italy (gabriel@shaker.med.umn.edu), and Edouard Gentaz

A Pilot Study: Introduction of Time-domain Segment to Intensity-based Perception Model of High-frequency Vibration
Nan Cao, Hikaru Nagano, Masashi Konyo, Shogo Okamoto and Satoshi Tadokoro, Graduate School

Extraretinal and retinal amplitude and phase errors during Filehne illusion and path perception
Perception & Psychophysics 2000, 62 (5), 900-909. TOM C. A. FREEMAN, University of California, Berkeley, California

Perception of scene layout from optical contact, shadows, and motion
Perception, 2004, volume 33, pages 1305-1318. DOI:10.1068/p5288. Rui Ni, Myron L Braunstein, Department of Cognitive Sciences, University

Muscular Torque Can Explain Biases in Haptic Length Perception: A Model Study on the Radial-Tangential Illusion
Nienke B. Debats, Idsart Kingma, Peter J. Beek, and Jeroen B.J. Smeets, Research Institute

Section 3. Imaging With A Thin Lens
Object at Infinity: An object at infinity produces a collimated set of rays entering the optical system. Consider the rays from a finite object located on the

Chapter 8: Perceiving Motion
Motion perception occurs (a) when a stationary observer perceives moving stimuli, such as this couple crossing the street; and (b) when a moving observer, like this basketball

Haptic presentation of 3D objects in virtual reality for the visually disabled
M Moranski, A Materka, Institute of Electronics, Technical University of Lodz, Wolczanska 211/215, Lodz, POLAND. marcin.moranski@p.lodz.pl,

Interference in stimuli employed to assess masking by substitution. Bernt Christian Skottun
Ullevaalsalleen 4C, 0852 Oslo, Norway. Short heading: Interference. ABSTRACT: Enns and Di Lollo (1997, Psychological

FEM Approximation of Internal Combustion Chambers for Knock Investigations
2002-01-0237. Copyright 2002 Society of Automotive Engineers, Inc. Sönke Carstens-Behrens, Mark Urlaub, and Johann F. Böhme, Ruhr

PERCEIVING MOTION. CHAPTER 8
Perception (PSY 4204), Christine L. Ruva, Ph.D. Overview of Questions: Why do some animals freeze in place when they sense danger? How do films create movement from still

Static and Moving Patterns (part 2). Lyn Bartram, IAT 814, week 9
5.11.2009. Administrivia: Assignment 3; final projects. Transparency and layering: Transparency affords

A Three-Dimensional Evaluation of Body Representation Change of Human Upper Limb Focused on Sense of Ownership and Sense of Agency
Shunsuke Hamasaki, Atsushi Yamashita and Hajime Asama, Department of Precision

VIRTUAL FIGURE PRESENTATION USING PRESSURE-SLIPPAGE-GENERATION TACTILE MOUSE
Yiru Zhou, Xuecheng Yin, and Masahiro Ohka, Graduate School of Information Science, Nagoya University. Email: ohka@is.nagoya-u.ac.jp

Chapter 3. Eye Tracking Instrumentation
3.1 Overview: The introduction and background in the previous chapters provided context in which eye tracking systems have been used to study how people look at

Neural Network Driving with different Sensor Types in a Virtual Environment
Postgraduate Project, Department of Computer Science, University of Auckland, New Zealand. Benjamin Seidler, supervised by Dr Burkhard

Chapter 5: Color vision remnants. Chapter 6: Depth perception
Lec 12, Jonathan Pillow, Sensation & Perception (PSY 345 / NEU 325), Princeton University, Fall 2017. Other types of color-blindness: Monochromat:

Whole geometry Finite-Difference modeling of the violin
Institute of Musicology, Neue Rabenstr. 13, 20354 Hamburg, Germany. e-mail: R_Bader@t-online.de. A Finite-Difference modelling of the complete violin

Range Sensing strategies
Active range sensors: ultrasound, laser range sensor. Slides adopted from Siegwart and Nourbakhsh. 4.1.6 Range Sensors (time of flight): large range distance measurement, called

Maps in the Brain: Introduction
Overview: A few words about Maps; Cortical Maps: Development and (Re-)Structuring; Auditory Maps; Visual Maps; Place Fields. What are Maps? Intuitive Definition: Maps are

Vision: How does your eye work? Student Advanced Version. Vision Lab - Overview
In this lab, we will explore some of the capabilities and limitations of the eye. We will look at the extent

On Contrast Sensitivity in an Image Difference Model
Garrett M. Johnson and Mark D. Fairchild, Munsell Color Science Laboratory, Center for Imaging Science, Rochester Institute of Technology, Rochester, New

SWARM ROBOTICS: PART 2. Dr. Andrew Vardy, COMP 4766 / 6912, Department of Computer Science, Memorial University of Newfoundland, St. John's, Canada
PRINCIPLE: SELF-ORGANIZATION. Self-organization

COGNITIVE MODEL OF MOBILE ROBOT WORKSPACE
Prof.dr.sc. Mladen Crneković, University of Zagreb, FSB, I. Lučića 5, 10000 Zagreb; Prof.dr.sc. Davor Zorc, University of Zagreb, FSB, I. Lučića 5, 10000 Zagreb

Determining the Relationship Between the Range and Initial Velocity of an Object Moving in Projectile Motion
Sadaf Fatima, Wendy Mixaynath. October 07, 2011. ABSTRACT: A small, spherical object (bearing ball)

Vision
Definition: Sensing of objects by the light reflected off the objects into our eyes. Only occurs when there is interaction of the eyes and the brain (Perception). What is light? Visible

The ground dominance effect in the perception of 3-D layout
Perception & Psychophysics 2005, 67 (5), 802-815. ZHENG BIAN and MYRON L. BRAUNSTEIN, University of California, Irvine, California, and GEORGE J.

Estimating distances and traveled distances in virtual and real environments
University of Iowa, Iowa Research Online, Theses and Dissertations, Fall 2011. Tien Dat Nguyen, University of Iowa. Copyright 2011