
Cover letter to PRESENCE for the manuscript entitled "Sound representing self-motion in virtual environments enhances linear vection".

Authors: Aleksander Väljamäe, Pontus Larsson, Daniel Västfjäll and Mendel Kleiner, Division of Applied Acoustics, Chalmers University of Technology, Sven Hultins gata 8a, 41296 Gothenburg, Sweden.

Corresponding author: Aleksander Väljamäe, Chalmers University of Technology, Division of Applied Acoustics, Sven Hultins gata 8a, 41296 Gothenburg, Sweden.

Additional note: Figures 1, 3-8 are added in the electronic format, as TIFF.

Abstract

Sound is an important, but often neglected, component for creating a self-motion illusion (vection) in virtual reality applications such as motion simulators. Apart from auditory motion cues, sound can provide contextual information representing self-motion in a virtual environment. In two experiments we investigated the benefits of hearing an engine sound when presenting auditory (experiment 1) or auditory-vibrotactile (experiment 2) virtual environments inducing linear vection. The addition of the engine sound to the auditory scene significantly enhanced subjective ratings of vection intensity in experiment 1, and vection onset times but not subjective ratings in experiment 2. Further analysis using individual imagery vividness scores showed that this disparity between vection measures was created by participants with higher kinesthetic imagery. In contrast, for participants with lower kinesthetic imagery scores the engine sound enhanced the vection sensation in both experiments. A high correlation with participants' kinesthetic imagery vividness scores suggests the influence of a first-person perspective in the perception of the engine sound. We hypothesize that self-motion sounds (e.g. the sound of footsteps or an engine sound) represent a specific type of acoustic body-centered feedback in virtual environments. The results may therefore contribute to a better understanding of the role of self-representation sounds (sonic self-avatars) in virtual and augmented environments.

Introduction

Sound effects are often used in modern multi-modal motion simulators (Brooks, 1999; Mourant & Refsland, 2003); however, little is known about the effectiveness of auditory cues in the creation of self-motion experiences. Illusory self-motion is often a desirable product in many professional training or entertainment industry applications where no physical motion simulation is provided. When perceiving illusory self-motion (also referred to as vection), one experiences a locomotion sensation relative to a stable surrounding environment. In real life, for example, a vection sensation may arise when observing a departing train on a neighboring railway track. Historically, vection studies have mainly concentrated on visually induced self-motion (see Andersen (1986) and Hettinger (2002) for reviews). The development of virtual reality (VR) technologies has provided new tools for studying the self-motion illusion, where the contributions of different sensory modalities and their interactions can be assessed (e.g. Harris et al., 2002; Dieterich, Bense, Stephan, Yousry, & Brandt, 2003). Hence, auditory induced vection (AIV) has recently attracted the attention of several research groups (Sakamoto, Osada, Suzuki, & Gyoba, 2004; Kapralos, Zikovitz, Jenkin, & Harris, 2004; Larsson, Västfjäll, & Kleiner, 2004). Research on AIV may provide important input to the design of new multimodal, low-cost motion simulators where visual input is reduced (cf. arcade games, portable devices).

Auditory induced vection can be elicited using moving sound fields, either real (e.g. using loudspeaker arrays) or virtual ones. Virtual acoustic environments are typically created using binaural sound synthesis techniques and rendered via headphones.

In binaural technology, anechoic, non-spatialized ("dry") sounds are convolved with pre-measured Head-Related Transfer Functions (HRTFs) corresponding to the spatial positions of a virtual source to be presented in a simulation (Kleiner, Dalenbäck, & Svensson, 1993). Binaural synthesis of moving sound fields is a computationally demanding task, and its optimization is beneficial in real-time VR simulations. Therefore, finding the physical (e.g. binaural cues, the Doppler effect) and contextual (e.g. the type of sound sources used) auditory cues that are most instrumental in inducing AIV is an important step towards perceptually optimized, cross-modal motion simulators.

However, the vection-inducing power of auditory cues is rather weak compared to visual stimulation. For example, in a classic study on circular AIV by Lackner (1977), the range of self-motion reports varied from 17 to 75% for particular types of rotating sound fields. One aspect of this experiment was the influence of the spatial rendering quality of acoustic sound fields on vection. Results demonstrated a significantly higher number of self-motion reports for sound reproduction based on six uniformly distributed loudspeakers as opposed to stereophonic, headphone-based reproduction. Recently we studied circular AIV employing binaural synthesis, where rotating sound fields were simulated using generic and individualized HRTF catalogues (Väljamäe, Larsson, Västfjäll, & Kleiner, 2004). A significant difference between generic and individualized virtual acoustic fields was found for spatial presence ratings, but not for any of the vection measures. Comparing this study with the results of Lackner (1977), it may be argued that the maximum AIV sensation can be created by an auditory space of an optimal spatial quality level, and that further improvement of spatial sound fidelity (e.g. using individualized HRTFs instead of generic ones) may meet a ceiling effect.

This should be particularly true for multisensory simulations where sound accompanies visual scenes. Recent audio-visual studies on circular vection found that rotating sounds significantly enhance self-motion responses (Riecke, Schulte-Pelkum, Caniard, & Bülthoff, 2005), especially when the visual field of view is reduced (Riecke, Väljamäe, & Schulte-Pelkum, submitted). In addition, Riecke et al. (submitted) tested three levels of spatial sound fidelity and found no significant improvement when the sound field resolution was refined from a maximum of 45 degrees down to a 5 degree resolution (cf. the 6-loudspeaker setup in Lackner, 1977). These results are consistent with a large body of research showing that multisensory percepts are integrated in a statistically optimal fashion (cf. Ernst & Banks, 2002) and that in this integration process a relatively small weight is given to auditory cues.

Cognitive aspects of acoustic information may play a more important role in self-motion simulations than correct physical modeling of auditory motion cues. In our previous experiments on circular AIV (Larsson et al., 2004), we found that the self-motion sensation was stronger when moving sound fields contained ecological, everyday sounds instead of artificial stimuli (pure tones and noises). In addition, creating moving auditory scenes with easily recognizable sound sources appears to help resolve the question "I am moving" versus "the surrounding environment is moving". In Larsson et al. (2004) it was found that spatially moving sounds identified as auditory landmarks (e.g. a fountain or a church bell) give rise to a higher sensation of self-motion than sounds originating from moving sources (e.g. the sound of a driving bus). Studying linear AIV in Väljamäe, Larsson, Västfjäll, and Kleiner (2005), we also addressed the influence of contextual information.

Several recent studies on auditory and audio-visual perception have demonstrated that looming sounds have perceptual and behavioral priority, and that sounds perceived as approaching have greater biological salience than receding ones (e.g. Maier, Neuhoff, Logothetis, & Ghazanfar, 2004). Contrasting virtual environments where a listener was approaching or moving away from two noise sources, we found that the former situation resulted in significantly higher vection responses (Väljamäe et al., 2005). In that experiment, virtual auditory environments for creating both forward and backward self-motion illusions were simulated. The forward-direction bias of the reported illusory locomotion suggested the influence of ecological probability in AIV simulations.

Developing further the ideas on how contextual information influences vection, one can expect a vection-enhancing effect from providing body-centered acoustic feedback via sounds associated with ego-motion. Such sounds are caused by user motion and can be represented by the sound of footsteps, engine sounds, the sound of coins in one's pocket, etc. This idea was inspired by participants' verbal responses in Larsson et al. (2004), where footstep sounds sometimes elicited an illusion of moving along with a crowd. According to the body-centered interaction paradigm introduced by Slater and Usoh (1994), the representation of virtual body states and body-centered sensory feedback (e.g. visible parts of a virtual body) are crucial components for high presence responses in virtual environments (VEs).

In the two experiments presented in this paper we explore how the addition of an engine sound can enhance the illusory self-motion induced by moving sound fields alone (experiment 1) or moving sound fields and vibrotactile stimulation (experiment 2). We hypothesize that, in line with the idea of body-centered sensory feedback (Slater & Usoh, 1994), self-representation sounds may serve as an auditory counterpart of the visual representation of oneself in VR and thus increase presence and self-motion responses.

We believe that this line of research may contribute to a better understanding of the role of sounds for self-representation, sonic self-avatars, in virtual environments.

2 Experiment 1: auditory induced linear vection

In the first experiment we studied how the addition of an engine sound influences the illusory translational self-motion induced by presenting virtual acoustic scenes with moving naturalistic sound objects.

2.1 Method

2.1.1 Apparatus

The experiment was conducted in a special laboratory setup with black curtains surrounding the participant (see Figure 1). Stimuli were played back over Beyerdynamic DT-990 Pro circumaural headphones. Specific measures were taken to amplify the AIV sensation, taking into account the experimental procedure of Lackner (1977) and our previous experiments (Larsson et al., 2004; Väljamäe et al., 2004). During the experiment, participants were blindfolded and seated on a chair mounted on a wheeled platform coupled with a wheeled footrest, as shown in Figure 1. Our intention was to avoid strong influences from visual cues signaling the fixed position of the

chair before and during the experiment. In other words, seeing the wheeled platform made participants aware of a potential for physical self-motion in the setup.

FIGURE 1 HERE

2.1.2 Stimuli and experimental design

Binaural sound synthesis was used to simplify the experimental setup, which was intended to resemble an optimized, cost-effective VR motion simulator. The stimuli were synthesized in Matlab using a catalogue of generic HRTFs measured from a KEMAR mannequin using the procedure described in Väljamäe et al. (2004). For the sound stimulus synthesis, only one horizontal plane (-4 degree elevation) from the HRTF catalogue of 5 degree resolution was used. Each moving sound source was synthesized using separate convolutions with the HRTFs of the azimuth angles corresponding to the source trajectory; these segments were then combined into a single sound track using 50 ms overlaps. No acoustic environment rendering (early reflections, reverberation, etc.) was applied in this experiment.

Participants were presented with 12 ecological sound excerpts varying in auditory scene content and in the spatio-temporal properties of the moving sounds. The within-subjects factorial design was 2 (engine sound on/off) x 3 (initial positions of auditory landmarks) x 2 (velocities of auditory landmarks). Auditory scenes contained either spatialized moving auditory landmarks alone, or landmarks together with a spatialized but stationary engine sound

close to the listener. Both the auditory landmarks and the engine sound were designed to evoke clearly recognizable object stereotypes in participants.

The auditory landmark scenes contained binaural spatializations of sound sources approaching the listener from 3 different starting points and at 2 velocities (a constant velocity of 1 m/s or a slow acceleration). The aim of these spatializations was to elicit linear self-motion illusions in the forward direction, in a similar manner as in previous research on auditory induced rotational vection (Larsson et al., 2004; Väljamäe et al., 2004). Two ecological sounds, "bus" (the sound of a bus engine idling) and "dog" (a barking dog), previously found to be effective in inducing rotational self-motion illusions, were used as input to the spatialization. The frequency range of the spatialized sounds was 0.1 to 13 kHz. Headphone equalization was applied to prevent coloration artifacts and to increase the externalization of the sounds.

Figure 2 illustrates the three moving scene types with different initial positions of the auditory landmarks. The "distant" type simulates a listener approaching the two landmarks; the "close" type, leaving the landmarks behind; and the "mixed" type, moving away from one landmark towards the other. The intensities of the simulated sound sources varied according to the inverse square law (a 6 dB sound pressure level change per distance doubling). It has to be noted that for anechoic simulations distance perception depends on various factors, such as the type of sound source and listener expectations, which makes such acoustic synthesis subjective. In the current experiment, the following assumptions about the initial distances to the moving sound sources were used for the modeling: 1) distant type, 50 meters from the listener for the constant velocity sounds and 20 meters for the accelerating sounds (the main motivation for this difference was to have the point of closest passage of the sound stimuli occur at a similar time in all conditions); 2) close type, 1 meter; 3) mixed type, 5 meters to the nearby source and 50 meters to the distant one (see Figure 2). All moving sources were simulated at 5 m to the left and to the right of the median plane.

FIGURE 2 HERE
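To make the stimulus synthesis concrete, the sketch below combines the two elements just described: per-segment convolution with the HRIR pair of the nearest 5 degree azimuth along the source trajectory, crossfaded with 50 ms overlaps, and a 1/r amplitude gain implementing the inverse square law (6 dB per distance doubling). The hrir_bank lookup, block length and sample rate are illustrative assumptions, not the parameters of the original Matlab implementation.

    import numpy as np
    from scipy.signal import fftconvolve

    FS = 44100                     # sample rate (assumed)
    OVERLAP = int(0.050 * FS)      # 50 ms crossfade, as in the paper

    def spatialize(mono, trajectory, hrir_bank, block_len=4096, ref_dist=1.0):
        """Render a moving source from per-block (azimuth, distance) pairs.

        mono       : 1-D numpy array with the dry source signal
        trajectory : list of (azimuth_deg, distance_m), one entry per block
        hrir_bank  : dict mapping azimuth (multiple of 5 deg) -> (left, right) HRIRs
        """
        max_h = max(len(h[0]) for h in hrir_bank.values())
        out = np.zeros((len(mono) + max_h, 2))
        fade_in = np.hanning(2 * OVERLAP)[:OVERLAP]
        fade_out = fade_in[::-1]
        hop = block_len - OVERLAP
        for i, (az, dist) in enumerate(trajectory):
            start = i * hop
            block = mono[start:start + block_len].copy()
            if block.size == 0:
                break
            # ramp block edges so adjacent HRIR segments crossfade smoothly
            n = min(OVERLAP, block.size)
            block[:n] *= fade_in[:n]
            block[-n:] *= fade_out[-n:]
            gain = ref_dist / max(dist, ref_dist)         # 1/r: -6 dB per doubling
            hl, hr = hrir_bank[int(5 * round(az / 5.0))]  # nearest 5-degree HRIR
            for ch, h in enumerate((hl, hr)):
                seg = fftconvolve(block * gain, h)
                out[start:start + len(seg), ch] += seg
        return out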

The engine sound was intended to represent the sound of a small electrical vehicle, a "Cybercart" (such as an electrical wheelchair). This sound was synthesized using software based on the Synthesis ToolKit library (Cook & Scavone, 2004) and consisted of ten sinusoidal signals and a recorded noisy component mimicking the sound of a gearbox. The fundamental frequency of the sinusoidal part of the sound was at about 105 Hz, while the other sine components were spread over a wider frequency range. The software developed to produce the engine sound also allows the frequencies of the sine components to be varied in such a way as to resemble an increase in the rpm of an electrical engine. This feature was used in the experiment to create a sound mimicking the Cybercart's engine acceleration.

All stimuli were approximately 1 minute long. The sound sources' motion pattern was the following: a 4 sec. stationary phase, 3 sec. of acceleration, and a 60 sec. constant velocity or acceleration phase. A Hann half-window of 0.5 sec. duration was applied to achieve smooth stimulus onsets and offsets. The engine sound started with a clear start-up noise burst at the onset of the sound sources' motion and was thereafter kept at a constant rpm level.
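For illustration, a minimal sketch of such an engine-like stimulus is given below: ten sinusoidal partials over a 105 Hz fundamental plus a noise component, with the 0.5 s Hann half-window on- and off-ramps described above. Treating the partials as harmonics with 1/k amplitude rolloff and using white noise are our assumptions; the original STK-based synthesizer is not reproduced here.

    import numpy as np

    FS = 44100

    def engine_sound(duration_s, f0=105.0, n_partials=10, noise_level=0.1):
        """Additive engine-like tone: sinusoidal partials plus a noisy component."""
        t = np.arange(int(duration_s * FS)) / FS
        # ten sine components; harmonic spacing and 1/k rolloff are assumptions
        tone = sum(np.sin(2 * np.pi * k * f0 * t) / k
                   for k in range(1, n_partials + 1))
        # white noise as a stand-in for the recorded "gearbox" component
        noise = noise_level * np.random.randn(t.size)
        sig = tone / n_partials + noise
        # 0.5 s Hann half-window on- and off-ramps, as in the paper
        n_ramp = int(0.5 * FS)
        ramp = np.hanning(2 * n_ramp)[:n_ramp]
        sig[:n_ramp] *= ramp
        sig[-n_ramp:] *= ramp[::-1]
        return sig / np.max(np.abs(sig))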

2.1.3 Measures

To assess auditory-induced vection, two direct verbal measures were used. Vection intensity corresponded to the level of the subjective sensation when experiencing self-motion. On the vection convincingness scale, subjects had to report how convinced they were of the direction of the perceived self-motion. It should be noted that convincingness and intensity ratings are often highly correlated. Participants were also asked to rate their spatial presence sensation in the virtual acoustic space, defined as a sensation of being actually present in the virtual world. Ratings on all three measures were given on a numerical rating scale.

During experiments on auditory-induced vection participants have to be blindfolded, and it is known that eye closure in darkness activates imagination and sensory systems (Marx et al., 2003). Therefore, individual data were collected using the shortened form of Betts's Questionnaire upon Mental Imagery (QMI; Sheehan, 1967) in order to investigate the possible influence of participants' imagination on vection and spatial presence responses. The QMI assesses mental imagery vividness for the visual, auditory, kinesthetic, cutaneous, olfactory and organic senses. For each modality, 5 mental images are to be evoked and evaluated for their vividness on a 7-point reverse scale (e.g. seeing "the sun as it is sinking below the horizon").
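For clarity, the sketch below shows how such per-modality vividness indices can be computed by averaging the five item ratings within each modality; on the reverse scale, lower averages mean more vivid imagery. The column layout is hypothetical and does not come from the original scoring procedure.

    import pandas as pd

    MODALITIES = ["visual", "auditory", "kinesthetic",
                  "cutaneous", "olfactory", "organic"]

    def qmi_scores(responses: pd.DataFrame) -> pd.DataFrame:
        """Average the 5 item ratings per modality for each participant.

        Assumes columns named 'visual_1' ... 'organic_5', rated on the
        QMI reverse scale (lower scores = more vivid imagery).
        """
        return pd.DataFrame({
            m: responses[[f"{m}_{i}" for i in range(1, 6)]].mean(axis=1)
            for m in MODALITIES
        })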

2.1.4 Procedure

Twenty-three naive participants (11 male) with a mean age of 24.5 (SD 3.8) took part in the first experiment. All participants filled in a web-based shortened form of the QMI before coming to the experiment. Participants were instructed verbally about the experimental procedure, and it was explained that their main task was to report the direction of perceived self-motion. No clear indication was given as to whether the experimental chair would be physically moved or not. A short training session with two different stimuli was performed before the experiment started.

All stimuli in experiment 1 were presented in full length. At the end of each stimulus playback, participants reported their spatial presence sensation verbally. If a sensation of self-motion was perceived, ratings of the intensity and convincingness of the perceived self-motion were also recorded. Stimuli were presented in randomized order (varied across participants) with a short break after 6 excerpts. Apart from the verbal responses to the questionnaire, verbal probing was done by the experiment leader. After completing the experiment, participants were debriefed, thanked and paid for their participation.

2.2 Results and discussion

2.2.1 Main effect of the engine sound

Participants' verbal responses on vection intensity, convincingness and spatial presence were submitted to three separate 2x3x2 within-subjects ANOVAs. Greenhouse-Geisser correction was used whenever unequal variances occurred. Verbal reports of self-motion intensity showed a significant enhancing effect of the engine sound, F(1,22) = 4.63, p < 0.05 (see Figure 3). Convincingness ratings followed the pattern of the intensity ratings but did not reach significance, F(1,22) = 3.95. Spatial presence ratings did not show any significant effect of the engine sound factor. No other main effects reached significance.

FIGURE 3 HERE

2.2.2 Interaction between the engine sound and the auditory landmarks' initial position

The different stimulus types elicited self-motion in 52-83% of the participants (from 12 to 19 vection reports out of 23; see Figure 3). Only one subject did not report experiencing self-motion for any of the conditions. In comparison, in our previous experiments on circular AIV, rotating acoustic fields elicited a self-motion sensation in only 23-50% of participants (Larsson et al., 2004). When comparing the engine sound's impact across the 3 types of initial positions of the acoustic landmarks (see Figure 2), the following pattern of vection reports appeared. For the distant auditory scene type no difference occurred: 18 (auditory landmarks) vs. 18 (auditory landmarks + engine) reports; for the close type (14 vs. 19) and for the mixed type (16 vs. 18) the engine sound showed an enhancing effect. The distribution of vection reports validated the parametric analysis results presented above.
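As an illustration of this analysis, a minimal sketch of the 2x3x2 within-subjects ANOVA using statsmodels' AnovaRM is shown below; the long-format file and column names are hypothetical, and the Greenhouse-Geisser correction used in the paper would have to be applied separately (e.g. via the pingouin package), since AnovaRM does not provide it.

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Hypothetical long format: one row per participant x condition, with
    # columns: subject, engine (on/off), position (distant/close/mixed),
    # velocity (constant/accelerating), intensity (vection intensity rating).
    data = pd.read_csv("experiment1_ratings.csv")

    res = AnovaRM(data,
                  depvar="intensity",
                  subject="subject",
                  within=["engine", "position", "velocity"]).fit()
    print(res.anova_table)   # F and p values for main effects and interactions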

In addition, the pattern of interaction observed between the auditory scene type and the engine sound motivated post hoc tests (1), even though no significant interaction was found in the ANOVAs. Paired-sample t-tests showed a significant difference in the convincingness ratings for the close type of auditory scene (Figure 4). A similar pattern could be seen for the vection intensity ratings, but not for the spatial presence ratings.

(1) The authors would like to thank the anonymous reviewer who pointed out the need for this more detailed analysis.

FIGURE 4 HERE

In our previous experiments on linear AIV (Väljamäe et al., 2005), auditory scenes containing distant, approaching sounds resulted in higher vection reports than close, receding ones. It seems more natural to prioritize the landmarks that are ahead of you (distant type) rather than the ones behind you (close type). The current data, shown in Figure 4, exhibit a similar trend; however, the discrepancy disappears with the addition of the engine sound. One possible explanation might be that the engine sound served as an iconic representation of self-motion and was thus used by participants for mental interpolation of their motion dynamics once the auditory landmarks slowly faded away.

2.2.3 Kinesthetic imagery effects

When submitted as a covariate in the ANOVA analyses of vection intensity and convincingness, the kinesthetic, but not the other, imagery scores significantly interacted with the engine sound factor.

The kinesthetic imagery (KI) vividness score differs markedly from the other items in the shortened form of Betts's QMI. In comparison with static mental imagery, it assesses the ability to imagine dynamic processes, including locomotion (see Paivio and Clark (1991) and references therein). More recent studies (e.g. Hall, Pongrac, & Buckolz, 1985) separate motor imagery into visual motor imagery (seeing oneself performing the task) and kinesthetic motor imagery (imagining the feeling that the actual task produces). Interestingly, the questionnaire scores for these two imagery modalities can sometimes correlate significantly, depending on whether external or internal visual motor imagery (VMI) is assessed. Sport psychologists speak of external VMI when an action is seen from a third-person perspective, as opposed to seeing one's own, internal, performance. Participants' sense of agency for mentally visualized actions can be changed by the instructional set, and only internal VMI correlates with kinesthetic motor imagery (Callow & Hardy, 2004). The questions forming the KI score in the shortened form of the QMI assess motor imagery from the first-person perspective: the five items require imagining the situations of "running upstairs", "springing across a gutter", "drawing a circle on paper", "reaching up to a high shelf" and "kicking something out of your way" (Betts, 1909). The significant interaction between kinesthetic imagery and the engine sound condition led to a further examination of this effect.

FIGURE 5 HERE

In order to visualize the effect of the KI score on the engine sound factor, a median split at 2.2 on the averaged index (scores ranged from 1.2 to 4) was used to form two groups of vivid and non-vivid kinesthetic imagers.
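A minimal sketch of this median split and of the paired comparisons reported below, assuming a hypothetical per-participant table with averaged KI scores and mean ratings per condition:

    import pandas as pd
    from scipy.stats import ttest_rel

    # Hypothetical wide format: one row per participant with columns
    # 'ki_score', 'intensity_landmarks', 'intensity_landmarks_engine'.
    df = pd.read_csv("experiment1_by_participant.csv")

    median = df["ki_score"].median()          # 2.2 in experiment 1
    nonvivid = df[df["ki_score"] > median]    # reverse scale: higher = less vivid
    vivid = df[df["ki_score"] <= median]

    # within the non-vivid group: landmarks alone vs. landmarks + engine
    t, p = ttest_rel(nonvivid["intensity_landmarks"],
                     nonvivid["intensity_landmarks_engine"])
    print(f"non-vivid KI group: t = {t:.2f}, p = {p:.4f}")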

The results for the intensity ratings and spatial presence are presented in Figure 5 (the pattern of the convincingness ratings closely resembled the vection intensity ratings). It can be seen that participants in the non-vivid kinesthetic imagery (KI) group were significantly influenced by auditory scenes containing the engine sound. Paired-sample t-tests (df = 9) were used to examine the differences between the rating means within the non-vivid KI group; the results were: 1) intensity ratings, p < 0.001 (31.7 vs. 55.5); 2) convincingness ratings, p < 0.005 (31.1 vs. 55.0); 3) spatial presence ratings, p < 0.05, t(9) = -2.4 (51.0 vs. 56.4). Between-groups analysis of the auditory-landmarks-only condition did not show any significant difference. These results tentatively suggest that participants with low KI scores might benefit more from the addition of an engine sound representing their motion in a virtual environment.

3 Experiment 2: auditory-vibrotactile induced linear vection

This experiment was designed to replicate and extend the results of the first experiment, where the engine sound significantly enhanced the self-motion responses. In particular, new conditions containing only the engine sound were introduced. As part of the experiment, additional multimodal stimulation was used in some of the trials for the purposes of another study. The auditory-vibrotactile interaction effects from this experiment are reported in Väljamäe, Larsson, Västfjäll, and Kleiner (2006).

3.1 Method

3.1.1 Apparatus

The same experimental setup as in experiment 1 was used (see section 2.1.1). In addition to the auditory stimuli, vibrotactile and somatosensory stimulation was presented in some conditions (see the stimuli and design section below). Vibrotactile stimulation was applied using mechanical shakers (RBH FX-80 tactile transducers) mounted under the chair and the footrest (see Figure 1), and also via a custom-built subwoofer placed behind the chair.

3.1.2 Stimuli and design

In experiment 2, the same binaural synthesis procedure with generic HRTFs and the same sounds as in experiment 1 were used to create auditory scenes eliciting linear vection in the forward direction (see Figure 2, the distant type). The environment in which a user approaches the auditory landmarks at a constant speed had proved to be the most robust for inducing linear AIV, as observed in experiment 1. The same sound source trajectories as in experiment 1 were used (see section 2.1.2).

Participants were exposed to a 3x9 within-subjects factorial design containing 27 sound excerpts. It included 3 different types of auditory scenes: the engine sound alone, the moving auditory landmarks alone, and the concurrent presentation of the engine sound

with the landmarks; and 9 different conditions with additional sensory stimulation. The purely auditory stimulation in some trials was combined with: 1) 2 types of vibrotactile stimulation via the mechanical shakers under the seat and footrest; 2) 2 types of low frequency stimulation; 3) 2 types of combined low frequency and vibrotactile stimulation; and 4) 2 types of wind-flow simulation.

3.1.3 Measures

The same three verbal measures as in experiment 1 were used: self-motion intensity, convincingness and spatial presence. In the current experiment, Vection Onset Time (VOT), i.e. the time it takes until the participant reports full vection intensity, was also measured. VOT, also referred to as vection latency, is a sensitive psychophysical measure that has often been used in visually induced self-motion research (e.g. Riecke et al., 2005; Wright, DiZio, & Lackner, 2006). If no self-motion was perceived, VOT was set to the full duration of the stimulus presentation (63 sec. for this experiment).

3.1.4 Procedure

Twenty-three naive participants (10 female) with a mean age of 24.5 (SD 4.8) took part in this second experiment. Participants were briefed verbally about the test procedure, and a short training session was performed before the actual experiment started. Each of the 27 stimulus conditions was presented once in randomized order (varied across participants), with short pauses after every 9 conditions to minimize fatigue effects.

During stimulus presentation, participants had to verbally report the direction of perceived self-motion, and the timing of this report was registered as the VOT. Once vection was reported, stimulus playback was stopped and participants' ratings of self-motion intensity, convincingness and spatial presence were recorded. If no self-motion was experienced during the sound excerpt, participants rated only the overall spatial presence sensation at the end of the stimulus playback. It should be noted that a recent study on audio-visual circular vection contrasted stopped versus continuous stimulus playback and found no significant differences in vection and presence responses (Väljamäe, Tajadura-Jiménez, Larsson, Västfjäll, & Kleiner, in press). Apart from the verbal responses to the questionnaire, verbal probing was done by the experiment manager. After completing the experiment, participants were debriefed, thanked and paid for their participation.

3.2 Results and discussion

3.2.1 Main effect of the engine sound

All subjects reported self-motion in at least some of the trials; from 14 to 23 vection reports (61-100%) were given for a particular stimulus type. The lowest number of vection self-reports was observed for the engine-only condition, and the highest for the conditions containing the engine sound, the auditory landmarks, and vibrotactile stimulation together. Averaged over the 9 stimulation conditions, the environment with the engine sound alone resulted in 16 out of 23 vection reports (70% of participants).

The vection onset data and the vection intensity, convincingness and spatial presence ratings were submitted to four separate 3 (auditory scene type) x 9 (stimulation type) within-subjects ANOVAs. Greenhouse-Geisser correction was used whenever unequal variances occurred. For the vection intensity ratings, a main effect of auditory scene type reached significance, F(1.3,29.4) = 8.77 (Figure 6). Planned Bonferroni-corrected pairwise comparisons showed that: a) the difference between the engine alone and the landmarks alone approached significance; b) the difference between the engine alone and the engine with landmarks was significant, p < 0.05; c) the difference between the landmarks alone and the engine with landmarks did not reach significance (p = 0.2).

FIGURE 6 HERE

The convincingness ratings were similar to the intensity ratings, with a significant main effect of auditory scene type, F(1.4,29.9) = 8.17 (Figure 6). Planned Bonferroni-corrected pairwise comparisons showed that: a) the difference between the engine alone and the landmarks alone approached significance; b) the difference between the engine alone and the engine with landmarks was significant at the p < 0.05 level; c) the difference between the landmarks alone and the engine with landmarks did not reach significance (p = 0.3).

Verbal spatial presence ratings showed a highly significant main effect of auditory scene type, F(1.8,39) = 13.57 (see Figure 6). In contrast to the vection ratings, the condition with the auditory landmarks alone received the highest mean (58.1), followed by the engine and landmarks together (53.7) and the engine alone (42.8).

Bonferroni-corrected pairwise comparisons showed that: a) the difference between the engine alone and the landmarks alone was highly significant; b) the difference between the engine alone and the engine with landmarks was significant at the p < 0.01 level; c) the difference between the landmarks alone and the engine with landmarks did not reach significance (p = 0.3).

For the vection onset times, no main effect of auditory scene type was found (p = 0.2). However, Bonferroni-corrected pairwise comparisons revealed a significant difference, at p < 0.05, between the landmarks alone and the engine with landmarks (see Figure 7, all participants' data).

FIGURE 7 HERE

The cross-modal effects of the different stimulation types observed in this experiment were presented in Väljamäe et al. (2006) and are only partially reported here. The analysis of the data demonstrated that the vibrotactile enhancement of AIV was maximal when the engine sound was present in the auditory scenes (see Väljamäe et al., 2006 for details).

3.2.2 Kinesthetic imagery effects

As in experiment 1, a median split, at 2.4 on the averaged index (scores ranged from 1 to 4), was used to form two groups of participants with vivid and non-vivid kinesthetic imagery, based on the data from the shortened form of Betts's QMI.

The between-groups difference for the vection intensity ratings reached significance, F(1,21) = 4.8, p < 0.05, with means of 34.2 (SE = 5.9) for the non-vivid KI group and 48.4

for the vivid KI group. For the vection convincingness means, a similar but non-significant pattern (p = 0.1) was observed: 39.0 (SE = 5.5) for the non-vivid and 48.9 (SE = 5.7) for the vivid KI group. The vection onset times followed the same pattern: participants in the vivid KI group had a significantly lower vection latency (30.8 sec., SE = 3.5) than the non-vivid KI group (41.8 sec., SE = 3.6), F(1,21) = 4.87, p < 0.05 (see Figure 7, groups data). No such between-subjects difference occurred for the spatial presence ratings.

The analysis presented above shows that both the vivid and the non-vivid KI groups were affected by the addition of the engine sound. However, the effect was stronger for the vivid KI group, in contrast to experiment 1, where only participants in the non-vivid KI group were affected by the sonic motion metaphor. In order to understand this disparity, we further analyzed the KI group ratings, since in some of the trials in experiment 2 the auditory scenes were coupled with concurrent vibrotactile stimulation. Visualization of the data showed a difference in the response patterns of the vivid and non-vivid KI groups.

FIGURE 8 HERE

In Väljamäe et al. (2006) we found that additional vibration significantly enhances self-motion when the engine sound is present in the auditory scene. The same pattern could be seen in both the verbal and the psychophysical responses of participants from the non-vivid KI group. However, for the vivid KI group the same effect can only be observed for the vection onset times, while the self-reported vection intensity ratings show the reverse pattern (Figure 8). One possible explanation is that the perception of external sensory stimulation may depend on participants' imagery vividness.

Participants with vivid kinesthetic imagery might learn particular self-motion cues presented in some trials and later imagine these cues in subsequent trials (e.g. by forming strong associations between the engine sound and the vibrotactile stimulation). This might explain the disparity between the self-reported self-motion sensations and more objective psychophysical measures such as the vection onset times. In contrast, participants in the non-vivid KI group might require the continuous presence of all sensory cues to achieve a strong self-motion experience. Due to the small sample size these results should be seen as tentative, and further, more focused research is needed to test this hypothesis.

4 General discussion

In the two experiments, the addition of a self-motion sound, the engine sound, significantly affected participants' sensation of illusory self-motion. Two factors could contribute to this vection enhancement.

First, sound plays a crucial role in our perception of the dynamics of the surrounding world. Children often augment objects' motion (e.g. toy cars) in a game by re-creating, vocalizing, the corresponding sounds (Chion, 1994). The human auditory system is known to have much better temporal resolution than the other sensory systems, and recent findings on audio-visual rhythm perception suggest that the dynamic information encoding mechanism is auditory in nature (Guttman, Gilroy, & Blake, 2005). It might be that the engine sound used in our experiments increased the perceived dynamics of the presented auditory scenes and thus enhanced the self-motion sensation. Recently, rhythmic music has been reported to reduce motion sickness (Yen-Pik-Sang et al., 2003).

One may speculate that this positive effect might have occurred due to the motion cues conveyed in the music.

Second, apart from dynamics information, sounds representing one's own self-motion (the sound of footsteps, engine sounds, sounds produced by moving clothes, etc.) can manifest our embodiment in real or virtual environments. An embodiment sensation can be seen as a brain process that continuously compares internal models of body behavior with the actual sensory inputs caused by the body's actions (Harris & Budge, 2003). Therefore, a high state of embodiment in virtual environments requires specific body-centered sensory feedback, following the body-centered interaction paradigm introduced by Slater and Usoh (1994). Body-centered interaction includes both the representation of virtual body states and body-centered sensory feedback such as, for example, visible parts of the virtual body and its shadow. In their insightful experiments, Slater and Usoh showed that seeing one's own virtual representation, a self-avatar, significantly increased presence ratings. In a similar vein, one could expect that sounds representing the user's virtual body in a VE, a sonic self-avatar, might have a similar effect on presence and self-motion ratings. The importance of self-motion sounds has recently been demonstrated by Nordahl (2006), where interactive sound-producing footwear significantly changed users' movement patterns in a virtual environment. Because of the human ability to become embodied with mechanical extensions of one's body, e.g. skates, a bicycle or a wheelchair (Harris & Budge, 2003), sounds representing the user's vehicle can also be considered body-centered.

Although in the presented experiments the engine sound provided only imagery feedback about the user's self-motion, the study results could be extended to interactive scenarios where the engine sound's rpm is linked with a gas pedal and the visual feedback in a motion simulator.

Kinesthetic imagery is closely related to the perception of one's own bodily movements (e.g. Callow & Hardy, 2004). The high correlation between the kinesthetic imagery vividness scores and the vection-enhancing effect of the engine sound gives further support to the body-centered nature of the presented results. Participants with low KI scores seem to benefit more from the addition of an engine sound representing their motion in a virtual environment than those with high KI ratings. In addition, in experiment 2, the verbal responses were inconsistent with the psychophysical vection data for participants with vivid KI scores. A similar dissociation between subjective sensations and more objective measures of vection has been reported before (Freeman, Avons, Meddis, Pearson, & IJsselsteijn, 2000; Wright et al., 2006). As hypothesized by Wright et al. (2006), the vividness of the self-motion experience can differ from perceptual processes and might be strongly related to cognitive, contextual factors. In agreement with this hypothesis, our findings show that individual kinesthetic imagery scores modulate the vection-enhancing power of the metaphorical, self-motion representing engine sound. However, more focused studies are needed to address these individual differences; for example, the more recently developed Vividness of Movement Imagery Questionnaire (Isaac, Marks, & Russel, 1986) could be used to detect the type of motor imagery used by participants. Nevertheless, one might speculate that participants with vivid imagery may require weaker sensory stimulation and benefit from a periodic rather than continuous supply of external self-motion cues.

A sonic self-avatar can be an important part of a user's virtual body representation, adding sound-specific features to body-centered design in VEs. For example, we hear our own breath continuously, while we see our body parts only occasionally. Even in a magical situation resembling the idea of the invisible man, we would still be aware of the sonic representation of ourselves. Hearing one's own breath can also manifest and emphasize a person's psychological state. For example, in a scene from 2001: A Space Odyssey (Kubrick, 1968), the sound of an astronaut's breathing masterfully puts the viewer into the closed space of a spacesuit and represents the psychological state of the protagonist. Body sounds such as breathing or a heartbeat may be associated by listeners with their own bodily state. We found further support for this hypothesis in our recent study on auditory-vibrotactile false heartbeat feedback, where judgments of emotional stimuli changed once the heartbeat sound resembled the participant's own bodily experience (Tajadura-Jiménez, Väljamäe, & Västfjäll, in press). Although, to the best of the authors' knowledge, no research has been done on the perception of one's own chewing sounds, these sounds also manifest our embodiment and are part of sonic self-avatar design, especially in cases of cross-modal interaction effects involving olfactory and taste stimulation.

Considering the multimodal nature of human perception, both seeing and hearing an action provides the user with information about external world dynamics (Wilson & Knoblich, 2005). At the same time, self-motion and self-representation sounds can be particularly important in the light of recent findings in mirror neuron research. Mirror neurons are known to discharge both when a monkey performs an action and when it sees another individual performing a similar action.

Recently it has been found that merely hearing the sound of a related action creates a similar response in the corresponding mirror neurons (Kohler et al., 2002). Therefore, the sonic representation of the body and of self-motion might have an influence not only on the cognitive, metaphorical level, but may also have neurophysiological correlates (cf. Brooks et al., 2007).

To conclude, auditory cues appear to be an important, but often overlooked, component in multimodal motion simulator designs. Future research may encompass at least three ways of sound-based enhancement of self-motion simulations. First, there are a number of auditory cues related to motion perception (e.g. Väljamäe et al., 2005), such as sound intensity and the related point of closest passage of passing objects, binaural cues, and the Doppler effect (if higher velocities need to be simulated). The temporal structure of auditory environments can provide information about the overall scene dynamics and may compensate for visual imperfections via cross-modal interaction mechanisms (Guttman et al., 2005). Second, from the everyday listening perspective, sounds recognized as auditory landmarks (Larsson et al., 2004), or the self-motion sounds addressed in this paper, may enhance vection sensations through their contextual information. Finally, the spatial properties of the rendered sound are important not only from the auditory motion cue perspective, but also as carriers of general information about a VE's spatial characteristics, the so-called spaciousness of the perceived environment. Therefore, despite the fact that auditory cues alone are insufficient to elicit strong sensations of illusory self-motion, sounds in combination with other modalities may turn out to be invaluable sensation enhancers, especially when the visual cues are reduced.

Acknowledgements

The work presented in this paper was funded by the EU grant POEMS-IST and the Swedish Science Council (VR). The authors would like to thank the Max Planck Institute for Biological Cybernetics for building the motion simulator setup at the CRAG-MSA laboratory, Ana Tajadura-Jiménez for conducting the experiments, and three anonymous reviewers for very helpful comments and critique.

References

Andersen, G. J. (1986). Perception of self-motion: Psychophysical and computational approaches. Psychological Bulletin, 99(1).

Betts, G. H. (1909). The distribution and functions of mental imagery. New York: Teachers College, Columbia University.

Brooks, A., van der Zwan, R., Billard, A., Petreska, B., Clarke, S., & Blanke, O. (2007). Auditory motion affects visual biological motion processing. Neuropsychologia, 45(3), 523.

Brooks Jr., F. P. (1999). What's real about virtual reality? IEEE Computer Graphics and Applications, 19(6).

Callow, N., & Hardy, L. (2004). The relationship between the use of kinaesthetic imagery and different visual imagery perspectives. Journal of Sports Sciences, 22.

Chion, M. (1994). Audio-vision: Sound on screen. New York: Columbia University Press.

Cook, P. R., & Scavone, G. P. (2004). The Synthesis ToolKit in C++ (STK).

Dieterich, M., Bense, S., Stephan, T., Yousry, T. A., & Brandt, T. (2003). fMRI signal increases and decreases in cortical areas during small-field optokinetic stimulation and central fixation. Experimental Brain Research, 148(1).

Ernst, M. O., & Banks, M. S. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415.

Freeman, J., Avons, S. E., Meddis, R., Pearson, D. E., & IJsselsteijn, W. A. (2000). Using behavioral realism to estimate presence: A study of the utility of postural responses to motion stimuli. Presence: Teleoperators and Virtual Environments, 9.

Guttman, S. E., Gilroy, L. A., & Blake, R. (2005). Hearing what the eyes see: Auditory encoding of visual temporal sequences. Psychological Science, 16(3).

Hall, C. R., Pongrac, J., & Buckolz, E. (1985). The measurement of imagery ability. Human Movement Science, 4.

Harris, C., & Budge, F. (2003). Embodiment as behavioural plasticity. Paper presented at the 2nd Conference on Making Sense of Health, Illness and Disease, Oxford, UK, July 2003.

Harris, L. R., Jenkin, M., Zikovitz, D., Redlick, F., Jaekl, P., Jasiobedzka, U., et al. (2002). Simulating self motion I: Cues for the perception of motion. Virtual Reality, 6(2).

Hettinger, L. J. (2002). Illusory self-motion in virtual environments. In K. M. Stanney (Ed.), Handbook of Virtual Environments. Lawrence Erlbaum.

Isaac, A., Marks, D. F., & Russel, D. G. (1986). An instrument for assessing imagery of movement: The Vividness of Movement Imagery Questionnaire (VMIQ). Journal of Mental Imagery, 10.

Kapralos, B., Zikovitz, D., Jenkin, M., & Harris, L. R. (2004). Auditory cues in the perception of self-motion. Paper presented at the 116th Convention of the Audio Engineering Society, Berlin, Germany, May 2004.

Kleiner, M., Dalenbäck, B.-I., & Svensson, P. (1993). Auralization - an overview. Journal of the Audio Engineering Society, 41(11).

Kohler, E., Keysers, C., Umilta, M. A., Fogassi, L., Gallese, V., & Rizzolatti, G. (2002). Hearing sounds, understanding actions: Action representation in mirror neurons. Science, 297.

Kubrick, S. (Producer & Director). (1968). 2001: A space odyssey [Motion picture]. USA: Warner Bros.

Lackner, J. R. (1977). Induction of illusory self-rotation and nystagmus by a rotating sound-field. Aviation, Space and Environmental Medicine, 48(2).

Larsson, P., Västfjäll, D., & Kleiner, M. (2004). Perception of self-motion and presence in auditory virtual environments. In M. A. Raya & B. R. Solaz (Eds.), Proceedings of PRESENCE 2004, 7th International Workshop on Presence, Valencia, Spain.

Marx, E., Stephan, T., Nolte, A., Deutschländer, A., Seelos, K. C., Dieterich, M., et al. (2003). Eye closure in darkness animates sensory systems. NeuroImage, 19.

Maier, J. X., Neuhoff, J. G., Logothetis, N. K., & Ghazanfar, A. A. (2004). Multisensory integration of looming signals by rhesus monkeys. Neuron, 43.

Mourant, R., & Refsland, D. (2003). Developing a 3D sound environment for a driving simulator. Paper presented at the 9th International Conference on Virtual Systems and Multimedia (VSMM), Quebec, Canada, October 2003.

Nordahl, R. (2006). Increasing the motion of users in photo-realistic virtual environments by utilising auditory rendering of the environment and ego-motion. In C. C. Bracken & M. Lombard (Eds.), Proceedings of PRESENCE 2006, 9th International Workshop on Presence, Ohio, USA.

Paivio, A., & Clark, J. M. (1991). Static versus dynamic imagery. In C. Cornoldi & M. A. McDaniels (Eds.), Imagery and cognition. New York: Springer-Verlag.

Riecke, B. E., Schulte-Pelkum, J., Caniard, F., & Bülthoff, H. H. (2005). Influence of auditory cues on the visually-induced self-motion illusion (circular vection) in virtual reality. In M. Slater (Ed.), Proceedings of PRESENCE 2005, 8th International Workshop on Presence, London, UK.

Riecke, B. E., Väljamäe, A., & Schulte-Pelkum, J. Moving sounds enhance the visually-induced self-motion illusion (circular vection) in virtual reality. Submitted to ACM Transactions on Applied Perception.

Sakamoto, S., Osada, Y., Suzuki, Y., & Gyoba, J. (2004). The effects of linearly moving sound images on self-motion perception. Acoustical Science and Technology, 25(1).

Sheehan, P. W. (1967). A shortened form of Betts' Questionnaire upon Mental Imagery. Journal of Clinical Psychology, 23.

Slater, M., & Usoh, M. (1994). Body centred interaction in immersive virtual environments. In N. Magnenat Thalmann & D. Thalmann (Eds.), Artificial Life and Virtual Reality. John Wiley and Sons.

Tajadura-Jiménez, A., Väljamäe, A., & Västfjäll, D. Self-representation in mediated environments: The experience of emotions modulated by auditory-vibrotactile heartbeat. To appear in CyberPsychology and Behavior.

Väljamäe, A., Larsson, P., Västfjäll, D., & Kleiner, M. (2004). Auditory presence, individualized head-related transfer functions and illusory ego-motion in virtual environments. In M. A. Raya & B. R. Solaz (Eds.), Proceedings of PRESENCE 2004, 7th International Workshop on Presence, Valencia, Spain.

Väljamäe, A., Larsson, P., Västfjäll, D., & Kleiner, M. (2005). Travelling without moving: Auditory scene cues for translational self-motion. Paper presented at the 11th International Conference on Auditory Display, Limerick, Ireland, July 2005.

Väljamäe, A., Larsson, P., Västfjäll, D., & Kleiner, M. (2006). Vibrotactile enhancement of auditory induced self-motion and presence. Journal of the Audio Engineering Society, 54.

Väljamäe, A., Tajadura-Jiménez, A., Larsson, P., Västfjäll, D., & Kleiner, M. Binaural bone-conducted sound in virtual environments: Evaluation of a portable, multimodal motion simulator prototype. To appear in Acoustical Science and Technology.

Wilson, M., & Knoblich, G. (2005). The case for motor involvement in perceiving conspecifics. Psychological Bulletin, 131(3).

Wong, S. C. P., & Frost, B. J. (1981). The effect of visual-vestibular conflict on the latency of steady-state visually induced subjective rotation. Perception & Psychophysics, 30(3).

Wright, W. G., DiZio, P., & Lackner, J. R. (2006). Perceived self-motion in two visual contexts: Dissociable mechanisms underlie perception. Journal of Vestibular Research, 16.

Yen-Pik-Sang, F., Billar, J. P., Golding, J. F., & Gresty, M. A. (2003). Behavioral methods of alleviating motion sickness: Effectiveness of controlled breathing and music audiotape. Journal of Travel Medicine, 10.

FIGURE CAPTIONS AND FIGURES

Figure 1. Laboratory setup: a participant sitting on the chair mounted on a wheeled platform coupled with a wheeled footrest. [one-column width]

Figure 2. Representation of the auditory scene types used for creating translational auditory induced vection (AIV) in experiment 1. The arrow in front of the listener indicates the expected AIV direction; the other arrows indicate the motion direction of the virtual sound objects. [two-column width]

Figure 3. Self-reported vection intensity, convincingness and spatial presence ratings in experiment 1 (* marks significance at the p < 0.05 level). Whiskers show standard errors of the means. [one-column width]

Figure 4. Dependence of the self-reported vection convincingness on the initial positions of the auditory landmarks and the addition of the engine sound in experiment 1 (* marks significance at the p < 0.05 level, corrected for multiple comparisons). Whiskers show standard errors of the means. [one-column width]

Figure 5. Vection intensity (left panel) and spatial presence (right panel) ratings for vivid and non-vivid kinesthetic imagers (KI) (* marks significance at the p < 0.05 level, *** at the p < 0.001 level). Whiskers show standard errors of the means. [one-column width]

Figure 6. Vection intensity, convincingness and spatial presence ratings in experiment 2 (* marks significance at the p < 0.05 level, ** at the p < 0.01 level, *** at the p < 0.001 level). Whiskers show standard errors of the means. [one-column width]

Figure 7. Vection onset times (maximum 63 s) in experiment 2, for all participants' data and for the kinesthetic imagery groups (* marks significance at the p < 0.05 level). Whiskers show standard errors of the means. [one-column width]

Figure 8. Vection intensity ratings (left panel) and vection onset times (right panel) for the vivid kinesthetic imagery (KI) group for auditory-only and auditory-vibrotactile stimuli. Whiskers show standard errors of the means. [one-column width]


More information

Behavioural Realism as a metric of Presence

Behavioural Realism as a metric of Presence Behavioural Realism as a metric of Presence (1) Jonathan Freeman jfreem@essex.ac.uk 01206 873786 01206 873590 (2) Department of Psychology, University of Essex, Wivenhoe Park, Colchester, Essex, CO4 3SQ,

More information

Auditory-induced presence in mixed reality environments and related technology

Auditory-induced presence in mixed reality environments and related technology Author s accepted manuscript. Original is available at www.springerlink.com Auditory-induced presence in mixed reality environments and related technology Pontus Larsson 1,, Aleksander Väljamäe 1,2, Daniel

More information

BERNHARD E. RIECKE PUBLICATIONS 1

BERNHARD E. RIECKE PUBLICATIONS 1 BERNHARD E. RIECKE 1 Refereed papers Submitted Bizzocchi, L., Belgacem, B.Y., Quan, B., Suzuki, W., Barheri, M., Riecke, B.E. (submitted) Re:Cycle - a Generative Ambient Video Engine, DAC09 Meilinger,

More information

Discrimination of Virtual Haptic Textures Rendered with Different Update Rates

Discrimination of Virtual Haptic Textures Rendered with Different Update Rates Discrimination of Virtual Haptic Textures Rendered with Different Update Rates Seungmoon Choi and Hong Z. Tan Haptic Interface Research Laboratory Purdue University 465 Northwestern Avenue West Lafayette,

More information

Haptic Cueing of a Visual Change-Detection Task: Implications for Multimodal Interfaces

Haptic Cueing of a Visual Change-Detection Task: Implications for Multimodal Interfaces In Usability Evaluation and Interface Design: Cognitive Engineering, Intelligent Agents and Virtual Reality (Vol. 1 of the Proceedings of the 9th International Conference on Human-Computer Interaction),

More information

Takeharu Seno 1,3,4, Akiyoshi Kitaoka 2, Stephen Palmisano 5 1

Takeharu Seno 1,3,4, Akiyoshi Kitaoka 2, Stephen Palmisano 5 1 Perception, 13, volume 42, pages 11 1 doi:1.168/p711 SHORT AND SWEET Vection induced by illusory motion in a stationary image Takeharu Seno 1,3,4, Akiyoshi Kitaoka 2, Stephen Palmisano 1 Institute for

More information

Vection in depth during consistent and inconsistent multisensory stimulation

Vection in depth during consistent and inconsistent multisensory stimulation University of Wollongong Research Online Faculty of Health and Behavioural Sciences - Papers (Archive) Faculty of Science, Medicine and Health 2011 Vection in depth during consistent and inconsistent multisensory

More information

When What You Hear is What You See: Presence and Auditory-Visual Integration in Virtual Environments

When What You Hear is What You See: Presence and Auditory-Visual Integration in Virtual Environments 1 When What You Hear is What You See: Presence and Auditory-Visual Integration in Virtual Environments Pontus Larsson 1, Daniel Västfjäll 1,2, Pierre Olsson 3, Mendel Kleiner 1 1 Applied Acoustics, Chalmers

More information

A triangulation method for determining the perceptual center of the head for auditory stimuli

A triangulation method for determining the perceptual center of the head for auditory stimuli A triangulation method for determining the perceptual center of the head for auditory stimuli PACS REFERENCE: 43.66.Qp Brungart, Douglas 1 ; Neelon, Michael 2 ; Kordik, Alexander 3 ; Simpson, Brian 4 1

More information

III. Publication III. c 2005 Toni Hirvonen.

III. Publication III. c 2005 Toni Hirvonen. III Publication III Hirvonen, T., Segregation of Two Simultaneously Arriving Narrowband Noise Signals as a Function of Spatial and Frequency Separation, in Proceedings of th International Conference on

More information

Tone-in-noise detection: Observed discrepancies in spectral integration. Nicolas Le Goff a) Technische Universiteit Eindhoven, P.O.

Tone-in-noise detection: Observed discrepancies in spectral integration. Nicolas Le Goff a) Technische Universiteit Eindhoven, P.O. Tone-in-noise detection: Observed discrepancies in spectral integration Nicolas Le Goff a) Technische Universiteit Eindhoven, P.O. Box 513, NL-5600 MB Eindhoven, The Netherlands Armin Kohlrausch b) and

More information

Perception in Immersive Virtual Reality Environments ROB ALLISON DEPT. OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCE YORK UNIVERSITY, TORONTO

Perception in Immersive Virtual Reality Environments ROB ALLISON DEPT. OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCE YORK UNIVERSITY, TORONTO Perception in Immersive Virtual Reality Environments ROB ALLISON DEPT. OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCE YORK UNIVERSITY, TORONTO Overview Basic concepts and ideas of virtual environments

More information

Binaural auralization based on spherical-harmonics beamforming

Binaural auralization based on spherical-harmonics beamforming Binaural auralization based on spherical-harmonics beamforming W. Song a, W. Ellermeier b and J. Hald a a Brüel & Kjær Sound & Vibration Measurement A/S, Skodsborgvej 7, DK-28 Nærum, Denmark b Institut

More information

Introduction. 1.1 Surround sound

Introduction. 1.1 Surround sound Introduction 1 This chapter introduces the project. First a brief description of surround sound is presented. A problem statement is defined which leads to the goal of the project. Finally the scope of

More information

CAN GALVANIC VESTIBULAR STIMULATION REDUCE SIMULATOR ADAPTATION SYNDROME? University of Guelph Guelph, Ontario, Canada

CAN GALVANIC VESTIBULAR STIMULATION REDUCE SIMULATOR ADAPTATION SYNDROME? University of Guelph Guelph, Ontario, Canada CAN GALVANIC VESTIBULAR STIMULATION REDUCE SIMULATOR ADAPTATION SYNDROME? Rebecca J. Reed-Jones, 1 James G. Reed-Jones, 2 Lana M. Trick, 2 Lori A. Vallis 1 1 Department of Human Health and Nutritional

More information

Realtime 3D Computer Graphics Virtual Reality

Realtime 3D Computer Graphics Virtual Reality Realtime 3D Computer Graphics Virtual Reality Marc Erich Latoschik AI & VR Lab Artificial Intelligence Group University of Bielefeld Virtual Reality (or VR for short) Virtual Reality (or VR for short)

More information

Perception of room size and the ability of self localization in a virtual environment. Loudspeaker experiment

Perception of room size and the ability of self localization in a virtual environment. Loudspeaker experiment Perception of room size and the ability of self localization in a virtual environment. Loudspeaker experiment Marko Horvat University of Zagreb Faculty of Electrical Engineering and Computing, Zagreb,

More information

Psychoacoustic Cues in Room Size Perception

Psychoacoustic Cues in Room Size Perception Audio Engineering Society Convention Paper Presented at the 116th Convention 2004 May 8 11 Berlin, Germany 6084 This convention paper has been reproduced from the author s advance manuscript, without editing,

More information

University of Huddersfield Repository

University of Huddersfield Repository University of Huddersfield Repository Lee, Hyunkook Capturing and Rendering 360º VR Audio Using Cardioid Microphones Original Citation Lee, Hyunkook (2016) Capturing and Rendering 360º VR Audio Using Cardioid

More information

What you see is not what you get. Grade Level: 3-12 Presentation time: minutes, depending on which activities are chosen

What you see is not what you get. Grade Level: 3-12 Presentation time: minutes, depending on which activities are chosen Optical Illusions What you see is not what you get The purpose of this lesson is to introduce students to basic principles of visual processing. Much of the lesson revolves around the use of visual illusions

More information

Using the perceptually oriented approach to optimize spatial presence & ego-motion simulation

Using the perceptually oriented approach to optimize spatial presence & ego-motion simulation Max Planck Institut für biologische Kybernetik Max Planck Institute for Biological Cybernetics Technical Report No. 153. Using the perceptually oriented approach to optimize spatial presence & ego-motion

More information

Spatial Audio Reproduction: Towards Individualized Binaural Sound

Spatial Audio Reproduction: Towards Individualized Binaural Sound Spatial Audio Reproduction: Towards Individualized Binaural Sound WILLIAM G. GARDNER Wave Arts, Inc. Arlington, Massachusetts INTRODUCTION The compact disc (CD) format records audio with 16-bit resolution

More information

INVESTIGATING BINAURAL LOCALISATION ABILITIES FOR PROPOSING A STANDARDISED TESTING ENVIRONMENT FOR BINAURAL SYSTEMS

INVESTIGATING BINAURAL LOCALISATION ABILITIES FOR PROPOSING A STANDARDISED TESTING ENVIRONMENT FOR BINAURAL SYSTEMS 20-21 September 2018, BULGARIA 1 Proceedings of the International Conference on Information Technologies (InfoTech-2018) 20-21 September 2018, Bulgaria INVESTIGATING BINAURAL LOCALISATION ABILITIES FOR

More information

The effect of 3D audio and other audio techniques on virtual reality experience

The effect of 3D audio and other audio techniques on virtual reality experience The effect of 3D audio and other audio techniques on virtual reality experience Willem-Paul BRINKMAN a,1, Allart R.D. HOEKSTRA a, René van EGMOND a a Delft University of Technology, The Netherlands Abstract.

More information

Spatial Judgments from Different Vantage Points: A Different Perspective

Spatial Judgments from Different Vantage Points: A Different Perspective Spatial Judgments from Different Vantage Points: A Different Perspective Erik Prytz, Mark Scerbo and Kennedy Rebecca The self-archived postprint version of this journal article is available at Linköping

More information

Spatial Audio & The Vestibular System!

Spatial Audio & The Vestibular System! ! Spatial Audio & The Vestibular System! Gordon Wetzstein! Stanford University! EE 267 Virtual Reality! Lecture 13! stanford.edu/class/ee267/!! Updates! lab this Friday will be released as a video! TAs

More information

REAL TIME WALKTHROUGH AURALIZATION - THE FIRST YEAR

REAL TIME WALKTHROUGH AURALIZATION - THE FIRST YEAR REAL TIME WALKTHROUGH AURALIZATION - THE FIRST YEAR B.-I. Dalenbäck CATT, Mariagatan 16A, Gothenburg, Sweden M. Strömberg Valeo Graphics, Seglaregatan 10, Sweden 1 INTRODUCTION Various limited forms of

More information

The Perception of Optical Flow in Driving Simulators

The Perception of Optical Flow in Driving Simulators University of Iowa Iowa Research Online Driving Assessment Conference 2009 Driving Assessment Conference Jun 23rd, 12:00 AM The Perception of Optical Flow in Driving Simulators Zhishuai Yin Northeastern

More information

The Haptic Perception of Spatial Orientations studied with an Haptic Display

The Haptic Perception of Spatial Orientations studied with an Haptic Display The Haptic Perception of Spatial Orientations studied with an Haptic Display Gabriel Baud-Bovy 1 and Edouard Gentaz 2 1 Faculty of Psychology, UHSR University, Milan, Italy gabriel@shaker.med.umn.edu 2

More information

ARTICLE IN PRESS. Computers & Graphics

ARTICLE IN PRESS. Computers & Graphics Computers & Graphics 33 (2009) 47 58 Contents lists available at ScienceDirect Computers & Graphics journal homepage: www.elsevier.com/locate/cag Technical Section Circular, linear, and curvilinear vection

More information

The psychoacoustics of reverberation

The psychoacoustics of reverberation The psychoacoustics of reverberation Steven van de Par Steven.van.de.Par@uni-oldenburg.de July 19, 2016 Thanks to Julian Grosse and Andreas Häußler 2016 AES International Conference on Sound Field Control

More information

Perception of pitch. Importance of pitch: 2. mother hemp horse. scold. Definitions. Why is pitch important? AUDL4007: 11 Feb A. Faulkner.

Perception of pitch. Importance of pitch: 2. mother hemp horse. scold. Definitions. Why is pitch important? AUDL4007: 11 Feb A. Faulkner. Perception of pitch AUDL4007: 11 Feb 2010. A. Faulkner. See Moore, BCJ Introduction to the Psychology of Hearing, Chapter 5. Or Plack CJ The Sense of Hearing Lawrence Erlbaum, 2005 Chapter 7 1 Definitions

More information

A Pilot Study: Introduction of Time-domain Segment to Intensity-based Perception Model of High-frequency Vibration

A Pilot Study: Introduction of Time-domain Segment to Intensity-based Perception Model of High-frequency Vibration A Pilot Study: Introduction of Time-domain Segment to Intensity-based Perception Model of High-frequency Vibration Nan Cao, Hikaru Nagano, Masashi Konyo, Shogo Okamoto 2 and Satoshi Tadokoro Graduate School

More information

Haptic Camera Manipulation: Extending the Camera In Hand Metaphor

Haptic Camera Manipulation: Extending the Camera In Hand Metaphor Haptic Camera Manipulation: Extending the Camera In Hand Metaphor Joan De Boeck, Karin Coninx Expertise Center for Digital Media Limburgs Universitair Centrum Wetenschapspark 2, B-3590 Diepenbeek, Belgium

More information

MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL REALITY TECHNOLOGIES

MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL REALITY TECHNOLOGIES INTERNATIONAL CONFERENCE ON ENGINEERING AND PRODUCT DESIGN EDUCATION 4 & 5 SEPTEMBER 2008, UNIVERSITAT POLITECNICA DE CATALUNYA, BARCELONA, SPAIN MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL

More information

A Vestibular Sensation: Probabilistic Approaches to Spatial Perception (II) Presented by Shunan Zhang

A Vestibular Sensation: Probabilistic Approaches to Spatial Perception (II) Presented by Shunan Zhang A Vestibular Sensation: Probabilistic Approaches to Spatial Perception (II) Presented by Shunan Zhang Vestibular Responses in Dorsal Visual Stream and Their Role in Heading Perception Recent experiments

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Psychological and Physiological Acoustics Session 2aPPa: Binaural Hearing

More information

Crossmodal Attention & Multisensory Integration: Implications for Multimodal Interface Design. In the Realm of the Senses

Crossmodal Attention & Multisensory Integration: Implications for Multimodal Interface Design. In the Realm of the Senses Crossmodal Attention & Multisensory Integration: Implications for Multimodal Interface Design Charles Spence Department of Experimental Psychology, Oxford University In the Realm of the Senses Wickens

More information

Enhancing the Visually Induced Self-Motion Illusion (Vection) under Natural Viewing Conditions in Virtual Reality

Enhancing the Visually Induced Self-Motion Illusion (Vection) under Natural Viewing Conditions in Virtual Reality Enhancing the Visually Induced Self-Motion Illusion (Vection) under Natural Viewing Conditions in Virtual Reality Bernhard E. Riecke 1, Jörg Schulte-Pelkum 1, Marios N. Avraamides 2, and Heinrich H. Bülthoff

More information

Sound source localization and its use in multimedia applications

Sound source localization and its use in multimedia applications Notes for lecture/ Zack Settel, McGill University Sound source localization and its use in multimedia applications Introduction With the arrival of real-time binaural or "3D" digital audio processing,

More information

Chapter 73. Two-Stroke Apparent Motion. George Mather

Chapter 73. Two-Stroke Apparent Motion. George Mather Chapter 73 Two-Stroke Apparent Motion George Mather The Effect One hundred years ago, the Gestalt psychologist Max Wertheimer published the first detailed study of the apparent visual movement seen when

More information

3D SOUND CAN HAVE A NEGATIVE IMPACT ON THE PERCEPTION OF VISUAL CONTENT IN AUDIOVISUAL REPRODUCTIONS

3D SOUND CAN HAVE A NEGATIVE IMPACT ON THE PERCEPTION OF VISUAL CONTENT IN AUDIOVISUAL REPRODUCTIONS 3D SOUND CAN HAVE A NEGATIVE IMPACT ON THE PERCEPTION OF VISUAL CONTENT IN AUDIOVISUAL REPRODUCTIONS Catarina Mendonça, Olli Rummukainen, Ville Pulkki Dept. Processing and Acoustics Aalto University P

More information

Cybersickness, Console Video Games, & Head Mounted Displays

Cybersickness, Console Video Games, & Head Mounted Displays Cybersickness, Console Video Games, & Head Mounted Displays Lesley Scibora, Moira Flanagan, Omar Merhi, Elise Faugloire, & Thomas A. Stoffregen Affordance Perception-Action Laboratory, University of Minnesota,

More information

Externalization in binaural synthesis: effects of recording environment and measurement procedure

Externalization in binaural synthesis: effects of recording environment and measurement procedure Externalization in binaural synthesis: effects of recording environment and measurement procedure F. Völk, F. Heinemann and H. Fastl AG Technische Akustik, MMK, TU München, Arcisstr., 80 München, Germany

More information

Self-motion perception from expanding and contracting optical flows overlapped with binocular disparity

Self-motion perception from expanding and contracting optical flows overlapped with binocular disparity Vision Research 45 (25) 397 42 Rapid Communication Self-motion perception from expanding and contracting optical flows overlapped with binocular disparity Hiroyuki Ito *, Ikuko Shibata Department of Visual

More information

From Binaural Technology to Virtual Reality

From Binaural Technology to Virtual Reality From Binaural Technology to Virtual Reality Jens Blauert, D-Bochum Prominent Prominent Features of of Binaural Binaural Hearing Hearing - Localization Formation of positions of the auditory events (azimuth,

More information

Comparison of Haptic and Non-Speech Audio Feedback

Comparison of Haptic and Non-Speech Audio Feedback Comparison of Haptic and Non-Speech Audio Feedback Cagatay Goncu 1 and Kim Marriott 1 Monash University, Mebourne, Australia, cagatay.goncu@monash.edu, kim.marriott@monash.edu Abstract. We report a usability

More information

VIRTUAL ACOUSTICS: OPPORTUNITIES AND LIMITS OF SPATIAL SOUND REPRODUCTION

VIRTUAL ACOUSTICS: OPPORTUNITIES AND LIMITS OF SPATIAL SOUND REPRODUCTION ARCHIVES OF ACOUSTICS 33, 4, 413 422 (2008) VIRTUAL ACOUSTICS: OPPORTUNITIES AND LIMITS OF SPATIAL SOUND REPRODUCTION Michael VORLÄNDER RWTH Aachen University Institute of Technical Acoustics 52056 Aachen,

More information

Reinventing movies How do we tell stories in VR? Diego Gutierrez Graphics & Imaging Lab Universidad de Zaragoza

Reinventing movies How do we tell stories in VR? Diego Gutierrez Graphics & Imaging Lab Universidad de Zaragoza Reinventing movies How do we tell stories in VR? Diego Gutierrez Graphics & Imaging Lab Universidad de Zaragoza Computer Graphics Computational Imaging Virtual Reality Joint work with: A. Serrano, J. Ruiz-Borau

More information

Multi variable strategy reduces symptoms of simulator sickness

Multi variable strategy reduces symptoms of simulator sickness Multi variable strategy reduces symptoms of simulator sickness Jorrit Kuipers Green Dino BV, Wageningen / Delft University of Technology 3ME, Delft, The Netherlands, jorrit@greendino.nl Introduction Interactive

More information

The Effect of Haptic Feedback on Basic Social Interaction within Shared Virtual Environments

The Effect of Haptic Feedback on Basic Social Interaction within Shared Virtual Environments The Effect of Haptic Feedback on Basic Social Interaction within Shared Virtual Environments Elias Giannopoulos 1, Victor Eslava 2, María Oyarzabal 2, Teresa Hierro 2, Laura González 2, Manuel Ferre 2,

More information

Facilitation of Affection by Tactile Feedback of False Heartbeat

Facilitation of Affection by Tactile Feedback of False Heartbeat Facilitation of Affection by Tactile Feedback of False Heartbeat Narihiro Nishimura n-nishimura@kaji-lab.jp Asuka Ishi asuka@kaji-lab.jp Michi Sato michi@kaji-lab.jp Shogo Fukushima shogo@kaji-lab.jp Hiroyuki

More information

Binaural Hearing. Reading: Yost Ch. 12

Binaural Hearing. Reading: Yost Ch. 12 Binaural Hearing Reading: Yost Ch. 12 Binaural Advantages Sounds in our environment are usually complex, and occur either simultaneously or close together in time. Studies have shown that the ability to

More information

Head-Movement Evaluation for First-Person Games

Head-Movement Evaluation for First-Person Games Head-Movement Evaluation for First-Person Games Paulo G. de Barros Computer Science Department Worcester Polytechnic Institute 100 Institute Road. Worcester, MA 01609 USA pgb@wpi.edu Robert W. Lindeman

More information

Chapter 9. Conclusions. 9.1 Summary Perceived distances derived from optic ow

Chapter 9. Conclusions. 9.1 Summary Perceived distances derived from optic ow Chapter 9 Conclusions 9.1 Summary For successful navigation it is essential to be aware of one's own movement direction as well as of the distance travelled. When we walk around in our daily life, we get

More information

VIRTUAL REALITY Introduction. Emil M. Petriu SITE, University of Ottawa

VIRTUAL REALITY Introduction. Emil M. Petriu SITE, University of Ottawa VIRTUAL REALITY Introduction Emil M. Petriu SITE, University of Ottawa Natural and Virtual Reality Virtual Reality Interactive Virtual Reality Virtualized Reality Augmented Reality HUMAN PERCEPTION OF

More information

Auditory Localization

Auditory Localization Auditory Localization CMPT 468: Sound Localization Tamara Smyth, tamaras@cs.sfu.ca School of Computing Science, Simon Fraser University November 15, 2013 Auditory locatlization is the human perception

More information

Graphics and Perception. Carol O Sullivan

Graphics and Perception. Carol O Sullivan Graphics and Perception Carol O Sullivan Carol.OSullivan@cs.tcd.ie Trinity College Dublin Outline Some basics Why perception is important For Modelling For Rendering For Animation Future research - multisensory

More information

Welcome to this course on «Natural Interactive Walking on Virtual Grounds»!

Welcome to this course on «Natural Interactive Walking on Virtual Grounds»! Welcome to this course on «Natural Interactive Walking on Virtual Grounds»! The speaker is Anatole Lécuyer, senior researcher at Inria, Rennes, France; More information about him at : http://people.rennes.inria.fr/anatole.lecuyer/

More information

Perception of pitch. Definitions. Why is pitch important? BSc Audiology/MSc SHS Psychoacoustics wk 5: 12 Feb A. Faulkner.

Perception of pitch. Definitions. Why is pitch important? BSc Audiology/MSc SHS Psychoacoustics wk 5: 12 Feb A. Faulkner. Perception of pitch BSc Audiology/MSc SHS Psychoacoustics wk 5: 12 Feb 2009. A. Faulkner. See Moore, BCJ Introduction to the Psychology of Hearing, Chapter 5. Or Plack CJ The Sense of Hearing Lawrence

More information

Vision V Perceiving Movement

Vision V Perceiving Movement Vision V Perceiving Movement Overview of Topics Chapter 8 in Goldstein (chp. 9 in 7th ed.) Movement is tied up with all other aspects of vision (colour, depth, shape perception...) Differentiating self-motion

More information

Vision V Perceiving Movement

Vision V Perceiving Movement Vision V Perceiving Movement Overview of Topics Chapter 8 in Goldstein (chp. 9 in 7th ed.) Movement is tied up with all other aspects of vision (colour, depth, shape perception...) Differentiating self-motion

More information

The Effect of Brainwave Synchronization on Concentration and Performance: An Examination of German Students

The Effect of Brainwave Synchronization on Concentration and Performance: An Examination of German Students The Effect of Brainwave Synchronization on Concentration and Performance: An Examination of German Students Published online by the Deluwak UG Research Department, December 2016 Abstract This study examines

More information

AGING AND STEERING CONTROL UNDER REDUCED VISIBILITY CONDITIONS. Wichita State University, Wichita, Kansas, USA

AGING AND STEERING CONTROL UNDER REDUCED VISIBILITY CONDITIONS. Wichita State University, Wichita, Kansas, USA AGING AND STEERING CONTROL UNDER REDUCED VISIBILITY CONDITIONS Bobby Nguyen 1, Yan Zhuo 2, & Rui Ni 1 1 Wichita State University, Wichita, Kansas, USA 2 Institute of Biophysics, Chinese Academy of Sciences,

More information

The Persistence of Vision in Spatio-Temporal Illusory Contours formed by Dynamically-Changing LED Arrays

The Persistence of Vision in Spatio-Temporal Illusory Contours formed by Dynamically-Changing LED Arrays The Persistence of Vision in Spatio-Temporal Illusory Contours formed by Dynamically-Changing LED Arrays Damian Gordon * and David Vernon Department of Computer Science Maynooth College Ireland ABSTRACT

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Psychological and Physiological Acoustics Session 3pPP: Multimodal Influences

More information

Human Vision and Human-Computer Interaction. Much content from Jeff Johnson, UI Wizards, Inc.

Human Vision and Human-Computer Interaction. Much content from Jeff Johnson, UI Wizards, Inc. Human Vision and Human-Computer Interaction Much content from Jeff Johnson, UI Wizards, Inc. are these guidelines grounded in perceptual psychology and how can we apply them intelligently? Mach bands:

More information

Perception of pitch. Definitions. Why is pitch important? BSc Audiology/MSc SHS Psychoacoustics wk 4: 7 Feb A. Faulkner.

Perception of pitch. Definitions. Why is pitch important? BSc Audiology/MSc SHS Psychoacoustics wk 4: 7 Feb A. Faulkner. Perception of pitch BSc Audiology/MSc SHS Psychoacoustics wk 4: 7 Feb 2008. A. Faulkner. See Moore, BCJ Introduction to the Psychology of Hearing, Chapter 5. Or Plack CJ The Sense of Hearing Lawrence Erlbaum,

More information

Perception. What We Will Cover in This Section. Perception. How we interpret the information our senses receive. Overview Perception

Perception. What We Will Cover in This Section. Perception. How we interpret the information our senses receive. Overview Perception Perception 10/3/2002 Perception.ppt 1 What We Will Cover in This Section Overview Perception Visual perception. Organizing principles. 10/3/2002 Perception.ppt 2 Perception How we interpret the information

More information

Rendering Moving Tactile Stroke on the Palm Using a Sparse 2D Array

Rendering Moving Tactile Stroke on the Palm Using a Sparse 2D Array Rendering Moving Tactile Stroke on the Palm Using a Sparse 2D Array Jaeyoung Park 1(&), Jaeha Kim 1, Yonghwan Oh 1, and Hong Z. Tan 2 1 Korea Institute of Science and Technology, Seoul, Korea {jypcubic,lithium81,oyh}@kist.re.kr

More information

SIMULATION OF SMALL HEAD-MOVEMENTS ON A VIRTUAL AUDIO DISPLAY USING HEADPHONE PLAYBACK AND HRTF SYNTHESIS. György Wersényi

SIMULATION OF SMALL HEAD-MOVEMENTS ON A VIRTUAL AUDIO DISPLAY USING HEADPHONE PLAYBACK AND HRTF SYNTHESIS. György Wersényi SIMULATION OF SMALL HEAD-MOVEMENTS ON A VIRTUAL AUDIO DISPLAY USING HEADPHONE PLAYBACK AND HRTF SYNTHESIS György Wersényi Széchenyi István University Department of Telecommunications Egyetem tér 1, H-9024,

More information

COM325 Computer Speech and Hearing

COM325 Computer Speech and Hearing COM325 Computer Speech and Hearing Part III : Theories and Models of Pitch Perception Dr. Guy Brown Room 145 Regent Court Department of Computer Science University of Sheffield Email: g.brown@dcs.shef.ac.uk

More information

From acoustic simulation to virtual auditory displays

From acoustic simulation to virtual auditory displays PROCEEDINGS of the 22 nd International Congress on Acoustics Plenary Lecture: Paper ICA2016-481 From acoustic simulation to virtual auditory displays Michael Vorländer Institute of Technical Acoustics,

More information

From Encoding Sound to Encoding Touch

From Encoding Sound to Encoding Touch From Encoding Sound to Encoding Touch Toktam Mahmoodi King s College London, UK http://www.ctr.kcl.ac.uk/toktam/index.htm ETSI STQ Workshop, May 2017 Immersing a person into the real environment with Very

More information

Scholarly Article Review. The Potential of Using Virtual Reality Technology in Physical Activity Settings. Aaron Krieger.

Scholarly Article Review. The Potential of Using Virtual Reality Technology in Physical Activity Settings. Aaron Krieger. Scholarly Article Review The Potential of Using Virtual Reality Technology in Physical Activity Settings Aaron Krieger October 22, 2015 The Potential of Using Virtual Reality Technology in Physical Activity

More information

Glasgow eprints Service

Glasgow eprints Service Hoggan, E.E and Brewster, S.A. (2006) Crossmodal icons for information display. In, Conference on Human Factors in Computing Systems, 22-27 April 2006, pages pp. 857-862, Montréal, Québec, Canada. http://eprints.gla.ac.uk/3269/

More information

THE INTERACTION BETWEEN HEAD-TRACKER LATENCY, SOURCE DURATION, AND RESPONSE TIME IN THE LOCALIZATION OF VIRTUAL SOUND SOURCES

THE INTERACTION BETWEEN HEAD-TRACKER LATENCY, SOURCE DURATION, AND RESPONSE TIME IN THE LOCALIZATION OF VIRTUAL SOUND SOURCES THE INTERACTION BETWEEN HEAD-TRACKER LATENCY, SOURCE DURATION, AND RESPONSE TIME IN THE LOCALIZATION OF VIRTUAL SOUND SOURCES Douglas S. Brungart Brian D. Simpson Richard L. McKinley Air Force Research

More information

The peripheral drift illusion: A motion illusion in the visual periphery

The peripheral drift illusion: A motion illusion in the visual periphery Perception, 1999, volume 28, pages 617-621 The peripheral drift illusion: A motion illusion in the visual periphery Jocelyn Faubert, Andrew M Herbert Ecole d'optometrie, Universite de Montreal, CP 6128,

More information

Chapter 6. Experiment 3. Motion sickness and vection with normal and blurred optokinetic stimuli

Chapter 6. Experiment 3. Motion sickness and vection with normal and blurred optokinetic stimuli Chapter 6. Experiment 3. Motion sickness and vection with normal and blurred optokinetic stimuli 6.1 Introduction Chapters 4 and 5 have shown that motion sickness and vection can be manipulated separately

More information

Grasping Multisensory Integration: Proprioceptive Capture after Virtual Object Interactions

Grasping Multisensory Integration: Proprioceptive Capture after Virtual Object Interactions Grasping Multisensory Integration: Proprioceptive Capture after Virtual Object Interactions Johannes Lohmann (johannes.lohmann@uni-tuebingen.de) Department of Computer Science, Cognitive Modeling, Sand

More information

Measuring impulse responses containing complete spatial information ABSTRACT

Measuring impulse responses containing complete spatial information ABSTRACT Measuring impulse responses containing complete spatial information Angelo Farina, Paolo Martignon, Andrea Capra, Simone Fontana University of Parma, Industrial Eng. Dept., via delle Scienze 181/A, 43100

More information

Haptic control in a virtual environment

Haptic control in a virtual environment Haptic control in a virtual environment Gerard de Ruig (0555781) Lourens Visscher (0554498) Lydia van Well (0566644) September 10, 2010 Introduction With modern technological advancements it is entirely

More information

Chapter 2 Introduction to Haptics 2.1 Definition of Haptics

Chapter 2 Introduction to Haptics 2.1 Definition of Haptics Chapter 2 Introduction to Haptics 2.1 Definition of Haptics The word haptic originates from the Greek verb hapto to touch and therefore refers to the ability to touch and manipulate objects. The haptic

More information

Effects of Simulation Fidelty on User Experience in Virtual Fear of Public Speaking Training An Experimental Study

Effects of Simulation Fidelty on User Experience in Virtual Fear of Public Speaking Training An Experimental Study Effects of Simulation Fidelty on User Experience in Virtual Fear of Public Speaking Training An Experimental Study Sandra POESCHL a,1 a and Nicola DOERING a TU Ilmenau Abstract. Realistic models in virtual

More information

CSE 165: 3D User Interaction. Lecture #14: 3D UI Design

CSE 165: 3D User Interaction. Lecture #14: 3D UI Design CSE 165: 3D User Interaction Lecture #14: 3D UI Design 2 Announcements Homework 3 due tomorrow 2pm Monday: midterm discussion Next Thursday: midterm exam 3D UI Design Strategies 3 4 Thus far 3DUI hardware

More information

Analysis of Frontal Localization in Double Layered Loudspeaker Array System

Analysis of Frontal Localization in Double Layered Loudspeaker Array System Proceedings of 20th International Congress on Acoustics, ICA 2010 23 27 August 2010, Sydney, Australia Analysis of Frontal Localization in Double Layered Loudspeaker Array System Hyunjoo Chung (1), Sang

More information