Proceedings of Meetings on Acoustics, Vol. 19, 050131 (2013)
Volume 19, 2013  http://acousticalsociety.org/

ICA 2013 Montreal, Montreal, Canada, 2-7 June 2013
Psychological and Physiological Acoustics
Session 3pPP: Multimodal Influences on Auditory Spatial Perception

3pPP6. Cue weighting and vestibular mediation of temporal dynamics in sound localization via head rotation

Ewan A. Macpherson*

*Corresponding author's address: National Centre for Audiology, Western University, 1201 Western Rd, London, N6G 1H1, ON, Canada, ewan.macpherson@nca.uwo.ca

Our studies have quantified the salience and weighting of dynamic acoustic cues in sound localization via head rotation. The results support three key findings: 1) low-frequency interaural time difference (ITD) is the dominant dynamic binaural difference cue; 2) dynamic cues dominate front/rear localization only when spectral cues are unavailable; and 3) the temporal dynamics of dynamic-cue processing are particular to auditory-vestibular integration. ITD dominance is shown indirectly by findings that head movements are highly effective for localizing low-frequency targets but not narrow-band high-frequency targets. Direct evidence comes from manipulation of dynamic binaural cues in spherical-head simulations lacking spectral cues. If the stimulus provides access to dominant high-frequency spectral cues, location illusions involving head-coupled source motion fail. For low-frequency targets, localization performance improves with increasing head-turn angle but decreases with increasing velocity, such that performance depends primarily on stimulus duration; approximately 100 ms is required for accurate front/back localization. That duration threshold applies only in dynamic localization tasks, and not in auditory-only tasks involving similar stimuli. Correct spatial interpretation of dynamic acoustic cues appears to require vestibular information about head motion, so the temporal threshold is likely a property of vestibular-auditory integration.

Published by the Acoustical Society of America through the American Institute of Physics. © 2013 Acoustical Society of America [DOI: 10.1121/1.4799913]
Received 23 Jan 2013; published 2 Jun 2013

INTRODUCTION

A listener can take an active role in sound localization by moving the head. In that case, dynamic information is provided by the relationship between the motion of the head and the resulting changes in the interaural-difference cues. In particular, for a given head rotation, the direction of change of the interaural time difference (ITD) and interaural level difference (ILD) for a stationary source in the front hemisphere is opposite to that for a source in the rear.

Previous studies have clearly demonstrated the benefits to sound localization of relatively large head movements for both free-field and virtual auditory space stimuli (e.g., Wallach, 1940; Perrett and Noble, 1997; Wightman and Kistler, 1999; Iwaya et al., 2003). Such studies have typically not used methods that permitted experimenter control over the amount of dynamic information available to the listener, and therefore have not addressed the limits of salience of the dynamic cues for small head movements that might be more typical of natural behavior. In our laboratory we have employed an experimental paradigm that does offer this level of control, and we review here the results of completed and ongoing studies that address the relative weighting of dynamic ITD and ILD cues, the relative influence of spectral cues and dynamic interaural cues, and the temporal dynamics of cue integration in sound localization via head rotation.

HEAD-SWEEP METHOD

The experiments described here all used variants of a common stimulus presentation paradigm we refer to as the head-sweep method, illustrated in Fig. 1. Blocks of trials were run with the head fixed (facing 0-deg azimuth) or with the head in motion at a low (25 or 50 deg/s), intermediate (100 or 200 deg/s), or high (400 deg/s) velocity. Each head-motion trial began with the listener's head turned 45 degrees to one side. The listener then began a head rotation at a practised velocity. Head orientation was tracked continuously. When the orientation entered a selected spatial window (widths between 2.6 and 40 degrees), the stimulus was gated on with a 5-ms raised-cosine ramp. When the head's orientation exited the window, the stimulus was gated off. The listener continued the head rotation to 45 degrees on the other side. Following each head-fixed or head-moving stimulus presentation, the listener indicated the apparent direction of the stimulus by turning the body and orienting with the head. A final reading of head orientation constituted the listener's response.

The locations of the sound sources were independent of the spatial window, and were typically arranged at 22.5-, 30-, or 45-degree intervals spanning 360 degrees around the horizontal plane. Stimuli were presented over loudspeakers in a darkened, anechoic room or in virtual auditory space using individually measured head-related transfer functions (HRTFs) and real-time, head-tracked HRTF filtering. Data analysis typically involved computing the proportion of correct or small-error responses, defined as those falling within 30 degrees of the true target location. In some cases we instead computed the proportion of correct front/rear hemisphere responses. Wider windows allowed more onset-to-offset cue change and longer stimulus durations, and were therefore expected to lead to more accurate localization.

FIGURE 1: Head-sweep presentation paradigm.
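The gating logic of the head-sweep paradigm is compact enough to sketch in code. The following Python fragment is a minimal illustration only, not the laboratory implementation; the sample rate, the upsampled tracker signal, and all names are assumptions for the example.

```python
import numpy as np

FS = 44100       # audio sample rate (Hz); an assumption for this sketch
RAMP_MS = 5      # raised-cosine gate ramp duration from the Method

def raised_cosine_ramp(n):
    """Half raised-cosine rising from 0 toward 1 over n samples."""
    return 0.5 * (1.0 - np.cos(np.pi * np.arange(n) / n))

def gate_stimulus(stimulus, head_azimuth, window_center, window_width):
    """Gate the stimulus on while the head is inside the spatial window.

    stimulus:     mono signal, one value per audio sample
    head_azimuth: head orientation (deg) resampled to the audio rate
                  (a real tracker runs slower; upsampling is assumed here)
    """
    inside = np.abs(head_azimuth - window_center) <= window_width / 2.0
    env = inside.astype(float)
    n_ramp = int(FS * RAMP_MS / 1000)
    ramp = raised_cosine_ramp(n_ramp)
    # Replace each hard on/off transition with a 5-ms raised-cosine ramp.
    for i in np.flatnonzero(np.diff(inside.astype(int))):
        start = i + 1
        stop = min(start + n_ramp, len(env))
        r = ramp[: stop - start]
        env[start:stop] = r if inside[start] else 1.0 - r
    return stimulus * env
```

Note how the design couples duration to the sweep: a 50-deg/s head turn through a 20-degree window holds the gate open for 400 ms, so window width and head velocity jointly determine stimulus duration, the trade-off exploited in the sections below.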

WEIGHTING OF DYNAMIC ITD AND ILD IN LOCALIZATION VIA HEAD ROTATION

Free-field dynamic localization of wideband and low- and high-frequency narrowband noise targets

In a free-field experiment, we measured static and dynamic localization performance with bursts of wideband (0.5-16 kHz), low-frequency (0.5-1 kHz), or high-frequency (6-6.5 kHz) narrow-band noise. The narrow-band noises did not carry accurate spectral cues for front/rear location, so the listener was forced to rely on dynamic ITD and ILD to localize accurately. Bursts presented in head-fixed conditions were 200 ms in duration.

Target-response plots for one typical listener (S203) in selected conditions (head-fixed and 50-deg/s motion) are shown in Fig. 2. Each column corresponds to one spatial window width. With the head fixed (window width = 0, leftmost column), wideband stimuli were localized accurately, but all of this listener's responses to low- and high-frequency noise fell in the front hemisphere, producing many back-to-front reversals. In head-motion conditions, responses to wideband noise were somewhat more scattered but generally accurate; performance for low-frequency noise improved with increasing spatial window width; similar improvement was not observed for high-frequency noise. That latter result is remarkable: the fact that responses lie near either the positive or negative diagonal shows that the listener could certainly use the available interaural cues to report accurately the left/right component of the target location, but could not use the head-motion-related change in those cues to disambiguate front from rear for the high-frequency targets.

FIGURE 2: Selected target/response scatter plots for one typical subject. Stimuli were bursts of wideband, low-frequency narrowband, or high-frequency narrowband noise.

Mean performance for four listeners is plotted in Fig. 3 as a function of spatial window width for each stimulus type and head-movement condition. For the wideband noise (left panel), increasing head velocity had a small negative effect on performance. For the low-frequency noise (middle panel), performance for a given spatial window decreased with increasing head velocity, but improved monotonically with window width. Significant improvement was not observed for the high-frequency noise (right panel) except for the largest spatial window (40 degrees) and the lowest head velocity (50 deg/s). Because the primary available interaural cue for the low-frequency stimulus was ITD, and that for the high-frequency stimulus was ILD, these data suggest that, coupled with head-movement information, low-frequency dynamic ITD is a very salient cue for source location whereas high-frequency dynamic ILD is not.
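The scoring described in the Method reduces each trial to whether the response fell within 30 degrees of the target, or into the correct front/rear hemisphere. A minimal sketch of that computation follows; the azimuth convention (degrees, 0 = straight ahead) and function names are assumptions for illustration.

```python
import numpy as np

def wrapped_error(response_az, target_az):
    """Smallest absolute angular difference (deg) on the circle."""
    return np.abs((np.asarray(response_az) - np.asarray(target_az) + 180.0) % 360.0 - 180.0)

def small_error_rate(responses, targets, criterion=30.0):
    """Proportion of responses within `criterion` deg of the true target."""
    return np.mean(wrapped_error(responses, targets) <= criterion)

def front_back_correct_rate(responses, targets):
    """Proportion of responses falling in the correct front/rear hemisphere.

    With azimuth measured from straight ahead, a location counts as
    'front' when its wrapped azimuth lies within +/-90 deg of 0.
    """
    def is_front(az):
        return wrapped_error(az, 0.0) < 90.0
    return np.mean(is_front(responses) == is_front(targets))
```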

FIGURE 3: Effect of spatial window width, head velocity, and stimulus frequency spectrum on localization accuracy.

Virtual auditory space dynamic localization with spherical-head HRTFs

A concern with the high-frequency narrow-band stimuli yielding the poor performance shown in Figs. 2 and 3 is that although they do not provide access to accurate spectral cues, they do in fact carry rather potent spectral cues of their own that could generate percepts of subject-dependent phantom locations (e.g., Middlebrooks, 1992; Morimoto and Aokata, 1984; Blauert, 1969/70). Simply replacing those stimuli with high-pass noise is problematic because such stimuli do provide access to accurate spectral cues and therefore do not force the listener to rely on dynamic cues. To overcome this problem, we are conducting ongoing studies using virtual auditory space presentation of stimuli generated with sphere-related transfer functions (SRTFs) lacking any front/rear spectral cues. These are computed using the method of Duda and Martens (1998), with the sphere radius selected to provide a close match between the low-frequency ITDs in each listener's HRTFs and SRTFs.

Figure 4 shows mean localization performance (four listeners) for wideband (0.5-16 kHz), low-frequency (0.5-1 kHz), and high-pass (4-16 kHz) noise targets as a function of spatial window width and transfer-function type (HRTF or SRTF). The head-sweep velocity was 50 deg/s. As expected, static performance with HRTFs (open black circles) was good for the wideband and high-pass noises and poor for the low-pass noise. With SRTFs, static performance was poor for all target types owing to the lack of front/rear spectral cues. While a head-movement benefit was observed for each of the three target bandwidths with SRTFs, the benefit was lowest for the high-pass stimulus, for which only dynamic ILD was available for front/rear localization, and highest for the low-pass stimulus, for which ITD would have been the available dynamic cue. The fact that the head-movement benefit under SRTFs was smaller for the wideband stimulus than for the low-pass stimulus is perhaps related to poor externalization of the SRTF-filtered stimuli, but this remains to be assessed directly.

FIGURE 4: Effect of spatial window width, HRTF vs. SRTF spectral cues, and stimulus frequency spectrum on localization accuracy.
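Duda and Martens (1998) give the exact response of a rigid sphere; for the radius-matching step described above, the idea can be sketched with the classical Woodworth ray-tracing ITD approximation. This substitution, and all values below, are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

C = 343.0  # speed of sound (m/s)

def woodworth_itd(radius_m, azimuth_deg):
    """Approximate ITD (s) of a rigid sphere for a distant source.

    Ray-tracing approximation: ITD = (a/c) * (theta + sin(theta)),
    with theta the lateral angle in radians. Duda and Martens (1998)
    use the exact spherical-head solution instead.
    """
    theta = np.radians(azimuth_deg)
    return (radius_m / C) * (theta + np.sin(theta))

def best_fit_radius(azimuths_deg, measured_itds,
                    radii=np.linspace(0.06, 0.12, 601)):
    """Pick the sphere radius whose predicted ITDs best match the
    listener's measured low-frequency ITDs (least squares)."""
    errors = [np.sum((woodworth_itd(a, azimuths_deg) - measured_itds) ** 2)
              for a in radii]
    return radii[int(np.argmin(errors))]

# Hypothetical low-frequency ITDs estimated from one listener's HRTFs.
az = np.array([0.0, 30.0, 60.0, 90.0])
itd = np.array([0.0, 270e-6, 480e-6, 650e-6])   # made-up values (s)
print(f"best-fitting sphere radius: {best_fit_radius(az, itd) * 100:.1f} cm")
```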

To further investigate the relative weighting of dynamic ITD and ILD cues, we are conducting a second study using SRTFs in which one or the other of those binaural difference cues is frozen so that it does not change during head motion. For example, given a target at 30 degrees azimuth, the ITD is frozen by creating a new set of SRTFs in which each location has the normally computed spherical spectral cues (and thus normal spherical ILD), but in which the ITD for the 30-degree location is applied to every location in the set. Conversely, the ILD can be frozen by applying the 30-degree left- and right-ear spectra to every location while allowing the ITD to vary normally.

Figure 5 shows the mean asymptotic performance (data for the 20- and 40-degree spatial window conditions pooled) for four listeners under normal and frozen-cue SRTFs with wideband targets. Paired-sample t-tests indicated that performance was significantly poorer when only dynamic ILD was available (ITD frozen) than when only dynamic ITD was available (ILD frozen), again suggesting that ITD is the primary dynamic localization cue. This parallels the finding that low-frequency ITD is the dominant static cue for source lateral angle (Macpherson and Middlebrooks, 2002; Wightman and Kistler, 1992).

FIGURE 5: Effect of ITD or ILD cue-freezing on dynamic localization accuracy with spherical-head HRTFs.
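Operationally, freezing a cue amounts to recombining the spectra of one SRTF set with a fixed interaural delay, or vice versa. A minimal sketch follows, assuming each SRTF is stored as a pair of minimum-phase impulse responses plus a pure ITD; that decomposition and the data layout are assumptions, not taken from the paper.

```python
import copy

def freeze_itd(srtf_set, frozen_location):
    """Return a new SRTF set in which every location keeps its own
    left/right spectra (hence normal spherical ILD) but carries the ITD
    of `frozen_location`, so ITD no longer changes with head motion.

    `srtf_set` is assumed to map location -> dict with keys
    'hrir_left', 'hrir_right' (minimum-phase IRs) and 'itd' (seconds).
    """
    frozen = copy.deepcopy(srtf_set)
    fixed_itd = srtf_set[frozen_location]["itd"]
    for loc in frozen:
        frozen[loc]["itd"] = fixed_itd
    return frozen

def freeze_ild(srtf_set, frozen_location):
    """Return a new SRTF set in which every location carries the left/right
    spectra of `frozen_location` (freezing ILD and spectral shape) while
    each location's own ITD is left to vary normally with head motion."""
    frozen = copy.deepcopy(srtf_set)
    fixed_l = srtf_set[frozen_location]["hrir_left"]
    fixed_r = srtf_set[frozen_location]["hrir_right"]
    for loc in frozen:
        frozen[loc]["hrir_left"] = fixed_l
        frozen[loc]["hrir_right"] = fixed_r
    return frozen
```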
RELATIVE INFLUENCE OF SPECTRAL CUES AND DYNAMIC INTERAURAL DIFFERENCE CUES FOR FRONT/REAR LOCATION

Since both static spectral cues and dynamic interaural difference cues can provide information about the front/rear location of a sound source, it is of interest to determine the factors influencing the relative weighting of these cues. Wallach (1939, 1940) devised a mechanical signal-switching apparatus that allowed the experimenter to alter the location of the active loudspeaker in a circular array based on listener head movement. In particular, by causing the active source to rotate in the same direction as the head, but at twice the angular velocity, a dynamic stimulus was created that had interaural difference cues corresponding to a stationary source in the opposite hemisphere, as illustrated in Fig. 6. It was reported that listeners consistently perceived the illusory stationary source location, and on this basis Wallach derived the principle of least displacement, which states that spectral cues are subordinated to the auditory system's preferred stationary-source interpretation of the dynamic interaural cues. Macpherson (2011) reported, however, that in a virtual auditory space recreation of Wallach's arrangement, listeners readily perceived wideband noise targets veridically moving in the correct hemisphere.

FIGURE 6: Equivalence of binaural cues for moving or stationary sources in opposite hemispheres.

To examine more closely the effect of high-frequency spectral cues on the success of the Wallach illusion, we generated sound stimuli that varied between low-pass (0.5-1 kHz) and wideband (0.5-16 kHz) with a variety of attenuation slopes above 1 kHz, as shown in Fig. 7 (left). Participants localized these stimuli under both static-head and head-sweep conditions; in the latter, the sources executed the double-head-speed rotation intended to elicit the Wallach illusion over a head-sweep spatial window of 40 degrees. To determine the quality of the spectral cues carried by each stimulus spectrum, the mean proportion of correct front/rear hemisphere responses was computed over five listeners for the static-head listening condition. Figure 7 (right, blue bars) shows that front/rear localization was quite accurate (85% correct or higher) for all the stimuli except the low-pass stimulus (less than 60% correct), which lacked high frequencies entirely. Thus all but the low-pass stimulus seemed to provide usable spectral cues for front/rear localization.

The same front/rear performance computation for the moving stimuli indicates the rate of failure of the Wallach illusion; the results are shown in Fig. 7 (right, red bars). Only for the low-pass stimulus was the failure rate of the illusion low, indicating dominance of the static interpretation of the dynamic cues. For the other spectra, although the details of the response patterns varied among the five listeners, the illusion was not robust, and for most listeners the conflict between the spectral cues and the dynamic cues appeared to cause uncertainty about the source location. Similar localization behavior has been reported by Kawaura et al. (1991) under conditions of spectral and dynamic cue mismatch in virtual auditory space.

This result, that dynamic cues dominate completely only when spectral cues are weak or absent, is consistent with two recent findings. Brimijoin and Akeroyd (2012) found the Wallach illusion to occur robustly in a free-field re-creation only when stimuli lacked high-frequency energy. Similarly, Martens et al. (2013) report success of the Phantom Walker illusion only for stimuli dominated by low-frequency energy. The Phantom Walker is analogous to the Wallach illusion, and manifests as a front/rear reversal of source location elicited by swapping the left- and right-ear signals by means of an electronic hearing instrument while the listener experiences the small involuntary head movements associated with walking.

FIGURE 7: Left: Noise-burst spectra used in source-motion experiment. Right: Percent of correct-hemisphere responses in static and source-movement conditions.
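The geometry behind Fig. 6 can be checked numerically. If the head is at azimuth h(t) and the source moves at twice the head's angular velocity from initial azimuth s0, the source's azimuth relative to the head is s0 + h(t), which has the same lateral angle, and hence the same interaural cues for a spherical head, as a stationary source at the front/back mirror location 180 - s0. A small sketch, reusing the Woodworth approximation from the earlier example (again an assumption, not the paper's computation):

```python
import numpy as np

C, RADIUS = 343.0, 0.0875  # speed of sound (m/s); assumed head radius (m)

def itd(relative_az_deg):
    """Woodworth ITD (s) for a source at the given azimuth re: the head.

    Mapping through the effective lateral angle makes front/back mirror
    locations (az and 180 - az) produce identical ITDs, as for a sphere.
    """
    theta = np.arcsin(np.sin(np.radians(relative_az_deg)))
    return (RADIUS / C) * (theta + np.sin(theta))

head = np.linspace(-20.0, 20.0, 9)   # head azimuth during a sweep (deg)
s0 = 30.0                            # moving source's initial azimuth (deg)

moving = itd((s0 + 2 * head) - head)      # source at twice head velocity
stationary = itd((180.0 - s0) - head)     # stationary mirror-image source

print(np.allclose(moving, stationary))    # True: the cue trajectories match
```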

VESTIBULAR MEDIATION OF TEMPORAL DYNAMICS IN SOUND LOCALIZATION VIA HEAD ROTATION

Faster head movements necessarily reduce stimulus duration for a given spatial window width. In Fig. 8, the mean performance data from Fig. 3 are replotted as a function of stimulus duration. The results for the low-frequency noise (middle panel) are of primary interest. For a given duration, performance for the low-frequency noise was very similar across head velocities despite spatial windows varying over a range of almost an order of magnitude. The low-frequency results indicate that, regardless of head velocity, a stimulus duration of approximately 100 ms is required for substantial head-movement benefit (≥75% small errors).

FIGURE 8: Effect of stimulus duration, head velocity, and stimulus spectrum.

Initial results from an ongoing study have shown that active head movement is not required for correct integration of dynamic interaural cues with motion of the head in space, and that performance in a passive head-movement condition is equally accurate (Kim and Macpherson, 2012). Figure 9 shows low-frequency localization performance as a function of stimulus duration for three listeners in the usual head-sweep condition (active; 25, 50, and 100 deg/s head velocity) and when being rotated passively in the same manner on a swiveling chair. Stimuli were presented in virtual auditory space using individually measured HRTFs with real-time head tracking and impulse-response interpolation. The results indicate a 100-ms duration requirement in both active and passive conditions, similar to that seen in Fig. 8. This suggests that neck proprioception and efference copy are not necessary for this form of temporal integration, since information from those modalities was greatly reduced in the passive rotation condition (Kim et al., 2013).

FIGURE 9: Localization performance under active or passive head-sweep conditions.

To determine whether these temporal dynamics are specific to vestibular/auditory integration, we measured discrimination performance in an equivalent auditory-only task. For front/rear localization via head rotation, a listener must determine whether the source moves left-to-right or right-to-left relative to the head as the head rotates, and also whether this motion is in the same direction as the head turn, indicating a rear-hemisphere source, or in the opposite direction, indicating a front-hemisphere source. The auditory-only task required listeners to discriminate the direction of motion of a 0.5-1-kHz band of noise presented over headphones with monotonically increasing or decreasing ITD. The magnitude of the ITD change over the duration of each stimulus was 25, 50, 100, 200, or 400 μs, and the rate of ITD change was ±250, ±500, ±1000, or ±2000 μs/s. Since for an average-sized head ITD naturally varies by about 10 μs/deg across the midline, these values approximate those produced by listener head rotation in the head-sweep localization tasks.
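A minimal sketch of such an ITD-sweep stimulus is given below. The 0.5-1 kHz band, the linear ITD trajectory, and the 5-ms pre-ITD ramps follow the description here and in the next paragraph; the sample rate, the FFT-based noise synthesis, and the linear-interpolation delay are assumptions for illustration.

```python
import numpy as np

def itd_sweep_noise(fs=44100, dur=0.2, band=(500.0, 1000.0),
                    itd_start=-100e-6, itd_end=100e-6):
    """Band-limited noise whose ITD sweeps linearly from itd_start to itd_end.

    Positive ITD delays the right channel. Returns an (n, 2) array.
    """
    n = int(fs * dur)
    # Gaussian noise band-limited to `band` via the frequency domain.
    spec = np.fft.rfft(np.random.randn(n))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec[(freqs < band[0]) | (freqs > band[1])] = 0.0
    x = np.fft.irfft(spec, n)
    # 5-ms raised-cosine onset/offset ramps, applied before the ITD.
    n_ramp = int(0.005 * fs)
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    x[:n_ramp] *= ramp
    x[-n_ramp:] *= ramp[::-1]
    # Impose the time-varying delay by linear-interpolation resampling.
    t = np.arange(n) / fs
    itd = np.linspace(itd_start, itd_end, n)   # linear ITD trajectory
    right = np.interp(t - itd, t, x)           # right channel lags by itd(t)
    return np.column_stack([x, right])

# A 200-us total ITD change at 1000 us/s implies a 200-ms stimulus,
# combining one value from each of the listed Delta-ITD and rate sets.
stim = itd_sweep_noise(dur=0.2, itd_start=-100e-6, itd_end=100e-6)
```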

In some blocks of trials, the ITD sweep was centered on 0 μs; in others, the starting ITD was roved over a 500-μs range to prevent the start or end point from being used as a cue for the direction of motion. Onset and offset ramps of 5 ms were applied to the left and right channels before the ITD was applied.

Mean psychometric functions are shown in Fig. 10 for the same three listeners whose localization data are shown in Fig. 9. When performance is plotted as a function of ΔITD (top panels), the results show improved performance with greater cue change and a penalty for higher ITD velocity. ITD roving (right panels) somewhat decreased accuracy. When performance is plotted as a function of stimulus duration, however, it is clear that the velocity penalty is too small to align the psychometric functions, and that, unlike in the active or passive head-sweep localization tasks, it is ΔITD rather than duration that is the better predictor of performance. Thus the duration-limited performance seen in passive or active dynamic localization appears to be specific to vestibular/auditory integration. Also suggestive of vestibular involvement is the similarity of the 100-ms integration window observed in dynamic localization to the 75-100-ms delay observed between the perception of head movement and sound onset (Raeder et al., 2012). In addition, auditory-vestibular temporal binding windows are of similar duration and, as seen in our localization results, appear to be independent of rotational velocity (Chang et al., 2012).

FIGURE 10: Psychometric functions for discrimination of ITD sweep direction.

CONCLUSIONS

The results presented lead to the following conclusions: low-frequency ITD is the primary dynamic interaural difference cue for front/rear location; dynamic interaural difference cues dominate localization judgments only when spectral cues are absent; and the 100-ms duration requirement for effective use of dynamic auditory cues is a property of vestibular-auditory integration and not of the auditory system alone.

ACKNOWLEDGMENTS

The author thanks Michael Barnett-Cowan for insights into vestibular psychophysics, David Grainger for technical support, and students and research assistants Alasdair Cumming, Sarah Gillespie, Devin Kerr, Janet Kim, Tran Nguyen, Robert Quelch, and Kristen Tonus for their contributions. This work was supported by funding from NSF (0717272, Perception, Action and Cognition Program), NSERC (Discovery Grant 386259), and Western University.

REFERENCES

Blauert, J. (1969/70). "Sound localization in the median plane," Acustica 22, 205-213.

Brimijoin, W. O., and Akeroyd, M. A. (2012). "The role of head movements and signal spectrum in an auditory front/back illusion," i-Perception 3, 179-181.

Chang, N.-Y. N., Malone, A. K., and Hullar, T. E. (2012). "Changes in temporal binding related to decreased vestibular input," Seeing and Perceiving 25, 107.

Duda, R. O., and Martens, W. L. (1998). "Range dependence of the response of a spherical head model," J. Acoust. Soc. Am. 104, 3048-3058.

Iwaya, Y., Suzuki, Y., and Kimura, D. (2003). "Effects of head movement on front-back error in sound localization," Acoust. Sci. & Tech. 24, 322-324.

Kawaura, J., Suzuki, Y., Asano, F., and Sone, T. (1991). "Sound localization in headphone reproduction by simulating transfer functions from the sound source to the external ear," J. Acoust. Soc. Japan (E) 12, 203-216.

Kim, J., Barnett-Cowan, M., and Macpherson, E. A. (2013). "Integration of auditory input with vestibular and proprioceptive information in the interpretation of dynamic sound localization cues," in Proc. Int. Cong. Acoust., Montreal.

Kim, J., and Macpherson, E. A. (2012). "Integration of vestibular and auditory input in the interpretation of dynamic sound localization cues," in Auditory Perception, Action and Cognition Meeting, Minneapolis.

Macpherson, E. A. (2011). "Head motion, spectral cues, and Wallach's 'principle of least displacement' in sound localization," in Principles and Applications of Spatial Hearing, edited by Y. Suzuki and D. S. Brungart (World Scientific), pp. 103-120.

Macpherson, E. A., and Middlebrooks, J. C. (2002). "Listener weighting of cues for lateral angle: The duplex theory of sound localization revisited," J. Acoust. Soc. Am. 111, 2219-2236.

Martens, W. L., Sakamoto, S., Miranda, L., and Cabrera, D. (2013). "Dominance of head-motion-coupled directional cues over other cues during walking depends upon source spectrum," in Proc. Int. Cong. Acoust., Montreal.

Middlebrooks, J. C. (1992). "Narrow-band sound localization related to external ear acoustics," J. Acoust. Soc. Am. 92, 2607-2624.

Morimoto, M., and Aokata, H. (1984). "Localization cues of sound sources in the upper hemisphere," J. Acoust. Soc. Japan (E) 5, 165-173.

Perrett, S., and Noble, W. (1997). "The contribution of head motion cues to localization of low-pass noise," Percept. Psychophys. 59, 1018-1026.

Raeder, S., Bülthoff, H. H., and Barnett-Cowan, M. (2012). "Persistent perceptual delay for head movement onset relative to auditory stimuli of different duration and rise times," Seeing and Perceiving 25, 32.

Wallach, H. (1939). "On sound localization," J. Acoust. Soc. Am. 10, 270-274.

Wallach, H. (1940). "The role of head movements and vestibular and visual cues in sound localization," J. Exp. Psychol. 27, 339-368.

Wightman, F. L., and Kistler, D. J. (1992). "The dominant role of low-frequency interaural time differences in sound localization," J. Acoust. Soc. Am. 91, 1648-1661.

Wightman, F. L., and Kistler, D. J. (1999). "Resolution of front-back ambiguity in spatial hearing by listener and source movement," J. Acoust. Soc. Am. 105, 2841-2853.