Tu1.D: Current Approaches to 3-D Sound Reproduction. Elizabeth M. Wenzel


Current Approaches to 3-D Sound Reproduction
Elizabeth M. Wenzel
NASA Ames Research Center, Moffett Field, CA

Abstract
Current approaches to spatial sound synthesis are reviewed, particularly as they relate to the topics being addressed in the special session on 3-D Sound Reproduction. Most currently available virtual audio systems tend to fall into two categories: those aimed at high-end simulations for research purposes emphasize high-fidelity rendering, while others are directed toward entertainment and game applications. The papers represented in this special session are primarily concerned with the goals of high-fidelity simulations of spatial sound presented over headphones. They seek to elucidate the nature of the acoustic parameters that must be rendered in order to provide a listener with an accurate or authentic perceptual experience.

1. Comparison of VAE Systems
Different virtual acoustic environment (VAE) applications emphasize different aspects of the listening experience and thus require different approaches to rendering software and hardware. Auralization requires computationally intensive synthesis of the entire binaural room response that typically must be done offline and/or with specialized hardware. A simpler simulation that emphasizes accurate control of the direct path, and perhaps a limited number of early reflections, may be better suited to information display. The fact that such a simulation does not sound "real" may have little to do with the quality of directional information provided. Achieving both directional accuracy and presence in virtual reality applications requires that head tracking be enabled, with special attention devoted to the dynamic response of the system. A relatively high update rate (~60 Hz) and low latency (less than ~100 ms) may be required to optimize localization cues from head motion and provide a smooth and responsive simulation of a moving listener or sound source [1-4].
Implementing a perceptually adequate dynamic response for a complex room is computationally intensive and may require multiple CPUs or DSPs. One solution for synthesizing interactive virtual audio has been the development of hybrid systems [e.g., 5, 6]. These systems attempt to reconcile the goals of directional accuracy and realism by implementing real-time processing of the direct path and early reflections using a model (e.g., the image model), combined with measured or modeled representations of late reflections and reverberation. During dynamic, real-time synthesis, only the direct path and early reflections can be readily updated in response to changes in listener or source position. A densely measured or interpolated head-related transfer function (HRTF) database is needed to avoid artifacts during updates. Late portions of the room response typically remain static in response to head motion or, given enough computational power, could be updated using a database of impulse responses pre-computed for a limited set of listener-source positions. Model-based synthesis is computationally more expensive but requires less memory than data-based rendering [6]. The Lake Huron/HeadScape system relies entirely on long, densely pre-computed binaural room impulse responses (BRIRs) rendered with a fast frequency-domain algorithm; the early portion of the BRIR (4000 samples) is updated in response to head motion while the late reverberation remains static. Another recent trend is that in some spatial sound systems, synthesis is now performed entirely in software for use on generic hardware platforms such as a personal computer with a Windows or Linux operating system. NASA's SLAB software is an example of this approach. Tables 1 and 2 summarize system characteristics and specifications for some of the currently available virtual audio systems targeting different applications. (The Convolvotron is listed for historical comparison purposes.)
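The real-time portion of such a hybrid renderer often uses the image model for early reflections. As a rough sketch (our own illustration, not the SLAB or DIVA implementation), the six first-order image sources of a rectangular "shoebox" room, together with the per-path delay and 1/r spreading gain, can be computed like this:

```python
import math

# Illustrative sketch (not a specific system's code): first-order image
# sources for a rectangular room spanning (0,0,0) to (Lx,Ly,Lz). Each wall
# mirrors the source; the direct path plus these six image paths are the
# ones a hybrid renderer updates in real time.

SPEED_OF_SOUND = 343.0  # m/s

def first_order_images(src, room):
    """Return the six first-order image-source positions for `src`."""
    images = []
    for axis in range(3):
        lo = list(src); lo[axis] = -src[axis]                  # wall at 0
        hi = list(src); hi[axis] = 2 * room[axis] - src[axis]  # wall at L
        images.extend([tuple(lo), tuple(hi)])
    return images

def path_params(image, listener):
    """Propagation delay (s) and 1/r spreading gain for one path."""
    r = math.dist(image, listener)
    return r / SPEED_OF_SOUND, 1.0 / max(r, 1e-6)

room = (5.0, 4.0, 3.0)
src = (1.0, 2.0, 1.5)
listener = (4.0, 2.0, 1.5)
for img in first_order_images(src, room):
    delay, gain = path_params(img, listener)
    print(f"image {img}: delay {delay*1000:.2f} ms, gain {gain:.3f}")
```

Only these few paths need new delays, gains, and HRIR filters when the listener moves, which is what makes the hybrid approach tractable in real time.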
These systems tend to fall into two categories. Those aimed at high-end simulations for research purposes (e.g., auralization, psychoacoustics, information displays, virtual reality) tend to emphasize high-fidelity rendering of the direct path and/or early reflections, accurate models of reverberation, and good system dynamics (high update rate, low latency). Other systems are directed toward entertainment and game applications. The rendering algorithms in such systems are proprietary and appear to emphasize efficient reverberation modeling; it is often not clear whether the direct path and/or early reflections are independently spatialized. The information in the tables is based on published papers in a few cases [e.g., 3, 5, 7] but more often on product literature and websites [8].

Table 1. Summary table describing system characteristics for various VAE systems. Fields per system: primary target application; audio display; user interface; OS; implementation; rendering domain / room model.

SLAB (research): headphone; C++; Windows 98/2k; software / Intel; image model
DIVA (research): C++; UNIX, Linux; software / SGI; image model
AuSIM (research): headphone; C, client-server model (client: Win98/2k, DOS, Mac, etc.); software / Intel; direct path
Spat, IRCAM (research): headphone; graphical (Max, jMax) / ActiveX; Mac, Linux, IRIX; software / Mac, Intel, SGI; direct path, reverb engine
AM3D (research, games): C++; Windows 98/2k; software / Intel (MMX); proprietary
Tucker-Davis (research): Windows 98/2k; special-purpose DSP hardware (RP2.1); direct path?
Lake (research, entertainment): C++; Windows NT; special-purpose DSP hardware (CP4, Huron); frequency (HRTF) / precomputed BRIR
Creative Audigy (games): C++; Windows 98/2k; consumer sound card; proprietary
Sensaura (entertainment): 3D sound engine; N/A; software / hardware; proprietary
QSound (games): 3D sound engine; N/A; software / hardware; proprietary
Crystal River Convolvotron (research): headphone; C; DOS; special-purpose DSP hardware; direct path

Table 2. Summary table describing system specifications for various VAE systems. Fields per system: number of sources; filter order; room effect; scenario update rate; latency; internal sampling rate.

SLAB: sources arbitrary, CPU-limited (4 typical); filter order arbitrary (max. direct: 128, reflections: 32); image model, 6 first-order reflections; update 120 Hz typical, 690 Hz max.; latency 24 ms default (adjustable output buffer size); 44.1 kHz
DIVA: sources arbitrary, CPU-limited; modeled HRIRs (typical direct: 30, reflections: 10); image model, second-order reflections, late reverb; update 20 Hz; sampling rate arbitrary (32 kHz typical)
AuSIM: sources 32 per CPU GHz; filter order arbitrary (128 typical, 256 max.); room effect N/A; update arbitrary (375 Hz default max.); latency 8 ms default (adjustable output buffer size); 44.1, 48 (default), or 96 kHz
AM3D: sources CPU-limited?; room effect N/A; update ~22 Hz max.; latency 45 ms min.; 22 kHz (current), 44.1 kHz (future)
Lake (HeadScape, 4 DSPs): precomputed response; latency 0.02 ms min.; 48 kHz
Convolvotron: room effect N/A; update 33 Hz; latency 32 ms; 50 kHz

It is often difficult to determine details about a particular system's rendering algorithm and performance specifications. For example, critical dynamic parameters like scenario update rate and internal rendering latency are not readily available, or not enough information about the measurement scenario is provided to evaluate the quoted values. Some systems listed in Table 1 are not present in Table 2 because not enough information was found regarding system performance specifications.

2. NASA's SLAB System
SLAB is an example of a software-based, real-time virtual acoustic environment rendering system designed for use in the personal computer environment. It is being developed by the Spatial Auditory Displays Lab at NASA Ames Research Center primarily as a tool for the study of spatial hearing. To enable a wide variety of psychoacoustic studies, SLAB provides extensive control over the VAE rendering process. It provides an API (Application Programming Interface) for specifying the acoustic scene and setting the low-level digital signal processing (DSP) parameters, as well as an extensible architecture for exploring multiple rendering strategies. The project is also intended to provide a low-cost system for dynamic synthesis of virtual audio over headphones that does not require special-purpose signal processing hardware. Because it is a software-only solution designed for the Windows/Intel platform, it can take advantage of improvements in hardware performance without extensive software revision.

Table 3. Acoustic Scenario Parameters.
SOURCE: Location (Implied Velocity); Orientation; Sound Pressure Level; Waveform; Radiation Pattern; Source Radius
ENVIRONMENT: Speed of Sound; Spreading Loss; Air Absorption; Surface Locations; Surface Boundaries; Surface Reflection; Surface Transmission; Late Reverberation
LISTENER: Location (Implied Velocity); Orientation; HRIR; ITD
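As a rough illustration of how the Table 3 parameter set might be organized in a scene API, the parameters map naturally onto three small records. The field names below are ours, chosen for clarity; they are not the actual SLAB API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the Table 3 acoustic scenario parameters as data
# records; names are illustrative, not the actual SLAB API.

@dataclass
class Source:
    location: tuple        # (x, y, z); velocity implied by successive updates
    orientation: tuple     # (yaw, pitch, roll)
    level_db: float        # sound pressure level
    waveform: str          # identifier of the input signal
    radiation_pattern: str
    radius: float

@dataclass
class Environment:
    speed_of_sound: float = 343.0
    spreading_loss: bool = True
    air_absorption: bool = False
    surfaces: list = field(default_factory=list)  # locations, boundaries,
                                                  # reflection/transmission
    late_reverb: object = None

@dataclass
class Listener:
    location: tuple
    orientation: tuple
    hrir_database: str     # minimum-phase HRIRs, one pair per direction
    itd_model: str         # pure-delay ITD estimated from raw responses

scene = (Source((1, 2, 1.5), (0, 0, 0), 70.0, "speech.wav", "omni", 0.1),
         Environment(),
         Listener((4, 2, 1.5), (0, 0, 0), "subject_042", "pure_delay"))
```

Grouping the parameters this way mirrors the source/environment/listener partition of the table: each record changes at a different rate during rendering.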
2.1. SLAB Acoustic Scenario
The acoustic scenario of a sound source radiating into an environment and heard by a listener can be specified by the parameters shown in Table 3. A source, characterized by its waveform, level, radiation pattern, size, and dynamic quantities including position and orientation, radiates into an environment. Propagation of acoustic energy in the environment is specified by the speed of sound, spherical spreading loss, and air absorption; the environment is further specified by the location and characteristics of reflecting and transmitting objects. The source signal propagates through the environment, arriving at a listener characterized by a head-related impulse response (HRIR) and interaural time delay (ITD), as well as a dynamically changing position and orientation. The HRIRs used here are derived from minimum-phase representations of the raw left- and right-ear impulse responses measured for individual subjects. ITDs are estimated from the raw left- and right-ear impulse responses and represented as a pure delay. HRTFs, on the other hand, refer to the equivalent frequency-domain representations of the raw HRIRs. Currently, the SLAB Renderer supports all but the following parameters: radiation pattern, air absorption, surface transmission, and late reverberation.

A signal path may be modeled according to the physical scenario using the signal flow architecture shown in Fig. 1(a). A set of P paths from the source to the listener (including the direct path) is separately rendered. The filter r(z) imposes the source radiation pattern on the source signal, taking the signal from the source to a point in the vicinity of the source along a particular radiation direction. The filter z^-τa a(z) applies the propagation delay, spherical spreading loss, and air absorption experienced as the source signal propagates from near the source to near the listener; the filter m(z) imposes the transmission or reflection characteristics of any objects encountered.
The filter z^-τh h(z) represents the HRIR and ITD, and takes any arriving signal from the vicinity of the listener along a particular direction to the listener's ear canals. The SLAB signal flow shown in Fig. 1(b) was designed to implement the physical effects discussed above in an easily maintained, efficient architecture. It consists of a set of parallel signal paths, one for each rendered path from the source to a listener's ears. The propagation delay and interaural time delay for each source-to-ear path are combined and implemented via an interpolated delay line. Static effects along each path, such as material reflection filtering, are combined and implemented as an infinite impulse response (IIR) filter. A finite impulse response (FIR) filter is used to implement dynamic effects such as the head-related impulse response and the source radiation pattern.

2.2. Dynamic Behavior
Interactive virtual audio systems are necessarily time varying. As the scenario changes over time, different signal processing parameters are required to render the changing physical effects imposed on the source signal. The difficulty is that all signal processing structures available for implementing the changing scenario are inherently static, assuming fixed coefficients. As a result, care must be taken when updating signal processing parameters. Ideally, new parameters are

switched in sufficiently frequently that the change from one parameter set to the next is imperceptibly small. Certain parameters such as time delays need to be updated every sample to avoid artifacts; minimum-phase head-related impulse responses are somewhat more forgiving. A primary problem with this approach is that it is expensive to compute signal processing parameters from scenario information. There is also the additional issue that peripherals such as head trackers typically provide update rates ranging from 30 to 120 Hz, so intermediate scenario data must be developed. Two methods are typically used to accommodate a changing scenario: output crossfading and parameter crossfading (described as commutation in [9]). In output crossfading (e.g., as in early versions of the Convolvotron that used non-minimum-phase HRIRs), the output is a blend of the input processed once according to past parameters and then again according to present parameters. While the two processing paths use static coefficients, the blend is varied over time to achieve a transition between the parameter sets. Parameter crossfading, by contrast, processes the input only once according to a varying set of rendering parameters that have been crossfaded before processing of the input signal.
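The two methods can be contrasted in a few lines. This sketch is our own simplification, using a tiny 2-tap FIR in place of a full HRIR filter set: output crossfading filters the input twice and blends the outputs, while parameter crossfading blends the coefficients and filters once.

```python
import numpy as np

# Illustrative contrast between output and parameter crossfading, using a
# 2-tap FIR in place of a full HRIR filter set (our simplification).

def filt(x, h):
    """Causal FIR filtering, output truncated to len(x)."""
    return np.convolve(x, h)[:len(x)]

def output_crossfade(x, h_old, h_new):
    """Process the input twice (old and new coefficients), blend outputs."""
    ramp = np.linspace(0.0, 1.0, len(x))
    return (1.0 - ramp) * filt(x, h_old) + ramp * filt(x, h_new)

def parameter_crossfade(x, h_old, h_new):
    """Blend the coefficients sample by sample, process the input once."""
    ramp = np.linspace(0.0, 1.0, len(x))
    y = np.zeros(len(x))
    for n in range(len(x)):
        h = (1.0 - ramp[n]) * h_old + ramp[n] * h_new  # current coefficients
        for k in range(len(h)):
            if n - k >= 0:
                y[n] += h[k] * x[n - k]
    return y

x = np.random.default_rng(0).standard_normal(64)
h_old, h_new = np.array([1.0, 0.0]), np.array([0.5, 0.5])
y1 = output_crossfade(x, h_old, h_new)
y2 = parameter_crossfade(x, h_old, h_new)
```

Both transitions start at the old filter's output and end at the new filter's output, but in between the output crossfade is a mixture of two complete filterings, which is the computational and perceptual drawback noted in the text.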
Figure 1. (a) The physical signal flow partitions the properties of the acoustic scenario into the relevant signal processing components: the source radiation filter r(z); the propagation filter z^-τa a(z) (propagation delay, air absorption, spherical spreading); the surface reflection/object transmission filter m(z); and the head-effect filter z^-τh h(z) (interaural time delay and left/right HRIR), mixed to a binaural signal. P = number of paths (direct path and reflections); 2P = paths rendered for the left and right ears. (b) The SLAB signal flow partitions the physical scenario into signal processing components as they are implemented in the SLAB system architecture: an interpolated delay line (propagation delay and ITD), an IIR filter m(z) (reflection and transmission), an FIR filter combining h(z), a(z), and r(z) (radiation pattern, air absorption, spherical spreading, HRIR), and an IIR filter e(z) for output device equalization, mixed to the headphone output.

Overlap-add methods that operate in the frequency domain are, in effect, a type of output crossfade where the crossfade interval corresponds to the overlap-add interval. Undesirable artifacts when updating the scenario are mitigated by the use of frequent updates and densely measured HRTF databases and/or densely pre-computed binaural room impulse responses [10, 11]. Disadvantages of this method include large memory requirements and the fact that changes in the source, room, and receiver characteristics require new measurements or simulations. Other systems utilizing convolution in the time domain also appear to have used densely interpolated HRIR databases (e.g., spatial resolution on the order of 2° after interpolation), perhaps combined with a short period of output crossfade, to mitigate possible artifacts due to switching between filters [1, 12]. Output crossfading has the drawback of being computationally burdensome.
In addition, the output is a mixture of two different systems and might not

resemble that of a single system intermediate between the two. Accordingly, the SLAB system uses a variation of parameter crossfading that we term "parameter tracking." Since new scenario information may be available relatively infrequently and contains measurement noise, signal processing parameters computed with each new scenario update become target parameters that are tracked or smoothed. Currently in the SLAB system, the scenario is updated at an average interval of about 8.3 ms, given a 120 Hz scenario update rate. In parameter crossfading, there may be multiple update rates for various signal processing parameters. In SLAB, there are two parameter update rates: every other input frame, or 1.45 ms (64 samples), filter coefficients are replaced with ones slightly closer to the target coefficients, while path delays are updated every sample (22.7 µs) to preserve embedded Doppler shifts. A more detailed discussion of dynamic synthesis methods in SLAB and other systems can be found in [13]. Informal listening tests of the SLAB system indicate that its dynamic behavior is both smooth and responsive. The smoothness is enhanced by the 120 Hz scenario update rate, as well as the parameter tracking method, which smooths at rather high parameter update rates; i.e., time delays are updated at 44.1 kHz and the FIR filter coefficients are updated at 690 Hz. The responsiveness of the system is enhanced by the relatively low latency of 24 ms. The scenario update rate, parameter update rates, and latency compare favorably to other virtual audio systems.

2.3. SLAB Software Features
In addition to the scenario parameters, SLAB provides hooks into the DSP parameters, such as the FIR update smoothing time constant or the number of FIR filter taps used for rendering. Also, various features of the renderer can be modified, such as exaggerating spreading loss or disabling a surface reflection [14].
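The parameter-tracking idea described earlier (filter coefficients step toward noisy target values at a fast fixed rate, while path delays are interpolated every sample) can be sketched as follows. This is a simplified model of the scheme, not SLAB source code; the frame size and smoothing fraction are illustrative:

```python
import numpy as np

# Simplified sketch of parameter tracking (not actual SLAB code): FIR
# coefficients move a fixed fraction toward their targets once per
# 64-sample frame, while the fractional path delay is updated every
# sample to preserve embedded Doppler shift.

FS = 44100
FRAME = 64          # coefficient update interval: 64 samples = 1.45 ms
ALPHA = 0.1         # fraction of the remaining distance covered per update

def track_coeffs(current, target, n_frames, alpha=ALPHA):
    """One-pole smoothing of filter coefficients toward the target."""
    history = []
    for _ in range(n_frames):
        current = current + alpha * (target - current)
        history.append(current.copy())
    return history

def per_sample_delay(start_delay, end_delay, n_samples):
    """Linearly interpolated delay trajectory, one value per sample."""
    return np.linspace(start_delay, end_delay, n_samples)

coeffs = np.zeros(4)
target = np.array([1.0, 0.5, 0.25, 0.125])
frames = track_coeffs(coeffs, target, n_frames=50)
print("after 50 frames:", frames[-1])          # close to the target

# delay trajectory in samples over one frame (10 ms -> 11 ms path delay)
delays = per_sample_delay(0.010 * FS, 0.011 * FS, FRAME)
```

Because each update covers only a fraction of the remaining distance, noisy or late-arriving scenario targets are smoothed rather than switched in abruptly, while the per-sample delay trajectory keeps the Doppler shift continuous.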
Recently implemented features include source trajectories, API scripting, user callback routines, reflection offsets, the Scene layer, and internal plug-ins. An external renderer plug-in interface has also been developed that allows users to implement and insert their own custom rendering algorithms. SLAB is being released via the web. The SLAB User Release consists of a set of Windows applications and libraries for writing spatial audio applications. The primary components are the SLABScape demonstration application, the SLABServer server application, and the SLAB Host and SLAB Client libraries. SLABScape (Figure 2) allows the user to experiment with the SLAB Renderer API. This API provides access to the acoustic scenario parameters listed in Table 3. The user can also specify sound source trajectories, enable Fastrak head tracking, edit and play SLAB Scripts, A/B different rendering strategies, and visualize the environment via a Direct3D display.

3. Conclusions
Interest in the simulation of acoustic environments has prompted a number of technology development efforts over the years for applications such as auralization of concert halls and listening rooms, virtual reality, spatial information displays in aviation, and better sound effects for video games. Each of these applications implies different task requirements that call for different approaches in the development of rendering software and hardware. For example, the auralization of a concert hall or listening room requires accurate synthesis of the room response in order to create what may be perceived as an authentic experience. Information displays that rely on spatial hearing, on the other hand, are more often concerned with localization accuracy than with the subjective authenticity of the experience.
Virtual reality applications such as astronaut training environments, where both good directional information and a sense of presence in the environment are desired, may have requirements for both accuracy and realism. All applications could benefit from the research represented by the papers in this special session on 3-D Sound Reproduction [see also 15, 16], which help to specify the acoustic parameters required for perceptually accurate spatial sound synthesis. Such studies can give system designers guidance about where to devote computational resources without sacrificing perceptual validity.

Figure 2. SLABScape screenshot.

4. Acknowledgements
Work supported by the Human Measurement and Performance Project within NASA's Airspace Systems Program.

5. References
[1] Sandvad, J. Dynamic aspects of auditory virtual environments. 100th Conv. Aud. Eng. Soc., Copenhagen, preprint 4226, 1996.
[2] Wenzel, E. M. Analysis of the role of update rate and system latency in interactive virtual acoustic environments. 103rd Conv. Aud. Eng. Soc., New York, preprint 4633, 1997.
[3] Wenzel, E. M. The impact of system latency on dynamic performance in virtual acoustic environments. Proc. 15th Int. Cong. Acoust. & 135th Meeting Acoust. Soc. Amer., Seattle, 1998.
[4] Wenzel, E. M. Effect of increasing system latency on localization of virtual sounds. Proc. Aud. Eng. Soc. 16th Int. Conf. Spat. Sound Repro., Rovaniemi, Finland, April 10-12. New York: Audio Engineering Society, 1999.
[5] Savioja, L., Huopaniemi, J., Lokki, T. and Väänänen, R. Creating interactive virtual acoustic environments. J. Aud. Eng. Soc., vol. 47, 1999.
[6] Pelligrini, R. S. Comparison of data- and model-based simulation algorithms for auditory virtual environments. 107th Conv. Aud. Eng. Soc., Munich, 1999.
[7] Wenzel, E. M., Miller, J. D. and Abel, J. S. A software-based system for interactive spatial sound synthesis. ICAD 2000, 6th Intl. Conf. on Aud. Disp., Atlanta, Georgia, 2000.
[8] Product literature and websites.
[9] Jot, J. M., Larcher, V. and Warusfel, O. Digital signal processing issues in the context of binaural and transaural stereophony. 98th Conv. Aud. Eng. Soc., Paris, France, 1995.
[10] Bronkhorst, A. W. Localization of real and virtual sound sources. J. Acoust. Soc. Amer., vol. 98, 1995.
[11] Gardner, W. G. Efficient convolution without input-output delay. J. Aud. Eng. Soc., vol. 43, 1995.
[12] Sahrhage, J., Blauert, J. and Lehnert, H. Implementation of an auditory/tactile virtual environment. Proc. 2nd FIVE Int. Conf., Palazzo dei Congressi, Italy, 1996.
[13] Wenzel, E. M., Miller, J. D. and Abel, J. S. Sound Lab: A real-time, software-based system for the study of spatial hearing. 108th Conv. Aud. Eng. Soc., Paris, preprint 5140, 2000.
[14] Miller, J. D. and Wenzel, E. M. Recent developments in SLAB: A software-based system for interactive spatial sound synthesis. Proc. Int. Conf. Aud. Displ., ICAD 2002, Kyoto, Japan, 2002.
[15] Begault, D. R. Audible and inaudible early reflections: Thresholds for auralization system design. 100th Conv. Aud. Eng. Soc., Copenhagen, preprint 4244, 1996.
[16] Begault, D. R., Wenzel, E. M. and Anderson, M. R. Direct comparison of the impact of head tracking, reverberation, and individualized head-related transfer functions on the spatial perception of a virtual speech source. J. Aud. Eng. Soc., vol. 49, 2001.


More information

Evaluation of a new stereophonic reproduction method with moving sweet spot using a binaural localization model

Evaluation of a new stereophonic reproduction method with moving sweet spot using a binaural localization model Evaluation of a new stereophonic reproduction method with moving sweet spot using a binaural localization model Sebastian Merchel and Stephan Groth Chair of Communication Acoustics, Dresden University

More information

Direction-Dependent Physical Modeling of Musical Instruments

Direction-Dependent Physical Modeling of Musical Instruments 15th International Congress on Acoustics (ICA 95), Trondheim, Norway, June 26-3, 1995 Title of the paper: Direction-Dependent Physical ing of Musical Instruments Authors: Matti Karjalainen 1,3, Jyri Huopaniemi

More information

Spatial audio is a field that

Spatial audio is a field that [applications CORNER] Ville Pulkki and Matti Karjalainen Multichannel Audio Rendering Using Amplitude Panning Spatial audio is a field that investigates techniques to reproduce spatial attributes of sound

More information

MPEG-4 Structured Audio Systems

MPEG-4 Structured Audio Systems MPEG-4 Structured Audio Systems Mihir Anandpara The University of Texas at Austin anandpar@ece.utexas.edu 1 Abstract The MPEG-4 standard has been proposed to provide high quality audio and video content

More information

Spatial Audio & The Vestibular System!

Spatial Audio & The Vestibular System! ! Spatial Audio & The Vestibular System! Gordon Wetzstein! Stanford University! EE 267 Virtual Reality! Lecture 13! stanford.edu/class/ee267/!! Updates! lab this Friday will be released as a video! TAs

More information

Convention Paper 9870 Presented at the 143 rd Convention 2017 October 18 21, New York, NY, USA

Convention Paper 9870 Presented at the 143 rd Convention 2017 October 18 21, New York, NY, USA Audio Engineering Society Convention Paper 987 Presented at the 143 rd Convention 217 October 18 21, New York, NY, USA This convention paper was selected based on a submitted abstract and 7-word precis

More information

ROOM AND CONCERT HALL ACOUSTICS MEASUREMENTS USING ARRAYS OF CAMERAS AND MICROPHONES

ROOM AND CONCERT HALL ACOUSTICS MEASUREMENTS USING ARRAYS OF CAMERAS AND MICROPHONES ROOM AND CONCERT HALL ACOUSTICS The perception of sound by human listeners in a listening space, such as a room or a concert hall is a complicated function of the type of source sound (speech, oration,

More information

New acoustical techniques for measuring spatial properties in concert halls

New acoustical techniques for measuring spatial properties in concert halls New acoustical techniques for measuring spatial properties in concert halls LAMBERTO TRONCHIN and VALERIO TARABUSI DIENCA CIARM, University of Bologna, Italy http://www.ciarm.ing.unibo.it Abstract: - The

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Psychological and Physiological Acoustics Session 2aPPa: Binaural Hearing

More information

III. Publication III. c 2005 Toni Hirvonen.

III. Publication III. c 2005 Toni Hirvonen. III Publication III Hirvonen, T., Segregation of Two Simultaneously Arriving Narrowband Noise Signals as a Function of Spatial and Frequency Separation, in Proceedings of th International Conference on

More information

MANY emerging applications require the ability to render

MANY emerging applications require the ability to render IEEE TRANSACTIONS ON MULTIMEDIA, VOL. 6, NO. 4, AUGUST 2004 553 Rendering Localized Spatial Audio in a Virtual Auditory Space Dmitry N. Zotkin, Ramani Duraiswami, Member, IEEE, and Larry S. Davis, Fellow,

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Architectural Acoustics Session 2aAAa: Adapting, Enhancing, and Fictionalizing

More information

Final Exam Study Guide: Introduction to Computer Music Course Staff April 24, 2015

Final Exam Study Guide: Introduction to Computer Music Course Staff April 24, 2015 Final Exam Study Guide: 15-322 Introduction to Computer Music Course Staff April 24, 2015 This document is intended to help you identify and master the main concepts of 15-322, which is also what we intend

More information

Introduction. 1.1 Surround sound

Introduction. 1.1 Surround sound Introduction 1 This chapter introduces the project. First a brief description of surround sound is presented. A problem statement is defined which leads to the goal of the project. Finally the scope of

More information

Virtual Acoustic Space as Assistive Technology

Virtual Acoustic Space as Assistive Technology Multimedia Technology Group Virtual Acoustic Space as Assistive Technology Czech Technical University in Prague Faculty of Electrical Engineering Department of Radioelectronics Technická 2 166 27 Prague

More information

Binaural auralization based on spherical-harmonics beamforming

Binaural auralization based on spherical-harmonics beamforming Binaural auralization based on spherical-harmonics beamforming W. Song a, W. Ellermeier b and J. Hald a a Brüel & Kjær Sound & Vibration Measurement A/S, Skodsborgvej 7, DK-28 Nærum, Denmark b Institut

More information

NAME STUDENT # ELEC 484 Audio Signal Processing. Midterm Exam July Listening test

NAME STUDENT # ELEC 484 Audio Signal Processing. Midterm Exam July Listening test NAME STUDENT # ELEC 484 Audio Signal Processing Midterm Exam July 2008 CLOSED BOOK EXAM Time 1 hour Listening test Choose one of the digital audio effects for each sound example. Put only ONE mark in each

More information

SPAT. Binaural Encoding Tool. Multiformat Room Acoustic Simulation & Localization Processor. Flux All rights reserved

SPAT. Binaural Encoding Tool. Multiformat Room Acoustic Simulation & Localization Processor. Flux All rights reserved SPAT Multiformat Room Acoustic Simulation & Localization Processor by by Binaural Encoding Tool Flux 2009. All rights reserved Introduction Auditory scene perception Localisation Binaural technology Virtual

More information

The analysis of multi-channel sound reproduction algorithms using HRTF data

The analysis of multi-channel sound reproduction algorithms using HRTF data The analysis of multichannel sound reproduction algorithms using HRTF data B. Wiggins, I. PatersonStephens, P. Schillebeeckx Processing Applications Research Group University of Derby Derby, United Kingdom

More information

SPATIALISATION IN AUDIO AUGMENTED REALITY USING FINGER SNAPS

SPATIALISATION IN AUDIO AUGMENTED REALITY USING FINGER SNAPS 1 SPATIALISATION IN AUDIO AUGMENTED REALITY USING FINGER SNAPS H. GAMPER and T. LOKKI Department of Media Technology, Aalto University, P.O.Box 15400, FI-00076 Aalto, FINLAND E-mail: [Hannes.Gamper,ktlokki]@tml.hut.fi

More information

Audio Engineering Society. Convention Paper. Presented at the 116th Convention 2004 May 8 11 Berlin, Germany

Audio Engineering Society. Convention Paper. Presented at the 116th Convention 2004 May 8 11 Berlin, Germany Audio Engineering Society Convention Paper Presented at the 116th Convention 2004 May 8 11 Berlin, Germany This convention paper has been reproduced from the author's advance manuscript, without editing,

More information

Acoustics Research Institute

Acoustics Research Institute Austrian Academy of Sciences Acoustics Research Institute Spatial SpatialHearing: Hearing: Single SingleSound SoundSource Sourcein infree FreeField Field Piotr PiotrMajdak Majdak&&Bernhard BernhardLaback

More information

Measuring impulse responses containing complete spatial information ABSTRACT

Measuring impulse responses containing complete spatial information ABSTRACT Measuring impulse responses containing complete spatial information Angelo Farina, Paolo Martignon, Andrea Capra, Simone Fontana University of Parma, Industrial Eng. Dept., via delle Scienze 181/A, 43100

More information

A Toolkit for Customizing the ambix Ambisonics-to- Binaural Renderer

A Toolkit for Customizing the ambix Ambisonics-to- Binaural Renderer A Toolkit for Customizing the ambix Ambisonics-to- Binaural Renderer 143rd AES Convention Engineering Brief 403 Session EB06 - Spatial Audio October 21st, 2017 Joseph G. Tylka (presenter) and Edgar Y.

More information

Modeling Diffraction of an Edge Between Surfaces with Different Materials

Modeling Diffraction of an Edge Between Surfaces with Different Materials Modeling Diffraction of an Edge Between Surfaces with Different Materials Tapio Lokki, Ville Pulkki Helsinki University of Technology Telecommunications Software and Multimedia Laboratory P.O.Box 5400,

More information

Psychoacoustic Cues in Room Size Perception

Psychoacoustic Cues in Room Size Perception Audio Engineering Society Convention Paper Presented at the 116th Convention 2004 May 8 11 Berlin, Germany 6084 This convention paper has been reproduced from the author s advance manuscript, without editing,

More information

AN AUDITORILY MOTIVATED ANALYSIS METHOD FOR ROOM IMPULSE RESPONSES

AN AUDITORILY MOTIVATED ANALYSIS METHOD FOR ROOM IMPULSE RESPONSES Proceedings of the COST G-6 Conference on Digital Audio Effects (DAFX-), Verona, Italy, December 7-9,2 AN AUDITORILY MOTIVATED ANALYSIS METHOD FOR ROOM IMPULSE RESPONSES Tapio Lokki Telecommunications

More information

Realtime auralization employing time-invariant invariant convolver

Realtime auralization employing time-invariant invariant convolver Realtime auralization employing a not-linear, not-time time-invariant invariant convolver Angelo Farina 1, Adriano Farina 2 1) Industrial Engineering Dept., University of Parma, Via delle Scienze 181/A

More information

Spatial Audio Reproduction: Towards Individualized Binaural Sound

Spatial Audio Reproduction: Towards Individualized Binaural Sound Spatial Audio Reproduction: Towards Individualized Binaural Sound WILLIAM G. GARDNER Wave Arts, Inc. Arlington, Massachusetts INTRODUCTION The compact disc (CD) format records audio with 16-bit resolution

More information

396 IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 19, NO. 2, FEBRUARY 2011

396 IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 19, NO. 2, FEBRUARY 2011 396 IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 19, NO. 2, FEBRUARY 2011 Obtaining Binaural Room Impulse Responses From B-Format Impulse Responses Using Frequency-Dependent Coherence

More information

A Virtual Car: Prediction of Sound and Vibration in an Interactive Simulation Environment

A Virtual Car: Prediction of Sound and Vibration in an Interactive Simulation Environment 2001-01-1474 A Virtual Car: Prediction of Sound and Vibration in an Interactive Simulation Environment Klaus Genuit HEAD acoustics GmbH Wade R. Bray HEAD acoustics, Inc. Copyright 2001 Society of Automotive

More information

Convention e-brief 433

Convention e-brief 433 Audio Engineering Society Convention e-brief 433 Presented at the 144 th Convention 2018 May 23 26, Milan, Italy This Engineering Brief was selected on the basis of a submitted synopsis. The author is

More information

The Human Auditory System

The Human Auditory System medial geniculate nucleus primary auditory cortex inferior colliculus cochlea superior olivary complex The Human Auditory System Prominent Features of Binaural Hearing Localization Formation of positions

More information

URBANA-CHAMPAIGN. CS 498PS Audio Computing Lab. 3D and Virtual Sound. Paris Smaragdis. paris.cs.illinois.

URBANA-CHAMPAIGN. CS 498PS Audio Computing Lab. 3D and Virtual Sound. Paris Smaragdis. paris.cs.illinois. UNIVERSITY ILLINOIS @ URBANA-CHAMPAIGN OF CS 498PS Audio Computing Lab 3D and Virtual Sound Paris Smaragdis paris@illinois.edu paris.cs.illinois.edu Overview Human perception of sound and space ITD, IID,

More information

Subband Analysis of Time Delay Estimation in STFT Domain

Subband Analysis of Time Delay Estimation in STFT Domain PAGE 211 Subband Analysis of Time Delay Estimation in STFT Domain S. Wang, D. Sen and W. Lu School of Electrical Engineering & Telecommunications University of ew South Wales, Sydney, Australia sh.wang@student.unsw.edu.au,

More information

Creating three dimensions in virtual auditory displays *

Creating three dimensions in virtual auditory displays * Salvendy, D Harris, & RJ Koubek (eds.), (Proc HCI International 2, New Orleans, 5- August), NJ: Erlbaum, 64-68. Creating three dimensions in virtual auditory displays * Barbara Shinn-Cunningham Boston

More information

SOPA version 2. Revised July SOPA project. September 21, Introduction 2. 2 Basic concept 3. 3 Capturing spatial audio 4

SOPA version 2. Revised July SOPA project. September 21, Introduction 2. 2 Basic concept 3. 3 Capturing spatial audio 4 SOPA version 2 Revised July 7 2014 SOPA project September 21, 2014 Contents 1 Introduction 2 2 Basic concept 3 3 Capturing spatial audio 4 4 Sphere around your head 5 5 Reproduction 7 5.1 Binaural reproduction......................

More information

Blind source separation and directional audio synthesis for binaural auralization of multiple sound sources using microphone array recordings

Blind source separation and directional audio synthesis for binaural auralization of multiple sound sources using microphone array recordings Blind source separation and directional audio synthesis for binaural auralization of multiple sound sources using microphone array recordings Banu Gunel, Huseyin Hacihabiboglu and Ahmet Kondoz I-Lab Multimedia

More information

Waves Nx VIRTUAL REALITY AUDIO

Waves Nx VIRTUAL REALITY AUDIO Waves Nx VIRTUAL REALITY AUDIO WAVES VIRTUAL REALITY AUDIO THE FUTURE OF AUDIO REPRODUCTION AND CREATION Today s entertainment is on a mission to recreate the real world. Just as VR makes us feel like

More information

Speech Compression. Application Scenarios

Speech Compression. Application Scenarios Speech Compression Application Scenarios Multimedia application Live conversation? Real-time network? Video telephony/conference Yes Yes Business conference with data sharing Yes Yes Distance learning

More information

Novel approaches towards more realistic listening environments for experiments in complex acoustic scenes

Novel approaches towards more realistic listening environments for experiments in complex acoustic scenes Novel approaches towards more realistic listening environments for experiments in complex acoustic scenes Janina Fels, Florian Pausch, Josefa Oberem, Ramona Bomhardt, Jan-Gerrit-Richter Teaching and Research

More information

Spatial Audio with the SoundScape Renderer

Spatial Audio with the SoundScape Renderer Spatial Audio with the SoundScape Renderer Matthias Geier, Sascha Spors Institut für Nachrichtentechnik, Universität Rostock {Matthias.Geier,Sascha.Spors}@uni-rostock.de Abstract The SoundScape Renderer

More information

Electric Audio Unit Un

Electric Audio Unit Un Electric Audio Unit Un VIRTUALMONIUM The world s first acousmonium emulated in in higher-order ambisonics Natasha Barrett 2017 User Manual The Virtualmonium User manual Natasha Barrett 2017 Electric Audio

More information

University of Huddersfield Repository

University of Huddersfield Repository University of Huddersfield Repository Lee, Hyunkook Capturing and Rendering 360º VR Audio Using Cardioid Microphones Original Citation Lee, Hyunkook (2016) Capturing and Rendering 360º VR Audio Using Cardioid

More information

Convention e-brief 400

Convention e-brief 400 Audio Engineering Society Convention e-brief 400 Presented at the 143 rd Convention 017 October 18 1, New York, NY, USA This Engineering Brief was selected on the basis of a submitted synopsis. The author

More information

Exploring Haptics in Digital Waveguide Instruments

Exploring Haptics in Digital Waveguide Instruments Exploring Haptics in Digital Waveguide Instruments 1 Introduction... 1 2 Factors concerning Haptic Instruments... 2 2.1 Open and Closed Loop Systems... 2 2.2 Sampling Rate of the Control Loop... 2 3 An

More information

Audio Engineering Society. Convention Paper. Presented at the 129th Convention 2010 November 4 7 San Francisco, CA, USA. Why Ambisonics Does Work

Audio Engineering Society. Convention Paper. Presented at the 129th Convention 2010 November 4 7 San Francisco, CA, USA. Why Ambisonics Does Work Audio Engineering Society Convention Paper Presented at the 129th Convention 2010 November 4 7 San Francisco, CA, USA The papers at this Convention have been selected on the basis of a submitted abstract

More information

Acquisition of spatial knowledge of architectural spaces via active and passive aural explorations by the blind

Acquisition of spatial knowledge of architectural spaces via active and passive aural explorations by the blind Acquisition of spatial knowledge of architectural spaces via active and passive aural explorations by the blind Lorenzo Picinali Fused Media Lab, De Montfort University, Leicester, UK. Brian FG Katz, Amandine

More information

HRIR Customization in the Median Plane via Principal Components Analysis

HRIR Customization in the Median Plane via Principal Components Analysis 한국소음진동공학회 27 년춘계학술대회논문집 KSNVE7S-6- HRIR Customization in the Median Plane via Principal Components Analysis 주성분분석을이용한 HRIR 맞춤기법 Sungmok Hwang and Youngjin Park* 황성목 박영진 Key Words : Head-Related Transfer

More information

Envelopment and Small Room Acoustics

Envelopment and Small Room Acoustics Envelopment and Small Room Acoustics David Griesinger Lexicon 3 Oak Park Bedford, MA 01730 Copyright 9/21/00 by David Griesinger Preview of results Loudness isn t everything! At least two additional perceptions:

More information

A CLOSER LOOK AT THE REPRESENTATION OF INTERAURAL DIFFERENCES IN A BINAURAL MODEL

A CLOSER LOOK AT THE REPRESENTATION OF INTERAURAL DIFFERENCES IN A BINAURAL MODEL 9th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, -7 SEPTEMBER 7 A CLOSER LOOK AT THE REPRESENTATION OF INTERAURAL DIFFERENCES IN A BINAURAL MODEL PACS: PACS:. Pn Nicolas Le Goff ; Armin Kohlrausch ; Jeroen

More information

MULTICHANNEL CONTROL OF SPATIAL EXTENT THROUGH SINUSOIDAL PARTIAL MODULATION (SPM)

MULTICHANNEL CONTROL OF SPATIAL EXTENT THROUGH SINUSOIDAL PARTIAL MODULATION (SPM) MULTICHANNEL CONTROL OF SPATIAL EXTENT THROUGH SINUSOIDAL PARTIAL MODULATION (SPM) Andrés Cabrera Media Arts and Technology University of California Santa Barbara, USA andres@mat.ucsb.edu Gary Kendall

More information

A virtual headphone based on wave field synthesis

A virtual headphone based on wave field synthesis Acoustics 8 Paris A virtual headphone based on wave field synthesis K. Laumann a,b, G. Theile a and H. Fastl b a Institut für Rundfunktechnik GmbH, Floriansmühlstraße 6, 8939 München, Germany b AG Technische

More information

Personalized 3D sound rendering for content creation, delivery, and presentation

Personalized 3D sound rendering for content creation, delivery, and presentation Personalized 3D sound rendering for content creation, delivery, and presentation Federico Avanzini 1, Luca Mion 2, Simone Spagnol 1 1 Dep. of Information Engineering, University of Padova, Italy; 2 TasLab

More information

Predicting localization accuracy for stereophonic downmixes in Wave Field Synthesis

Predicting localization accuracy for stereophonic downmixes in Wave Field Synthesis Predicting localization accuracy for stereophonic downmixes in Wave Field Synthesis Hagen Wierstorf Assessment of IP-based Applications, T-Labs, Technische Universität Berlin, Berlin, Germany. Sascha Spors

More information

PERSONALIZED HEAD RELATED TRANSFER FUNCTION MEASUREMENT AND VERIFICATION THROUGH SOUND LOCALIZATION RESOLUTION

PERSONALIZED HEAD RELATED TRANSFER FUNCTION MEASUREMENT AND VERIFICATION THROUGH SOUND LOCALIZATION RESOLUTION PERSONALIZED HEAD RELATED TRANSFER FUNCTION MEASUREMENT AND VERIFICATION THROUGH SOUND LOCALIZATION RESOLUTION Michał Pec, Michał Bujacz, Paweł Strumiłło Institute of Electronics, Technical University

More information

Aalborg Universitet Usage of measured reverberation tail in a binaural room impulse response synthesis General rights Take down policy

Aalborg Universitet Usage of measured reverberation tail in a binaural room impulse response synthesis General rights Take down policy Aalborg Universitet Usage of measured reverberation tail in a binaural room impulse response synthesis Markovic, Milos; Olesen, Søren Krarup; Madsen, Esben; Hoffmann, Pablo Francisco F.; Hammershøi, Dorte

More information

Sound Source Localization using HRTF database

Sound Source Localization using HRTF database ICCAS June -, KINTEX, Gyeonggi-Do, Korea Sound Source Localization using HRTF database Sungmok Hwang*, Youngjin Park and Younsik Park * Center for Noise and Vibration Control, Dept. of Mech. Eng., KAIST,

More information

3D sound in the telepresence project BEAMING Olesen, Søren Krarup; Markovic, Milos; Madsen, Esben; Hoffmann, Pablo Francisco F.; Hammershøi, Dorte

3D sound in the telepresence project BEAMING Olesen, Søren Krarup; Markovic, Milos; Madsen, Esben; Hoffmann, Pablo Francisco F.; Hammershøi, Dorte Aalborg Universitet 3D sound in the telepresence project BEAMING Olesen, Søren Krarup; Markovic, Milos; Madsen, Esben; Hoffmann, Pablo Francisco F.; Hammershøi, Dorte Published in: Proceedings of BNAM2012

More information

Aalborg Universitet. Published in: Acustica United with Acta Acustica. Publication date: Document Version Early version, also known as pre-print

Aalborg Universitet. Published in: Acustica United with Acta Acustica. Publication date: Document Version Early version, also known as pre-print Aalborg Universitet Setup for demonstrating interactive binaural synthesis for telepresence applications Madsen, Esben; Olesen, Søren Krarup; Markovic, Milos; Hoffmann, Pablo Francisco F.; Hammershøi,

More information

CONTROL OF PERCEIVED ROOM SIZE USING SIMPLE BINAURAL TECHNOLOGY. Densil Cabrera

CONTROL OF PERCEIVED ROOM SIZE USING SIMPLE BINAURAL TECHNOLOGY. Densil Cabrera CONTROL OF PERCEIVED ROOM SIZE USING SIMPLE BINAURAL TECHNOLOGY Densil Cabrera Faculty of Architecture, Design and Planning University of Sydney NSW 26, Australia densil@usyd.edu.au ABSTRACT The localization

More information

Audio Engineering Society. Convention Paper. Presented at the 131st Convention 2011 October New York, NY, USA

Audio Engineering Society. Convention Paper. Presented at the 131st Convention 2011 October New York, NY, USA Audio Engineering Society Convention Paper Presented at the 131st Convention 2011 October 20 23 New York, NY, USA This Convention paper was selected based on a submitted abstract and 750-word precis that

More information

Fundamentals of Digital Audio *

Fundamentals of Digital Audio * Digital Media The material in this handout is excerpted from Digital Media Curriculum Primer a work written by Dr. Yue-Ling Wong (ylwong@wfu.edu), Department of Computer Science and Department of Art,

More information

The Use of 3-D Audio in a Synthetic Environment: An Aural Renderer for a Distributed Virtual Reality System

The Use of 3-D Audio in a Synthetic Environment: An Aural Renderer for a Distributed Virtual Reality System The Use of 3-D Audio in a Synthetic Environment: An Aural Renderer for a Distributed Virtual Reality System Stephen Travis Pope and Lennart E. Fahlén DSLab Swedish Institute for Computer Science (SICS)

More information

WARPED FILTER DESIGN FOR THE BODY MODELING AND SOUND SYNTHESIS OF STRING INSTRUMENTS

WARPED FILTER DESIGN FOR THE BODY MODELING AND SOUND SYNTHESIS OF STRING INSTRUMENTS NORDIC ACOUSTICAL MEETING 12-14 JUNE 1996 HELSINKI WARPED FILTER DESIGN FOR THE BODY MODELING AND SOUND SYNTHESIS OF STRING INSTRUMENTS Helsinki University of Technology Laboratory of Acoustics and Audio

More information

6-channel recording/reproduction system for 3-dimensional auralization of sound fields

6-channel recording/reproduction system for 3-dimensional auralization of sound fields Acoust. Sci. & Tech. 23, 2 (2002) TECHNICAL REPORT 6-channel recording/reproduction system for 3-dimensional auralization of sound fields Sakae Yokoyama 1;*, Kanako Ueno 2;{, Shinichi Sakamoto 2;{ and

More information

Effects of Reverberation on Pitch, Onset/Offset, and Binaural Cues

Effects of Reverberation on Pitch, Onset/Offset, and Binaural Cues Effects of Reverberation on Pitch, Onset/Offset, and Binaural Cues DeLiang Wang Perception & Neurodynamics Lab The Ohio State University Outline of presentation Introduction Human performance Reverberation

More information

Sound localization Sound localization in audio-based games for visually impaired children

Sound localization Sound localization in audio-based games for visually impaired children Sound localization Sound localization in audio-based games for visually impaired children R. Duba B.W. Kootte Delft University of Technology SOUND LOCALIZATION SOUND LOCALIZATION IN AUDIO-BASED GAMES

More information