Array processing for echo cancellation in the measurement of Head-Related Transfer Functions

Jose J. Lopez, Sergio Martinez-Sanchez and Pablo Gutierrez-Parera
ITEAM Institute, Universitat Politècnica de València, 46022, Valencia, Spain.

Summary

The use of Head-Related Transfer Functions (HRTF) is necessary for individualizing binaural sound for each subject so as to provide a better experience. For measuring HRTFs, motorized positioning systems and anechoic chambers are usually employed, which implies the use of expensive and complex facilities that are not always available. Measurement in normal rooms would be desirable, but it would introduce reflections that are not part of an HRTF. In this paper, a method is proposed to cancel these echoes by means of signal processing techniques, allowing measurements in non-anechoic rooms. The sound field is captured with a spherical array of microphones placed where the listener will be situated, obtaining the room impulse response (RIR) between each loudspeaker position and each microphone of the array. Array techniques such as Plane Wave Decomposition (PWD) allow the directions of the reflections to be discriminated. The proposed system is a sub-band method, in which the higher-frequency part of the HRTF is obtained by cropping the first segment of the RIR before the first reflection, and the lower-frequency part by using array techniques. Two spherical microphone arrays of different diameter will be compared for this task.

PACS no. Pn, Fg

1. Introduction

The future of spatial sound is bound up with binaural audio, as the headphone market has experienced a remarkable expansion [1] and more and more audiovisual content is being consumed through headphone listening. Virtual and augmented reality and other immersive technologies also use headphones to reproduce 3D audio content with the binaural approach. Binaural sound employs Head-Related Transfer Functions (HRTF) to generate the spatialization of sounds for headphone listening. The HRTF captures the transformation that sound from a free-field source undergoes on its way to the ear canals of a subject. The contributions of the head, the torso and, significantly, the outer ear are registered in the HRTF [2]. Due to the strong influence of the anthropometric characteristics of the listener (size of the head, position of the ears, shape of the pinna, etc.), HRTFs present tailored features that make them specific to each subject. Therefore, for a better experience, binaural sound must be individualized for each subject through the use of their personal HRTF. Individualized HRTFs provide a more immersive and natural listening experience [3]. It is possible to accurately simulate virtual sound sources at any position in space simply by convolving the source signal with the individualized HRTF.

There are different techniques that try to obtain personalized HRTFs: measuring the subject's HRTF directly in an anechoic chamber, deriving it from anthropometric data (including synthesis with numerical methods), approaches based on subjective perception, etc. [4]. The traditional method for measuring HRTFs was described by Blauert in [5]. It uses a single loudspeaker mounted on a positioning system and measures the acoustic transfer path between the loudspeaker and two microphones inserted in the subject's ears. Using the positioning system, the loudspeaker is moved to different points on a virtual sphere around the subject, and the acoustic paths from these locations are measured.
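As an illustration of the convolution-based spatialization mentioned above, the following minimal MATLAB sketch renders a mono source with a pair of head-related impulse responses (HRIR). The source signal and the HRIR pair are crude placeholders, not data from the measurement system described in this paper.

```matlab
% Minimal sketch of convolution-based binaural rendering (illustrative only).
fs     = 48000;
source = 0.1 * randn(2*fs, 1);                  % placeholder mono source (2 s of noise)

% Placeholder HRIR pair: interaural delay and level differences only.
% A real renderer would use the individually measured HRIRs for the
% desired direction.
hrir_left  = [zeros(10,1); 1.0; zeros(117,1)];
hrir_right = [zeros(25,1); 0.6; zeros(102,1)];

left  = conv(source, hrir_left);                % left-ear signal
right = conv(source, hrir_right);               % right-ear signal

binaural = [left, right];
binaural = binaural / max(abs(binaural(:)));    % normalize to avoid clipping
audiowrite('binaural_demo.wav', binaural, fs);
```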
Although the minimum audible angle is around 1-2° for frontal sources [6], an HRTF set resolution of around 5° in the horizontal plane and 10° in the vertical plane is sufficient for common applications. Even with this rule, the number of HRTF measurement points runs into the hundreds or thousands. Different interpolation techniques have been proposed which, using fewer points, produce successful results in practice. In any case, because of the large number of measurement points, the total time for a personal HRTF recording session could extend to more than one hour with the traditional method, where only one loudspeaker or just a few are employed.

Such long sessions cause fatigue and discomfort in the subjects, who must not move in order to avoid measurement errors. To reduce the measurement time, the most obvious approach is to install multiple loudspeakers at multiple positions, saving the time required for positioning-system movements. However, one loudspeaker per measuring position is impractical, because it would require about a thousand units. Therefore, hybrid combinations have been used, with multiple loudspeakers covering one polar angle and a single-axis positioning system covering the other. The most common hybrid method uses as many loudspeakers as there are elevations to be measured, installed in an arc around the listener. The single-axis positioning system then rotates the arc structure around the listener, or rotates the listener seated on a turntable [7, 8]. There are also methods for the simultaneous measurement of different HRTFs using multiple loudspeakers. A multiple exponential sweep method (MESM) was proposed in [8] and refined in [9], which allows sweep signals to be played simultaneously through several loudspeakers, saving even more time.

The HRTF refers to a free-field measurement, so measuring inside a room introduces reflections that are not part of the HRTF. Anechoic conditions are therefore generally required for the HRTF measurement environment. The measurement systems described above have to be installed in an anechoic chamber, resulting in a complex, very expensive and not widely available installation, restricting these measurements to research laboratories and keeping these technologies away from the general public. These setbacks led to the appearance of new and innovative methods for individualizing HRTFs based on databases of anthropometric data and on numerical methods. However, even though they get rid of anechoic chambers and positioning systems, these methods still face the obstacle of matching the physical parameters of the human body. Consequently, the possibility of employing a non-anechoic room with multiple loudspeaker arrays in different planes for HRTF measurement is considered here as an attempt to bring spatial sound closer to the general public. Additional processing, e.g. the Spatial Decomposition Method (SDM) or Plane Wave Decomposition (PWD), is needed in order to detect room reflections and cancel them to obtain precise and clean HRTFs.

In this paper, the deployment of a complete system of loudspeaker arrays in a non-anechoic room is presented, together with an echo cancellation system. Section 2 explains the aims of the project and the hardware set-up of the proposed system. The microphone arrays employed, as well as the software environment based on Plane Wave Decomposition for discriminating the direction of the reflections, are described in section 3. Section 4 covers the methodology followed to cancel the echoes in different measurement scenarios, describing the sub-band processing that allows a full-band method to be established. The results and their discussion are presented in section 5, highlighting the key points and the problems that could arise from the method. Finally, section 6 summarizes the main idea of this paper and proposes future lines of work to improve the system.
2. Objective and Set-Up

In the previous section, the main drawbacks of traditional HRTF measurement methods have been noted, and the synthesis of individualized HRTFs has been mentioned as a possible alternative. Until such methods are precise enough, however, measuring the HRTF remains the best option for individualization. Moreover, in order to continue working on intelligent methods for HRTF individualization based on deep-learning techniques [10, 11], large collections of HRTFs would be needed for training the systems.

In order to develop a feasible, easily accessible and comfortable HRTF measurement installation for intensive use, a normal room instead of an anechoic chamber would be desirable. In this paper, a set-up and a methodology for HRTF measurement in a non-anechoic room are presented. The method encompasses two important points:
- the use of loudspeaker arrays in different planes, in order to avoid a complex positioning system;
- the development of an echo cancellation system to remove undesired reflections inside the room.
In this manner, it is possible to dispense with anechoic chambers and provide an alternative to the mechanical positioning system. Both elements contribute to a set-up that saves measurement time, hardware complexity and costs.

The set-up previously described in [12] has been used as the basis for an evolved system. It consists of a circular array of 72 loudspeakers with a radius of 2 meters, composed of self-amplified M-Audio BX5 D2 monitors, providing a resolution of 5° in azimuth. This array only provides information for the horizontal plane. As reflected in Figure 1, the set-up has evolved to include elevation. A circular array of 1 meter radius has been deployed on the floor, concentric with the array of 72 loudspeakers. In this case, 8 loudspeakers have been used, placed every 45° in azimuth and tilted at 45° with respect to the floor in order to point towards the listener position. Another circular array of the same radius, also composed of 8 loudspeakers, has been suspended from the ceiling, with each loudspeaker at an angle of 45° to the ceiling.

Figure 1. 3D model of the room with the complete system.

Finally, a single loudspeaker is placed at an elevation of 90° (zenithal position) at the center of all the arrays, oriented towards the listener position. The presented configuration makes it possible to measure four different elevation levels. The selected positions and radii of the arrays coincide with the reflection points on the floor and ceiling and, as described in section 4, this will be useful for eliminating those echoes. Nevertheless, it is not possible to completely describe the whole evolution of the HRTF in elevation with these four levels. New proposals for future set-ups therefore involve adding several loudspeakers at intermediate elevations; for instance, it would be interesting to place additional loudspeakers at elevations between 0° and 45°. On the other hand, intelligent interpolation systems based on neural networks and deep learning are being developed in the literature, and it would be profitable to take advantage of them to interpolate the aforementioned intermediate points, leaving the system as described or adding only a few loudspeakers. In fact, these techniques would reduce the hardware requirements, since it would not be necessary to place a loudspeaker every 5° just to improve interpolation. At the moment, the system is composed of 89 loudspeakers. We expect that, by adding just a few more intermediate-elevation loudspeakers in an improved prototype and employing the interpolation techniques described above, an accurate system for fast HRTF measurement can be accomplished with fewer than 100 loudspeakers.

3. Spherical microphone arrays and Plane-Wave Decomposition

To compensate the echoes inside the room in the HRTF measurement, it is necessary to separate the reflections from the direct sound at the listener's position for each of the loudspeakers. In this project, the sound field is analyzed using spherical microphone arrays placed at the position where the listener would be, which allows us to register the echoes for further processing.
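As a compact summary of the loudspeaker set-up described in section 2, the short MATLAB sketch below enumerates the measurement directions it covers, seen from the listener position. The sign convention (negative elevation for the floor ring) is an assumption made for illustration.

```matlab
% Measurement directions spanned by the loudspeaker arrays (degrees).
ring_main    = [(0:5:355)',   zeros(72,1)];   % 72 loudspeakers, horizontal ring (0 deg)
ring_floor   = [(0:45:315)', -45*ones(8,1)];  % 8 loudspeakers on the floor ring (-45 deg)
ring_ceiling = [(0:45:315)',  45*ones(8,1)];  % 8 loudspeakers on the ceiling ring (+45 deg)
zenith       = [0, 90];                       % single zenithal loudspeaker (+90 deg)

directions = [ring_main; ring_floor; ring_ceiling; zenith];          % [azimuth, elevation]
fprintf('Total number of loudspeakers: %d\n', size(directions, 1));  % prints 89
```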

Figure 2. Deployment of the proposed system inside the room.

Microphone arrays

In the sound field analysis process, the sound field is captured using spatially distributed microphone arrays, either on a real or on an imaginary surface. As stated in [13], several systems with different configurations have been designed in recent years, such as the scanning/sequential array developed at the Cologne University of Applied Sciences (VariSphear), which allows as many sampling positions as desired to be captured with a single microphone. Other configurations, such as spherical arrangements of microphones over open or rigid spheres, have also been examined; they provide a limited set of measurements (equal to the number of sensors distributed over the body) but save time in the process (simultaneous capture) and take advantage of the rotational symmetry, making them suitable for measuring 3D sound fields.

The em32 Eigenmike microphone array (Figure 3, right part) from mh acoustics [14] is composed of 32 individual electret microphone capsules inserted in a rigid sphere of 4.2 cm radius, providing better resolution at higher frequencies (up to a certain frequency, known as the limit frequency of operation). A second, larger array (20 cm radius) with an open-sphere configuration was built and customized (Figure 3, left part) and is being tested for this study. This system is custom-made, with 3D-printed pieces at the vertices acting as joints between the edges and as supports for the microphones; it has the same sensor density as the em32 Eigenmike, but is expected to provide better resolution at lower frequencies. Both arrays have a microphone arrangement that follows the shape of a pentakis dodecahedron.

The selection of a microphone array involves finding the best trade-off among different factors and characteristics, depending on the application. For the purpose of our study, two arrays of different diameter will be used in the measurement process to compare the results in terms of frequency range and resolution, as will be shown in section 5.

Plane Wave Decomposition software

Techniques to decompose the sound field into its different possible directions are necessary to help determine the origin of the echoes or reflections inside the room. The Plane Wave Decomposition (PWD) method has been selected for studying direction-dependent echo cancellation, as it extracts the plane waves that compose the sound field by means of the spherical harmonics domain [15]. A sound field analysis toolbox for MATLAB, called SOFiA [16], available as open source, has been employed in the project.
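To give a rough idea of the trade-off between the two array sizes described above, the following sketch applies the common rule of thumb for spherical arrays that places the upper (spatial aliasing) limit of an order-N array near kr ≈ N. This rule, and the assumed maximum order of 4 for 32 capsules, are generic textbook values, not figures taken from the paper.

```matlab
% Rule-of-thumb operating limits for the two spherical arrays (kr ~= N).
c = 343;                 % speed of sound (m/s)
N = 4;                   % maximum spherical harmonic order for 32 capsules, (N+1)^2 <= 32

r_eigenmike = 0.042;     % em32 Eigenmike, rigid sphere (m)
r_custom    = 0.20;      % custom-made open-sphere array (m)

f_alias = @(r) N * c / (2*pi*r);       % frequency where kr = N
fprintf('em32 aliasing limit:  ~%.0f Hz\n', f_alias(r_eigenmike));   % ~5200 Hz
fprintf('20-cm array limit:    ~%.0f Hz\n', f_alias(r_custom));      % ~1100 Hz
```

The larger radius shifts the usable band downwards, which is consistent with the expectation that the 20-cm array performs better at low frequencies while the Eigenmike covers higher frequencies.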

Figure 3. Spherical microphone arrays. On the left, the custom-made 20-cm microphone array in open-sphere configuration; on the right, the em32 Eigenmike microphone array.

Figure 4. MATLAB interface for the detection and management of echoes inside the room, based on the sound field measured by the spherical arrays.

The toolbox is structured in different modules, each constituting a step in the PWD process. By varying the parameters involved in the method, it is possible to adjust it for each microphone array and obtain the impinging direction of the sound in azimuth and elevation, as well as the RIR in each possible direction with a resolution of 1° in each plane, giving a dense grid of possible plane waves. To take full advantage of this tool, a GUI for MATLAB has been developed. The program, as can be observed in Figure 4, allows the echoes to be studied through the processing of the sound field captured by the microphone arrays. It incorporates the option of selecting and loading the measurement dataset corresponding to different scenarios and then choosing the loudspeaker in the direction under study. The corresponding group of Room Impulse Responses is then loaded and plotted. In the graph, two cursors are available to select the portion of the RIR to process and to obtain the direction of arrival of that sound using the SOFiA toolbox. Moreover, two modes of processing are available: for a specified single frequency, or integrating over the whole frequency range. A colored sphere is displayed, its intensity reflecting the distribution of the arrival directions of the reflections at the array.

4. Methodology and case studies

The underlying idea of this project is to obtain HRTF measurements free from the echoes inside the non-anechoic room, mainly those originating from the floor or the walls. By cropping the first part of the impulse response before the first reflection, it is possible to keep only the direct path. However, the resulting windowed impulse response is usually too short to provide enough resolution at low frequencies.

Sub-band processing

Since the system presented here is expected to work properly over the whole frequency range, and because of the low-frequency resolution problem of the cropping method, a sub-band technique has been chosen for the HRTF computation: the high part of the spectrum is calculated by cropping, whereas the low part is subjected to array processing in order to suppress the echoes.

The impulse responses are measured using the sine-sweep technique, in which a signal whose frequency grows exponentially with time is reproduced and then deconvolved in order to obtain the linear impulse response of the room [17]. Figure 5 shows a typical impulse response obtained in the measurement process. Some time after the direct sound arrives, an echo appears, which corresponds to a specific sample of the recorded data. Following Equation (1), it is possible to obtain the number of samples between the direct sound and the arrival of the first echo, as well as the corresponding cut-off frequency up to which array processing needs to be employed:

f_cutoff = 1 / t_delay,1st echo   (1)

where t_delay,1st echo is the delay of the first echo with respect to the direct sound. Note that, when a soft window is applied to cut the RIR, a safety margin is advisable, and a crossover frequency between the value calculated in Equation (1) and twice that value would be the most suitable option.
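For reference, a compact MATLAB sketch of the exponential sine-sweep measurement and deconvolution described above is given below, following the general approach compared in [17]. The sweep parameters and the placeholder "recorded" signal are illustrative; they are not the settings used in the actual measurements.

```matlab
% Exponential sine sweep and inverse filter (illustrative parameters).
fs = 48000;  T = 5;  f1 = 20;  f2 = 20000;
t  = (0:1/fs:T-1/fs)';
R  = log(f2/f1);                              % natural-log sweep rate

sweep = sin(2*pi*f1*T/R * (exp(t*R/T) - 1));  % exponential time-growing frequency
invf  = flipud(sweep) .* exp(-t*R/T);         % time-reversed, -6 dB/octave envelope

% Placeholder for the signal recorded at one array microphone; in the real
% measurement this is the response to the sweep played by one loudspeaker.
recorded = sweep;

rir = conv(recorded, invf);                   % deconvolution: the linear RIR appears
rir = rir / max(abs(rir));                    % after the harmonic-distortion artefacts
```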
Figure 6 shows the sub-band processing proposed in this paper. In the upper branch, the measured response is processed for low frequencies: a low-pass filter is applied and the echo cancellation is carried out, based on the room acoustics information provided by the microphone array and on PWD techniques.

Figure 5. Room Impulse Response example and echoes detected.

Figure 6. Sub-band processing flow diagram.

Figure 7. Set-up scheme for case study I.

In the lower branch, the signal is processed for high frequencies: using the cropping method, it is windowed before the first reflection. Both processed signals are then summed, giving as a result an HRTF that should be equivalent to one measured in anechoic conditions.

Case I: Floor reflection

To describe the sub-band processing, the operation is illustrated through an example; the set-up is shown in Figure 7. At the position where the listener would be placed (the center of the circular array), a microphone array is set, and the Room Impulse Responses (RIR) are captured by the 32 sensors when the loudspeaker emits. The figure also shows the path followed by the direct sound to the array (denoted H1, corresponding to a distance of 2 meters) and the reflection on the floor (denoted H2). The distance in meters traveled by the wave until it reaches the floor (the first part of the reflection in H2, segment AB) is

AB = sqrt((2.005/2)^2 + (1.44)^2) ≈ 1.75 m.   (2)

The same distance computed in Equation (2) is traveled from point B (the reflection point on the floor) to point C (the arrival at the array). Multiplying by 2 and subtracting the length of H1, the path difference between the direct sound and the reflection is obtained:

diff_H1,H2 = 2·AB − 2.005 ≈ 1.50 m.   (3)

Translating the difference of Equation (3) into a time interval by dividing by the speed of sound:

t_H1,H2 = 1.50 m / (343 m/s) ≈ 4.4 ms.   (4)

The time interval obtained in Equation (4) corresponds to the term t_delay,1st echo in Equation (1). Applying Equation (1), and expressing the delay in samples by multiplying by a sampling frequency fs of 48 kHz:

f_cutoff = 1 / t_H1,H2 ≈ 227 Hz,   (5)

N_H1,H2 = t_H1,H2 · fs ≈ 211 samples.   (6)

According to the results of Equations (5) and (6), a frequency of 227 Hz is set as the minimum limit up to which array processing techniques must necessarily be applied, the interval of 211 samples being the distance in samples between the arrival of the direct sound and the first reflection from the floor.

Continuing with the method, a study of the sound field at the listener position is carried out. With the microphone array at the listener position, the acoustic channel between the loudspeaker and the listener position, H, can be considered a mixture of the contributions from all directions. Assuming there is only one reflection, H is composed of the direct sound path (H1) and the reflection from the floor (H2) arriving at 45°:

H(φ, θ1, f) = H1(φ, θ1, f) + H2(φ, θ2, f),   (7)

where φ is the azimuth angle and θ the elevation angle, the subscripts 1 and 2 indicating the directions of the direct sound and of the floor reflection, respectively.
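The arithmetic of Equations (2) to (6) can be reproduced with a few lines of MATLAB, using the distances quoted above; small rounding differences with respect to the values printed in the text are to be expected.

```matlab
% Case I: delay and cut-off frequency of the floor reflection (Eqs. 2-6).
fs   = 48000;                  % sampling frequency (Hz)
c    = 343;                    % speed of sound (m/s)
d_H1 = 2.005;                  % direct path length H1 (m)
h    = 1.44;                   % height of loudspeaker/array above the floor (m)

AB      = sqrt((d_H1/2)^2 + h^2);    % loudspeaker to floor reflection point (Eq. 2)
d_diff  = 2*AB - d_H1;               % extra path of the reflection (Eq. 3)
t_delay = d_diff / c;                % delay of the first echo (Eq. 4)

f_cutoff = 1 / t_delay;              % ~228 Hz, i.e. the ~227 Hz quoted in the text (Eq. 5)
N_delay  = round(t_delay * fs);      % ~211 samples between direct sound and echo (Eq. 6)
```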

Thanks to the spherical microphone arrays, as explained in section 3, it is possible to separate H1 and H2 by applying PWD techniques. If a dummy head is placed at the same position instead, the measured HRTF, denoted here HRTFm, is unprocessed and mixed with the two aforementioned contributions. It can be expressed and decomposed as follows:

HRTFm(φ, θ1, f) = H1(φ, θ1, f)·HRTF(φ, θ1, f) + H2(φ, θ2, f)·HRTF(φ, θ2, f).   (8)

The term HRTF(φ, θ1, f) in Equation (8) is the object of interest here. Solving and rearranging:

HRTF(φ, θ1, f) = HRTFm(φ, θ1, f) / H1(φ, θ1, f) − [H2(φ, θ2, f) / H1(φ, θ1, f)]·HRTF(φ, θ2, f).   (9)

As can be observed from Equation (9), in order to obtain HRTF(φ, θ1, f) it is necessary to know HRTF(φ, θ2, f). Nevertheless, the latter can be obtained more easily, since a wave arriving from below produces no floor reflection of its own or, if any is present, it can be eliminated with other techniques. On the other hand, H1 and H2 contain not only the acoustic channel but also the frequency response of the loudspeaker, Hloud(f), which can be compensated as well, since a free-field measurement of the loudspeaker is available.

Case II: Wall reflection

In the previous case, H1 only contains the loudspeaker frequency response, Hloud(f). In this second case, however, as shown in Figure 8, there is also a reflection from the wall behind the loudspeaker, which arrives from the same direction as the direct sound. This reflection, denoted H3, goes from point A (loudspeaker), reflects off the wall and arrives at point D (microphone array). H3 can be included in the expression of H1 as

H1(φ, θ1, f) = Hloud(f) + H3(φ, θ1, f).   (10)

The solution is therefore to correct the loudspeaker response as before, together with the cancellation of the wall reflection by inverting the signal. Moreover, the procedure to cancel the floor reflection is also needed, as it appears in this case as well.

Figure 8. Set-up scheme for case study II.

5. Results and discussion

Results for the previous case studies are presented here, comparing the measurements obtained with the two microphone arrays. Only the echo detection stage is illustrated here (for case I), since the decomposition and cancellation of the echoes will be presented in future work. The RIRs are processed with a window that includes the direct sound and the first reflection, as observed in Figure 9. Figures 10 and 11 show the IRs reconstructed after PWD processing with the MATLAB interface, using measurements from the Eigenmike and from the custom-made 20-cm array, respectively.

Figure 9. Interface results for case study I.

The direction of the loudspeaker corresponds, in the coordinate system of the toolbox, to 0° azimuth and 90° elevation, and the results of the PWD method confirm this direction. Figures 10 and 11 show the separation of the direct sound from the floor reflection, with the corresponding time interval in between. Results have been processed up to a frequency of 300 Hz (close to the cut-off frequency), with the Eigenmike measurements providing better resolution.
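Although the cancellation itself is left for future work, the following frequency-domain sketch shows one way Equation (9) could be evaluated for the low-frequency band. All the signals are synthetic placeholders, and the regularization of the division is an assumption added for numerical robustness; this is not an implementation taken from the paper.

```matlab
% Placeholder data so the sketch runs stand-alone; in practice h1 and h2 come
% from the PWD separation, and the HRTF terms from dummy-head measurements.
h1 = [1; zeros(255,1)];                         % direct path (idealized)
h2 = 0.5*[zeros(210,1); 1; zeros(45,1)];        % floor reflection ~211 samples later
hrtf_meas  = randn(256,1);                      % raw (mixed) dummy-head measurement
hrtf_floor = randn(256,1);                      % estimate of HRTF(phi, theta_2, f)

Nfft = 4096;
H1   = fft(h1,         Nfft);
H2   = fft(h2,         Nfft);
Hm   = fft(hrtf_meas,  Nfft);
Hfl  = fft(hrtf_floor, Nfft);

epsr  = 1e-3 * max(abs(H1));                    % simple regularization (assumption)
HRTF1 = Hm ./ (H1 + epsr) - (H2 ./ (H1 + epsr)) .* Hfl;   % Equation (9)

hrtf1 = real(ifft(HRTF1));                      % low-frequency band estimate, time domain
```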

Figure 10. H1 and H2 separation for case study I (measurements from the Eigenmike array).

Figure 11. H1 and H2 separation for case study I (measurements from the custom-made 20-cm array).

6. Conclusions

The deployment of a complete system based on loudspeaker arrays at different elevations for measuring HRTFs in non-anechoic rooms has been presented. In addition, an echo cancellation system has been proposed in order to detect and suppress the reflections of the room. To this end, Plane Wave Decomposition techniques have been used for detection, together with the developed sub-band processing method, which provides a full-band system. The results have shown that a clear detection of the echoes is possible with the aforementioned techniques, which will allow all of them to be cancelled with the described methods. The final aim of this project is to present this system as an advantageous alternative to traditional anechoic environments. As future work, all the methodology illustrated in the previous sections must be completely verified, especially the echo cancellation part, checking whether the result is an HRTF comparable to one measured in anechoic conditions. Moreover, intermediate loudspeakers and the use of interpolation techniques will be the subject of future papers.

Acknowledgement

The Spanish Ministry of Economy, Industry and Competitiveness supported this work under the project TEC R.

References

[1] Global Market Insights Inc.: Earphones And Headphones Market Size By Technology (Wired, Wireless), By Application (Consumer, Call Center, Industrial, Aviation, Construction, Public Safety), Industry Analysis Report, Regional Outlook. USA.
[2] H. Møller, M. F. Sorensen, D. Hammershoi, C. B. Jensen: Head-Related Transfer Functions of Human Subjects. J. Audio Eng. Soc., vol. 43, no. 5.
[3] R. Nicol: Binaural Technology. Audio Engineering Society Inc., New York, 2010.
[4] K. Sunder, J. He, E. Tan, W. S. Gan: Natural Sound Rendering for Headphones. IEEE Signal Process. Mag., March 2015.
[5] J. Blauert: Spatial Hearing: The Psychophysics of Human Sound Localization. MIT Press, Cambridge, 2nd edition.
[6] J. C. Makous, J. C. Middlebrooks: Two-Dimensional Sound Localization by Human Listeners. J. Acoust. Soc. Am., vol. 87.
[7] J. G. Richter, J. Fels: Evaluation of Localization Accuracy of Static Sources Using HRTFs from a Fast Measurement System. Acta Acustica united with Acustica, vol. 102(4).
[8] P. Majdak, P. Balazs, B. Laback: Multiple exponential sweep method for fast measurement of head-related transfer functions. J. Audio Eng. Soc., vol. 55(7/8).
[9] P. Dietrich, B. Masiero, M. Vorländer: On the optimization of the multiple exponential sweep method. J. Audio Eng. Soc., vol. 61(3).
[10] X. Zhong: Interpolation of Head-Related Transfer Functions Using Neural Network. Fifth International Conference on Intelligent Human-Machine Systems and Cybernetics.
[11] M. Xu, Z. Wang, Y. Gao: Interpolation of Minimum-Phase HRIRs Using RBF Artificial Neural Network. Fuzzy Systems and Data Mining III.
[12] J. J. Lopez, P. Gutierrez-Parera: Equipment for fast measurement of Head-Related Transfer Functions. Audio Engineering Society Convention, Berlin, May.
[13] B. Bernschütz, P. Stade, M. Ruhl: Sound Field Analysis in Room Acoustics. 27th Tonmeistertagung - VDT International Convention.
[14] mh acoustics: em32 Eigenmike microphone array release notes (v18.0). USA, June 2014.
[15] V. Pulkki, S. Delikaris-Manias, A. Politis: Parametric Time-Frequency Domain Spatial Audio. Wiley, Finland.
[16] B. Bernschütz, C. Pörschmann, S. Spors, S. Weinzierl: SOFiA Sound Field Analysis Toolbox. Proceedings of the International Conference on Spatial Audio, ICSA 2011, Detmold, Germany.
[17] S. Guy-Bart, E. Jean-Jacques, A. Dominique: Comparison of different impulse response measurement techniques. University of Liège, Belgium.


More information

Convention e-brief 310

Convention e-brief 310 Audio Engineering Society Convention e-brief 310 Presented at the 142nd Convention 2017 May 20 23 Berlin, Germany This Engineering Brief was selected on the basis of a submitted synopsis. The author is

More information

INVESTIGATING BINAURAL LOCALISATION ABILITIES FOR PROPOSING A STANDARDISED TESTING ENVIRONMENT FOR BINAURAL SYSTEMS

INVESTIGATING BINAURAL LOCALISATION ABILITIES FOR PROPOSING A STANDARDISED TESTING ENVIRONMENT FOR BINAURAL SYSTEMS 20-21 September 2018, BULGARIA 1 Proceedings of the International Conference on Information Technologies (InfoTech-2018) 20-21 September 2018, Bulgaria INVESTIGATING BINAURAL LOCALISATION ABILITIES FOR

More information

Aalborg Universitet Usage of measured reverberation tail in a binaural room impulse response synthesis General rights Take down policy

Aalborg Universitet Usage of measured reverberation tail in a binaural room impulse response synthesis General rights Take down policy Aalborg Universitet Usage of measured reverberation tail in a binaural room impulse response synthesis Markovic, Milos; Olesen, Søren Krarup; Madsen, Esben; Hoffmann, Pablo Francisco F.; Hammershøi, Dorte

More information

Speech and Audio Processing Recognition and Audio Effects Part 3: Beamforming

Speech and Audio Processing Recognition and Audio Effects Part 3: Beamforming Speech and Audio Processing Recognition and Audio Effects Part 3: Beamforming Gerhard Schmidt Christian-Albrechts-Universität zu Kiel Faculty of Engineering Electrical Engineering and Information Engineering

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 213 http://acousticalsociety.org/ IA 213 Montreal Montreal, anada 2-7 June 213 Psychological and Physiological Acoustics Session 3pPP: Multimodal Influences

More information

3D Sound Simulation over Headphones

3D Sound Simulation over Headphones Lorenzo Picinali (lorenzo@limsi.fr or lpicinali@dmu.ac.uk) Paris, 30 th September, 2008 Chapter for the Handbook of Research on Computational Art and Creative Informatics Chapter title: 3D Sound Simulation

More information

Holographic Measurement of the 3D Sound Field using Near-Field Scanning by Dave Logan, Wolfgang Klippel, Christian Bellmann, Daniel Knobloch

Holographic Measurement of the 3D Sound Field using Near-Field Scanning by Dave Logan, Wolfgang Klippel, Christian Bellmann, Daniel Knobloch Holographic Measurement of the 3D Sound Field using Near-Field Scanning 2015 by Dave Logan, Wolfgang Klippel, Christian Bellmann, Daniel Knobloch KLIPPEL, WARKWYN: Near field scanning, 1 AGENDA 1. Pros

More information

A binaural auditory model and applications to spatial sound evaluation

A binaural auditory model and applications to spatial sound evaluation A binaural auditory model and applications to spatial sound evaluation Ma r k o Ta k a n e n 1, Ga ë ta n Lo r h o 2, a n d Mat t i Ka r ja l a i n e n 1 1 Helsinki University of Technology, Dept. of Signal

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 213 http://acousticalsociety.org/ ICA 213 Montreal Montreal, Canada 2-7 June 213 Signal Processing in Acoustics Session 2aSP: Array Signal Processing for

More information

Binaural Hearing. Reading: Yost Ch. 12

Binaural Hearing. Reading: Yost Ch. 12 Binaural Hearing Reading: Yost Ch. 12 Binaural Advantages Sounds in our environment are usually complex, and occur either simultaneously or close together in time. Studies have shown that the ability to

More information

Blind source separation and directional audio synthesis for binaural auralization of multiple sound sources using microphone array recordings

Blind source separation and directional audio synthesis for binaural auralization of multiple sound sources using microphone array recordings Blind source separation and directional audio synthesis for binaural auralization of multiple sound sources using microphone array recordings Banu Gunel, Huseyin Hacihabiboglu and Ahmet Kondoz I-Lab Multimedia

More information

Three-dimensional sound field simulation using the immersive auditory display system Sound Cask for stage acoustics

Three-dimensional sound field simulation using the immersive auditory display system Sound Cask for stage acoustics Stage acoustics: Paper ISMRA2016-34 Three-dimensional sound field simulation using the immersive auditory display system Sound Cask for stage acoustics Kanako Ueno (a), Maori Kobayashi (b), Haruhito Aso

More information

Holographic Measurement of the Acoustical 3D Output by Near Field Scanning by Dave Logan, Wolfgang Klippel, Christian Bellmann, Daniel Knobloch

Holographic Measurement of the Acoustical 3D Output by Near Field Scanning by Dave Logan, Wolfgang Klippel, Christian Bellmann, Daniel Knobloch Holographic Measurement of the Acoustical 3D Output by Near Field Scanning 2015 by Dave Logan, Wolfgang Klippel, Christian Bellmann, Daniel Knobloch LOGAN,NEAR FIELD SCANNING, 1 Introductions LOGAN,NEAR

More information

Personalized 3D sound rendering for content creation, delivery, and presentation

Personalized 3D sound rendering for content creation, delivery, and presentation Personalized 3D sound rendering for content creation, delivery, and presentation Federico Avanzini 1, Luca Mion 2, Simone Spagnol 1 1 Dep. of Information Engineering, University of Padova, Italy; 2 TasLab

More information

Spatial audio is a field that

Spatial audio is a field that [applications CORNER] Ville Pulkki and Matti Karjalainen Multichannel Audio Rendering Using Amplitude Panning Spatial audio is a field that investigates techniques to reproduce spatial attributes of sound

More information

Soundfield Navigation using an Array of Higher-Order Ambisonics Microphones

Soundfield Navigation using an Array of Higher-Order Ambisonics Microphones Soundfield Navigation using an Array of Higher-Order Ambisonics Microphones AES International Conference on Audio for Virtual and Augmented Reality September 30th, 2016 Joseph G. Tylka (presenter) Edgar

More information

10. Phase Cycling and Pulsed Field Gradients Introduction to Phase Cycling - Quadrature images

10. Phase Cycling and Pulsed Field Gradients Introduction to Phase Cycling - Quadrature images 10. Phase Cycling and Pulsed Field Gradients 10.1 Introduction to Phase Cycling - Quadrature images The selection of coherence transfer pathways (CTP) by phase cycling or PFGs is the tool that allows the

More information