Analysis of Frontal Localization in Double Layered Loudspeaker Array System


Proceedings of 20th International Congress on Acoustics, ICA 2010, 23-27 August 2010, Sydney, Australia

Hyunjoo Chung (1), Sang Bae Chon (1), Jae-hyoun Yoo (2) and Koeng-Mo Sung (1)
(1) INMC, School of EECS, Seoul National University, Seoul, Korea
(2) Electronics and Telecommunications Research Institute, Daejeon, Korea

PACS: 43.10.Ce, 60.Fg, 66.Qp

ABSTRACT

For the reproduction of 3-D acoustic images, the horizontal plane of loudspeakers must be extended in the vertical direction with additional loudspeakers. In this paper, a 32-channel double layered loudspeaker array is proposed for the reproduction of sound images in front of a listener. For the localization of virtual sources in both azimuth and elevation, a WFS-based rendering method is used: first, a 2-D wave field is synthesized by a virtual loudspeaker array, and then each column of upper and lower loudspeaker pairs generates the virtual loudspeakers by vertical amplitude panning using calculated elevation vectors. A subjective listening test comparing the proposed method with 3-D VBAP was conducted to evaluate the frontal localization quality of this system.

INTRODUCTION

In spatial audio reproduction, loudspeakers are in most cases arranged in the horizontal plane, which restricts acoustic images to that plane. For the reproduction of 3-D acoustic images, the horizontal layer of loudspeakers must be extended in the vertical direction with additional loudspeakers. Surround formats such as 10.2 or 22.2 include an upper layer of loudspeakers to represent elevated sound sources [4]. Binaural techniques based on head-related transfer functions (HRTFs) can also render elevation, but they are constrained to headphone or earphone reproduction for a single listener. Wave Field Synthesis (WFS) is a holographic approach to reproducing a sound field, based on Huygens' principle.
The main idea of WFS is that any wave front can be represented by a superposition of elementary spherical waves. To drive the loudspeakers in a WFS system, the individual loudspeaker driving signals must be calculated. The mathematical basis is the Kirchhoff-Helmholtz integral, which can be simplified into the Rayleigh I integral for monopole secondary sources; by Rayleigh's representation theorem and discretization, the loudspeaker driving functions are obtained. In many WFS systems, as in the conventional 5.1-channel surround system, only the horizontal plane at the listener's ear level is of interest as the listening area, so the planar array of loudspeakers can be reduced to line arrays without affecting the shape of the wave fronts in the listener's horizontal plane. Circular or square arrays surrounding the entire listening area (horizontal plane) are therefore ideal loudspeaker arrangements for the reproduction of a sound field by WFS [2, 3]. However, as mentioned earlier, such arrangements can reproduce only sound sources projected onto the horizontal plane. Elevated image sources using HRTF elevation cues were researched in [5], but that work focused on rear and side reproduction.

In this paper, double layered loudspeaker arrays are used to reproduce virtual sources extended to the vertical plane in front of a listener. For the localization of virtual sources in both azimuth and elevation, a spatial sound rendering technique based on WFS is proposed: the normal WFS loudspeaker array is replaced by a vertically panned image array produced by two loudspeaker layers above and below the listener's ear level. Subjective listening tests were conducted to evaluate frontal localization in this system. Although localization experiments on various spatial reproduction systems have been reported in [1, 8], the WFS systems used there remained in the horizontal plane. In the first experiment, the accuracy of the listening test setup was evaluated.
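As a rough intuition for the driving signals mentioned above, a point source behind a discretized line array reduces, in the simplest view, to a per-loudspeaker delay and gain. The sketch below is an illustrative delay-and-attenuate stand-in, not the full Rayleigh I driving function of the paper (which additionally involves the 2.5-D amplitude correction and spatial windowing); all names and positions are assumptions for illustration.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def point_source_drives(source_xy, speaker_xys):
    """Per-loudspeaker (delay, gain) for a virtual point source behind a
    line array: each loudspeaker radiates the source signal delayed by the
    propagation time from the source and attenuated with distance.
    Positions are (x, y) in metres on the horizontal plane."""
    drives = []
    for sp in speaker_xys:
        r = math.dist(source_xy, sp)
        delay = r / SPEED_OF_SOUND      # propagation delay in seconds
        gain = 1.0 / max(r, 1e-6)       # ~spherical spreading loss
        drives.append((delay, gain))
    return drives
```

A full WFS implementation would add the stationary-phase amplitude factor and tapering at the array ends; this sketch only captures the delay/attenuation structure that shapes the synthesized wave front.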
In the second experiment, a frontal localization test using 3-D vector base amplitude panning (VBAP) [7] was carried out for comparison with the proposed rendering method. In the last experiment, the frontal localization of the proposed WFS rendering method was evaluated.

RENDERING METHODS

For a 3-D sound impression, the most important cue is the elevation of the sound source. For the localization of virtual sources in both azimuth and elevation, a spatial sound rendering technique based on WFS is proposed. Vector Base Amplitude Panning (VBAP) was also used to compare the localization quality of the two techniques.

3-D Vector Base Panning

Figure 1: An example of 3-D VBAP.

3-D VBAP is a generalization of VBAP to three dimensions using

three loudspeakers. The virtual sound source is positioned within a triangle formed by three loudspeakers. In Figure 1, l_n is the unit vector from the listener to loudspeaker n, p is the unit vector from the listener to the virtual sound source, and g is the loudspeaker gain vector. p can be expressed as a linear combination of the loudspeaker vectors,

    p = g1 l1 + g2 l2 + g3 l3.   (1)

In matrix form,

    p^T = g L123,   where g = [ g1  g2  g3 ],  L123 = [ l1  l2  l3 ]^T,   (2)

so the vector g can be solved by an inverse matrix operation:

    g = p^T L123^-1 = [ p1  p2  p3 ] [ l1x  l1y  l1z
                                       l2x  l2y  l2z
                                       l3x  l3y  l3z ]^-1.   (3)

To satisfy g1^2 + g2^2 + g3^2 = C, the gain factors are normalized using equation (4) [7]:

    g_scaled = sqrt(C) g / sqrt( g1^2 + g2^2 + g3^2 ).   (4)

In the 3-D VBAP rendering in our system, the triangles formed by three loudspeakers are connected one by one without intersecting. In total, 12 loudspeakers (U1, U4, U7, U10, U13, U16, L1, L4, L7, L10, L13 and L16) were used in the VBAP rendering, as shown in Figure 2. Each triangle is made of two adjacent loudspeakers and the nearest loudspeaker in the opposite layer. Figure 2 shows a virtual sound source S located on the triangle made by loudspeakers U4, L4 and L7.

Figure 2: Implementation of 3-D VBAP rendering: an example of virtual source localization by U4, L4, L7.

Basically, VBAP is an amplitude panning method that applies coherent signals to the loudspeakers, so the loudspeaker positions have to be equidistant from the listening point. In this system, because the loudspeaker arrays are straight lines on the frontal plane, the distance from the listener to each loudspeaker varies with position. Time delay compensation was therefore a necessary procedure to reproduce accurate virtual sources.

WFS Vertical Panning

First, a two-dimensional wave field is synthesized by a virtual loudspeaker array located between the two real array layers. Each column of upper and lower loudspeaker pairs generates a virtual loudspeaker, and the elevation vectors of each virtual loudspeaker are calculated from the layout of the virtual source and the upper and lower loudspeaker pairs. Each virtual loudspeaker signal is then extended to the real upper and lower loudspeaker pair by amplitude panning using the elevation vectors. Only vertical amplitude panning was used, because horizontal acoustic images are already localized by the 2-D WFS.

Figure 3: An example of 2-D WFS expanded to 3-D by VBAP.

The main idea of the 3-D expansion is shown in Figure 3: a secondary source S rendered on the horizontal plane (2-D) is panned to the upper layer U and the lower layer L. In practice, because the distance from the virtual source VS to each loudspeaker differs, the exact elevation of the secondary source may differ, as for S2 and S11 in Figure 3. Moreover, if VBAP were used for the vertical panning, the listening point would have to be the basis for calculating the vectors from listener to virtual source. However, since the sound images are already localized on the horizontal plane, only the vertical component of the source and listener locations was considered in the 3-D expansion model. Thus, the virtual sound source was located by the vertical component of the amplitude panning method, based on the arrangement: the height of the listener is 1.1 m and the distance from the loudspeaker arrays to the listener is 2 m. Figure 5 represents the final concept of the WFS vertical panning method for 3-D expansion.

Figure 4: Side view of loudspeaker arrays and listener.

LOCALIZATION TESTS

In this paper, double layered loudspeaker arrays were used to reproduce virtual sources on the vertical plane in front of a listener. The system is built with 32 loudspeaker channels: each array has 16 loudspeakers with 20 cm spacing, the two layers are 1.65 m apart in height, and the lower layer stands 0.34 m above the floor, as described in Figure 4.
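The gain solution of equations (1)-(4) can be sketched in a few lines. This is an illustrative implementation (using Cramer's rule for the 3x3 inverse), not the authors' code; the function and variable names are hypothetical.

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def vbap_gains(p, l1, l2, l3, C=1.0):
    """Solve p = g1 l1 + g2 l2 + g3 l3 for the gains (Eqs. 1-3) and
    rescale so that g1^2 + g2^2 + g3^2 = C (Eq. 4).

    p, l1, l2, l3 are unit vectors (x, y, z) from the listener to the
    virtual source and to the three loudspeakers of the triangle."""
    # Columns of A are the loudspeaker vectors, so A g = p.
    A = [[l1[i], l2[i], l3[i]] for i in range(3)]
    d = det3(A)
    g = []
    for k in range(3):                 # Cramer's rule, one column at a time
        Ak = [row[:] for row in A]
        for i in range(3):
            Ak[i][k] = p[i]
        g.append(det3(Ak) / d)
    scale = math.sqrt(C / sum(gn * gn for gn in g))   # Eq. (4)
    return [gn * scale for gn in g]
```

If any gain comes out negative, the source lies outside the triangle and another loudspeaker triple should be selected, consistent with the non-intersecting triangulation described above.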
For the convenience of the subjects' test procedure, a grid reference was made with colored strings at intervals of about 30 cm on the vertical plane of the loudspeaker arrays.
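The array geometry just described (16 loudspeakers per row at 20 cm spacing, layers 1.65 m apart, lower layer 0.34 m above the floor, listener 2 m away at 1.1 m ear height) can be sketched together with the per-column vertical panning and the time-delay compensation mentioned in the VBAP section. The linear panning law and the coordinate conventions are assumptions for illustration; the paper states only that elevation vectors determine the pair gains.

```python
import math

SPACING, N_PER_ROW = 0.20, 16      # 20 cm spacing, 16 speakers per layer
Z_LOWER, LAYER_GAP = 0.34, 1.65    # lower-row height and layer interval (m)
LISTENER = (0.0, 0.0, 1.1)         # listener 2 m in front, ear height 1.1 m
ARRAY_Y = 2.0                      # distance of the array plane (m)
SPEED_OF_SOUND = 343.0

def array_coords():
    """(x, y, z) positions of the 32 loudspeakers, centred on the median plane."""
    xs = [(i - (N_PER_ROW - 1) / 2) * SPACING for i in range(N_PER_ROW)]
    lower = [(x, ARRAY_Y, Z_LOWER) for x in xs]
    upper = [(x, ARRAY_Y, Z_LOWER + LAYER_GAP) for x in xs]
    return lower, upper

def vertical_pan(z_source):
    """Constant-power gains (g_upper, g_lower) for one column pair,
    assuming linear amplitude panning between the two layers."""
    t = min(max((z_source - Z_LOWER) / LAYER_GAP, 0.0), 1.0)
    g_up, g_low = t, 1.0 - t
    norm = math.hypot(g_up, g_low)
    return g_up / norm, g_low / norm

def delay_compensation(speakers):
    """Delays (s) that time-align every loudspeaker to the farthest one,
    needed because a straight array is not equidistant from the listener."""
    dists = [math.dist(LISTENER, sp) for sp in speakers]
    r_max = max(dists)
    return [(r_max - r) / SPEED_OF_SOUND for r in dists]
```

A source at the midpoint between the layers then receives equal pair gains of 1/sqrt(2), and the outermost loudspeakers (the farthest from the listener) receive zero compensation delay.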

Figure 5: Proposed 3-D expansion model of the WFS system.

Test Environment

The localization tests were carried out in a listening room of 4.7 m x 5.3 m x 2.4 m. The reverberation time (RT60) is 0.32 s at 1 kHz. The subjects sat on a chair 2 m away from the middle of the loudspeaker array (Figure 4) and were free to move their heads during the test.

Subjects

Five male subjects aged 24 to 33, including two of the authors, participated in the experiment. All subjects had previous localization test experience and no hearing problems.

Stimuli

Two sound sources were used for the localization experiments: pink noise bursts of 3 seconds duration, and a mixed sound of a bicycle bell and ratchets. Each source was rendered by the WFS vertical panning and 3-D VBAP methods.

Reference Experiment

The reference experiment was done prior to the localization tests. It was designed to investigate how accurately the subjects recognize the positions of the loudspeakers when the virtual sources are located exactly at the loudspeaker positions; the subjects could later refer to this experience when locating virtual sources in the frontal area between the two loudspeaker arrays, although the reference test presented real sources only at the loudspeaker positions. The same pink noise as in the localization tests was used. A random set of 20 loudspeaker channels was used in each test, a new random set was generated for each subject, and within a set no loudspeaker channel was repeated. Subjects could repeat the test signal until they made a decision. During the test, subjects used a test program implemented in MATLAB and entered the number of the perceived loudspeaker until the test ended.

Localization Test Procedure

15 virtual sound source positions were used during the localization tests. All test sets were randomized: each set had random positions of the virtual sound source, and a new random set was generated for each subject, as in the reference experiment. The positions had five different elevations and three different azimuths, forming a 5 x 3 grid. All virtual sound sources were rendered as located 1 m behind the plane made by the loudspeaker arrays. The horizontal positions are -1.2 m, 0 m, and +1.2 m from the center; the vertical positions are -0.8 m, -0.4 m, 0 m, +0.4 m and +0.8 m from the center. The test results of right-side virtual source positions (+0.4 m and +0.8 m) were converted to the left side by symmetry about the median plane. In each test, subjects listened to the test stimulus and then pointed to the location of the sound source using a GUI program implemented in MATLAB. Subjects could repeat the same test stimulus until a decision was made, in the same manner as in the reference experiment. The GUI displayed the loudspeaker arrangement on screen, together with the same grid as on the loudspeaker arrays; subjects moved a cursor with the mouse and clicked when their decision was made. After the first localization test, the same procedure was repeated with the other sound source (bicycle bell sounds).

RESULTS

The results of the reference experiment are shown in Figure 6. The 32 gray squares with white circles represent the 32 loudspeakers used in this system. The x-axis is identical to the index of the loudspeaker numbers, and the z-axis has the same proportion as the x-axis: the relative distances between squares are on the same scale as the actual loudspeaker arrangement, and the distance between the two layers is about 8 times the horizontal interval. The asterisks denote the average point of each perceived loudspeaker. Horizontal errors are drawn as 45-degree inclined bars to avoid overlapping, and vertical errors as dash-dot bars; all bars denote 95% confidence intervals. The listening point is projected on the center of the plot for reference.

Figure 6: Result of the loudspeaker perception tests. Asterisks denote the average points of perceived loudspeakers. Diagonal bars denote 95% confidence intervals of the horizontal errors; dash-dot lines denote confidence intervals of the vertical errors. The listening point is projected on the center of the plot.

There are notable vertical errors at lower-side loudspeakers such as L1, L3 and L15: some subjects selected upper loudspeakers because they perceived the sound as originating from an upper position even though the lower loudspeakers were actually driven. Comparing L1 and L3, the farther toward the sides, the more error occurs. The horizontal errors also tend to grow toward the sides; however, between loudspeakers 1 and about 3, and between about 14 and 16, the errors decrease, so subjects gave comparatively correct answers at the middle and at both ends. This is thought to be related to the cone of confusion [6].

The results of the localization tests are presented in Figures 7 and 8. In test 1 the pink noise signal was used for the virtual sound sources, and in test 2 the bell sound, as mentioned earlier. In most results of the WFS vertical panning method (WFS VP), the horizontal resolution was quite accurate. In the 3-D VBAP test, however, the side results fell slightly further from the intended sound source. It seemed to be related to

the nature of VBAP: the secondary image made by three loudspeakers is located only on the triangular plane formed by those loudspeakers. Thus the side virtual sources reproduced by our system might be projected onto the screen in the normal direction rather than along the direction toward the listening point. This is recognizable in the specific results presented in Figures 10 and 11.

Figure 7: Total results of mean values in test 1.

Figure 8: Total results of mean values in test 2.

In both methods, the results were gathered toward the middle of the vertical range. While the WFS VP results could be classified by elevation, the VBAP results were somewhat poor. In test 2, there was a notable tendency for most results to be biased close to the upper loudspeaker layer, and this was more remarkable in the VBAP results. Subjects had difficulty finding the exact elevation: some felt the sound was separated between the upper and lower layers, or that it shifted when they moved their gaze. This is considered to be related to the spectrum of the signal used in test 2 (Figure 9); its tonal components may have acted as spectral cues for judging elevation. Thus test 2 was less successful than test 1, although it was intended to use a familiar sound signal.

Figure 9: Test 2 signal in time domain and spectrum.

Figures 10 and 11 present specific test results. Each plot shows a pair of test results with the same elevation and different azimuths. Diamonds denote the average perceived positions in the 3-D VBAP localization tests; squares denote the average perceived positions in the WFS VP tests. The ellipses are formed by the 95% confidence intervals of the horizontal and vertical errors. For comparison of the 3-D VBAP results with the virtual sources in the frontal view, the original virtual source was plotted at the end of the extension line from the listening point through the projected virtual source.
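The mean perceived positions and the 95% confidence intervals behind the error bars and ellipses can be computed per axis with a small-sample t-interval. This is a generic sketch of the standard procedure, not the authors' analysis code; the fallback value for larger samples is an assumption.

```python
import math
import statistics

# Two-sided 95% critical t-values for df = n - 1 (small samples only).
T95 = {1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571}

def mean_ci95(samples):
    """Mean and 95% confidence half-width of perceived positions along
    one axis (horizontal or vertical) for one virtual source."""
    n = len(samples)
    m = statistics.mean(samples)
    if n < 2:
        return m, 0.0
    t = T95.get(n - 1, 1.96)  # fall back to the normal quantile for larger n
    half_width = t * statistics.stdev(samples) / math.sqrt(n)
    return m, half_width
```

With five subjects (df = 4, t = 2.776), the horizontal and vertical half-widths computed this way give the semi-axes of the confidence ellipses in Figures 10 and 11.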
The projected virtual source is the intersection of the line from the listening point to the virtual source with the virtual screen made by the two loudspeaker arrays; it corresponds to the direction the subjects are looking toward. The original virtual source is the projection in the direction normal to the screen, i.e. the direction as seen from the loudspeakers. The horizontal localization quality of the tested results was acceptable in both methods. 3-D VBAP had very narrow horizontal errors in test 1, although its vertical resolution was relatively poor. For wideband noise (test 1), virtual sources at middle elevation had good localization quality, especially with 3-D VBAP. In test 2, however, localization quality was better when the virtual source was located close to the upper array layer than at middle or lower positions; put differently, most test 2 results tended to be perceived as originating from an upper elevation.

CONCLUSIONS

A WFS-based loudspeaker array system was implemented with two layers. The WFS vertical panning method was proposed to expand sound images from the horizontal plane to three dimensions: based on conventional WFS methods, WFS VP generates a virtual loudspeaker array in the middle of the two real loudspeaker layers and reproduces it by vertical amplitude panning. 3-D VBAP was also studied and implemented to suit this system's loudspeaker arrangement. Subjective tests were conducted to evaluate the localization quality of the system. Comparing 3-D VBAP and WFS VP, the horizontal localization resolution was fine in both systems, with WFS VP somewhat better than 3-D VBAP. In terms of vertical localization quality, however, large errors occurred in both systems, although WFS VP had relatively small variances. The results showed that it is feasible to expand 2-D sound images to 3-D, with both azimuth and elevation, using a double layered WFS array system, although errors remained in perceiving clear virtual images in the 3-D field.
To compare both algorithms more accurately, calibrating the 3-D VBAP rendering algorithm for this system remains future work. A subjective listening test procedure designed with psychoacoustic methods will also be investigated to obtain more robust experiments.

ACKNOWLEDGEMENT

This work was supported by the IT R&D program (2008-F-011, Development of Next Generation DTV Core Technology) of KEIT, KCC and MKE, Korea.

Figure 10: A set of results in test 1.

Figure 11: A set of results in test 2.

REFERENCES

[1] Judith Liebetrau et al. "Localization in Spatial Audio - from Wave Field Synthesis to 22.2". 123rd AES Convention. New York, NY, 2007.
[2] A. J. Berkhout. "A Holographic Approach to Acoustic Control". J. Audio Eng. Soc. 36.12 (1988), pp. 977-995.
[3] A. J. Berkhout, D. de Vries, and P. Vogel. "Acoustic Control by Wave Field Synthesis". J. Acoust. Soc. Am. 93.5 (1993), pp. 2764-2778.
[4] K. Hamasaki, K. Hiyama, and R. Okumura. "The 22.2 multichannel sound system and its application". 118th AES Convention. Barcelona, Spain, May 2005.
[5] Jose J. Lopez, Maximo Cobos, and Basilio Pueo. "Rear and Side Reproduction of Elevated Sources in Wave-Field Synthesis". 17th European Signal Processing Conference. Glasgow, Scotland, 2009.
[6] Brian C. J. Moore. An Introduction to the Psychology of Hearing. 5th ed. Academic Press, 2004.
[7] Ville Pulkki. "Virtual Sound Source Positioning Using Vector Base Amplitude Panning". J. Audio Eng. Soc. 45.6 (1997), pp. 456-466.
[8] Joseph Sanson, Etienne Corteel, and Olivier Warusfel. "Objective and subjective analysis of localization accuracy in Wave Field Synthesis". 124th AES Convention. Amsterdam, The Netherlands, May 2008.