Proceedings of Meetings on Acoustics


Proceedings of Meetings on Acoustics, Vol. 19 (2013)    http://acousticalsociety.org/

ICA 2013 Montreal, Montreal, Canada, 2 - 7 June 2013

Psychological and Physiological Acoustics
Session aPPb: Binaural Hearing (Poster Session)

Estimation of spectral notch frequencies of the individual head-related transfer function from anthropometry of the listener's pinna

Yohji Ishii and Kazuhiro Iida*

*Corresponding author's address: Faculty of Engineering, Chiba Institute of Technology, 2-1-1 Tsudanuma, Narashino, Chiba, Japan, kazuhiro.iida@it-chiba.ac.jp

A listener's own head-related transfer functions (HRTFs) are necessary for accurate sound image reproduction. The HRTFs of other listeners often cause front-back confusion and errors in elevation perception. It is, however, impractical to measure the HRTFs of every listener for every sound source direction, because the measurement requires special apparatus and much time. Moreover, the estimation of the entire spectral information of a listener's own HRTF remains an unsolved problem. One of the authors has shown that simplified HRTFs, recomposed of only the first spectral peak around 4 kHz (P1) and the lowest two spectral notches above P1 (N1 and N2), extracted from the listener's measured HRTFs in the median plane, provide almost the same localization accuracy as the measured HRTFs. Whereas the frequency of P1 is almost constant, independent of the sound source elevation and the listener, those of N1 and N2 are highly dependent on both the elevation and the listener. The present study proposes a method that estimates the frequencies of N1 and N2 in the median plane for the individual listener from the anthropometry of the listener's pinna, and examines the validity of the method.

Published by the Acoustical Society of America through the American Institute of Physics. Received 21 Jan 2013; published 2 Jun 2013.

Proceedings of Meetings on Acoustics, Vol. 19 (2013) Page 1

INTRODUCTION

The listener's own head-related transfer functions (HRTFs) are necessary for accurate three-dimensional sound image control [1]. The HRTFs of other listeners often cause front-back confusion and errors in elevation perception. However, measuring the HRTFs of a listener for every sound source direction is impractical, because the measurement requires a special apparatus and a great deal of time. A scenario for estimating the HRTFs of an individual listener from the anthropometry of his/her pinna, obtained, e.g., by photographic means, has been proposed [2]. However, estimating the entire spectral information of the listener's own HRTF is difficult and has not yet been accomplished. Middlebrooks reported that inter-subject differences in external-ear transfer functions could be reduced by appropriately scaling one set of external-ear transfer functions in frequency relative to the other [3]. However, the reduction of errors in other-ear localization achieved by this frequency scaling was not sufficient for accurate three-dimensional sound image control [4]. Iida et al. [5] have shown that simplified HRTFs, composed of only the first spectral peak around 4 kHz (P1) and the lowest two spectral notches above P1 (N1 and N2) (Fig. 1), extracted from the listener's measured HRTFs in the median plane, provide almost the same localization accuracy as the measured HRTFs (Fig. 2). Although the frequency of P1 is approximately constant, independent of the sound source elevation and the listener, the frequencies of N1 and N2 are highly dependent on both the elevation and the listener (Fig. 3). Based on these results, they concluded that N1 and N2 can be regarded as spectral cues, and that the human hearing system could use P1 as reference information when analyzing N1 and N2 in ear-input signals.
The present study proposes a method of estimating the frequencies of N1, N2, and P1 in the median plane for an individual listener from the anthropometry of his/her pinnae. The most suitable HRTF, namely the one whose N1, N2, and P1 frequencies are closest to the estimated values, can then be extracted from an HRTF database for the individual listener without acoustical measurements.

FIGURE 1. Example of spectral peaks (P1-P4) and notches (N1-N4) extracted from a measured HRTF (measured and smoothed amplitude in dB versus frequency in kHz) [5].

FIGURE 2. Localization in the median plane using the measured HRTF and the simplified HRTF recomposed of only N1, N2, and P1 (perceived elevation versus target elevation, in degrees) [5].

FIGURE 3. Distribution of the frequencies of N1, N2, and P1 in the median plane (source vertical angle in degrees versus frequency in kHz).

DATA USED IN THE ANALYSIS

Anthropometric parameters of the pinna

Based on the finding that N1, N2, and P1 are generated by the resonances of the three major cavities of the pinna, i.e., the concha, fossa, and scapha [6], anthropometric parameters (x1 through x10) of each subject's ear mold (Fig. 4 and Table 1) were measured using a digital vernier caliper. The tilt of the pinna (xa) was also measured from a photograph of the subject's pinna. Figure 5 shows the distributions of the measured anthropometric parameters of the ears, for x1 through x10 (in mm) and for the tilt xa (in degrees).

N1, N2, and P1 frequencies

The median plane HRTFs of the ears were measured in an anechoic chamber. The distance from the sound sources to the entrance of the ear canal was 1.2 m. The ear-microphones [] were put into the ear canals of the

subjects. The diaphragms of the microphones were located at the entrances of the ear canals; this is therefore the so-called blocked-entrance condition. The HRTF was obtained by

HRTF_l,r(ω) = G_l,r(ω) / F(ω),    (1)

where F(ω) is the Fourier transform of the impulse response f(t), measured using a swept-sine signal at the point corresponding to the center of the subject's head in the anechoic chamber without a subject present, and G_l,r(ω) is that measured at the entrance of the ear canal of the subject wearing the ear-microphones. N1, N2, and P1 were extracted from the HRTF obtained by FFT of the early part of the head-related impulse response (within 1 ms) [7], because the peak and notches are generated by the pinna cavities. Figure 6 shows the distributions of the N1, N2, and P1 frequencies of the ears for the front direction. The P1 frequencies were distributed over a much narrower range, in octaves, than the N1 and N2 frequencies, indicating that the individual difference of the P1 frequency is much smaller than those of the N1 and N2 frequencies.

FIGURE 4. Eleven anthropometric parameters used in the analysis (x1 through x10 and xa).

TABLE 1. Definition of anthropometric parameters.
x1: width of pinna
x2: width of concha
x3: width of incisura intertragica
x4: width of helix
x5: length of pinna
x6: length of concha
x7: length of cymba conchae
x8: length of scapha
x9: length of cavity
x10: depth of concha
xa: tilt of pinna

FIGURE 5. Distribution of the measured anthropometric parameters (size in mm; tilt in degrees).

FIGURE 6. Distribution of the N1, N2, and P1 frequencies for the front direction.
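The blocked-entrance measurement in Eq. (1) amounts to a spectral division of the ear-canal response by the head-center reference response, keeping only the early, pinna-generated part of the impulse response. A minimal sketch of this computation follows; the function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def hrtf_from_impulse_responses(g_ear, f_ref, fs, early_ms=1.0, n_fft=1024):
    """Estimate an HRTF magnitude as G(w)/F(w), in the spirit of Eq. (1).

    g_ear : impulse response at the blocked ear-canal entrance
    f_ref : reference response at the head-center position (no subject)
    fs    : sampling rate in Hz
    Only the early part of g_ear (default 1.0 ms) is kept, since the
    pinna peak and notches are generated within the first instants.
    """
    n_early = int(round(early_ms * 1e-3 * fs))
    g_early = np.asarray(g_ear, dtype=float)[:n_early]
    G = np.fft.rfft(g_early, n_fft)
    F = np.fft.rfft(np.asarray(f_ref, dtype=float), n_fft)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    # Small floor avoids log of zero in silent bins.
    mag_db = 20.0 * np.log10(np.abs(G / F) + 1e-12)
    return freqs, mag_db
```

Peak and notch frequencies (P1, N1, N2) would then be picked from `mag_db` after smoothing, as in Fig. 1.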
ESTIMATION OF INDIVIDUAL N1, N2, AND P1 FREQUENCIES OF THE FRONT DIRECTION

Multiple regression analyses were carried out over the measured ears, with the N1, N2, and P1 frequencies as objective variables and the anthropometric parameters of the pinnae as explanatory variables:

f(S, β) = Σ_i a_i x_i(S) + b,    (2)

where S, β, a_i, b, and x_i denote the subject, the vertical angle in the median plane, the regression coefficients, a constant, and the anthropometric parameters, respectively. There was no multicollinearity between the explanatory variables.
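Equation (2) is an ordinary multiple linear regression. The sketch below fits it by least squares on synthetic stand-in data; all values and the generating model are purely illustrative (the paper's actual coefficients are those of Table 3).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 40 ears x 11 anthropometric parameters
# (x1..x10 in mm, xa in degrees).  Purely illustrative values.
X = rng.uniform(5.0, 40.0, size=(40, 11))

# Hypothetical "true" model generating an N1-like frequency in Hz.
true_a = rng.uniform(0.0, 15.0, size=11)
true_b = 4000.0
f_n1 = X @ true_a + true_b + rng.normal(0.0, 30.0, size=40)

# Fit Eq. (2): f(S) = sum_i a_i * x_i(S) + b, by least squares.
A = np.column_stack([X, np.ones(len(X))])   # append the constant term b
coef, *_ = np.linalg.lstsq(A, f_n1, rcond=None)
a_hat, b_hat = coef[:-1], coef[-1]

# Residual error expressed on the octave scale, as in the paper.
pred = A @ coef
err_oct = np.abs(np.log2(pred / f_n1))
print(f"mean |error| = {err_oct.mean():.4f} oct")
```

With real data, model B would simply drop the column for x10 (depth of concha) before fitting.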

TABLE 2. Statistics of the regression models for N1, N2, and P1. For each model the table lists the multiple correlation coefficient, the significance level, the average absolute residual error (in Hz and in octaves), and the percentage of pinnae for which the absolute residual error is less than 0.1 octaves. Model A uses all anthropometric parameters; model B uses all parameters except x10; model C (P1 only) uses a reduced, concha-related set.

FIGURE 7. Relation between the measured and estimated N1, N2, and P1 frequencies: (a) N1, model A; (b) N1, model B; (c) N2, model A; (d) N2, model B; (e) P1, model A; (f) P1, model B; (g) P1, model C. Each panel plots estimated versus measured frequency (kHz) with its coefficient of determination R².

TABLE 3. Regression coefficients (a1 through a10, a_a, and the constant b) of models A, B, and C for N1, N2, and P1.

FIGURE 8. Cumulative frequency of the absolute residual error (in octaves) for (a) N1, (b) N2, and (c) P1, shown for regression models A, B, and C.
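Table 2 reports residual errors both in Hz and on the octave scale, together with the percentage of pinnae under a 0.1-octave threshold. A small hypothetical helper for these two quantities might look like:

```python
import numpy as np

def octave_error(f_est, f_meas):
    """Absolute estimation error on the octave scale: |log2(f_est / f_meas)|."""
    return np.abs(np.log2(np.asarray(f_est, float) / np.asarray(f_meas, float)))

def fraction_within(errors, threshold_oct=0.1):
    """Fraction of ears whose absolute residual error is below the threshold."""
    errors = np.asarray(errors, float)
    return float(np.mean(errors < threshold_oct))

# Example: estimated vs. measured N1 frequencies (Hz) for four ears
# (illustrative numbers, not the paper's data).
est  = [7000.0, 8200.0, 6500.0, 9000.0]
meas = [7100.0, 8000.0, 7800.0, 9050.0]
errs = octave_error(est, meas)
print(fraction_within(errs))  # -> 0.75
```

The octave scale is the natural one here because the just-noticeable difference for notch frequencies is itself expressed in octaves.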

In the present study, the N1, N2, and P1 frequencies were estimated by two regression models:

Model A: All anthropometric parameters were used as explanatory variables.

Model B: Only the anthropometric parameters that can be obtained from a photograph of the listener's pinna, i.e., x1 through x9 and xa, were used as explanatory variables. The depth of the concha (x10) is difficult to obtain from a photograph.

Tables 2 and 3 show the statistics of the regression models and the regression coefficients, respectively. Since the just-noticeable difference in the frequencies of N1 and N2 for vertical perception is considered to be between approximately 0.1 and 0.2 octaves [], the percentage of pinnae for which the absolute estimation error is less than 0.1 octaves was calculated. Figure 7 shows the relation between the measured and estimated N1, N2, and P1 frequencies. For the N1 frequency, the multiple correlation coefficients and the average absolute residual errors of models A and B were comparable (Table 2), as were the percentages of pinnae for which the absolute residual error was less than 0.1 octaves. The same held for the N2 frequency: models A and B gave similar multiple correlation coefficients, average absolute residual errors, and percentages of pinnae below the 0.1-octave threshold. For the P1 frequency, an extra regression model (model C) was analyzed in addition to models A and B, because the significance level (p) of model B was very high. The explanatory variables of model C were chosen based on the results of stepwise backward regression and on the fact that P1 is generated by resonance in the concha [6]; as a result, model C was composed only of x2 and two further concha-related parameters. The multiple correlation coefficients and average absolute residual errors of models A, B, and C are given in Table 2.
The percentages of pinnae for which the absolute residual error is less than 0.1 octaves were similar for all models (Table 2). The cumulative frequency curves of the absolute residual error were then obtained. For the N1 frequency (Fig. 8(a)), the cumulative frequency curves of models A and B were almost the same, and the cumulative frequency reached almost 100% at an absolute residual error of approximately 0.2 octaves. For the N2 frequency (Fig. 8(b)), the curves of models A and B were also almost the same, and the cumulative frequency reached almost 100% at an absolute residual error of approximately 0.1 octaves. Therefore, the estimation accuracy of the N2 frequency is higher than that of the N1 frequency. For the P1 frequency (Fig. 8(c)), the cumulative frequency curves of models A, B, and C were almost the same, and the cumulative frequency reached almost 100% at an absolute residual error of approximately 0.1 octaves.

EXTRACTION OF THE MOST SUITABLE FRONT-DIRECTION HRTF FOR AN INDIVIDUAL LISTENER

The most suitable front-direction HRTF for an individual listener can be extracted from an HRTF database as follows:

Step 1: The N1, N2, and P1 frequencies for the front direction are estimated from the anthropometry of the listener's pinna using the regression equations described above.

Step 2: The HRTF whose N1, N2, and P1 frequencies are closest to the estimated frequencies is extracted from the HRTF database. The distance between the estimated notch frequencies and the notch frequencies of the actual HRTFs in the database can be evaluated by the notch frequency distance (NFD) [].
The NFD is a physical measure expressing the distance between HRTF_j and HRTF_k on the octave scale, defined by Eqs. (3) through (5):

NFD_N1(j, k) = | log2( f_N1(k) / f_N1(j) ) |,    (3)

NFD_N2(j, k) = | log2( f_N2(k) / f_N2(j) ) |,    (4)

NFD(j, k) = sqrt( NFD_N1(j, k)^2 + NFD_N2(j, k)^2 ),    (5)

where f_N1 and f_N2 are the frequencies of N1 and N2, respectively. The front-direction HRTF for which the NFD is minimal, and within the JND, is extracted from the HRTF database as the most suitable HRTF for the listener.
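Assuming the NFD combines the N1 and N2 octave distances into a single score (sketched below as a root-sum-square), the database lookup of Step 2 could be prototyped as follows; the database entries and the JND threshold are illustrative.

```python
import numpy as np

def nfd(fn1_j, fn2_j, fn1_k, fn2_k):
    """Notch frequency distance between HRTF j and HRTF k on the octave
    scale, sketched as the root-sum-square of the N1 and N2 distances."""
    d1 = np.log2(fn1_k / fn1_j)
    d2 = np.log2(fn2_k / fn2_j)
    return float(np.hypot(d1, d2))

def select_hrtf(est_n1, est_n2, database, jnd_oct=0.1):
    """Return the database entry whose (N1, N2) pair is closest to the
    estimated frequencies, provided the minimum NFD is within the JND.
    `database` is a list of (name, f_n1, f_n2) tuples (hypothetical)."""
    scored = [(nfd(est_n1, est_n2, f1, f2), name) for name, f1, f2 in database]
    best_nfd, best_name = min(scored)
    return (best_name, best_nfd) if best_nfd <= jnd_oct else (None, best_nfd)

db = [("ear_A", 7000.0, 10000.0),
      ("ear_B", 7400.0, 10300.0),
      ("ear_C", 8600.0, 11500.0)]
print(select_hrtf(7300.0, 10200.0, db))  # ear_B is the closest entry
```

Returning `None` when no entry falls within the JND mirrors the paper's requirement that the minimum NFD also be perceptually small.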

SOUND IMAGE CONTROL FOR AN ARBITRARY THREE-DIMENSIONAL DIRECTION USING THE EXTRACTED FRONT-DIRECTION HRTF

A sound image control method for an arbitrary three-dimensional direction using the extracted front-direction HRTF is divided into the following two stages.

Expansion from the front direction to the median plane

There are two alternative methods for estimating the N1, N2, and P1 frequencies for an arbitrary vertical angle in the median plane:

Method 1: Use the median plane HRTFs of the subject whose front-direction HRTF was extracted from the HRTF database by the estimation method described above.

Method 2: Calculate the N1 and N2 frequencies for an arbitrary vertical angle β in the median plane from the N1 and N2 frequencies of the front direction, b_N1 and b_N2, using quadratic regression curves of the form f_N1(β) = c1 β² + c2 β + b_N1 and f_N2(β) = c3 β² + c4 β + b_N2 (Fig. 9). The N1 and N2 frequencies of most subjects have been reported to be strongly correlated with these regression curves []. The frequency of P1 can be regarded as constant, because P1 is independent of the vertical angle, as shown in Fig. 3.

Expansion from the median plane to an arbitrary three-dimensional direction

A sound image control in the median plane can then be expanded to an arbitrary three-dimensional direction by combining the HRTFs in the median plane, obtained by the above-mentioned method, with the interaural time difference (ITD) corresponding to the lateral angle of the sound image [11].

FIGURE 9. Relation between the vertical angle β in the median plane and the average frequencies of N1 and N2 over the measured ears, together with the fitted quadratic regression curves f_N1(β) and f_N2(β).

CONCLUSIONS

In the present study, multiple regression analyses were carried out over the measured ears, with the listener's N1, N2, and P1 frequencies as objective variables and the anthropometric parameters of the listener's pinna as explanatory variables. The results indicate the following:

1.
The N1 and N2 frequencies of the front direction have strong correlations with the anthropometric parameters; the multiple correlation coefficients for N1, N2, and P1 are given in Table 2.

2. For a large percentage of the pinnae, the absolute residual errors of the estimated N1, N2, and P1 frequencies were less than 0.1 octaves.

These results indicate that the N1, N2, and P1 frequencies of an individual listener can be estimated accurately from his/her anthropometric parameters.

ACKNOWLEDGMENT

A part of the present study was supported by a Grant-in-Aid for Scientific Research (A) 222. The authors would like to thank Mr. S. Nishioka for his cooperation in making the anthropometric measurements of the pinnae.

REFERENCES

1. M. Morimoto and Y. Ando, "On the simulation of sound localization," J. Acoust. Soc. Jpn. (E) 1, 167-174 (1980).
2. R. Sottek and K. Genuit, "Physical modeling of individual head-related transfer functions," Proc. DAGA.
3. J. C. Middlebrooks, "Individual differences in external-ear transfer functions reduced by scaling in frequency," J. Acoust. Soc. Am. 106, 1480-1492 (1999).
4. J. C. Middlebrooks, "Virtual localization improved by scaling nonindividualized external-ear transfer functions in frequency," J. Acoust. Soc. Am. 106, 1493-1510 (1999).
5. K. Iida, M. Itoh, A. Itagaki, and M. Morimoto, "Median plane localization using a parametric model of the head-related transfer function based on spectral cues," Applied Acoustics 68, 835-850 (2007).
6. H. Takemoto, P. Mokhtari, H. Kato, R. Nishimura, and K. Iida, "Mechanism for generating peaks and notches of head-related transfer functions in the median plane," J. Acoust. Soc. Am. 132 (2012).
7. K. Iida, N. Gamoh, Y. Ishii, and M. Morimoto, "Contribution of the early part of the head-related impulse responses to the formation of two spectral notches of vertical localization cues," Proc. FORUM ACUSTICUM 2011 (Aalborg, Denmark, 2011).
8. K. Iida and Y. Ishii, "3D sound image control by individualized parametric head-related transfer functions," Proc. inter-noise 2011 (Osaka, Japan, 2011).
9. E. A. G. Shaw and R. Teranishi, "Sound pressure generated in an external-ear replica and real human ears by a nearby point source," J. Acoust. Soc. Am. 44, 240-249 (1968).
10. Y. Ishii, S. Nishioka, and K. Iida, "Consideration on the individual difference of the pinna shape and the spectral notches on the median plane - Construction of the HRTF database with the quantitative individual difference information -" (in Japanese), Proc. Mtg. Acoust. Soc. Jpn. (Oct. 2012).
11. M. Morimoto, K. Iida, and M. Itoh, "Upper hemisphere sound localization using head-related transfer functions in the median plane and interaural differences," Acoust. Sci. Tech. 24, 267-275 (2003).