Improved Head Related Transfer Function Generation and Testing for Acoustic Virtual Reality Development
ZOLTAN HARASZY, DAVID-GEORGE CRISTEA, VIRGIL TIPONUT, TITUS SLAVICI
Department of Applied Electronics
POLITEHNICA University of Timisoara
Bvd. Vasile Parvan 2, Timisoara
ROMANIA

Abstract: - The Acoustic Virtual Reality (AVR) concept is often used as a man-machine interface in electronic travel aids (ETAs), which help blind and visually impaired individuals navigate in real outdoor environments. According to this concept, the presence of obstacles in the surrounding environment and the path to the desired target are signaled to the blind subject by bursts of sound whose virtual source positions suggest the positions of the real obstacles and the direction of movement, respectively. The practical implementation of the AVR concept requires the so-called Head Related Transfer Functions (HRTFs) to be known at every point of 3D space and for each subject. These functions can be determined by a rather complex procedure that requires many measurements for each individual. In the present paper, an improved version of the previously proposed [12] artificial neural network (ANN) is presented and used to obtain the HRTFs. The proposed method, valid for a single subject, speeds up the implementation of the AVR concept once ANN training has been completed. Finally, the experimental setup for testing, experimental results obtained with the new ANNs, conclusions and further developments are also presented.

Key-Words: head related transfer functions, artificial neural networks, man-machine interface, acoustic virtual reality, visually impaired, localization experiment

1 Introduction
In recent years, considerable effort has been invested in developing electronic travel aids (ETAs) with new capabilities, used by blind and visually impaired individuals to navigate in real outdoor environments [1]-[7]. These devices, based on sensor technology and signal processing, are capable of improving the mobility of blind users (in terms of safety and speed) in unknown or dynamically changing environments. In spite of these efforts, the traditional tools (the white cane and guide dogs [1]) still remain the travel aids most used by the blind community. The main drawbacks of existing assistive devices are their limited ability to detect obstacles in front of the subject and the level of technical expertise required to operate them. Both of these issues are under development today [8], [9], [10].

The success of an ETA device depends heavily on the man-machine interface between the visually impaired person and the travel aid. It has been proved [6] that the AVR concept can be used successfully to implement such a man-machine interface. The man-machine interface based on AVR and implemented in our previous work [11] was suggested by the high sensitivity and accuracy of the hearing of blind people. This interface must be implemented in such a way as to be accepted by the blind community. According to the AVR concept, the presence of obstacles in the surrounding environment and the path to the desired target are signaled to the blind subject by bursts of sound whose virtual source positions suggest the positions of the real obstacles and the direction of movement. In effect, the visual reality is substituted by an appropriate acoustic virtual reality.

The rest of the paper is organized as follows. Section 2 describes the proposed method. In Section 3, the AVR concept is briefly presented. Section 4 presents the improved version of the proposed ANN-based solution in more detail. The last two sections are devoted to the experimental results and future research plans.

2 The proposed method
In order to generate sounds whose virtual source position suggests the location of a real obstacle at a given point in 3D space, the so-called Head Related Transfer Functions (HRTFs) are necessary. Such a function represents the relationship between the position of the acoustic source in 3D space and the acoustic pressure present at the ear of the subject. For each individual there are two functions, corresponding to the left and right ear, respectively. These functions, corresponding to each point in 3D space, can be
determined by using a rather complex procedure that requires many measurements. Moreover, because of the individual differences between subjects (ear, head, body), this procedure has to be repeated for each individual.

In the present paper, an improved version of the previously presented [12] Artificial Neural Network (ANN) based method is proposed to determine the HRTFs for points in 3D space. Two ANNs are necessary in order to generate the HRTFs corresponding to the two ears. The azimuth and elevation that define the position of a certain point in 3D space are applied to the inputs of the ANN, while the values that define the HRTF are obtained at the outputs. The implementation of an ANN includes the following stages: structure development and the training phase. In order to train the network, pairs of point coordinates (azimuth, elevation) and the corresponding HRTF values are necessary. The training phase is performed for a limited number of points in 3D space for which the HRTFs are available. These HRTFs can be obtained only as a result of experimental measurements, but only a few databases that include HRTFs are available to the scientific community. The development of the authors' own HRTF database is a plan for the future.

3 The concept of Acoustic Virtual Reality
The idea of using different chimes and sounds to guide blind and visually impaired individuals to a desired target, with obstacle avoidance, has already been exploited in other works [1], [2], [4]. The human hearing system has remarkable abilities in identifying sound source positions in 3D space [13] and allows directional positioning in space. Often, this process is aided by visual sense, knowledge and other sensory input. In the absence of sight, human hearing alone is not sufficient for guidance, because obstacles do not generate sounds. The basic idea of the proposed man-machine interface is to substitute the visual reality with an acoustic virtual reality; in this way every obstacle will generate sounds, according to the following rules:
- The presence of different obstacles in the surrounding environment is signaled to the subject by bursts of sound whose virtual source positions suggest the positions of the real obstacles.
- Different obstacles are individualized by different frequencies of the sound generated by the virtual sources that suggest their presence in the supervised area.
- The intensity and the repetition frequency of the bursts depend on the distance between the subject and the obstacle: both increase as the distance decreases.
- A pilot signal with constant amplitude and frequency is generated to indicate the direction of movement towards the target; the subject should follow, step by step, the position of this virtual source.

The practical implementation of the above rules encounters some difficulties. The most difficult task seems to be the development of a simple and efficient algorithm for generating appropriate sounds whose virtual source is perceived by an individual at a certain point of the working space. The solution to the above-mentioned problem is even more complex. When sound waves propagate from a vibrating source to a listener, the pressure waveform is altered by diffraction caused by the torso, shoulders, head and pinnae. In engineering terms, these propagation effects can be expressed by two transfer functions, one for the left and another for the right ear, that specify the relation between the sound pressure of the source and the sound pressures at the left and right ear drums of the listener [15]. As a result, there is a pair of filters for every position of a sound source in space [16]. These functions are the earlier mentioned HRTFs: acoustic filters which not only vary with frequency and with the heading, elevation and range to the source [17], but also vary significantly from person to person [18], [19].
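The sonification rules listed at the start of this section can be sketched as follows. This is a minimal illustration only: the scaling constants, burst length and signal shape below are assumptions, not values from the paper.

```python
import numpy as np

def obstacle_burst(distance_m, obstacle_freq_hz, fs=44100, d_max=5.0):
    """Sketch of the AVR sonification rules: a tone burst whose
    amplitude and repetition rate grow as the obstacle gets closer.
    The linear scaling laws here are illustrative assumptions."""
    proximity = np.clip(1.0 - distance_m / d_max, 0.0, 1.0)
    amplitude = 0.1 + 0.9 * proximity        # louder when closer
    rate_hz = 1.0 + 4.0 * proximity          # more bursts per second when closer
    burst_len = int(0.1 * fs)                # 100 ms tone burst
    t = np.arange(burst_len) / fs
    burst = amplitude * np.sin(2 * np.pi * obstacle_freq_hz * t)
    period = int(fs / rate_hz)               # samples between burst onsets
    out = np.zeros(period)
    out[:burst_len] = burst
    return out                                # one burst period; repeat while the obstacle persists

near = obstacle_burst(0.5, 880.0)   # close obstacle: loud, fast repetition
far = obstacle_burst(4.5, 880.0)    # distant obstacle: quiet, slow repetition
```

A full implementation would additionally spatialize each burst through the HRTF pair for the obstacle's direction, as described below.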
Inter-subject variations may result in significant localization errors (front-back confusions, elevation errors) when one person hears the source through another person's HRTFs [19]. Thus, individualized HRTFs are needed to obtain a faithful perception of spatial location. If a monaural sound signal representing the source is passed through these filters and heard through headphones, the listener will hear a sound that seems to come from a particular location (direction) in space. Appropriate variation of the filter characteristics will cause the sound to appear to come from any desired spatial location [20], [21].

4 The structure of the ANN
Two ANNs are necessary in order to generate a pair of HRTFs (one ANN for each ear). The structure of the proposed ANN is the same as the one described in [12] and presented in Fig. 1. The network consists of three parts: an input layer of source nodes (2 inputs), a hidden layer (n neurons) and an output layer (512 neurons). The two inputs are the azimuth and the elevation of the desired virtual sound source. As a result, each of the two ANNs gives a set of 512 values corresponding to the Head Related Impulse Response (HRIR) for the desired virtual sound source. These HRIRs are Fourier pairs of the above-mentioned HRTFs.

The optimal number of neurons for the hidden layer is difficult to determine. It can be estimated experimentally by modifying the number of neurons in the hidden layer and evaluating the performance of the networks. The obtained results suggest that the number n of neurons should be somewhere between 40 and 60. In the localization experiments, presented in detail in Section 5, we used 50 neurons in the hidden layer.

Fig. 1 The architecture of the proposed ANN (input layer of source nodes: 2 inputs; layer of hidden neurons: n neurons; layer of output neurons: 512 neurons)

We used the Listen HRTF Database, a public HRIR database, for training and testing the ANNs. This database was developed in the frame of the Listen project; AKG and Ircam, the two partners of the project, both performed the HRIR/HRTF and morphological measurement sessions. The database includes measurement data for 49 test subjects. For each subject there are 187 pairs of HRIRs, each corresponding to a particular point (a particular azimuth-elevation pair) in 3D space.

The proposed ANN is a multilayer perceptron feedforward backpropagation network. This network was selected for its simplicity and is considered well suited for starting our localization experiments. In future work we will take other networks into consideration as well.

The Listen HRTF Database contains, as mentioned before, a total of 49 test subjects. From all these subjects, one was chosen based on anthropometric similarity with our test person (ZH): the subject known as IRC_1031. The compared anthropometric measurements are presented in Table 1. The training of the ANNs was conducted using the data set corresponding to the selected subject.

The purpose of the present paper is to evaluate the performance of the proposed network structure through localization experiments.
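The network structure described above (2 inputs, a tanh hidden layer of n = 50 neurons, 512 linear outputs) can be sketched as a plain NumPy forward pass. The weights here are randomly initialised placeholders for the trained MATLAB network, and the input scaling is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

N_HIDDEN = 50   # the paper settles on 50 hidden neurons
N_OUT = 512     # one output neuron per HRIR sample

# Randomly initialised weights stand in for the trained MATLAB network.
W1 = rng.normal(0, 0.1, (N_HIDDEN, 2))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.1, (N_OUT, N_HIDDEN))
b2 = np.zeros(N_OUT)

def hrir_net(azimuth_deg, elevation_deg):
    """Forward pass: (azimuth, elevation) -> 512 HRIR samples.
    tanh hidden layer, linear output layer, as in the paper."""
    x = np.array([azimuth_deg / 180.0, elevation_deg / 90.0])  # input scaling (assumption)
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

hrir = hrir_net(30.0, 0.0)
print(hrir.shape)   # (512,)
```

One such network is trained per ear, so a full query returns a left/right HRIR pair for the requested direction.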
The experimental setup, presented in Section 5, aims to determine whether the practically obtained sound really offers the appropriate perception of the desired virtual sound source in the auditory space of an arbitrary subject. In other words, the paper intends to establish whether the obtained sound, produced by filtering a monaural sound with the obtained HRTFs, really seems to come from, or at least near, the azimuth-elevation pair specified at the inputs of the ANNs.

Table 1 Selected anthropometric measurements, as defined in [22], for subject IRC_1031 and for our test subject (left/right values, expressed in millimeters): x1 head width, x3 head depth, x12 shoulder width, d1 cavum concha height, d3 cavum concha width, d4 fossa height, d5 pinna height, d6 pinna width

Considering the selected data set, two cases were considered (and available from the GUI) from the point of view of training/test data distribution:
- In the first case, the existing data set for the selected test subject was divided into two smaller sets. Every fourth pair of HRIRs was selected, and the resulting set was used as test data for the ANNs (test data meaning that, besides the localization experiment, the networks' performance is evaluated with the aid of the Mean Squared Error (MSE), in the same way as in [12]). The remaining HRIRs were used as training data. In short, approximately 75% of the data set is used for training and the remaining 25% as test data.
- In the second case, the entire data set available for the chosen subject (187 pairs of HRIRs) was used in the training phase of the two ANNs, for the left and right side, respectively (100% training data, 0% test data). The authors expect somewhat better localization performance than in the first case, because of the greater number of input-output pairs available in the training phase.

5 Experimental setup and results
The
experimental setup used in our experiments is presented in Fig. 2; a short description follows. Our experiments were conducted on a Dell Inspiron 1520 notebook running the Windows 7 Ultimate 32-bit operating system, with the following hardware configuration: Intel Core 2 Duo 2.00 GHz CPU, 2 GB DDR2 RAM, 160 GB HDD. Sennheiser HD 435 headphones were connected to the headphone output of the notebook and used for listening to the generated acoustic signals.

As software environment, we used an improved version of the graphical user interface (GUI) presented in [23]. The GUI was developed using National Instruments LabVIEW, a graphical programming environment widely used by engineers and scientists.

Fig. 2 The experimental setup (Dell Inspiron notebook running the LabVIEW GUI and the MATLAB ANN implementation; Sennheiser HD 435 headphones worn by the test subject (ZH); virtual sound source perceived in space)

In the following, the GUI is presented briefly; Fig. 3 shows a snapshot of the running GUI. The basic idea was to implement a basic version of the AVR concept presented in Section 3. One can obtain a simple AVR, i.e. the left and right acoustic signals intended for headphone listening, whose virtual sound source is a certain point (specified via an azimuth-elevation pair) in 3D space. The acoustic signal for each ear is obtained by convolving a locally generated sound (for example, a sine wave or white noise) with the corresponding HRIR values for the desired point in space. These HRIR values can be taken from a public database or obtained using the ANN-based method presented in Section 4, which is the basis of the current paper.

The GUI also offers the possibility to choose between different neural network states, obtained after training the ANNs for a specified number of epochs (50000, …), and between the different training/test data distributions (100% training / 0% test, or 75% training / 25% test). Choosing between states is possible because the proposed ANNs were trained prior to the localization experiments and their states were saved in .mat format on the hard disk.
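The per-ear filtering described above is a pair of convolutions of the mono signal with the left and right HRIRs. A minimal NumPy sketch, with placeholder 512-tap HRIRs standing in for database or ANN-generated ones:

```python
import numpy as np

fs = 44100
mono = np.random.default_rng(1).uniform(-1, 1, fs)   # 1 s of uniform white noise

# Placeholder 512-tap HRIRs; in the paper these come from the
# Listen database or from the trained ANNs.
hrir_left = np.zeros(512); hrir_left[0] = 1.0        # identity-like filter
hrir_right = np.zeros(512); hrir_right[30] = 0.5     # delayed, attenuated (crude ITD/ILD)

left = np.convolve(mono, hrir_left)                  # filter for the left ear
right = np.convolve(mono, hrir_right)                # filter for the right ear
binaural = np.stack([left, right], axis=1)           # 2-channel signal for headphone playback
print(binaural.shape)    # (fs + 511, 2)
```

Played over headphones, the interaural delay and level difference encoded in the two impulse responses are what create the impression of a directional virtual source.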
In the following, we focus only on the proposed ANN-based method. Our method was implemented using the Neural Network Toolbox included in MATLAB, a high-level technical computing language and interactive environment for algorithm development, data visualization, data analysis, and numeric computation; the Neural Network Toolbox extends MATLAB with tools for designing, implementing, visualizing, and simulating neural networks.

Fig. 3 A snapshot of the running GUI [23]

The transfer functions selected in our experiments were the same as in [12]: the hyperbolic tangent sigmoid transfer function for the hidden layer and the linear transfer function for the output layer. The whole network was trained using the trainrp network training function, which updates weight and bias values according to the resilient backpropagation algorithm (Rprop).

Fig. 4 An example of the obtained training error for a given number of training epochs

An example of the obtained training error is presented in Fig. 4; one can see that the error decreases as the number of epochs increases. Our localization experiments were conducted in the Bioinspired Systems Laboratory of the Department of Applied Electronics, Politehnica University of Timisoara.
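The Rprop rule behind trainrp adapts an individual step size for each weight from the sign of successive gradients. A sketch of a single update; the step-size factors are MATLAB's documented defaults, taken here as assumptions:

```python
import numpy as np

def rprop_step(w, grad, grad_prev, step, eta_plus=1.2, eta_minus=0.5,
               step_max=50.0, step_min=1e-6):
    """One resilient-backpropagation update (sketch of the rule behind
    MATLAB's trainrp). Only the SIGN of the gradient is used; each
    weight keeps its own adaptive step size."""
    sign_change = grad * grad_prev
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(sign_change < 0, 0.0, grad)   # hold a weight after a sign flip
    w = w - np.sign(grad) * step
    return w, grad, step

w = np.array([1.0, -2.0])
grad_prev = np.array([0.5, -0.5])
grad = np.array([0.4, 0.3])            # second component's gradient changed sign
w, grad, step = rprop_step(w, grad, grad_prev, step=np.array([0.1, 0.1]))
print(w, step)   # first weight stepped with a grown step; second weight held back
```

Because only gradient signs are used, Rprop is insensitive to the scale of the error surface, which is one reason it converges quickly for batch training of small MLPs like the ones used here.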
The test subject (ZH) was placed on a rotating chair, with a field compass placed horizontally between the subject's legs for simple angle measurement. A loudspeaker was placed straight ahead of the test subject, at the same height as the subject's head; expressed in azimuth-elevation coordinates, the loudspeaker position was (0°, 0°). The distance between the loudspeaker and the subject's head was fixed at 1.5 meters. The loudspeaker served as a reference sound source in this localization experiment. The reference signal coming from the loudspeaker was a burst of uniform white noise of 1 s with 0.2 s of silence, repeated throughout the localization experiment.

Fig. 5 Obtained localization results

Acoustic signals (stimulus signals), generated using the LabVIEW GUI, were played to the subject through the headphones. The subject was told to rotate the chair, without rotating the head or changing height from the ground, until the virtual sound source of the stimulus signals coincided with the position of the physically present sound source (the reference signal). When the two sources coincided, the angular displacement (azimuth) was read from the field compass. The chosen stimulus signal was a burst of uniform white noise of 1 s with 0.3 s of silence, played repeatedly until the test subject gave an estimate of the virtual sound source azimuth in the auditory space by remaining in the same position with the rotating chair. In the next phase, the perceived virtual sound source position and the actually used azimuth-elevation pair were compared (by calculating the absolute difference between the used and measured azimuth values) to verify the accuracy of the sound localization and, in this way, the performance of the whole system.

The results of the conducted localization experiments are presented in Fig. 5. In this experiment the stimulus signals were generated only in the frontal plane and at 0° elevation; for azimuth, this means angles between −90° and +90°.
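The error measure described above (the absolute difference between the specified and the judged azimuth) and its grouping into 15° intervals can be sketched as follows; the azimuth pairs are made-up examples, not measurements from the paper:

```python
import numpy as np

# Hypothetical (used, judged) azimuth pairs in degrees -- illustrative only.
used = np.array([0.0, 30.0, -45.0, 60.0, -90.0])
judged = np.array([5.0, 12.0, -40.0, 95.0, -80.0])

abs_error = np.abs(used - judged)          # per-trial absolute azimuth error

# Group the errors into 15-degree intervals: [0,15), [15,30), ...
edges = [0, 15, 30, 45, 60, np.inf]
counts, _ = np.histogram(abs_error, bins=edges)
share = 100.0 * counts / len(abs_error)    # percentage of trials per interval
for lo, hi, c, s in zip(edges[:-1], edges[1:], counts, share):
    print(f"[{lo},{hi}): {c} trials, {s:.2f} %")
```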
The authors would also like to underline that the following ANN states were used in the conducted localization experiments: data distribution of 75% training and 25% test; number of training epochs used: ….

Table 2 gives an overview of the obtained absolute errors. The first row represents the considered absolute azimuth error intervals; the second row shows the number of measured azimuth values falling into the corresponding error interval; the third row gives the percentage of each error interval with respect to all of the measured azimuth values.

Absolute azimuth error interval      [0°,15°)   [15°,30°)   [30°,45°)   [45°,60°)   [60°,∞)
Number of judged azimuth values      32         10          11          0           1
Share of the given error interval    59.26 %    18.52 %     20.37 %     0 %         1.85 %

Table 2 Overview of the obtained azimuth errors

Comparing the results in Table 2 with those of H. Hu et al. [22], we can state that the localization performance presented in the current paper is better than that obtained in [22] using non-individual HRTFs. However, when the results obtained in [22] using individual HRTFs are considered, localization using our method is less accurate.

6 Conclusions and future research
Based on the results of the current research (shown in Fig. 5 and Table 2), it can be stated that the generation of HRTFs for any visually impaired person with the aid of ANNs is possible, provided that the Listen HRTF Database or another available database includes at least one subject whose anthropometric measurements are close to the test person's most important (selected) anthropometric measurements. This method avoids, or at least significantly shortens, the complex measurement process currently required to acquire the HRTFs of a given subject. If the number of epochs used for training the two ANNs is adequate, the resulting localization errors are small. In conclusion, the use of ANNs for HRTF generation offers a good alternative to the usual complex HRTF measurement process.

As future research, the authors would like to mention the optimization of the presented solution. The authors also intend to create their own HRIR database. Another idea is the implementation of this novel method on an ARM-based evaluation board.
Acknowledgements
This work was supported by the following grants: Bilateral Inter-Governmental S&T Cooperation grant between China and Romania, No. UEFISCSU 599/ and ANCS 222/.

References:
[1] A. Helal, S. Moore, B. Ramachandran, Drishti: An Integrated Navigation System for Visually Impaired and Disabled, International Symposium on Wearable Computers (ISWC), 2001.
[2] V. Kulyukin, C. Gharpure, J. Nicholson, S. Pavithran, RFID in Robot-Assisted Indoor Navigation for the Visually Impaired, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sendai, Japan, 2004.
[3] I. Ulrich, J. Borenstein, The GuideCane - Applying Mobile Robot Technologies to Assist the Visually Impaired, IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, Vol. 31, No. 2, 2001.
[4] S. Shoval, I. Ulrich, J. Borenstein, Robotics-Based Obstacle-Avoidance Systems for Blind and Visually Impaired, IEEE Robotics & Automation Magazine, Vol. 10, No. 1, 2003, pp. 9-20.
[5] H. Shim, J. Lee, E. Lee, A Study on the Sound-Imaging Algorithm of Obstacles Information for the Visually Impaired, The 2002 International Conference on Circuits/Systems, Computers and Communications (ITC-CSCC), 2002.
[6] V. Tiponut, A. Gacsadi, L. Tepelea, C. Lar, I. Gavrilut, Integrated Environment for Assisted Movement of Visually Impaired, Proceedings of the 15th International Workshop on Robotics in Alpe-Adria-Danube Region (RAAD 2006), 2006.
[7] V. Tiponut, S. Ionel, C. Caleanu, I. Lie, Improved Version of an Integrated Environment for Assisted Movement of Visually Impaired, Proceedings of the 11th WSEAS International Conference on SYSTEMS, 2007.
[8] R. Z. Shi, T. K. Horiuchi, A Neuromorphic VLSI Model of Bat Interaural Level Difference Processing for Azimuthal Echolocation, IEEE Transactions on Circuits and Systems, Vol. 54, 2007.
[9] J. Reijniers, H. Peremans, Biomimetic Sonar System Performing Spectrum-Based Localization, IEEE Transactions on Robotics, Vol. 23, No. 6, 2007.
[10] N. Bourbakis, Sensing Surrounding 3-D Space for Navigation of the Blind, IEEE Engineering in Medicine and Biology Magazine, Vol. 27, No. 1, 2008.
[11] V. Tiponut, Z. Haraszy, D. Ianchis, I. Lie, Acoustic Virtual Reality Performing Man-Machine Interfacing of the Blind, Proceedings of the 12th WSEAS International Conference on SYSTEMS, 2008.
[12] Z. Haraszy, D. Ianchis, V. Tiponut, Generation of the Head Related Transfer Functions Using Artificial Neural Networks, Proceedings of the 13th WSEAS International Conference on CIRCUITS, 2009.
[13] D. R. Begault, 3-D Sound for Virtual Reality and Multimedia, NASA Ames Research Center, 2000.
[14] V. Tiponut, S. Popescu, I. Bogdanov, C. Caleanu, Obstacles Detection System for Visually Impaired Guidance, Proceedings of the 12th WSEAS International Conference on SYSTEMS, 2008.
[15] R. O. Duda, Modeling Head Related Transfer Functions, Preprint for the Twenty-Seventh Asilomar Conference on Signals, Systems & Computers, 1993.
[16] J. Blauert, Spatial Hearing, MIT Press, 1997.
[17] C. P. Brown, R. O. Duda, A Structural Model for Binaural Sound Synthesis, IEEE Transactions on Speech and Audio Processing, Vol. 6, No. 5, 1998.
[18] D. J. Kistler, F. L. Wightman, A model of head-related transfer functions based on principal components analysis and minimum-phase reconstruction, Journal of the Acoustical Society of America, Vol. 91, 1992.
[19] E. M. Wenzel, M. Arruda, D. J. Kistler, F. L. Wightman, Localization using nonindividualized head-related transfer functions, Journal of the Acoustical Society of America, Vol. 94, 1993.
[20] F. L. Wightman, D. J. Kistler, Headphone Simulation of Free-Field Listening II: Psychophysical Validation, Journal of the Acoustical Society of America, Vol. 85, 1989.
[21] R. Susnik, J. Sodnik, A. Umek, S. Tomazic, Spatial sound generation using HRTF created by the use of recursive filters, EUROCON, 2003.
[22] H. Hu, L. Zhou, H. Ma, Z. Wu, HRTF personalization based on artificial neural network in individual virtual auditory space, Applied Acoustics, Vol. 69, No. 2, 2008.
[23] Z. Haraszy, D. Ianchis, V. Tiponut, Acoustic Virtual Reality Generation with Head Related Transfer Function Visualization, Proceedings of Scientific Communication Session Doctor ETC 2009, 2009.
More informationIII. Publication III. c 2005 Toni Hirvonen.
III Publication III Hirvonen, T., Segregation of Two Simultaneously Arriving Narrowband Noise Signals as a Function of Spatial and Frequency Separation, in Proceedings of th International Conference on
More informationPERSONAL 3D AUDIO SYSTEM WITH LOUDSPEAKERS
PERSONAL 3D AUDIO SYSTEM WITH LOUDSPEAKERS Myung-Suk Song #1, Cha Zhang 2, Dinei Florencio 3, and Hong-Goo Kang #4 # Department of Electrical and Electronic, Yonsei University Microsoft Research 1 earth112@dsp.yonsei.ac.kr,
More informationComparison of Haptic and Non-Speech Audio Feedback
Comparison of Haptic and Non-Speech Audio Feedback Cagatay Goncu 1 and Kim Marriott 1 Monash University, Mebourne, Australia, cagatay.goncu@monash.edu, kim.marriott@monash.edu Abstract. We report a usability
More informationUpper hemisphere sound localization using head-related transfer functions in the median plane and interaural differences
Acoust. Sci. & Tech. 24, 5 (23) PAPER Upper hemisphere sound localization using head-related transfer functions in the median plane and interaural differences Masayuki Morimoto 1;, Kazuhiro Iida 2;y and
More informationTHE USE OF ARTIFICIAL NEURAL NETWORKS IN THE ESTIMATION OF THE PERCEPTION OF SOUND BY THE HUMAN AUDITORY SYSTEM
INTERNATIONAL JOURNAL ON SMART SENSING AND INTELLIGENT SYSTEMS VOL. 8, NO. 3, SEPTEMBER 2015 THE USE OF ARTIFICIAL NEURAL NETWORKS IN THE ESTIMATION OF THE PERCEPTION OF SOUND BY THE HUMAN AUDITORY SYSTEM
More informationConvention Paper 9712 Presented at the 142 nd Convention 2017 May 20 23, Berlin, Germany
Audio Engineering Society Convention Paper 9712 Presented at the 142 nd Convention 2017 May 20 23, Berlin, Germany This convention paper was selected based on a submitted abstract and 750-word precis that
More informationSpatial Audio & The Vestibular System!
! Spatial Audio & The Vestibular System! Gordon Wetzstein! Stanford University! EE 267 Virtual Reality! Lecture 13! stanford.edu/class/ee267/!! Updates! lab this Friday will be released as a video! TAs
More informationUniversity of Huddersfield Repository
University of Huddersfield Repository Lee, Hyunkook Capturing and Rendering 360º VR Audio Using Cardioid Microphones Original Citation Lee, Hyunkook (2016) Capturing and Rendering 360º VR Audio Using Cardioid
More informationAnalysis of Frontal Localization in Double Layered Loudspeaker Array System
Proceedings of 20th International Congress on Acoustics, ICA 2010 23 27 August 2010, Sydney, Australia Analysis of Frontal Localization in Double Layered Loudspeaker Array System Hyunjoo Chung (1), Sang
More informationWAVELET-BASED SPECTRAL SMOOTHING FOR HEAD-RELATED TRANSFER FUNCTION FILTER DESIGN
WAVELET-BASE SPECTRAL SMOOTHING FOR HEA-RELATE TRANSFER FUNCTION FILTER ESIGN HUSEYIN HACIHABIBOGLU, BANU GUNEL, AN FIONN MURTAGH Sonic Arts Research Centre (SARC), Queen s University Belfast, Belfast,
More informationSOPA version 2. Revised July SOPA project. September 21, Introduction 2. 2 Basic concept 3. 3 Capturing spatial audio 4
SOPA version 2 Revised July 7 2014 SOPA project September 21, 2014 Contents 1 Introduction 2 2 Basic concept 3 3 Capturing spatial audio 4 4 Sphere around your head 5 5 Reproduction 7 5.1 Binaural reproduction......................
More informationSpringerBriefs in Computer Science
SpringerBriefs in Computer Science Series Editors Stan Zdonik Shashi Shekhar Jonathan Katz Xindong Wu Lakhmi C. Jain David Padua Xuemin (Sherman) Shen Borko Furht V.S. Subrahmanian Martial Hebert Katsushi
More informationINVESTIGATING BINAURAL LOCALISATION ABILITIES FOR PROPOSING A STANDARDISED TESTING ENVIRONMENT FOR BINAURAL SYSTEMS
20-21 September 2018, BULGARIA 1 Proceedings of the International Conference on Information Technologies (InfoTech-2018) 20-21 September 2018, Bulgaria INVESTIGATING BINAURAL LOCALISATION ABILITIES FOR
More informationEvaluation of a new stereophonic reproduction method with moving sweet spot using a binaural localization model
Evaluation of a new stereophonic reproduction method with moving sweet spot using a binaural localization model Sebastian Merchel and Stephan Groth Chair of Communication Acoustics, Dresden University
More informationResearch on Hand Gesture Recognition Using Convolutional Neural Network
Research on Hand Gesture Recognition Using Convolutional Neural Network Tian Zhaoyang a, Cheng Lee Lung b a Department of Electronic Engineering, City University of Hong Kong, Hong Kong, China E-mail address:
More informationAcoustics Research Institute
Austrian Academy of Sciences Acoustics Research Institute Spatial SpatialHearing: Hearing: Single SingleSound SoundSource Sourcein infree FreeField Field Piotr PiotrMajdak Majdak&&Bernhard BernhardLaback
More informationMANY emerging applications require the ability to render
IEEE TRANSACTIONS ON MULTIMEDIA, VOL. 6, NO. 4, AUGUST 2004 553 Rendering Localized Spatial Audio in a Virtual Auditory Space Dmitry N. Zotkin, Ramani Duraiswami, Member, IEEE, and Larry S. Davis, Fellow,
More information3D sound image control by individualized parametric head-related transfer functions
D sound image control by individualized parametric head-related transfer functions Kazuhiro IIDA 1 and Yohji ISHII 1 Chiba Institute of Technology 2-17-1 Tsudanuma, Narashino, Chiba 275-001 JAPAN ABSTRACT
More informationConvention e-brief 400
Audio Engineering Society Convention e-brief 400 Presented at the 143 rd Convention 017 October 18 1, New York, NY, USA This Engineering Brief was selected on the basis of a submitted synopsis. The author
More informationASSISTIVE TECHNOLOGY BASED NAVIGATION AID FOR THE VISUALLY IMPAIRED
Proceedings of the 7th WSEAS International Conference on Robotics, Control & Manufacturing Technology, Hangzhou, China, April 15-17, 2007 239 ASSISTIVE TECHNOLOGY BASED NAVIGATION AID FOR THE VISUALLY
More informationPAPER Enhanced Vertical Perception through Head-Related Impulse Response Customization Based on Pinna Response Tuning in the Median Plane
IEICE TRANS. FUNDAMENTALS, VOL.E91 A, NO.1 JANUARY 2008 345 PAPER Enhanced Vertical Perception through Head-Related Impulse Response Customization Based on Pinna Response Tuning in the Median Plane Ki
More information3D sound in the telepresence project BEAMING Olesen, Søren Krarup; Markovic, Milos; Madsen, Esben; Hoffmann, Pablo Francisco F.; Hammershøi, Dorte
Aalborg Universitet 3D sound in the telepresence project BEAMING Olesen, Søren Krarup; Markovic, Milos; Madsen, Esben; Hoffmann, Pablo Francisco F.; Hammershøi, Dorte Published in: Proceedings of BNAM2012
More informationProceedings of Meetings on Acoustics
Proceedings of Meetings on Acoustics Volume 19, 213 http://acousticalsociety.org/ IA 213 Montreal Montreal, anada 2-7 June 213 Psychological and Physiological Acoustics Session 3pPP: Multimodal Influences
More informationComputational Perception /785
Computational Perception 15-485/785 Assignment 1 Sound Localization due: Thursday, Jan. 31 Introduction This assignment focuses on sound localization. You will develop Matlab programs that synthesize sounds
More informationA binaural auditory model and applications to spatial sound evaluation
A binaural auditory model and applications to spatial sound evaluation Ma r k o Ta k a n e n 1, Ga ë ta n Lo r h o 2, a n d Mat t i Ka r ja l a i n e n 1 1 Helsinki University of Technology, Dept. of Signal
More informationFrom acoustic simulation to virtual auditory displays
PROCEEDINGS of the 22 nd International Congress on Acoustics Plenary Lecture: Paper ICA2016-481 From acoustic simulation to virtual auditory displays Michael Vorländer Institute of Technical Acoustics,
More informationIndoor Navigation Approach for the Visually Impaired
International Journal of Emerging Engineering Research and Technology Volume 3, Issue 7, July 2015, PP 72-78 ISSN 2349-4395 (Print) & ISSN 2349-4409 (Online) Indoor Navigation Approach for the Visually
More information396 IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 19, NO. 2, FEBRUARY 2011
396 IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 19, NO. 2, FEBRUARY 2011 Obtaining Binaural Room Impulse Responses From B-Format Impulse Responses Using Frequency-Dependent Coherence
More informationBinaural Sound Localization Systems Based on Neural Approaches. Nick Rossenbach June 17, 2016
Binaural Sound Localization Systems Based on Neural Approaches Nick Rossenbach June 17, 2016 Introduction Barn Owl as Biological Example Neural Audio Processing Jeffress model Spence & Pearson Artifical
More informationSound Source Localization in Median Plane using Artificial Ear
International Conference on Control, Automation and Systems 28 Oct. 14-17, 28 in COEX, Seoul, Korea Sound Source Localization in Median Plane using Artificial Ear Sangmoon Lee 1, Sungmok Hwang 2, Youngjin
More informationSMART ELECTRONIC GADGET FOR VISUALLY IMPAIRED PEOPLE
ISSN: 0976-2876 (Print) ISSN: 2250-0138 (Online) SMART ELECTRONIC GADGET FOR VISUALLY IMPAIRED PEOPLE L. SAROJINI a1, I. ANBURAJ b, R. ARAVIND c, M. KARTHIKEYAN d AND K. GAYATHRI e a Assistant professor,
More informationComparison of binaural microphones for externalization of sounds
Downloaded from orbit.dtu.dk on: Jul 08, 2018 Comparison of binaural microphones for externalization of sounds Cubick, Jens; Sánchez Rodríguez, C.; Song, Wookeun; MacDonald, Ewen Published in: Proceedings
More informationExploring haptic feedback for robot to human communication
Exploring haptic feedback for robot to human communication GHOSH, Ayan, PENDERS, Jacques , JONES, Peter , REED, Heath
More informationSPAT. Binaural Encoding Tool. Multiformat Room Acoustic Simulation & Localization Processor. Flux All rights reserved
SPAT Multiformat Room Acoustic Simulation & Localization Processor by by Binaural Encoding Tool Flux 2009. All rights reserved Introduction Auditory scene perception Localisation Binaural technology Virtual
More informationModeling Head-Related Transfer Functions Based on Pinna Anthropometry
Second LACCEI International Latin American and Caribbean Conference for Engineering and Technology (LACCEI 24) Challenges and Opportunities for Engineering Education, Research and Development 2-4 June
More informationHarmonic detection by using different artificial neural network topologies
Harmonic detection by using different artificial neural network topologies J.L. Flores Garrido y P. Salmerón Revuelta Department of Electrical Engineering E. P. S., Huelva University Ctra de Palos de la
More informationSmart antenna for doa using music and esprit
IOSR Journal of Electronics and Communication Engineering (IOSRJECE) ISSN : 2278-2834 Volume 1, Issue 1 (May-June 2012), PP 12-17 Smart antenna for doa using music and esprit SURAYA MUBEEN 1, DR.A.M.PRASAD
More informationLabVIEW based Intelligent Frontal & Non- Frontal Face Recognition System
LabVIEW based Intelligent Frontal & Non- Frontal Face Recognition System Muralindran Mariappan, Manimehala Nadarajan, and Karthigayan Muthukaruppan Abstract Face identification and tracking has taken a
More informationTHE INTERACTION BETWEEN HEAD-TRACKER LATENCY, SOURCE DURATION, AND RESPONSE TIME IN THE LOCALIZATION OF VIRTUAL SOUND SOURCES
THE INTERACTION BETWEEN HEAD-TRACKER LATENCY, SOURCE DURATION, AND RESPONSE TIME IN THE LOCALIZATION OF VIRTUAL SOUND SOURCES Douglas S. Brungart Brian D. Simpson Richard L. McKinley Air Force Research
More informationConvention Paper Presented at the 144 th Convention 2018 May 23 26, Milan, Italy
Audio Engineering Society Convention Paper Presented at the 144 th Convention 2018 May 23 26, Milan, Italy This paper was peer-reviewed as a complete manuscript for presentation at this convention. This
More informationMel Spectrum Analysis of Speech Recognition using Single Microphone
International Journal of Engineering Research in Electronics and Communication Mel Spectrum Analysis of Speech Recognition using Single Microphone [1] Lakshmi S.A, [2] Cholavendan M [1] PG Scholar, Sree
More informationProceedings of Meetings on Acoustics
Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Engineering Acoustics Session 2pEAb: Controlling Sound Quality 2pEAb10.
More informationConvention e-brief 433
Audio Engineering Society Convention e-brief 433 Presented at the 144 th Convention 2018 May 23 26, Milan, Italy This Engineering Brief was selected on the basis of a submitted synopsis. The author is
More informationSound localization Sound localization in audio-based games for visually impaired children
Sound localization Sound localization in audio-based games for visually impaired children R. Duba B.W. Kootte Delft University of Technology SOUND LOCALIZATION SOUND LOCALIZATION IN AUDIO-BASED GAMES
More informationVirtual Sound Localization by Blind People
ARCHIVES OF ACOUSTICS Vol.40,No.4, pp.561 567(2015) Copyright c 2015byPAN IPPT DOI: 10.1515/aoa-2015-0055 Virtual Sound Localization by Blind People LarisaDUNAI,IsmaelLENGUA,GuillermoPERIS-FAJARNÉS,FernandoBRUSOLA
More informationConvention Paper Presented at the 139th Convention 2015 October 29 November 1 New York, USA
Audio Engineering Society Convention Paper Presented at the 139th Convention 2015 October 29 November 1 New York, USA 9447 This Convention paper was selected based on a submitted abstract and 750-word
More informationHigh performance 3D sound localization for surveillance applications Keyrouz, F.; Dipold, K.; Keyrouz, S.
High performance 3D sound localization for surveillance applications Keyrouz, F.; Dipold, K.; Keyrouz, S. Published in: Conference on Advanced Video and Signal Based Surveillance, 2007. AVSS 2007. DOI:
More informationTu1.D II Current Approaches to 3-D Sound Reproduction. Elizabeth M. Wenzel
Current Approaches to 3-D Sound Reproduction Elizabeth M. Wenzel NASA Ames Research Center Moffett Field, CA 94035 Elizabeth.M.Wenzel@nasa.gov Abstract Current approaches to spatial sound synthesis are
More informationNovel approaches towards more realistic listening environments for experiments in complex acoustic scenes
Novel approaches towards more realistic listening environments for experiments in complex acoustic scenes Janina Fels, Florian Pausch, Josefa Oberem, Ramona Bomhardt, Jan-Gerrit-Richter Teaching and Research
More informationFuzzy-Heuristic Robot Navigation in a Simulated Environment
Fuzzy-Heuristic Robot Navigation in a Simulated Environment S. K. Deshpande, M. Blumenstein and B. Verma School of Information Technology, Griffith University-Gold Coast, PMB 50, GCMC, Bundall, QLD 9726,
More informationISSN: [Jha* et al., 5(12): December, 2016] Impact Factor: 4.116
IJESRT INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY ANALYSIS OF DIRECTIVITY AND BANDWIDTH OF COAXIAL FEED SQUARE MICROSTRIP PATCH ANTENNA USING ARTIFICIAL NEURAL NETWORK Rohit Jha*,
More informationPerformance Improvement of Contactless Distance Sensors using Neural Network
Performance Improvement of Contactless Distance Sensors using Neural Network R. ABDUBRANI and S. S. N. ALHADY School of Electrical and Electronic Engineering Universiti Sains Malaysia Engineering Campus,
More informationCreating three dimensions in virtual auditory displays *
Salvendy, D Harris, & RJ Koubek (eds.), (Proc HCI International 2, New Orleans, 5- August), NJ: Erlbaum, 64-68. Creating three dimensions in virtual auditory displays * Barbara Shinn-Cunningham Boston
More informationA Comparison of the Convolutive Model and Real Recording for Using in Acoustic Echo Cancellation
A Comparison of the Convolutive Model and Real Recording for Using in Acoustic Echo Cancellation SEPTIMIU MISCHIE Faculty of Electronics and Telecommunications Politehnica University of Timisoara Vasile
More information19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 2007 A MODEL OF THE HEAD-RELATED TRANSFER FUNCTION BASED ON SPECTRAL CUES
19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, -7 SEPTEMBER 007 A MODEL OF THE HEAD-RELATED TRANSFER FUNCTION BASED ON SPECTRAL CUES PACS: 43.66.Qp, 43.66.Pn, 43.66Ba Iida, Kazuhiro 1 ; Itoh, Motokuni
More informationPersonalized 3D sound rendering for content creation, delivery, and presentation
Personalized 3D sound rendering for content creation, delivery, and presentation Federico Avanzini 1, Luca Mion 2, Simone Spagnol 1 1 Dep. of Information Engineering, University of Padova, Italy; 2 TasLab
More informationVIRTUAL ACOUSTICS: OPPORTUNITIES AND LIMITS OF SPATIAL SOUND REPRODUCTION
ARCHIVES OF ACOUSTICS 33, 4, 413 422 (2008) VIRTUAL ACOUSTICS: OPPORTUNITIES AND LIMITS OF SPATIAL SOUND REPRODUCTION Michael VORLÄNDER RWTH Aachen University Institute of Technical Acoustics 52056 Aachen,
More informationIntroduction. 1.1 Surround sound
Introduction 1 This chapter introduces the project. First a brief description of surround sound is presented. A problem statement is defined which leads to the goal of the project. Finally the scope of
More informationFrom Binaural Technology to Virtual Reality
From Binaural Technology to Virtual Reality Jens Blauert, D-Bochum Prominent Prominent Features of of Binaural Binaural Hearing Hearing - Localization Formation of positions of the auditory events (azimuth,
More informationOn distance dependence of pinna spectral patterns in head-related transfer functions
On distance dependence of pinna spectral patterns in head-related transfer functions Simone Spagnol a) Department of Information Engineering, University of Padova, Padova 35131, Italy spagnols@dei.unipd.it
More informationMultiple Sound Sources Localization Using Energetic Analysis Method
VOL.3, NO.4, DECEMBER 1 Multiple Sound Sources Localization Using Energetic Analysis Method Hasan Khaddour, Jiří Schimmel Department of Telecommunications FEEC, Brno University of Technology Purkyňova
More informationComputational Perception. Sound localization 2
Computational Perception 15-485/785 January 22, 2008 Sound localization 2 Last lecture sound propagation: reflection, diffraction, shadowing sound intensity (db) defining computational problems sound lateralization
More informationAcquisition of spatial knowledge of architectural spaces via active and passive aural explorations by the blind
Acquisition of spatial knowledge of architectural spaces via active and passive aural explorations by the blind Lorenzo Picinali Fused Media Lab, De Montfort University, Leicester, UK. Brian FG Katz, Amandine
More informationSOUND 1 -- ACOUSTICS 1
SOUND 1 -- ACOUSTICS 1 SOUND 1 ACOUSTICS AND PSYCHOACOUSTICS SOUND 1 -- ACOUSTICS 2 The Ear: SOUND 1 -- ACOUSTICS 3 The Ear: The ear is the organ of hearing. SOUND 1 -- ACOUSTICS 4 The Ear: The outer ear
More informationSpatial Audio Transmission Technology for Multi-point Mobile Voice Chat
Audio Transmission Technology for Multi-point Mobile Voice Chat Voice Chat Multi-channel Coding Binaural Signal Processing Audio Transmission Technology for Multi-point Mobile Voice Chat We have developed
More informationTDE-ILD-HRTF-Based 2D Whole-Plane Sound Source Localization Using Only Two Microphones and Source Counting
TDE-ILD-HRTF-Based 2D Whole-Plane Sound Source Localization Using Only Two Microphones Source Counting Ali Pourmohammad, Member, IACSIT Seyed Mohammad Ahadi Abstract In outdoor cases, TDOA-based methods
More informationAudio Engineering Society. Convention Paper. Presented at the 129th Convention 2010 November 4 7 San Francisco, CA, USA. Why Ambisonics Does Work
Audio Engineering Society Convention Paper Presented at the 129th Convention 2010 November 4 7 San Francisco, CA, USA The papers at this Convention have been selected on the basis of a submitted abstract
More informationSimultaneous Recognition of Speech Commands by a Robot using a Small Microphone Array
2012 2nd International Conference on Computer Design and Engineering (ICCDE 2012) IPCSIT vol. 49 (2012) (2012) IACSIT Press, Singapore DOI: 10.7763/IPCSIT.2012.V49.14 Simultaneous Recognition of Speech
More informationAPPLICATION OF THE HEAD RELATED TRANSFER FUNCTIONS IN ROOM ACOUSTICS DESIGN USING BEAMFORMING
APPLICATION OF THE HEAD RELATED TRANSFER FUNCTIONS IN ROOM ACOUSTICS DESIGN USING BEAMFORMING 1 Mojtaba NAVVAB, PhD. Taubman College of Architecture and Urpan Planning TCAUP, Bldg. Tech. Lab UNiversity
More informationExtracting the frequencies of the pinna spectral notches in measured head related impulse responses
Extracting the frequencies of the pinna spectral notches in measured head related impulse responses Vikas C. Raykar a and Ramani Duraiswami b Perceptual Interfaces and Reality Laboratory, Institute for
More informationNEURO-ACTIVE NOISE CONTROL USING A DECOUPLED LINEAIUNONLINEAR SYSTEM APPROACH
FIFTH INTERNATIONAL CONGRESS ON SOUND AND VIBRATION DECEMBER 15-18, 1997 ADELAIDE, SOUTH AUSTRALIA NEURO-ACTIVE NOISE CONTROL USING A DECOUPLED LINEAIUNONLINEAR SYSTEM APPROACH M. O. Tokhi and R. Wood
More information