Sound Source Localization in Median Plane using Artificial Ear
International Conference on Control, Automation and Systems 2008, Oct. 2008, COEX, Seoul, Korea

Sangmoon Lee (smansl@kaist.ac.kr), Sungmok Hwang (tjdahr78@kaist.ac.kr), Youngjin Park (yjpark@kaist.ac.kr), Youn-sik Park (yspark@kaist.ac.kr)
Department of Mechanical Engineering, KAIST, Daejeon, Korea

Abstract: Sound source localization refers to methods in acoustical engineering that estimate the direction or position of a sound source from acoustic signals measured by microphone arrays. The technique is used broadly in 3-D sound technology, humanoid robots, teleconferencing, and so on. The ultimate goal of the robot industry is robots that coexist with humans, which is why the industry demands practical robot auditory systems in the form of artificial ears modeled on the human external ear, i.e. the ear pinna. An auditory system with ear pinnae offers clear benefits for humanoid robots in human-robot interaction (HRI). In this paper, we propose a sound source localization method using a pair of artificial ears, each of which consists of a single ear pinna and two microphones. The feasibility and localization performance of the proposed method for speech signals in the median plane are shown. Through an experiment in an office environment, we confirm that a robot with the artificial ears can estimate the elevation angle of a speech source using only two microphone output signals.

Keywords: Sound source localization, Relative Transfer Function (RTF), group delay, artificial ear

1. INTRODUCTION

Sound source localization is a listener's capability to estimate the direction or position of a detected sound; in acoustical engineering it also denotes methods that estimate source position from acoustic signals measured by microphone arrays [1].
Compared to vision, which is a well-directed sense, hearing is an undirected, i.e. omni-directional, sense. Because it is not constrained to a field of view, hearing can supplement vision in identifying the location of events of interest outside that field; conversely, visual information can compensate for the localization errors of audition. As an example in humanoid robots, audio-visual integration, the combined use of speech and face recognition, can improve the recognition of speech signals captured by a pair of microphones [2-3]. In particular, when vision is blocked by obstacles, or when speakers cannot be recognized because of darkness, auditory information plays the significant role. Thus, sound source localization as an auditory perception is a step toward more natural Human-Robot Interaction [4]. Sound source localization techniques are used broadly in 3-D sound technology, humanoid robots, teleconferencing, and so on. The ultimate goal of the robot industry is robots that coexist with humans, which is why practical robot auditory systems are demanded in the form of artificial ears modeled on the human external ear, i.e. the ear pinna; an auditory system with ear pinnae offers clear benefits for humanoid robots in HRI. Several attempts to apply artificial ears to sound source localization for robots have been made. For instance, the combined use of vision and audio sensors lets robots learn to improve their initial localization ability through supervised learning on visual information [5-7]. However, the learning process works only when the speaker enters the field of view; it requires new training whenever the robot's environment changes, and it needs an additional imaging system. The humanoid robot SIG uses two pairs of microphones, one pair at the ear positions and the other installed inside the cover to cancel the noise induced by its motors [8-9].
Keyrouz and Saleh proposed binaural localization using an HRTF database measured with four microphones, two placed inside and two outside the ear canals of a KEMAR (Knowles Electronics Manikin for Acoustic Research) humanoid head. They showed localization performance for sounds with a large bandwidth, such as finger snaps or percussive noises [10], exploiting direction-dependent spectral features that vary with source location. These spectral features, however, do not lie in the voice frequency band. To overcome this problem, Hwang and Park applied artificial ears of large size to robots and showed that speech signals can be localized with their proposed method [11]. Although both are binaural, i.e. two-ear, systems, they are hardly applicable to humanoid robots for the reasons mentioned above: the ear pinnae are too large, or only sounds with large bandwidth can be localized. In this paper, we propose a sound source localization method using artificial ears, each consisting of an ear pinna and two microphones, four microphones in total. The proposed auditory system uses a spherical head and two ears, each of which is composed of a single pinna and a pair of microphones. The feasibility and localization performance of the proposed method for speech signals in the median plane, given limited computational resources, are shown.
Through an experiment in an office environment, we confirm that a robot with the artificial ears can estimate the elevation angle of a speech source using only two microphone output signals.

2. PROPOSED ARTIFICIAL EAR DESIGN AND HEAD SHAPE

The artificial ears and the spherical head model were manufactured as depicted in Fig. 1.

Fig. 1 Proposed artificial ear built in a spherical head.
Fig. 2 Placement of two microphones and ear pinna.

Both the shape and the size of the ear pinna attached to the ear flange were designed to produce spectral features distributed in the frequency range from 3 to 4 kHz, using the Diffraction and Reflection (DR) model suggested by Lopez-Poveda and Meddis for accurate reproduction of the spectral notches of elevated sources [12]. However, the DR model is applicable only at positions in the concha aperture. Because of this limited reproduction region, we experimented with several microphone positions, as presented in Fig. 1 (left).

3. FRONT-BACK DISCRIMINATION AND PLACEMENT OF TWO MICROPHONES AND EAR PINNAE

3.1 Front-back confusion

When two microphones in the free field are used to localize sound sources in 2-D space, pairs of points sharing the same ITD (Inter-channel Time Difference) exist; this phenomenon is called front-back confusion. The set of such points in 3-D space is often called the cone of confusion [13], since the locations of all sounds originating from points on this cone are indistinguishable.

3.2 Placement of two microphones and ear pinna

If we use just two microphones to localize sound sources in the median plane, front-back confusion will occur. As depicted in Fig. 2, when the ear pinna is missing, the cone of confusion arises with respect to the dotted line passing through the two attached microphones. To overcome the cone of confusion, we placed an ear pinna between the two microphones, as shown in Fig. 2. As the sound source is elevated from the lower to the upper region, there is a single elevation angle at which the output levels measured by the two microphones are equal. By making this elevation coincide with the dotted line, we can perform front-back discrimination. Therefore, the relative placement of the microphones and the ear pinna plays an essential part in front-back discrimination and determines the possible localization range.

4. ELEVATION ESTIMATION METHOD

4.1 Relative Transfer Function (RTF)

The input sound is unknown in most practical situations. Especially for localization of voice signals, the signal characteristics change rapidly from word to word and depend strongly on the individual speaker. Therefore, the Relative Transfer Function (RTF) measured from the two output signals is useful and applicable, as long as the RTF is not too strongly affected by additive disturbances such as sounds reflected from the physical environment. The RTF is computed from the cross- and auto-spectral densities [14]:

    RTF(f_k) = G_xy(f_k) / G_xx(f_k)    (1)

4.2 Cleansing method

The measured RTF is no longer reliable if reflected waves contribute more to the two microphone signals than the direct wave does. Thus, a cleansing procedure is necessary to avoid this side effect of reflections, which otherwise makes it hard for the auditory system to estimate the true source position accurately. We cleansed the RTF using a Hamming window of length 67:

    w[n] = α − β·cos(2πn / M),  0 ≤ n ≤ M,  α = 0.54, β = 0.46,
    w[n] = 0,  otherwise.    (2)

The window length was determined from the smallest distance between a microphone and a dominant reflecting surface [15]. An example of the cleansing process is shown in Fig. 3.
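One way Eqs. (1) and (2) might be realized in code (an illustrative sketch, not the authors' implementation; the sampling rate, FFT length, and the use of scipy's Welch-based spectral estimators are assumptions):

```python
import numpy as np
from scipy.signal import csd, welch

def estimate_rtf(x, y, fs, nfft=1024):
    """RTF between microphone signals x and y, Eq. (1):
    RTF(f_k) = G_xy(f_k) / G_xx(f_k)."""
    f, gxy = csd(x, y, fs=fs, nperseg=nfft)   # cross-spectral density G_xy
    _, gxx = welch(x, fs=fs, nperseg=nfft)    # auto-spectral density G_xx
    return f, gxy / gxx

def cleanse_rtf(rtf, m=67):
    """Cleansing, Eq. (2): window the Relative Impulse Response
    (inverse FFT of the RTF) with a length-m Hamming window to
    suppress reflected components. Assumes the direct-path peak
    falls within the first m samples of the RIR."""
    rir = np.fft.irfft(rtf)
    w = np.zeros(rir.shape)
    w[:m] = np.hamming(m)                     # alpha = 0.54, beta = 0.46
    return np.fft.rfft(rir * w)
```

With nfft = 1024 the RTF has 513 frequency bins; the window length 67 matches the paper's choice derived from the nearest reflecting surface.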
Fig. 3 Original Relative Impulse Response (RIR, blue line), cleansed RIR (red line), and Hamming window (black line).

As shown in Fig. 3, the Relative Impulse Response, the time-domain counterpart of the RTF, obtained in a real environment contains components reflected from the objects surrounding the listener. The cleansing process excludes these reflected waves.

4.3 Estimation of Time Delay of Arrival (TDOA)

Group delay measures the transit time of a signal from the input port to the output port. From the phase response of the RTF we obtain the group delay, and from it the TDOA between microphones U and B [16]:

    Group delay = -(1/2π) · d(arg RTF(f_k))/df    (3)

By applying the free-field and far-field conditions, we can then estimate the sound source direction directly.

5. EXPERIMENT IN AN OFFICE ENVIRONMENT

5.1 Selected microphone positions

The proposed localization method relies mainly on the phase response of the RTF, while front-back discrimination relies on its magnitude response, which depends on the relative placement of the microphones and the artificial ear pinna chosen to avoid the cone of confusion. We therefore selected two microphones, one (Mic. B) behind the ear pinna and the other (Mic. U) in the upper part of the ear flange, which suffers less from turntable reflections, as shown in Fig. 1. The artificial ears fitted with the microphones and the experimental set-up are shown in Fig. 4.

5.2 Verification of the proposed artificial ear and localization method in the median plane

An office environment contains many noise sources that degrade localization performance; the experiment was carried out in such an environment. The room measures 7 m × 13 m × 2.5 m, the background noise level is 45 dB, and the SNR is 25 dB. Two male speech signals were used as input voice signals: voice 1, "ANG NYEONG HA SE YO", and voice 2, "BANG GAP SEUP NI DA". The distance between the speaker and the center of the artificial head was fixed at 1.2 m. The RTF magnitude response is shown in Fig. 5.
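The group-delay computation of Eq. (3) and the far-field conversion from TDOA to arrival angle can be sketched as follows (a hedged sketch: the voice band limits, the microphone spacing `mic_distance`, and the speed of sound are illustrative assumptions, since the paper does not state the Mic. U to Mic. B spacing):

```python
import numpy as np

def group_delay(f, rtf):
    """Eq. (3): group delay = -(1/2*pi) * d(arg RTF)/df, in seconds."""
    phase = np.unwrap(np.angle(rtf))
    return -np.gradient(phase, f) / (2.0 * np.pi)

def tdoa(f, rtf, band=(300.0, 3400.0)):
    """Average the group delay over a voice band to obtain a single
    TDOA estimate between the two microphones."""
    mask = (f >= band[0]) & (f <= band[1])
    return float(np.mean(group_delay(f, rtf)[mask]))

def elevation_from_tdoa(tau, mic_distance, c=343.0):
    """Free-field, far-field model: tau = d*sin(theta)/c, hence
    theta = arcsin(c*tau/d), returned in degrees."""
    s = np.clip(c * tau / mic_distance, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))
```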
Fig. 5 RTF magnitude response.

For a quantitative measure, the Inter-channel Level Difference (ILD) is used for front-back discrimination [16]:

    ILD = [ Σ_{n=1}^{N} 20·log10|RTF_UB(f_n)| df ] / [ Σ_{n=1}^{N} df ]  [dB]    (4)

The computed ILD for the tested sound source positions is shown in Fig. 6 (ILD profile); the experimental set-up in the office environment is shown in Fig. 4. We find that the sign of the ILD changes about the 60° position, at which the two channels have the same level. As shown in Fig. 2, ρ equals 60° in this case: if the ILD is less than 0 dB, the sound source is located below 60°; if the ILD is larger than 0 dB, it is located above 60°. Once front-back discrimination is accomplished, the elevation angle of the sound source is found from the phase response of the RTF, i.e. from the group delay.
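Eq. (4) and the decision rule read off Fig. 6 reduce to a few lines (again a sketch, not the authors' code; uniform frequency-bin spacing is assumed, so the df factors cancel into a plain mean, and the 60° boundary and 0 dB threshold are taken from the text):

```python
import numpy as np

def ild_db(rtf_ub):
    """Eq. (4): frequency-averaged level difference between Mic. U and
    Mic. B, i.e. 20*log10 |RTF_UB(f_n)| averaged over the bins, in dB."""
    eps = 1e-12                       # guard against log of zero
    return float(np.mean(20.0 * np.log10(np.abs(rtf_ub) + eps)))

def front_back(rtf_ub, boundary_deg=60.0):
    """Decision rule from Fig. 6: ILD < 0 dB means the source lies
    below the boundary elevation (60 deg), otherwise above it."""
    side = "below" if ild_db(rtf_ub) < 0.0 else "above"
    return side, boundary_deg
```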
The group delay corresponds to the TDOA. Estimation performance and errors for sound sources located on the median plane at elevation angles from -30° to 210° are shown in Fig. 7 and Table 1.

Fig. 7 Localization performance for voice 1 (red) and voice 2 (green): estimated versus true elevation angle in degrees.

Table 1 Estimation error

    Elevation angles (degrees):  -30 ~ 210    -30 ~ 70    80 ~ 210
    Voice 1
    Voice 2

We found that front-back discrimination can be performed with the RTF magnitude response, because the magnitude level switches about the 60° elevation angle, and that the RTF phase response makes it possible to estimate the elevation angles of sound sources in the median plane.

6. CONCLUSIONS AND FUTURE WORKS

We proposed a design of artificial ears, each consisting of a single ear pinna and two microphones, together with a sound localization method using these ears. By placing the ear pinna and the two microphones appropriately, the front-back confusion problem is resolved. Although the ear pinna has a characteristic length of only 7 cm, the proposed method is applicable to the localization of speech signals. Through the experiment conducted in an office environment, we showed the feasibility of the proposed localization method for sound sources in the median plane. In the near future, we will determine the optimal microphone positions for sound sources in 3-D space through experiments in an office environment and investigate the localization performance.

7. ACKNOWLEDGEMENT

This work was supported by the BK21 program, the Intelligent Robotics Development Program, and the Korea Science and Engineering Foundation through the National Research Laboratory Program (RA ) funded by the Ministry of Education, Science and Technology.

REFERENCES

[1] M. S. Brandstein and H. Silverman, A practical methodology for speech source localization with microphone arrays, Computer Speech and Language, Vol. 11, No. 2, pp. ,
[2] Y. Sasaki, S. Kagami and H.
Mizoguchi, Multiple sound source mapping for a mobile robot by self-motion triangulation, Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 9-15, 2006.
[3] K. Nakadai, D. Matsuura, H. G. Okuno and H. Tsujino, Improvement of recognition of simultaneous speech signals using AV integration and scattering theory for humanoid robots, Speech Communication, Vol. 44, pp. , 2004.
[4] I. J. Hirsh and C. S. Watson, Auditory psychophysics and perception, Annual Review of Psychology, Vol. 47, pp. ,
[5] H. Nakashima and T. Mukai, 3D Sound Source Localization System Based on Learning of Binaural Hearing, IEEE International Conference on Systems, Man and Cybernetics, 2005.
[6] P. Arabi and S. Zaky, Integrated Vision and Sound Localization, Proceedings of the Third International Conference on Information Fusion, Vol. 3, pp. , 2000.
[7] J. Hornstein, M. Lopes, J. Santos-Victor and F. Lacerda, Sound Localization for Humanoid Robots - Building Audio-Motor Maps based on the HRTF, Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, October 9-15, 2006.
[8] K. Nakadai, H. G. Okuno and H. Kitano, Real-time sound source localization and separation for robot audition, Proceedings of the IEEE International Conference on Spoken Language Processing, pp. , 2002.
[9] H. G. Okuno, K. Nakadai and H. Kitano, Social Interaction of Humanoid Robot based on Audio-Visual Tracking, Proceedings of the Eighteenth International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems (IEA/AIE-2002), Vol. 2358, pp. , 2002.
[10] F. Keyrouz and A. Abous Saleh, Intelligent Sound Source Localization Based on Head-related Transfer Functions, IEEE International Conference on Control, Automation and Systems, pp. , 2007.
[11] S. Hwang, Y. Park and Y.
Park, Sound direction estimation using artificial ear, Proceedings of the International Conference on Control, Automation and Systems, pp. , October 17-20, 2007.
[12] E. A. Lopez-Poveda and R. Meddis, A physical model of sound diffraction and reflections in the human concha, Journal of the Acoustical Society
of America, Vol. 100, No. 5, pp. ,
[13] C. I. Cheng and G. H. Wakefield, Introduction to Head-Related Transfer Functions (HRTFs): Representations of HRTFs in Time, Frequency, and Space, Journal of the Audio Engineering Society, Vol. 49, No. 4, pp. , 2001.
[14] J. S. Bendat and A. G. Piersol, Random Data: Analysis and Measurement Procedures, Wiley, New York.
[15] S. Lee, Y. Park and Y. Park, Sound direction estimation using artificial ear for Human-Robot Interface, Control, Automation and Systems Symposium, October 14, 2008, Seoul, Korea.
[16] J. Blauert, Spatial Hearing, revised edition, MIT Press, 1997.
More informationMel Spectrum Analysis of Speech Recognition using Single Microphone
International Journal of Engineering Research in Electronics and Communication Mel Spectrum Analysis of Speech Recognition using Single Microphone [1] Lakshmi S.A, [2] Cholavendan M [1] PG Scholar, Sree
More informationSpatial audio is a field that
[applications CORNER] Ville Pulkki and Matti Karjalainen Multichannel Audio Rendering Using Amplitude Panning Spatial audio is a field that investigates techniques to reproduce spatial attributes of sound
More informationSearch and Track Power Charge Docking Station Based on Sound Source for Autonomous Mobile Robot Applications
The 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems October 18-22, 2010, Taipei, Taiwan Search and Track Power Charge Docking Station Based on Sound Source for Autonomous Mobile
More informationIII. Publication III. c 2005 Toni Hirvonen.
III Publication III Hirvonen, T., Segregation of Two Simultaneously Arriving Narrowband Noise Signals as a Function of Spatial and Frequency Separation, in Proceedings of th International Conference on
More informationAN AUDITORILY MOTIVATED ANALYSIS METHOD FOR ROOM IMPULSE RESPONSES
Proceedings of the COST G-6 Conference on Digital Audio Effects (DAFX-), Verona, Italy, December 7-9,2 AN AUDITORILY MOTIVATED ANALYSIS METHOD FOR ROOM IMPULSE RESPONSES Tapio Lokki Telecommunications
More informationOn distance dependence of pinna spectral patterns in head-related transfer functions
On distance dependence of pinna spectral patterns in head-related transfer functions Simone Spagnol a) Department of Information Engineering, University of Padova, Padova 35131, Italy spagnols@dei.unipd.it
More informationSOUND 1 -- ACOUSTICS 1
SOUND 1 -- ACOUSTICS 1 SOUND 1 ACOUSTICS AND PSYCHOACOUSTICS SOUND 1 -- ACOUSTICS 2 The Ear: SOUND 1 -- ACOUSTICS 3 The Ear: The ear is the organ of hearing. SOUND 1 -- ACOUSTICS 4 The Ear: The outer ear
More informationEffect of the number of loudspeakers on sense of presence in 3D audio system based on multiple vertical panning
Effect of the number of loudspeakers on sense of presence in 3D audio system based on multiple vertical panning Toshiyuki Kimura and Hiroshi Ando Universal Communication Research Institute, National Institute
More informationCOM325 Computer Speech and Hearing
COM325 Computer Speech and Hearing Part III : Theories and Models of Pitch Perception Dr. Guy Brown Room 145 Regent Court Department of Computer Science University of Sheffield Email: g.brown@dcs.shef.ac.uk
More informationSpeech Enhancement in Presence of Noise using Spectral Subtraction and Wiener Filter
Speech Enhancement in Presence of Noise using Spectral Subtraction and Wiener Filter 1 Gupteswar Sahu, 2 D. Arun Kumar, 3 M. Bala Krishna and 4 Jami Venkata Suman Assistant Professor, Department of ECE,
More informationUpper hemisphere sound localization using head-related transfer functions in the median plane and interaural differences
Acoust. Sci. & Tech. 24, 5 (23) PAPER Upper hemisphere sound localization using head-related transfer functions in the median plane and interaural differences Masayuki Morimoto 1;, Kazuhiro Iida 2;y and
More informationProceedings of Meetings on Acoustics
Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Architectural Acoustics Session 1pAAa: Advanced Analysis of Room Acoustics:
More informationPsychoacoustic Cues in Room Size Perception
Audio Engineering Society Convention Paper Presented at the 116th Convention 2004 May 8 11 Berlin, Germany 6084 This convention paper has been reproduced from the author s advance manuscript, without editing,
More informationHRTF adaptation and pattern learning
HRTF adaptation and pattern learning FLORIAN KLEIN * AND STEPHAN WERNER Electronic Media Technology Lab, Institute for Media Technology, Technische Universität Ilmenau, D-98693 Ilmenau, Germany The human
More informationEyes n Ears: A System for Attentive Teleconferencing
Eyes n Ears: A System for Attentive Teleconferencing B. Kapralos 1,3, M. Jenkin 1,3, E. Milios 2,3 and J. Tsotsos 1,3 1 Department of Computer Science, York University, North York, Canada M3J 1P3 2 Department
More informationVirtual Acoustic Space as Assistive Technology
Multimedia Technology Group Virtual Acoustic Space as Assistive Technology Czech Technical University in Prague Faculty of Electrical Engineering Department of Radioelectronics Technická 2 166 27 Prague
More informationReducing comb filtering on different musical instruments using time delay estimation
Reducing comb filtering on different musical instruments using time delay estimation Alice Clifford and Josh Reiss Queen Mary, University of London alice.clifford@eecs.qmul.ac.uk Abstract Comb filtering
More informationPerception of pitch. Importance of pitch: 2. mother hemp horse. scold. Definitions. Why is pitch important? AUDL4007: 11 Feb A. Faulkner.
Perception of pitch AUDL4007: 11 Feb 2010. A. Faulkner. See Moore, BCJ Introduction to the Psychology of Hearing, Chapter 5. Or Plack CJ The Sense of Hearing Lawrence Erlbaum, 2005 Chapter 7 1 Definitions
More informationProceedings of Meetings on Acoustics
Proceedings of Meetings on Acoustics Volume 19, 213 http://acousticalsociety.org/ IA 213 Montreal Montreal, anada 2-7 June 213 Psychological and Physiological Acoustics Session 3pPP: Multimodal Influences
More informationAutomotive three-microphone voice activity detector and noise-canceller
Res. Lett. Inf. Math. Sci., 005, Vol. 7, pp 47-55 47 Available online at http://iims.massey.ac.nz/research/letters/ Automotive three-microphone voice activity detector and noise-canceller Z. QI and T.J.MOIR
More informationSOUND SOURCE LOCATION METHOD
SOUND SOURCE LOCATION METHOD Michal Mandlik 1, Vladimír Brázda 2 Summary: This paper deals with received acoustic signals on microphone array. In this paper the localization system based on a speaker speech
More informationA Predefined Command Recognition System Using a Ceiling Microphone Array in Noisy Housing Environments
Digital Human Symposium 29 March 4th, 29 A Predefined Command Recognition System Using a Ceiling Microphone Array in Noisy Housing Environments Yoko Sasaki a b Satoshi Kagami b c a Hiroshi Mizoguchi a
More informationImplementation of Speaker Identification Using Speaker Localization for Conference System
Proceedings of the 2 nd World Congress on Electrical Engineering and Computer Systems and Science (EECSS'16) Budapest, Hungary August 16 17, 2016 Paper No. MHCI 110 DOI: 10.11159/mhci16.110 Implementation
More informationORIENTATION IN SIMPLE VIRTUAL AUDITORY SPACE CREATED WITH MEASURED HRTF
ORIENTATION IN SIMPLE VIRTUAL AUDITORY SPACE CREATED WITH MEASURED HRTF F. Rund, D. Štorek, O. Glaser, M. Barda Faculty of Electrical Engineering Czech Technical University in Prague, Prague, Czech Republic
More informationProceedings of Meetings on Acoustics
Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Engineering Acoustics Session 2pEAb: Controlling Sound Quality 2pEAb10.
More informationPerception of pitch. Definitions. Why is pitch important? BSc Audiology/MSc SHS Psychoacoustics wk 4: 7 Feb A. Faulkner.
Perception of pitch BSc Audiology/MSc SHS Psychoacoustics wk 4: 7 Feb 2008. A. Faulkner. See Moore, BCJ Introduction to the Psychology of Hearing, Chapter 5. Or Plack CJ The Sense of Hearing Lawrence Erlbaum,
More informationAudio Engineering Society. Convention Paper. Presented at the 131st Convention 2011 October New York, NY, USA
Audio Engineering Society Convention Paper Presented at the 131st Convention 2011 October 20 23 New York, NY, USA This Convention paper was selected based on a submitted abstract and 750-word precis that
More informationTone-in-noise detection: Observed discrepancies in spectral integration. Nicolas Le Goff a) Technische Universiteit Eindhoven, P.O.
Tone-in-noise detection: Observed discrepancies in spectral integration Nicolas Le Goff a) Technische Universiteit Eindhoven, P.O. Box 513, NL-5600 MB Eindhoven, The Netherlands Armin Kohlrausch b) and
More informationTHE MATLAB IMPLEMENTATION OF BINAURAL PROCESSING MODEL SIMULATING LATERAL POSITION OF TONES WITH INTERAURAL TIME DIFFERENCES
THE MATLAB IMPLEMENTATION OF BINAURAL PROCESSING MODEL SIMULATING LATERAL POSITION OF TONES WITH INTERAURAL TIME DIFFERENCES J. Bouše, V. Vencovský Department of Radioelectronics, Faculty of Electrical
More informationNonuniform multi level crossing for signal reconstruction
6 Nonuniform multi level crossing for signal reconstruction 6.1 Introduction In recent years, there has been considerable interest in level crossing algorithms for sampling continuous time signals. Driven
More informationPerformance Analysis of Parallel Acoustic Communication in OFDM-based System
Performance Analysis of Parallel Acoustic Communication in OFDM-based System Junyeong Bok, Heung-Gyoon Ryu Department of Electronic Engineering, Chungbuk ational University, Korea 36-763 bjy84@nate.com,
More informationSound Processing Technologies for Realistic Sensations in Teleworking
Sound Processing Technologies for Realistic Sensations in Teleworking Takashi Yazu Makoto Morito In an office environment we usually acquire a large amount of information without any particular effort
More informationExtracting the frequencies of the pinna spectral notches in measured head related impulse responses
Extracting the frequencies of the pinna spectral notches in measured head related impulse responses Vikas C. Raykar a and Ramani Duraiswami b Perceptual Interfaces and Reality Laboratory, Institute for
More informationAalborg Universitet. Binaural Technique Hammershøi, Dorte; Møller, Henrik. Published in: Communication Acoustics. Publication date: 2005
Aalborg Universitet Binaural Technique Hammershøi, Dorte; Møller, Henrik Published in: Communication Acoustics Publication date: 25 Link to publication from Aalborg University Citation for published version
More informationAuditory Localization
Auditory Localization CMPT 468: Sound Localization Tamara Smyth, tamaras@cs.sfu.ca School of Computing Science, Simon Fraser University November 15, 2013 Auditory locatlization is the human perception
More information3D Sound System with Horizontally Arranged Loudspeakers
3D Sound System with Horizontally Arranged Loudspeakers Keita Tanno A DISSERTATION SUBMITTED IN FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY IN COMPUTER SCIENCE AND ENGINEERING
More informationTHE TEMPORAL and spectral structure of a sound signal
IEEE TRANSACTIONS ON SPEECH AND AUDIO PROCESSING, VOL. 13, NO. 1, JANUARY 2005 105 Localization of Virtual Sources in Multichannel Audio Reproduction Ville Pulkki and Toni Hirvonen Abstract The localization
More informationSubband Analysis of Time Delay Estimation in STFT Domain
PAGE 211 Subband Analysis of Time Delay Estimation in STFT Domain S. Wang, D. Sen and W. Lu School of Electrical Engineering & Telecommunications University of ew South Wales, Sydney, Australia sh.wang@student.unsw.edu.au,
More informationMultichannel Audio Technologies. More on Surround Sound Microphone Techniques:
Multichannel Audio Technologies More on Surround Sound Microphone Techniques: In the last lecture we focused on recording for accurate stereophonic imaging using the LCR channels. Today, we look at the
More informationIntegrated Vision and Sound Localization
Integrated Vision and Sound Localization Parham Aarabi Safwat Zaky Department of Electrical and Computer Engineering University of Toronto 10 Kings College Road, Toronto, Ontario, Canada, M5S 3G4 parham@stanford.edu
More information