EE1.el3 (EEE1023): Electronics III
Acoustics lecture 20: Sound localisation
Dr Philip Jackson
www.ee.surrey.ac.uk/teaching/courses/ee1.el3

Sound localisation

Objectives:
- calculate the frequency response of a microphone array
- understand the factors affecting its resolution
- explain the primary cues in human sound localisation

Topics:
- 2- and N-element microphone arrays
- effects of spacing, frequency and N
- binaural cues
- sound localisation in 3D

S.1

Preparation for sound localisation

What acoustic factors give us the ability to localise sounds with our two ears?
- identify and describe at least two acoustic cues that we use, based on the sound pressure signals arriving at the ears

S.2

Microphone arrays

Capabilities:
- localisation of individual sound sources
- enhancement of sounds from a given direction
- beamforming for steering the array

Applications:
- directional microphone recordings
- speech enhancement in a car or meeting
- robot audition
- condition monitoring
- defence and surveillance

S.3

Two-element microphone array

Two omnidirectional microphones are placed a distance d apart in the free field of a harmonic point source Q. Each microphone signal depends on its distance r_i from Q:

p_i(r_i, t) = \frac{Q}{r_i} e^{j(\omega t - k r_i)}    (1)

where the wave number is k = \omega/c = 2\pi f/c = 2\pi/\lambda. In the far field, r_i \gg d and the path difference is \Delta(\theta) = d \sin\theta. Relative to the distance R from the centre of the array, r_0 = R - \Delta/2 and r_1 = R + \Delta/2, i.e. r_i = R + (i - \frac{1}{2})\,\Delta(\theta).

S.4
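As a quick numerical check (not part of the notes; the source distance, spacing, frequency and angle below are illustrative values), the exact microphone distances r_i can be compared with the far-field approximation r_i = R + (i - 1/2)Δ(θ) from eq. (1):

```python
# Sketch: compare exact source-to-microphone distances with the
# far-field approximation. All parameter values are illustrative.
import numpy as np

c = 343.0                 # speed of sound, m/s
d = 0.2                   # microphone spacing, m
R = 10.0                  # source distance from array centre, m
theta = np.radians(30.0)  # source azimuth from broadside

# Source Q at angle theta from broadside; mic 0 at +d/2 is the closer one,
# so that r_0 = R - delta/2 and r_1 = R + delta/2 as in the notes.
src = R * np.array([np.sin(theta), np.cos(theta)])
mics = np.array([[d / 2, 0.0], [-d / 2, 0.0]])
r = np.linalg.norm(src - mics, axis=1)

# Far-field approximation: r_i ~ R + (i - 1/2) * delta, delta = d*sin(theta)
delta = d * np.sin(theta)
r_approx = R + (np.arange(2) - 0.5) * delta

print(r - r_approx)   # residuals are small when R >> d
```

With R = 10 m and d = 0.2 m the residuals are well under a millimetre, which is why the far-field form is used in the derivations that follow.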

Combining two microphone signals (broadside)

From eq. (1) with \Delta(\theta) = d\sin\theta, we obtain

p_i(R, \theta, t) \approx \frac{Q}{R} e^{j(\omega t - k(R + (i - \frac{1}{2})\Delta))} = \frac{Q}{R} e^{j(\omega t - kR)} e^{-jk(i - \frac{1}{2})\Delta}    (2)

Consider the output of the array when we add the two signals:

s_0(R, \theta, t) = \frac{Q}{R} e^{j(\omega t - kR)} \left( e^{jk\Delta/2} + e^{-jk\Delta/2} \right) = \frac{2Q}{R} e^{j\omega t} e^{-jkR} \cos\!\left( \frac{k\,\Delta(\theta)}{2} \right)    (3)

using \cos x = \cosh jx = \frac{1}{2}\left( e^{jx} + e^{-jx} \right).

The effect is to steer the array in the broadside direction with a directivity pattern that depends on k = 2\pi/\lambda = 2\pi f/c.

S.5
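The broadside directivity cos(kΔ(θ)/2) from eq. (3) can be evaluated directly. This is an illustrative sketch: the spacing is assumed, and the frequencies match the 500 Hz / 1 kHz / 2 kHz cases shown in the directivity figures:

```python
# Sketch: normalised broadside directivity |cos(k*delta/2)| of the
# two-microphone sum in eq. (3). Spacing and frequencies are assumed.
import numpy as np

c, d = 343.0, 0.2                           # m/s, m (illustrative)
theta = np.linspace(-np.pi / 2, np.pi / 2, 181)

def directivity(f):
    k = 2 * np.pi * f / c                   # wave number at frequency f
    delta = d * np.sin(theta)               # far-field path difference
    return np.abs(np.cos(k * delta / 2))    # |s_0| normalised by 2Q/R

for f in (500.0, 1000.0, 2000.0):
    D = directivity(f)
    print(f"{f:6.0f} Hz: broadside response {D[90]:.2f}, endfire {D[-1]:.2f}")
```

The response is always 1 at broadside (Δ = 0); as frequency rises the pattern narrows and, once d exceeds half a wavelength, nulls and extra lobes appear.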

[Figure: directivity of the 2-microphone array (broadside) at 500 Hz, 1 kHz and 2 kHz]

Steering the array

The steering angle \phi defines a delay, \tau_\phi = \Delta_\phi / c, where \Delta_\phi = d \sin\phi, which we apply to the closer microphone signal:

s_\phi(R, \theta, t) = \frac{2Q}{R} e^{j\omega t} e^{-jk(R + \Delta_\phi/2)} \cos\!\left( \frac{k}{2} \left( \Delta(\theta) - \Delta_\phi \right) \right)    (4)

Hence, by adjusting \tau_\phi, we can maximise the response of the array to sound coming from a given azimuth angle.

S.6
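A sketch of the steering in eq. (4), with assumed spacing and frequency, confirming that the magnitude response peaks where θ equals the steering angle φ:

```python
# Sketch: delay-and-sum steering of the two-microphone array, eq. (4).
# Spacing and frequency are assumed, illustrative values.
import numpy as np

c, d, f = 343.0, 0.2, 1000.0
k = 2 * np.pi * f / c
theta = np.linspace(-np.pi / 2, np.pi / 2, 181)

def steered_response(phi):
    delta = d * np.sin(theta)       # path difference at each arrival angle
    delta_phi = d * np.sin(phi)     # path difference compensated by tau_phi
    return np.abs(np.cos(k / 2 * (delta - delta_phi)))

phi = np.radians(45.0)
resp = steered_response(phi)
print(f"peak at {np.degrees(theta[np.argmax(resp)]):.1f} deg")
```

The cosine argument vanishes when Δ(θ) = Δ_φ, i.e. at θ = φ, which is exactly the delay-and-sum beamforming idea: the applied delay re-aligns the wavefronts from the chosen direction before summation.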

[Figure: directivity of the 2-microphone array for \phi = 90° and \phi = 45°, at 500 Hz, 1 kHz and 2 kHz]

S.7

A linear array of N microphones

Extending the array to N microphones gives a more directed response:

s_0^N(R, \theta, t) = \frac{Q}{R} e^{j(\omega t - k(R + \frac{N-1}{2}\Delta))} \sum_{i=0}^{N-1} e^{jki\Delta}    (5)

where the array aperture is (N-1)d and \Delta = d\sin\theta. Using trigonometry, we can write this as

s_0^N(R, \theta, t) = \frac{Q}{R} e^{j(\omega t - kR)} \, \frac{\sin(Nk\Delta/2)}{\sin(k\Delta/2)}    (6)

S.8
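The normalised array factor sin(NkΔ/2) / (N sin(kΔ/2)) implied by eq. (6) can be sketched as follows (spacing and frequency are assumed values; dividing by N normalises the broadside response to 1):

```python
# Sketch: normalised array factor of an N-element line array, eq. (6).
# Spacing and frequency are assumed, illustrative values.
import numpy as np

c, d, f = 343.0, 0.2, 1000.0
k = 2 * np.pi * f / c
theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)
delta = d * np.sin(theta)

def array_factor(N):
    x = k * delta / 2
    num = np.sin(N * x)
    den = N * np.sin(x)
    # At delta -> 0 the ratio tends to 1 (after the 1/N normalisation),
    # so substitute 1 where the denominator underflows.
    return np.abs(np.divide(num, den, out=np.ones_like(x),
                            where=np.abs(den) > 1e-12))

i30 = np.argmin(np.abs(theta - np.radians(30)))   # sample at 30 degrees
for N in (2, 4, 8):
    af = array_factor(N)
    print(f"N={N}: broadside {af[900]:.2f}, 30 deg {af[i30]:.2f}")
```

Increasing N narrows the main lobe (the aperture (N-1)d grows), so the off-axis response at a fixed angle falls: this is the resolution benefit of a longer array.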

[Figure: directivity of the 8-microphone array for \phi = 0°, 90° and 45°]

S.9

Human sound localisation

[Figure: effects of wave propagation and diffraction around a head]

S.10

Binaural cues

Inter-aural level difference (ILD): the difference between the sound intensity levels at the left and right ears. The head acts as a barrier for mid and high frequencies; for a head diameter D = 0.17 m, f_crit ≈ 1 kHz. ILD ranges from about -10 dB to +10 dB.

Inter-aural time difference (ITD): the time difference of arrival (TDOA) between the ears. The added path of up to 0.22 m is half a wavelength at 750 Hz. ITD ranges from about -1 ms to +1 ms.

S.11
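The round numbers on this slide can be reproduced with a short sketch. The value c = 330 m/s is an assumption chosen to match the notes' figures, and the sin θ dependence of the ITD is a simple far-field model, not something stated in the notes:

```python
# Sketch: reproduce the binaural-cue numbers from the slide.
# c = 330 m/s and the sin(theta) ITD model are assumptions.
import numpy as np

c = 330.0        # speed of sound consistent with the slide's round numbers
D = 0.17         # head diameter from the notes, m
path = 0.22      # maximum added inter-ear path from the notes, m

# Frequency at which the head diameter equals half a wavelength,
# above which the head starts to shadow effectively:
f_crit = c / (2 * D)
print(f"f_crit ~ {f_crit:.0f} Hz")        # close to the notes' 1 kHz

# Frequency whose half-wavelength equals the maximum added path:
f_half = c / (2 * path)
print(f"half-wavelength match at {f_half:.0f} Hz")   # the notes' 750 Hz

# Simple far-field ITD model: ITD(theta) ~ (path/c) * sin(theta)
for deg in (0, 30, 90):
    itd = path / c * np.sin(np.radians(deg))
    print(f"theta={deg:3d} deg: ITD ~ {itd * 1e3:.2f} ms")
```

The maximum ITD comes out around 0.7 ms, consistent with the roughly ±1 ms range shown on the slide; the half-wavelength point at 750 Hz marks where ITD phase cues start to become ambiguous.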

Front-back disambiguation

Movement of the head: can be used to determine whether the sound source is in front of, or behind, the head.
Visual cues: if you can't see it, it must be behind you!
Pinna: the ear flaps act as a high-frequency barrier for sounds from the rear.

S.12

Perception of elevation

Direct vs. reflected: sound from above produces reflections from the ground and shoulders.
Pinna: variations in frequency response from different directions give information about the incident angle of the sound source.

Perception of distance

Overall intensity, spectral shape, and the balance of direct vs. reflected sound.

S.13

Sound recordings

Stereo recordings exploit summing localisation:
- spaced microphones give a time delay, about 1 ms for hard panning
- mixer pan-pots give a level difference, about 10 dB for hard panning

Binaural recordings: microphones placed in the ear canals capture the time, intensity and spectral changes, including the effects of the pinnae.

S.14
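The hard-panning figures quoted above can be turned into physical quantities with a small sketch (illustrative only; the conversions, not the numbers, are the point):

```python
# Sketch: express the slide's hard-panning numbers physically.
# The 1 ms and 10 dB figures come from the notes; c is assumed.
c = 343.0          # speed of sound, m/s (assumed)

# Spaced-pair recording: ~1 ms inter-channel delay for hard panning
delay_s = 1e-3
print(f"1 ms delay = {delay_s * c:.3f} m of extra acoustic path")

# Pan-pot: ~10 dB inter-channel level difference for hard panning
ild_db = 10.0
ratio = 10 ** (ild_db / 20)    # convert dB to an amplitude ratio
print(f"10 dB level difference = amplitude ratio {ratio:.2f}")
```

A 10 dB difference corresponds to one channel being roughly 3.2 times the amplitude of the other, which is why level panning alone can push an image fully to one side.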

Sound localisation

2-element microphone array:
- effects of distance, spacing and frequency
- polar response, beams and nulls

N-element microphone array:
- effect of N on aperture and resolution

Human sound localisation:
- head shadowing
- interaural differences, ILD and ITD
- critical-band processing and Duplex theory
- front-back disambiguation
- elevation

Localising sound in recordings:
- stereo and binaural recordings

S.15

Preparation for revision

Read and digest lecture notes:
- summarise topics
- identify areas for further study

Practise on examples:
- review worked examples
- complete exercises
- do additional exercises in books

Rehearse exam technique:
- attempt a past exam paper

S.16