COMPARISON OF TWO BINAURAL BEAMFORMING APPROACHES FOR HEARING AIDS


Elior Hadad 1, Daniel Marquardt 2, Wenqiang Pu 3, Sharon Gannot 1, Simon Doclo 2, Zhi-Quan Luo 4, Ivo Merks 5 and Tao Zhang 5

1 Faculty of Engineering, Bar-Ilan University, Ramat-Gan, Israel
2 University of Oldenburg, Dept. of Medical Physics and Acoustics, Oldenburg, Germany
3 National Lab of Radar Signal Processing, Xidian University, Xi'an, China
4 Dept. of Elect. and Computer Engineering, University of Minnesota, Minneapolis, MN, 55455, USA
5 Starkey Hearing Technologies, Eden Prairie, MN, 55344, USA

ABSTRACT

Beamforming algorithms in binaural hearing aids are crucial to improve speech understanding in background noise for hearing-impaired persons. In this study, we compare and evaluate the performance of two recently proposed minimum variance (MV) beamforming approaches for binaural hearing aids. The binaural linearly constrained MV (BLCMV) beamformer applies linear constraints to maintain the target source and mitigate the interfering sources, taking into account the reverberant nature of sound propagation. The inequality constrained MV (ICMV) beamformer applies inequality constraints to maintain the target source and mitigate the interfering sources, utilizing estimates of the directions of arrival (DOAs) of the target and interfering sources. The similarities and differences between these two approaches are discussed, and the performance of both algorithms is evaluated using simulated data and real-world recordings, particularly focusing on the robustness to estimation errors of the relative transfer functions (RTFs) and DOAs. The BLCMV achieves a good performance if the RTFs are accurately estimated, while the ICMV shows a good robustness to DOA estimation errors.

Index Terms: Binaural signal processing, acoustic beamforming, binaural hearing aids, LCMV, noise reduction

1. INTRODUCTION

Although in the last decades hearing aids have evolved from simple sound amplifiers to modern digital devices with complex functionalities, speech understanding is still a challenging problem for the hearing aid user in the presence of background noise and reverberation. Hence, beamforming algorithms in hearing aids are crucial to improve speech understanding in background noise for hearing-impaired persons. With the advent of wireless technology, it is currently possible to not only use the microphones of the left or the right hearing aid separately, but to use the microphones of both hearing aids simultaneously (binaural configuration) for improved noise reduction [1, 2, 3, 4, 5, 6, 7].

Binaural beamforming approaches usually require an estimate of the correlation matrices of the desired target and the undesired interference components and/or an estimate of the directions of arrival (DOAs) of the target and interfering sources [7]. The estimated correlation matrices can be used to estimate the relative transfer functions (RTFs) or DOAs of the target and interfering sources in order to achieve noise reduction [8, 9]. However, the performance of these algorithms may significantly deteriorate in case of estimation errors. Hence, for binaural hearing aids, approaches have been proposed that aim to increase the robustness to estimation errors by using additional equality constraints [12] or inequality constraints [10, 11].

The aim of this study is to compare the performance and robustness of two minimum variance (MV) approaches for noise reduction in binaural hearing aids, which aim to minimize the background noise component while preserving the target source and suppressing interfering sources. On the one hand, the binaural linearly constrained MV (BLCMV) beamformer aims to achieve noise reduction by exploiting estimates of the RTFs of the target and interfering sources. On the other hand, the inequality constrained MV (ICMV) beamformer aims to achieve noise reduction by exploiting estimates of the DOAs of the target and interfering sources, and utilizes additional robustness constraints for the target source in order to increase the robustness to DOA estimation errors [10, 11]. First, the BLCMV and the ICMV will be reviewed, and similarities and differences between these two approaches will be discussed. Second, the performance of both algorithms will be evaluated in a simulated environment and using real-world recordings, particularly focusing on the robustness to estimation errors of the estimated RTFs and DOAs.

2. PROBLEM FORMULATION

We consider a binaural hearing aid system consisting of two hearing devices with a total of M microphones and an acoustic scenario comprising one target speech source and N_u directional interfering sources in a noisy and reverberant environment. In the frequency domain, the M-dimensional stacked vector of the received microphone signals y(ω) can be written as

    y(ω) = x(ω) + u(ω) + n(ω),    (1)

where x(ω) is the target component, u(ω) the interfering sources component, and n(ω) the background noise component (e.g., diffuse noise). For brevity, the frequency variable ω is henceforth omitted. The target and interfering source components can be written as x = s_x h_x and u = Σ_{r=1}^{N_u} s_{u,r} h_{u,r}, where s_x and s_{u,r} denote the target and the r-th interfering signals, and h_x and h_{u,r} denote the entire reverberant acoustic transfer function (ATF) vectors relating the target and the r-th interfering source to the microphones, respectively.

978-1-5090-4117-6/17/$31.00 ©2017 IEEE    ICASSP 2017
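The per-frequency-bin signal model (1) and the reference-microphone selection are easy to sketch numerically. The following toy example uses random placeholder ATF vectors, source signals, and dimensions (not the paper's data) purely to make the stacked-vector notation concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N_u = 4, 2  # total microphones, number of interferers (illustrative values)

# Random complex ATF vectors for a single frequency bin (placeholders).
h_x = rng.standard_normal(M) + 1j * rng.standard_normal(M)
h_u = rng.standard_normal((N_u, M)) + 1j * rng.standard_normal((N_u, M))

s_x = 1.0 + 0.5j                                    # target signal at this bin
s_u = rng.standard_normal(N_u) + 1j * rng.standard_normal(N_u)

x = s_x * h_x                                        # target component
u = sum(s_u[r] * h_u[r] for r in range(N_u))         # interfering component
n = 0.1 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))  # noise

y = x + u + n                                        # Eq. (1)

# Reference microphone (first mic of the left device): y_L = e_L^H y
e_L = np.zeros(M)
e_L[0] = 1.0
y_L = np.vdot(e_L, y)
assert np.isclose(y_L, y[0])
```

The selector vector e_L simply picks out the reference channel; in a full system this model holds independently in every frequency bin of a short-time transform.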

Note that the entire reverberant ATF vector for the target source can be decomposed as

    h_x = h_θ + h_reverb,    (2)

where θ is the DOA angle from the listener's head center to the source, h_θ is the anechoic (direct-path) ATF vector, and h_reverb is the residual binaural room reverberant (early reflections plus late reverberation) ATF vector. A similar decomposition can be defined for the interfering sources. The background noise correlation matrix R_n = E{n n^H} is assumed to be full-rank, where E{·} denotes the expectation operator.

In this paper, a binaural setup with monaural output is described. Without loss of generality, the reference microphone is selected as the first microphone of the left hearing device (e.g., closest to the left ear). The reference microphone signal is given by y_L = e_L^H y, where e_L is the M-dimensional selector vector with one element equal to one and all other elements equal to zero. The monaural output signal for the left hearing aid is obtained by applying the beamformer to all microphone signals from both hearing aids, i.e., z = w^H y, where w is the M-dimensional complex-valued weight vector.

3. TWO BINAURAL BEAMFORMING FORMULATIONS

Sections 3.1 and 3.2 briefly review the considered BLCMV and ICMV beamformers. In Section 3.3, a comparison between the two considered beamformers is given.

3.1. Binaural LCMV (BLCMV)

The BLCMV beamformer is designed to reproduce the target component at the reference microphone, while reducing the directional interfering sources and minimizing the background noise power [13, 14]. The formulation of the BLCMV is

    min_w w^H R_n w   s.t.   w^H h̃_x = 1,   w^H h̃_{u,k} = η,   k = 1, ..., N_u,    (3)

where the scaling parameter η, with 0 ≤ η ≤ 1, sets the amount of interference reduction, and h̃_x ≜ h_x / h_{x,r} and h̃_{u,k} ≜ h_{u,k} / h_{u,k,r} are the reverberant RTF vectors, defined as the reverberant ATF vectors normalized with respect to the r-th microphone, which serves as a reference. The BLCMV criterion can be written in a compact way, i.e.,

    min_w { w^H R_n w }   s.t.   C^H w = g,    (4)

where the constraint set is given by

    C = [ h̃_x   h̃_{u,1}   ...   h̃_{u,N_u} ],    g = [ 1   η 1^T_{N_u} ]^T.    (5)

The solution to the BLCMV problem is given by

    w = R_n^{-1} C [ C^H R_n^{-1} C ]^{-1} g.    (6)

3.2. ICMV

The ICMV is a binaural beamforming algorithm which is designed with robustness to real-world variations by imposing DOA-based inequality constraints to protect the target speaker signal and reject the interfering source signals. The ICMV formulation [10] is revisited briefly below. Suppose that the anechoic ATF vector h_θ for different incidence angles θ is available from a pre-existing database. The formulation of the ICMV can be written as

    min_w w^H R_n w   s.t.   |w^H h̄_θ − 1|² ≤ ε_θ, θ ∈ Θ,   |w^H h̄_φ|² ≤ ε_φ, φ ∈ Φ,    (7)

where h̄_θ ≜ h_θ / h_{θ,r} is the anechoic RTF vector, defined as the anechoic ATF vector normalized with respect to the r-th microphone, which serves as a reference. The angle set Θ includes a finite number of directions that are close to the estimated DOA τ of the target source, for instance, Θ = {τ−, τ, τ+}. The corresponding ε_θ specifies a tolerable speech distortion (SD). Similarly, the angle set Φ includes the estimated DOAs of the interfering speakers, whose amplification strength should not exceed a pre-defined threshold ε_φ.

The optimization problem in (7) is a convex quadratically constrained quadratic program (QCQP), which does not have a closed-form solution. To design a low-complexity algorithm, we introduce auxiliary variables {δ_θ}, θ ∈ Θ, and {δ_φ}, φ ∈ Φ, and reformulate (7) as

    min_w min_{{δ_θ},{δ_φ}} w^H R_n w
    s.t.   w^H h̄_θ = δ_θ, θ ∈ Θ    (8a)
           w^H h̄_φ = δ_φ, φ ∈ Φ    (8b)
           |δ_θ − 1|² ≤ ε_θ, θ ∈ Θ    (8c)
           |δ_φ|² ≤ ε_φ, φ ∈ Φ.    (8d)

The structured optimization problem in (8) can be solved efficiently by the celebrated ADMM algorithm [15], where in each update step all optimization variables are obtained in closed form with low computational effort. Detailed derivations can be found in [11].

3.3.
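The closed-form BLCMV solution (6) is straightforward to verify numerically. The sketch below uses random placeholder RTF vectors and a synthetic full-rank noise covariance (not estimated quantities from real data) and checks that the resulting weights satisfy the constraints in (3):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N_u, eta = 4, 2, 0.1  # illustrative dimensions and interference scaling

def rand_rtf():
    """Placeholder reverberant RTF vector, normalized to the reference mic."""
    h = rng.standard_normal(M) + 1j * rng.standard_normal(M)
    return h / h[0]

h_x = rand_rtf()
h_u = [rand_rtf() for _ in range(N_u)]

# Synthetic Hermitian full-rank noise covariance with diagonal loading.
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R_n = A @ A.conj().T + 0.1 * np.eye(M)

# Constraint set (5): C = [h_x, h_u1, ..., h_uNu], g = [1, eta, ..., eta]^T.
C = np.column_stack([h_x] + h_u)
g = np.concatenate(([1.0], eta * np.ones(N_u)))

# Closed-form BLCMV solution (6): w = R_n^{-1} C (C^H R_n^{-1} C)^{-1} g.
Rinv_C = np.linalg.solve(R_n, C)
w = Rinv_C @ np.linalg.solve(C.conj().T @ Rinv_C, g)

assert np.isclose(w.conj() @ h_x, 1.0)                 # distortionless target
assert np.allclose([w.conj() @ h for h in h_u], eta)   # interferers scaled to eta
```

Using `np.linalg.solve` instead of an explicit inverse keeps the computation numerically stable; since R_n is Hermitian and g is real, satisfying C^H w = g also satisfies the conjugated constraints w^H h̃ as written in (3).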
Comparison of Algorithms

Both the BLCMV and the ICMV beamformers aim to extract the target source while reducing the interfering sources and minimizing the noise, i.e., both criteria are MV criteria subject to constraints for both the target and the interfering sources. However, several major differences between the considered beamformers should be noted, which relate to 1) the type of steering vectors that construct the beamformers, 2) the type of constraints imposed on their cost function, and 3) their trade-off parameters.

The BLCMV utilizes the reverberant RTF steering vectors. The reverberant RTF vectors are estimated from the recorded data. Hence, the RTF vectors are data-dependent and vary for different recorded data. The BLCMV beamformer performance strongly relies on the quality of the reverberant RTF estimation. The estimated RTF vectors take into account the specific listener's head transfer function. Moreover, the estimated RTF vectors also take into account the specific room transfer function, such that the spatial filtering is matched to the room acoustics. Reverberant RTF estimation procedures are described in [13, 16, 17]. As a result of the equality constraints, a distortionless response towards the target source direction is imposed. The scaling parameter η controls the exact amount of interference reduction, i.e., the depth of the null.

                          BLCMV                     ICMV
Criterion                 MV                        MV
Steering vectors          Reverberant RTFs          Anechoic RTFs
                          Data-driven/estimated     Fixed/from database
Constraints
  Target source           Equality                  Inequality
  Interfering sources     Equality                  Inequality
Estimation requirement
  Directional sources     RTF steering vectors      DOA
  Background noise        R_n                       R_n

Table 1. Comparison of BLCMV and ICMV beamformers.

The ICMV utilizes anechoic RTF steering vectors with respect to an estimated DOA of the sources. The anechoic RTF vectors are fixed (data-independent) and obtained from an existing database. The anechoic RTF vectors are typically measured on a head-and-torso simulator in an anechoic room and hence do not take into account the specific listener's head-related transfer function or the room acoustics. The number of inequality constraints around the estimated DOA of the sources and the trade-off parameters ε_θ and ε_φ control the robustness to head movements and steering errors.

In general, as the number of either equality or inequality constraints increases, the degrees of freedom for the MV minimization decrease, which results in a lower noise reduction performance. In practice, the number of constraints in the BLCMV and the ICMV is a trade-off between robustness and noise reduction.

Relation between the BLCMV and the ICMV: despite the use of different steering vectors (anechoic RTFs for the ICMV versus estimated reverberant RTFs for the BLCMV) in the two beamforming approaches, the ICMV can be regarded as a generalization of the BLCMV, since the equality constraints are relaxed to inequality constraints. The problem in (8) can be solved sequentially in two stages: in the first stage, we minimize w^H R_n w subject to the linear constraints (8a) and (8b) with fixed right-hand side values δ_θ and δ_φ, which is exactly an LCMV problem, whereas in the second stage, we optimize δ_θ and δ_φ in the parameter space defined by (8c) and (8d). Thus, the ICMV can be viewed as selecting a beamformer with the optimal parameters (δ_θ and δ_φ) in the linear constraints. The comparison of the considered beamformers is summarized in Table 1.
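The two-stage interpretation above can be made concrete with a toy experiment: for fixed right-hand sides the constraints define an LCMV problem with a closed-form solution, and searching the right-hand sides inside the tolerance regions can only lower the output noise power relative to the equality-constrained choice. The sketch below uses random placeholder steering vectors and, for simplicity, a real-valued grid over the auxiliary variables (they are complex in general):

```python
import numpy as np

rng = np.random.default_rng(2)
M = 4  # illustrative number of microphones

def lcmv(R, C, g):
    """Closed-form LCMV: w = R^{-1} C (C^H R^{-1} C)^{-1} g."""
    RC = np.linalg.solve(R, C)
    return RC @ np.linalg.solve(C.conj().T @ RC, g)

A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R_n = A @ A.conj().T + 0.1 * np.eye(M)          # synthetic noise covariance

h_t = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # target steering
h_i = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # interferer steering
C = np.column_stack([h_t, h_i])

def noise_power(w):
    return (w.conj() @ R_n @ w).real

# Stage 1 at the equality-constrained point (delta_t, delta_i) = (1, 0).
w_eq = lcmv(R_n, C, np.array([1.0, 0.0]))
p_eq = noise_power(w_eq)

# Stage 2: grid search inside |delta_t - 1|^2 <= eps_t and |delta_i|^2 <= eps_i.
eps_t, eps_i = 0.04, 0.01
best = p_eq
for dt in np.linspace(1 - np.sqrt(eps_t), 1 + np.sqrt(eps_t), 21):
    for di in np.linspace(-np.sqrt(eps_i), np.sqrt(eps_i), 21):
        best = min(best, noise_power(lcmv(R_n, C, np.array([dt, di]))))

# Relaxing equality to inequality constraints cannot increase the MV cost.
assert best <= p_eq + 1e-12
```

The actual ICMV solves this inner/outer problem jointly and efficiently with ADMM rather than a grid search; the sketch only illustrates why the relaxation is a generalization.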
4. EXPERIMENTAL STUDY

In this section, we present results for simulated data (Section 4.2) and real-world recordings (Section 4.3). All signals were sampled with a sampling frequency of kHz. For the desired speaker, groups of sentences (each group has a length of at least 3 seconds) from the HINT database [18] were used, with seconds of silence between subsequent groups. For the simulated data, the original HINT recordings have been used, while for the real-world recordings the HINT sentences were spoken by the target speaker. The interfering sources are continuous speech signals taken from the HINT database, the rainbow passage [19], the ISMADHA test signal [20] and a male recording of Arizona Travelogue (Cosmos, Inc.) [21]. Each stimulus started with a 3-second-long diffuse babble noise initialization phase to estimate the noise correlation matrix. After the initialization phase, each target and interfering speaker talked individually for several seconds while diffuse background noise was continuously present. These segments have been used to estimate the target-source-plus-noise correlation matrix and the interfering-source-plus-noise correlation matrices. The estimated correlation matrices are then used to estimate the RTFs of each source (required in the BLCMV) using generalized eigenvalue decomposition [13, 16], and the DOAs for each source (required in the ICMV) are estimated during these segments using the generalized cross-correlation function with phase transform [22]. Based on the DOA estimate, the anechoic ATFs of the hearing aid microphones, which were measured on a KEMAR dummy head in an anechoic chamber, were used in the ICMV with a resolution of 5°. The two approaches are evaluated using the intelligibility-weighted signal-to-noise ratio improvement (IW-SNRI) [23] and the intelligibility-weighted speech distortion (IW-SD) [24].

4.1. Simulation Setup and Algorithm Parameters

For both the BLCMV and the ICMV, the number of constraints depends on the number of interfering sources in the acoustic scenario.
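The DOA estimation step mentioned above uses the generalized cross-correlation function with phase transform (GCC-PHAT) [22]. A minimal two-microphone sketch on a synthetic, circularly shifted signal (illustrative sampling rate, delay, and noise level, not the paper's setup) is:

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 16000                       # assumed sampling rate for this toy example
n = 4096
true_delay = 12                  # delay of mic 2 relative to mic 1, in samples

sig = rng.standard_normal(n)
x1 = sig + 0.01 * rng.standard_normal(n)
x2 = np.roll(sig, true_delay) + 0.01 * rng.standard_normal(n)

# GCC-PHAT: cross-spectrum whitened by its magnitude, then inverse FFT.
X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
cross = X1 * np.conj(X2)
cc = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n=n)

# The peak location gives the time difference of arrival (circular, so re-center).
lag = int(np.argmax(cc))
if lag > n // 2:
    lag -= n
# With this ordering (X1 * conj(X2)), a positive shift of x2 shows up as a
# negative lag.
assert lag == -true_delay
```

In a binaural hearing aid the estimated lag would then be mapped to a DOA using the microphone geometry (and, more accurately, head-related propagation models), which is outside the scope of this sketch.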
While for the BLCMV the number of linear constraints is equal to the number of interfering sources plus one (cf. Section 3.1), for the ICMV additional robustness constraints for the target source are imposed if a sufficient number of degrees of freedom is available. Since for the simulated data (cf. Section 4.2) altogether 6 hearing aid microphones are available, additional degrees of freedom are utilized to increase the robustness of the ICMV to DOA estimation errors. Since for the recorded data (cf. Section 4.3) only 4 hearing aid microphones are available, only one inequality constraint for each source is imposed due to an insufficient number of degrees of freedom. For the BLCMV, the trade-off parameter η is set to zero for all scenarios, and the scenario-dependent parameter settings for the ICMV are presented in Tables 2 and 3. The signals are processed in a weighted overlap-add framework with a block length of samples and 50% overlap between successive blocks.

              Target source          Interfering sources
Est. DOA      τ =                    ζ1 = 33, ζ2 = 37
Angle set     Θ = {τ−, τ, τ+}        Φ = {ζ1}, {ζ1, ζ2}
Tolerance     ε_θ = {., ., .}        ε_φ = {.}, {., .}

Table 2. Setup of the ICMV for the simulated data.

              Target source          Interfering sources
Est. DOA      τ =                    ζ1 = 5, ζ2 = 35
Angle set     Θ = {τ}                Φ = {ζ1, ζ2}
Tolerance     ε_θ = {.}              ε_φ = {.5, .5}

Table 3. Setup of the ICMV for the recorded data.
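The weighted overlap-add processing can be sketched as follows; the block length and window are illustrative choices (a square-root Hann analysis/synthesis pair at 50% overlap satisfies the overlap-add condition), and the spot where per-bin beamformer weights would be applied is marked:

```python
import numpy as np

block = 256                                    # illustrative block length
hop = block // 2                               # 50% overlap
win = np.sqrt(np.hanning(block + 1)[:block])   # periodic sqrt-Hann window

x = np.random.default_rng(4).standard_normal(hop * 40)
y = np.zeros_like(x)

for start in range(0, len(x) - block + 1, hop):
    frame = win * x[start:start + block]
    X = np.fft.rfft(frame)
    # In a beamformer, one complex weight vector per frequency bin would be
    # applied to the stacked multichannel spectra here; this sketch passes
    # the single-channel spectrum through unchanged.
    y[start:start + block] += win * np.fft.irfft(X, n=block)

# Interior samples are reconstructed exactly (COLA holds for sqrt-Hann, 50% hop).
err = np.max(np.abs(y[block:-block] - x[block:-block]))
assert err < 1e-10
```

The squared analysis/synthesis windows sum to one across the 50% shifted copies, which is why the pass-through reconstructs the interior of the signal exactly; edge blocks are only partially weighted and are excluded from the check.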
4.2. Simulated Data

For the simulated data, the room impulse responses from the sources to the left and the right hearing aid microphones are generated using the image method [25], where the hearing aid microphones are simulated as being positioned on a rigid sphere [26, 27] in order to take the head shadow effect into account. For each hearing aid, 3 microphones with a distance of 7.5 mm are available, i.e., altogether 6 microphones. The size of the room is similar to the room in the Starkey database [28], and the reflection coefficients of the surfaces are chosen such that the reverberation time is the same as the reverberation time of the room in the Starkey database, which is . s. The target source is positioned at with a distance of m to the hearing aid user, and the interfering sources are positioned at and 33 with a distance of .5 m from the hearing aid user. We used a scenario with one interfering source and a scenario with two interfering sources, with a signal-to-interference ratio (SIR) of dB and SNRs of (5, 0) dB

(cf. Table 4). Diffuse babble noise is simulated by adding up the signals of simulated speakers distributed over the simulated room. The RTFs and DOAs are estimated as described at the beginning of Section 4. The impact of source position misalignment on the performance of the BLCMV and the ICMV is evaluated by estimating the RTFs and DOAs of all sources with a mismatch of to the actual source position, denoted as BLCMV (mis.) and ICMV (mis.). For the ICMV this results in a DOA estimate of for the target source and 7 and 3 for the interfering sources (cf. Table 2). In order to increase the robustness of the BLCMV, an average RTF for the target source has been calculated by averaging the signal statistics over source positions between and , denoted as BLCMV (avg.).

The results are depicted in Fig. 1 and Fig. 2. For the matched case (BLCMV and ICMV), the BLCMV shows the best performance in terms of IW-SNRI and IW-SD for both acoustic scenarios and input SNRs. In the mismatch case (BLCMV (mis.) and ICMV (mis.)), the performance of the BLCMV significantly drops, while the ICMV shows a good robustness to estimation errors, especially in terms of IW-SD. Using the average RTF estimate for the target source in the BLCMV (BLCMV (avg.)) increases the robustness of the BLCMV, while the IW-SNRI and IW-SD performance is comparable to the performance of the ICMV and the ICMV (mis.).

Table 4. Simulation conditions for the simulated data (Target [°], SNR [dB], SIR [dB], Interferers [°]).

Fig. 1. IW-SNRI and IW-SD for the simulated scenario with two interfering sources.

Fig. 2. IW-SNRI and IW-SD for the simulated scenario with one interfering source.

4.3. Recorded Data

For the second experiment, real-world recordings from the Starkey database [28] have been used. This database contains recordings with binaural hearing aids with 2 microphones in each hearing aid, mounted on actual people. The target talker was a male talker who was sitting at a table in front of the hearing aid user. The two interfering talkers were two female talkers who were positioned at 5 and 35 at the same table. The diffuse background noise was generated by 5 talking people distributed over the room. The room had a size of .7 × 3. m and a reverberation time of approximately . s. The SIR was equal to −5 dB and 5 dB, respectively, and the SNR was equal to 5 dB and 0 dB, respectively (cf. Table 5).

Table 5. Simulation conditions for the recorded data (Target [°], SNR [dB], SIR [dB], Interferers [°]).

The results are depicted in Fig. 3. For the real-world recordings, the IW-SNRI of both the BLCMV and the ICMV is lower compared to the results for the simulated data. One of the two beamformers generally shows a better performance in terms of IW-SNRI, while the other generally performs better in terms of IW-SD. While for an input (SIR, SNR) of (−5, 5) dB the performance of the BLCMV and the ICMV is very similar, for all other conditions one beamformer outperforms the other by dB in terms of IW-SNRI and shows a similar performance in terms of IW-SD.

Fig. 3. IW-SNRI and IW-SD for the recorded scenario with two interfering sources.

5. CONCLUSIONS AND FURTHER RESEARCH

In this paper, two MV beamforming approaches for binaural hearing aid applications, namely the BLCMV and the ICMV beamformers, were explored. The two approaches differ in their treatment of the constraint set. While the BLCMV uses equality constraints regarding the RTFs of the sources, the ICMV uses inequality constraints regarding the DOAs of the sources. The BLCMV beamformer achieves a good performance if the RTF vectors are accurately estimated, while the ICMV beamformer shows a good robustness to DOA estimation errors. Integrating inequality reverberant-RTF constraints into the MV cost function is a topic for further research.

6. REFERENCES

[1] V. Hamacher, U. Kornagel, T. Lotter, and H. Puder, "Binaural signal processing in hearing aids: Technologies and algorithms," in Advances in Digital Speech Transmission. Wiley, New York, NY, USA, 2008.
[2] M. Dörbecker and S. Ernst, "Combination of two-channel spectral subtraction and adaptive Wiener post-filtering for noise reduction and dereverberation," in Proc. European Signal Processing Conference (EUSIPCO), Trieste, Italy, Sep. 1996, pp. 995–998.
[3] K. Reindl, Y. Zheng, and W. Kellermann, "Speech enhancement for binaural hearing aids based on blind source separation," in 4th Int. Symp. on Communications, Control and Signal Proc. (ISCCSP), March 2010.
[4] T. Wittkop and V. Hohmann, "Strategy-selective noise reduction for binaural digital hearing aids," Speech Communication, vol. 39, no. 1–2, pp. 111–138, Jan. 2003.
[5] T. Lotter and P. Vary, "Dual-channel speech enhancement by superdirective beamforming," EURASIP Journal on Applied Signal Processing, vol. 2006, 2006.
[6] B. Cornelis, M. Moonen, and J. Wouters, "Reduced-bandwidth multi-channel Wiener filter based binaural noise reduction and localization cue preservation in binaural hearing aids," Signal Processing, vol. 99, 2014.
[7] S. Doclo, W. Kellermann, S. Makino, and S. E. Nordholm, "Multichannel signal enhancement algorithms for assisted listening devices: Exploiting spatial diversity using multiple microphones," IEEE Signal Processing Magazine, vol. 32, no. 2, pp. 18–30, Mar. 2015.
[8] E. Hadad, D. Marquardt, S. Doclo, and S. Gannot, "Theoretical analysis of binaural transfer function MVDR beamformers with interference cue preservation constraints," IEEE/ACM Trans. Audio, Speech and Language Proc., vol. 23, no. 12, Dec. 2015.
[9] D. Marquardt, E. Hadad, S. Gannot, and S. Doclo, "Theoretical analysis of linearly constrained multi-channel Wiener filtering algorithms for combined noise reduction and binaural cue preservation in binaural hearing aids," IEEE/ACM Trans. Audio, Speech and Language Proc., vol. 23, no. 12, pp. 2384–2397, Dec. 2015.
[10] W. C. Liao, M. Hong, I. Merks, T. Zhang, and Z. Q. Luo, "Incorporating spatial information in binaural beamforming for noise suppression in hearing aids," in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brisbane, Australia, Apr. 2015.
[11] W. C. Liao, Z. Q. Luo, I. Merks, and T. Zhang, "An effective low complexity binaural beamforming algorithm for hearing aids," in IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 2015.
[12] Y. Suzuki, S. Tsukui, F. Asano, R. Nishimura, and T. Sone, "New design method of a binaural microphone array using multiple constraints," IEICE Trans. Fundamentals, vol. E82-A, Apr. 1999.
[13] E. Hadad, S. Doclo, and S. Gannot, "The binaural LCMV beamformer and its performance analysis," IEEE/ACM Trans. Audio, Speech and Language Proc., vol. 24, no. 3, pp. 543–558, Mar. 2016.
[14] E. Hadad, S. Gannot, and S. Doclo, "Binaural linearly constrained minimum variance beamformer for hearing aid applications," in Proc. Int. Workshop on Acoustic Signal Enhancement (IWAENC), Aachen, Germany, Sep. 2012.
[15] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.
[16] S. Markovich, S. Gannot, and I. Cohen, "Multichannel eigenspace beamforming in a reverberant environment with multiple interfering speech signals," IEEE Trans. Audio, Speech and Language Proc., vol. 17, no. 6, pp. 1071–1086, Aug. 2009.
[17] S. Gannot, D. Burshtein, and E. Weinstein, "Signal enhancement using beamforming and nonstationarity with applications to speech," IEEE Trans. Signal Processing, vol. 49, no. 8, pp. 1614–1626, Aug. 2001.
[18] M. Nilsson, S. D. Soli, and J. A. Sullivan, "Development of the Hearing In Noise Test for the measurement of speech reception thresholds in quiet and in noise," J. of the Acoustical Society of America, vol. 95, pp. 1085–1099, Feb. 1994.
[19] G. Fairbanks, Voice and Articulation Drillbook, Harper, New York, 2nd edition, 1960, https://www.york.ac.uk/media/languageandlinguistics/documents/currentstudents/linguisticsresources/standardised-reading.pdf.
[20] I. Holube, S. Fredelake, M. Vlaming, and B. Kollmeier, "Development and analysis of an International Speech Test Signal (ISTS)," International Journal of Audiology, vol. 49, no. 12, pp. 891–903, 2010.
[21] A. K. Nabelek, M. C. Freyaldenhoven, J. W. Tampas, S. B. Burchfield, and R. A. Muenchen, "Acceptable noise level as a predictor of hearing aid use," Journal of the American Academy of Audiology, vol. 17, no. 9, 2006.
[22] C. Knapp and G. Carter, "The generalized correlation method for estimation of time delay," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 24, no. 4, pp. 320–327, Aug. 1976.
[23] J. E. Greenberg, P. M. Peterson, and P. M. Zurek, "Intelligibility-weighted measures of speech-to-interference ratio and speech system performance," J. of the Acoustical Society of America, vol. 94, no. 5, pp. 3009–3010, Nov. 1993.
[24] A. Spriet, M. Moonen, and J. Wouters, "Robustness analysis of multichannel Wiener filtering and generalized sidelobe cancellation for multimicrophone noise reduction in hearing aid applications," IEEE Trans. on Audio, Speech, and Language Processing, vol. 13, no. 4, pp. 487–503, Jul. 2005.
[25] J. B. Allen and D. A. Berkley, "Image method for efficiently simulating small-room acoustics," J. of the Acoustical Society of America, vol. 65, no. 4, pp. 943–950, 1979.
[26] P. M. Morse, "Radiation from spheres," in Vibration and Sound, chapter 7. Acoustical Society of America, 3rd edition, 1981.
[27] P. M. Morse, "Scattering of a sound," in Vibration and Sound, chapter 9. Acoustical Society of America, 3rd edition, 1981.
[28] W. S. Woods, E. Hadad, I. Merks, B. Xu, S. Gannot, and T. Zhang, "A real-world recording database for ad hoc microphone arrays," in IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), Oct. 2015.