Residual Noise Control for Coherence-Based Dual-Microphone Speech Enhancement

2008 International Conference on Computer and Electrical Engineering

Behzad Zamani (Islamic Azad University, Shahrekord Branch) bzamani@iust.ac.ir
Mohsen Rahmani (Iran University of Science and Technology) m_rahmani@iust.ac.ir
Ahmad Akbari (Iran University of Science and Technology) akbari@iust.ac.ir

Abstract

Among various enhancement methods, dual-microphone methods are attractive for their low-cost implementation and their use of spatial noise reduction. In this paper, we present a dual-microphone noise reduction method for nearby microphones. Our method is based on a modified coherence-based filter which reduces correlated noise using cross power spectral subtraction. In the proposed method, the amount of noise reduction is controlled in order to achieve an acceptable compromise between residual noise and speech distortion. The performance of the enhancement technique is evaluated using the PESQ measure. The obtained results confirm the ability of the proposed approach to reduce the distortion of the enhanced signal.

1. Introduction

Over the last decades, with the development of new communication systems, speech enhancement has become more important than before. For instance, in new mobile headsets, the distance between the mouth and the microphone makes the received signal noisy. A noise reduction system helps to increase the quality of the noisy speech. Speech enhancement systems need to achieve both acceptable residual noise and acceptable speech distortion. Some frequency-domain noise reduction systems use a spectral modification filter to modify the noisy speech spectrum and remove the background noise. In these methods, distortion of the speech signal and residual noise are two well-known annoying effects which cannot be minimized simultaneously [1]. In [1], some compromises between noise reduction and speech distortion are presented. Other methods were presented later in the literature [2],[3].

General speech enhancement approaches are divided into two main categories: single-microphone and multi-microphone methods. Well-known single-microphone methods include spectral subtraction [4], the Wiener filter [5] and the minimum mean square error estimator [6]. Single-microphone methods have some limitations in real environments; they induce musical noise and speech distortion [7]. The advantage of multi-microphone approaches is their ability to exploit spatial noise reduction, i.e., noise reduction based on knowledge of the position of the speech sources. Multi-microphone methods include independent component analysis [8], beamforming techniques [9],[10] and generalized sidelobe canceling [11],[12]. The performance of multi-channel noise reduction algorithms improves as the number of microphones increases [9],[10], but a larger number of microphones implies higher costs and a higher computational load. We have therefore chosen dual-microphone approaches as the lowest-cost multi-channel methods.

Coherence-based methods [13],[14] are a subclass of the dual-microphone methods which have shown good results in uncorrelated noise environments, but their performance decreases if the captured noises are correlated. To cope with this problem, it is proposed in [13],[15] to subtract the cross power spectral density (CSD) of the noises from the CSD of the noisy signals. Hereafter, we refer to this method as cross power spectral subtraction (CSS). In this paper, we modify the CSS filter to reduce the distortion of the enhanced signal while keeping an acceptable level of residual noise.
Results show that by accepting some residual noise and reducing the distortion, the recognition rate is increased.

2. Basic Two-Microphone Speech Enhancement

In our system, it is assumed that each microphone receives both noise and speech. For microphone i, the received signal in the STFT domain can be written as:

$X_i(f,n) = S_i(f,n) + N_i(f,n), \quad i \in \{1,2\}$    (1)

where $X_i(f,n)$, $S_i(f,n)$ and $N_i(f,n)$ denote the noisy speech, the clean speech and the noise in the STFT domain, respectively, and $f$ and $n$ are the frequency and frame indexes.
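As a rough illustration of this processing chain (the block diagram of Fig. 1), the sketch below applies a spectral gain computed from both channels to the first channel only. It is a minimal sketch, not the authors' implementation: the helper name enhance_dual_mic, the use of scipy's STFT, and the frame length are assumptions.

```python
# Minimal sketch of the two-channel processing chain of Fig. 1.
# Hypothetical helper names; the gain function itself is derived in Sections 2-3.
import numpy as np
from scipy.signal import stft, istft

def enhance_dual_mic(x1, x2, fs, gain_fn, nperseg=256):
    """Apply a spectral modification filter W(f,n), computed from both
    channels, to the first channel only (signal model of Eq. (1))."""
    _, _, X1 = stft(x1, fs, nperseg=nperseg)   # X_1(f,n) = S_1 + N_1
    _, _, X2 = stft(x2, fs, nperseg=nperseg)   # X_2(f,n) = S_2 + N_2
    W = gain_fn(X1, X2)                        # e.g. coherence, CSS, Eq. (12)/(13)
    _, s_hat = istft(W * X1, fs, nperseg=nperseg)
    return s_hat
```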

Both $X_1$ and $X_2$ in Equation (1) are used to compute the spectral modification filter for enhancing the speech and reducing the noise. This procedure is shown in Fig. 1: the computed filter $W(f,n)$ is applied to the first channel.

Fig. 1: Block diagram of the two-microphone speech enhancement.

The spectral modification filter $W(f,n)$ can be computed from the coherence function between the two channels. The magnitude coherence function of the two signals $X_1$ and $X_2$ is defined as:

$\Gamma(f,n) = \dfrac{|P_{X_1X_2}(f,n)|}{\sqrt{P_{X_1X_1}(f,n)\,P_{X_2X_2}(f,n)}}$    (2)

where $P_{X_1X_1}(f,n)$, $P_{X_2X_2}(f,n)$ and $P_{X_1X_2}(f,n)$ are the power spectral density of $X_1(f,n)$, the power spectral density of $X_2(f,n)$ and the cross spectral density of $X_1(f,n)$ and $X_2(f,n)$, respectively. The cross power spectral density and the power spectral densities can be estimated recursively as:

$P_{X_iX_j}(f,n) = \lambda_x P_{X_iX_j}(f,n-1) + (1-\lambda_x)\,X_i(f,n)X_j^*(f,n), \quad i,j \in \{1,2\}$    (3)

where $\lambda_x$ is a smoothing factor in the range [0,1], with a recommended value of 0.7 [15].

In coherence-based methods, the frequency components of the filter vary according to the amount of coherence between the channels. It is assumed that the received speech signals are correlated while the received noise signals are uncorrelated. Thus, higher values of the coherence function correspond to an increased level of desired speech in the signal; conversely, lower values of the coherence function correspond to an increased level of noise. These properties motivate the use of the coherence function in speech enhancement systems. However, the assumption that the received noise signals are uncorrelated is often not valid. In [15], the authors proposed a way to improve the coherence-based method by adapting it to correlated noises. In this approach, an estimate of the noise CSD is subtracted from the CSD of the noisy signals:

$W_{CSS}(f,n) = \dfrac{|P_{X_1X_2}(f,n)| - |\hat{P}_{N_1N_2}(f,n)|}{\sqrt{P_{X_1X_1}(f,n)\,P_{X_2X_2}(f,n)}}$    (4)

The subtraction in the numerator of Equation (4) is done in order to approximate the CSD of the clean signals [15]:

$|P_{S_1S_2}(f,n)| \approx |P_{X_1X_2}(f,n)| - |\hat{P}_{N_1N_2}(f,n)|$    (5)

2.1. Using a priori SNR estimation

Like the spectral subtraction method, this subtraction may induce musical noise and high distortion in the speech signal. To reduce the amount of musical noise and distortion, we propose to employ a decision-directed approach [6]. In our approach, the numerator of the filter is expressed through the cross SNR:

$|P_{S_1S_2}(f,n)| = \dfrac{SNR_{cs}(f,n)}{SNR_{cs}(f,n)+1}\,|P_{X_1X_2}(f,n)|$    (6)

where $SNR_{cs}(f,n)$, the ratio between the clean speech CSD and the noise CSD, is called the cross SNR. To estimate the cross SNR, a technique similar to a priori SNR estimation is employed. The advantages of the a priori signal-to-noise ratio are shown in many references such as [21]. Using the a priori SNR, we can reduce the amount of musical noise and distortion. The a priori cross SNR, similar to the single-channel a priori SNR [6], is estimated as follows:

$R_{po}(f,n) = \max\!\left[\dfrac{|P_{X_1X_2}(f,n)|}{|\hat{P}_{N_1N_2}(f,n)|} - 1,\; 0\right], \qquad R_{pr}(f,n) = \lambda_{DD}\,\hat{R}(f,n-1) + (1-\lambda_{DD})\,R_{po}(f,n)$    (7)

where $\hat{R}(f,n-1)$ is the cross SNR estimated from the previous enhanced frame and $\lambda_{DD}$ is a smoothing factor in the range [0,1], set to values close to one (for example, 0.97 in [6]). Using Equations (5) and (6) and the cross SNR estimate of Equation (7), $W_{CSS}(f,n)$ is modified as follows:

$W_{CSS}(f,n) = \dfrac{R_{pr}(f,n)}{R_{pr}(f,n)+1}\,\dfrac{|P_{X_1X_2}(f,n)|}{\sqrt{P_{X_1X_1}(f,n)\,P_{X_2X_2}(f,n)}}$    (8)
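To make Section 2.1 concrete, the following sketch implements the recursive spectral density smoothing of Eq. (3) and the gain of Eq. (8) with a decision-directed cross SNR, as reconstructed above. The function names, the small floor constant and the exact decision-directed recursion are assumptions, not the authors' code.

```python
import numpy as np

def smooth_csd(P_prev, Xi, Xj, lam_x=0.7):
    """Recursive (cross) power spectral density estimate, Eq. (3)."""
    return lam_x * P_prev + (1.0 - lam_x) * Xi * np.conj(Xj)

def css_gain_apriori(P_x1x2, P_x1x1, P_x2x2, P_n1n2, R_prev, lam_dd=0.97):
    """CSS gain with a decision-directed a priori cross SNR (Eqs. (6)-(8));
    a sketch under the reconstructions given in the text."""
    eps = 1e-12                                        # small floor, assumed
    # A posteriori cross SNR, floored at zero (first part of Eq. (7)).
    R_po = np.maximum(np.abs(P_x1x2) / (np.abs(P_n1n2) + eps) - 1.0, 0.0)
    # Decision-directed smoothing with the previous frame's estimate.
    R_pr = lam_dd * R_prev + (1.0 - lam_dd) * R_po
    # Eq. (8): the a priori gain times the magnitude coherence of Eq. (2).
    coh = np.abs(P_x1x2) / np.sqrt(np.abs(P_x1x1 * P_x2x2) + eps)
    return (R_pr / (R_pr + 1.0)) * coh, R_pr
```

In a frame-by-frame loop, smooth_csd would be called for each channel pair needed in Eqs. (2)-(4), and the returned R_pr would be fed back as R_prev for the next frame.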

2.2. Noise CSD estimation

A precise estimation of the noise CSD is crucial in order to obtain an accurate estimate of the speech signal. A technique for estimating the noise CSD is to update the noise estimate in non-speech regions and to freeze it during speech activity. This can be done using a voice activity detector (VAD). A VAD-based technique estimates the noise CSD as follows:

$\hat{P}_{N_1N_2}(f,n) = \begin{cases} \lambda_n \hat{P}_{N_1N_2}(f,n-1) + (1-\lambda_n)\,X_1(f,n)X_2^*(f,n) & \text{speech pauses} \\ \hat{P}_{N_1N_2}(f,n-1) & \text{speech frames} \end{cases}$    (9)

where $\lambda_n$ is a smoothing parameter between 0 and 1. Single- or dual-microphone VADs can be used to distinguish the speech and pause regions. We use a coherence-based VAD [16], in which the speech/pause regions are determined by applying a threshold to the coherence function. This relies on the fact that the coherence values for speech frames are higher than those for noise-only frames.

3. Residual Noise Control

Spectral modification filters are usually applied with the purpose of attenuating all of the noise. Eliminating the noise in the spectral domain generates musical noise and speech distortion. By controlling the amount of noise reduction, we can reduce these effects. To control the level of noise reduction with a single-channel Wiener filter, the following filter can be used [2]:

$W_{Wiener,\eta}(f,n) = \dfrac{P_{SS}(f,n) + \eta\,P_{NN}(f,n)}{P_{SS}(f,n) + P_{NN}(f,n)}$    (10)

in which $\eta$ is a parameter for noise reduction control, with a value between 0 and 1. With an ideal noise estimate, the level of background noise reduction can be written as a function of $\eta$ [2]:

$R_{dB} = 10\log_{10}\!\left(\eta^2 P_N\right) - 10\log_{10}\!\left(P_N\right) = 20\log_{10}\eta$    (11)

where $P_N$ is the power of the background noise in the noisy signal. For example, to adjust the filter gain so that the background noise is reduced by 10 dB, $\eta$ must be set to about 0.3. Using $\eta = 0$ gives the conventional Wiener filter, where the estimated noise is removed completely. In the case $\eta = 1$, the filter is equal to unity and there is no noise reduction.

As mentioned above, the CSS is a successful method in correlated-noise environments, but it introduces speech distortion compared to the primitive coherence method. Our goal is to extend the single-channel residual noise control to the CSS method. To control the level of noise reduction in the CSS method, we propose the following filter, which provides a compromise between residual noise and speech distortion:

$W_{\eta}(f,n) = \dfrac{|\hat{P}_{S_1S_2}(f,n)| + \eta\,|\hat{P}_{N_1N_2}(f,n)|}{\sqrt{P_{X_1X_1}(f,n)\,P_{X_2X_2}(f,n)}}$    (12)

It must be noted that here, in the case $\eta = 1$, the modification filter is not unity and the filter still reduces noise. This is due to the spatial noise reduction of the coherence function. Using the parameter $\eta$, we can tune the filter between the CSS filter and the primitive coherence function, and thus control the level of noise reduction. Figure 2 shows the block diagram of the proposed method.

Fig. 2: Block diagram of the proposed method.

In order to exploit the a priori SNR benefits, Equation (12) can be rewritten as:

$W_{\eta}(f,n) = \dfrac{R_{pr}(f,n) + \eta}{R_{pr}(f,n) + 1}\,\dfrac{|P_{X_1X_2}(f,n)|}{\sqrt{P_{X_1X_1}(f,n)\,P_{X_2X_2}(f,n)}}$    (13)
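The two ingredients added in Sections 2.2 and 3, the VAD-gated noise CSD update of Eq. (9) and the residual-noise-controlled gain of Eq. (13), can be sketched as follows. The value lam_n = 0.9 is an assumed example (the text only requires 0 < lam_n < 1), and the final line is a numerical check of the 20 log10(eta) relation of Eq. (11).

```python
import numpy as np

def update_noise_csd(P_n1n2, X1, X2, speech_active, lam_n=0.9):
    """Eq. (9): update the noise CSD only in speech pauses, as flagged by
    the coherence-based VAD; freeze it while speech is active."""
    if speech_active:
        return P_n1n2
    return lam_n * P_n1n2 + (1.0 - lam_n) * X1 * np.conj(X2)

def residual_controlled_gain(R_pr, P_x1x2, P_x1x1, P_x2x2, eta=0.3):
    """Eq. (13): eta = 0 falls back to the CSS filter of Eq. (8),
    eta = 1 falls back to the plain coherence function of Eq. (2)."""
    coh = np.abs(P_x1x2) / np.sqrt(np.abs(P_x1x1 * P_x2x2) + 1e-12)
    return ((R_pr + eta) / (R_pr + 1.0)) * coh

# Eq. (11) check: with an ideal noise estimate, eta = 0.3 leaves the residual
# noise about 20*log10(0.3) ~ -10.5 dB below the input noise level.
print(20.0 * np.log10(0.3))
```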

4. Evaluation

We used realistic recordings to evaluate the proposed method. We have used a speech database recorded with 4 microphones installed on a headset worn by a dummy head. The clean speech is played from a loudspeaker installed at the mouth of the head. In each experiment, we have used 2 microphones simultaneously. The distances between the employed microphones are as follows: microphones 1 and 2: related to the size of the head (80 mm for our model); microphones 2 and 3: fixed at 66 mm; microphones 3 and 4: fixed at 20 mm. The microphone positions are shown in Fig. 3.

Fig. 3: Position of the microphones on the head.

To generate the noisy signals, two noise types were added to the speech signals: car noise and babble noise. The car noise was recorded in a Samand (an Iranian car similar to the Peugeot 405) at a speed of about 80 km/h. The babble noise was recorded in a noisy cafeteria. The noise signals were added to their corresponding speech signals (the noise from microphone i is added to the speech from microphone i). The performance assessment is carried out using the PESQ (P.862) measure [22], a psychoacoustics-based objective measure originally proposed to assess the performance of speech codecs.

Table 1 shows the PESQ scores for the noisy and enhanced signals. The results are reported for three microphone pairs, three input SNRs and the two noise types, for the noisy signal and for the proposed filter (Eqs. (12)-(13)) with η = 0, 0.1, 0.2, 0.3, 0.4 and 1.

Table 1: PESQ scores for noisy and enhanced signals. (a) microphone pair (1,2); (b) microphone pair (2,3); (c) microphone pair (3,4).

To give a better view, the results for car noise on microphone pair (1,2) are depicted in Figure 4.

Fig. 4: Results for car noise, microphone pair (1,2).

The PESQ score of the noisy signal on microphone 3 is higher than the score on microphone 1. This is because the distance between the speaker and microphone 1 is larger than the distance between the speaker and microphone 3. It is known that the performance of coherence-based methods increases when the distance between the microphones increases; the PESQ scores in the table for microphone pair (3,4) compared with (1,2) do not confirm this. This may result from the fact that the initial scores of the noisy signals for microphone pair (1,2) are lower than for (3,4).

As can be seen, in all cases the best results are not obtained with the original CSS method (η = 0). Comparing the results of the coherence-based method (η = 1) and the original CSS method, for very noisy signals (0 dB and 5 dB input SNR) the CSS method performs better, while for the higher input SNRs the coherence-based method performs better. It is also seen for all methods that, as the input SNR increases, the settings with lower distortion and higher residual noise become more appropriate. A motivation for this effect is that for high input SNRs a strong noise reduction is not essential, while low distortion becomes more important.

5. Conclusion

In this paper, we have proposed a technique to control the level of residual noise in the coherence-based method. To this end, we have adjusted the cross power spectral subtraction modification filter to control the amount of noise reduction. The enhanced signals obtained on a realistic database confirm the advantage of residual noise control.
It was shown that rigid noise reduction, especially when the background noise is low, decreases the recognition rate and the speech quality, and that by controlling the level of noise reduction,

enhanced speech with low distortion can be achieved.

6. References

[1] Y. Ephraim, H. L. Van Trees, "A Signal Subspace Approach for Speech Enhancement," IEEE Trans. on Speech and Audio Processing, 3(4) (1995) 251-266.
[2] J. Chen, J. Benesty, Y. Huang, S. Doclo, "New Insights into the Noise Reduction Wiener Filter," IEEE Trans. on Audio, Speech, and Language Processing, 14(4) (2006) 1218-1234.
[3] M. Rahmani, R. Abdipour, A. Akbari, B. Ayad, "Background Noise Control for Speech Enhancement Applications," Annual Computer Society of Iran Computer Conference, Tehran, Iran, Jan. 2006, pp. 36-39.
[4] S. F. Boll, "Suppression of Acoustic Noise in Speech Using Spectral Subtraction," IEEE Trans. on Acoustics, Speech, and Signal Processing, vol. 27, no. 2, pp. 113-120, 1979.
[5] X. Huang, A. Acero, H.-W. Hon, Spoken Language Processing: A Guide to Theory, Algorithm, and System Development, Prentice Hall PTR, Upper Saddle River, N.J., 2001.
[6] Y. Ephraim, D. Malah, "Speech Enhancement Using a Minimum Mean-Square Error Short-Time Spectral Amplitude Estimator," IEEE Trans. on Acoustics, Speech, and Signal Processing, vol. 32, no. 6, Dec. 1984.
[7] J. R. Deller, J. H. L. Hansen, J. G. Proakis, Discrete-Time Processing of Speech Signals, IEEE Press, 2nd ed., New York, 2000.
[8] A. Hyvärinen, E. Oja, "Independent Component Analysis: Algorithms and Applications," Neural Networks, vol. 13, no. 4-5, pp. 411-430, 2000.
[9] B. D. Van Veen, K. M. Buckley, "Beamforming: A Versatile Approach to Spatial Filtering," IEEE ASSP Magazine, pp. 4-24, Apr. 1988.
[10] H. Krim, M. Viberg, "Two Decades of Array Signal Processing Research," IEEE Signal Processing Magazine, vol. 13, Jul. 1996.
[11] D. R. Campbell, P. W. Shields, "Speech Enhancement Using Sub-band Adaptive Griffiths-Jim Signal Processing," Speech Communication, vol. 39, pp. 97-110, 2003.
[12] L. J. Griffiths, C. W. Jim, "An Alternative Approach to Linearly Constrained Adaptive Beamforming," IEEE Trans. on Antennas and Propagation, vol. 30, no. 1, pp. 27-34, 1982.
[13] R. Le Bouquin, G. Faucon, "Using the Coherence Function for Noise Reduction," IEE Proceedings, 139(3), pp. 276-280, June 1992.
[14] J. B. Allen, D. A. Berkley, J. Blauert, "Multimicrophone Signal-Processing Technique to Remove Room Reverberation from Speech Signals," J. Acoust. Soc. of Amer., vol. 62, no. 4, pp. 912-915, 1977.
[15] R. Le Bouquin, A. A. Azirani, G. Faucon, "Enhancement of Speech Degraded by Coherent and Incoherent Noise Using a Cross-Spectral Estimator," IEEE Trans. on Speech and Audio Processing, Sep. 1997.
[16] R. Le Bouquin, G. Faucon, "Voice Activity Detector Based on the Averaged Magnitude Squared Coherence," International Conference on Signal Processing Applications and Technology, Oct. 1995.
[17] X. Zhang, Y. Jia, "A Soft Decision Based Noise Cross Power Spectral Density Estimation for Two-Microphone Speech Enhancement Systems," ICASSP 2005, Philadelphia, March 2005.
[18] A. Guerin, R. Le Bouquin, G. Faucon, "A Two-Sensor Noise Reduction System: Applications for Hands-Free Car Kit," EURASIP Journal on Applied Signal Processing, pp. 1125-1134, 2003.
[19] M. Rahmani, A. Akbari, B. Ayad, "A Modified Coherence Based Method for Dual Microphone Speech Enhancement," IEEE Int. Conf. on Signal Processing and Communication, Dubai, November 2007.
[20] P. Sovka, P. Pollak, J. Kybic, "Extended Spectral Subtraction," Proceedings of the European Signal Processing Conference (EUSIPCO-96), Trieste, Italy, Sep. 1996.
[21] P. Scalart, J. V. Filho, "Speech Enhancement Based on a priori Signal to Noise Estimation," ICASSP 1996, vol. 2, pp. 629-632, May 1996.
[22] "Perceptual Evaluation of Speech Quality (PESQ), an Objective Method for End-to-End Speech Quality Assessment of Narrowband Telephone Networks and Speech Codecs," ITU-T Recommendation P.862, Feb. 2001.