Speech Enhancement for Nonstationary Noise Environments

Signal & Image Processing: An International Journal (SIPIJ), Vol. 2, No. 4, December 2011

Sandhya Hawaldar and Manasi Dixit
Department of Electronics, KIT's College of Engineering, Shivaji University, Kolhapur, India
santell7@gmail.com

Abstract

In this paper, we present a simultaneous detection and estimation approach for speech enhancement in nonstationary noise environments. A detector for speech presence in the short-time Fourier transform (STFT) domain is combined with an estimator, which jointly minimizes a cost function that takes into account both detection and estimation errors. Under speech presence, the cost is proportional to a quadratic spectral amplitude error, while under speech absence, the distortion depends on a certain attenuation factor. Experimental results demonstrate the advantage of the proposed simultaneous detection and estimation approach, which facilitates suppression of nonstationary noise with a controlled level of speech distortion.

Keywords: Estimation, Nonstationary noise, Spectral analysis, Speech enhancement, Decision rule.

1. Introduction

A practical speech enhancement system generally consists of two major components: the estimation of the noise power spectrum and the estimation of the speech. When only one microphone is available, the noise estimation is based on the assumption of a slowly varying noise environment; in particular, the noise spectrum is assumed to remain virtually stationary during speech activity. The estimation of the speech is then based on the assumed statistical model, the distortion measure, and the estimated noise. A commonly used approach for estimating the noise power spectrum is to average the noisy signal over sections that do not contain speech.

Existing algorithms often focus on estimating the spectral coefficients rather than detecting their existence. The spectral-subtraction algorithm [1], [2] contains an elementary detector for speech activity in the time-frequency domain, but it generates musical noise caused by falsely detecting noise peaks as bins that contain speech, which are randomly scattered in the STFT domain. Subspace approaches for speech enhancement [3], [4] decompose the vector of the noisy signal into a signal-plus-noise subspace and a noise subspace, and the speech spectral coefficients are estimated after removing the noise subspace. Accordingly, these algorithms aim at detecting the speech coefficients and subsequently estimating their values.

McAulay and Malpass [5] were the first to propose a speech spectral estimator under a two-state model. They derived a maximum-likelihood (ML) estimator for the speech spectral amplitude under speech-presence uncertainty. Ephraim and Malah followed this approach of signal estimation under speech-presence uncertainty and derived an estimator that minimizes the mean-square error (MSE) of the short-term spectral amplitude (STSA) [6]. In [7], the speech presence probability is evaluated to improve the minimum MSE (MMSE) of the log-spectral amplitude (LSA) estimator, and in [8] a further improvement of the MMSE-LSA estimator is achieved based on a two-state model.
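To make the classical baseline concrete, the following is a minimal sketch of power spectral subtraction with a noise estimate averaged over an assumed speech-free initial segment. It only illustrates the conventional approach discussed above, not the method proposed in this paper; the function name, frame parameters, and number of initialization frames are arbitrary choices.

```python
# Minimal power spectral subtraction sketch (the classical baseline discussed
# above, not the proposed method). It assumes the first `n_init` frames of the
# noisy signal are speech-free and averages them to obtain the noise PSD; all
# names and parameter values are illustrative.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(y, fs=8000, n_init=10, floor=0.01):
    f, t, Y = stft(y, fs=fs, nperseg=256, noverlap=128)          # STFT analysis
    noise_psd = np.mean(np.abs(Y[:, :n_init]) ** 2, axis=1, keepdims=True)
    clean_psd = np.abs(Y) ** 2 - noise_psd                        # subtract the noise power
    clean_psd = np.maximum(clean_psd, floor * noise_psd)          # spectral floor
    gain = np.sqrt(clean_psd / (np.abs(Y) ** 2 + 1e-12))          # per-bin attenuation
    _, x_hat = istft(gain * Y, fs=fs, nperseg=256, noverlap=128)  # overlap-add synthesis
    return x_hat
```

The flooring step is exactly the elementary detection operation mentioned above: bins whose subtracted power falls below the floor are treated as speech-absent, while isolated noise peaks that exceed it survive as the randomly scattered residuals perceived as musical noise.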

Under the speech-absence hypothesis, Cohen and Berdugo [8] considered a constant attenuation factor to enable a more natural residual noise, characterized by reduced musicality. Under slowly time-varying noise conditions, an estimator that minimizes the MSE of the STSA or of the LSA under speech-presence uncertainty may yield reasonable results. However, under quickly time-varying noise conditions, abrupt transients may not be sufficiently attenuated, since speech is falsely detected with some positive probability. Reliable detectors for speech activity and noise transients are therefore necessary in order to further attenuate noise transients without much degrading the speech components.

Despite the sparsity of speech coefficients in the time-frequency domain and the importance of signal detection for noise suppression performance, common speech enhancement algorithms deal with speech detection independently of speech estimation. Even when a voice activity detector is available in the STFT domain, it is not straightforward to account for detection errors when designing the optimal speech estimator. Strong attenuation of speech spectral coefficients due to missed-detection errors may significantly degrade speech quality and intelligibility, while falsely detecting noise transients as speech-containing bins may produce annoying musical noise.

In this paper, we present a simultaneous detection and estimation approach for speech enhancement in nonstationary noise environments. A detector for speech presence in the short-time Fourier transform domain is combined with an estimator, which jointly minimizes a cost function that takes into account both detection and estimation errors. Cost parameters control the tradeoff between speech distortion, caused by missed detection of speech components, and residual musical noise resulting from false detection. Under speech presence, the cost is proportional to a quadratic spectral amplitude (QSA) error [6], while under speech absence, the distortion depends on a certain attenuation factor [2], [8], [9]. The noise spectrum is estimated by recursively averaging past spectral power values, using a smoothing parameter that is adjusted by the speech presence probability in subbands [13].

This paper is organized as follows. Section 2 reviews classical speech enhancement. Section 3 describes the proposed approach for speech enhancement. In Section 4 we compare the performance of the proposed approach with existing algorithms, under both stationary and nonstationary environments. Section 5 concludes with the advantages of the simultaneous detection and estimation approach with the modified speech-absence estimate.

2. Classical Speech Enhancement

Let x(n) and d(n) denote speech and uncorrelated additive noise signals, and let y(n) = x(n) + d(n) be the observed signal. Applying the STFT to the observed signal, we have

$$Y_{l,k} = X_{l,k} + D_{l,k} \qquad (1)$$

where l = 0, 1, ... is the time-frame index and k = 0, 1, ..., K−1 is the frequency-bin index. Let H_1 and H_0 denote, respectively, the speech presence and absence hypotheses in the time-frequency bin (l, k), i.e.,

$$H_1:\; Y_{l,k} = X_{l,k} + D_{l,k}, \qquad H_0:\; Y_{l,k} = D_{l,k} \qquad (2)$$

Assume that the noise expansion coefficients can be represented as the sum of two uncorrelated noise components, D = D^s + D^t, where D^s denotes a quasi-stationary noise component and D^t denotes a highly nonstationary transient component.
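The observation model (1)-(2) can be illustrated with a short sketch that builds a noisy STFT from a clean signal, a quasi-stationary noise component, and a transient burst. The signals, durations, and the oracle speech-presence mask below are placeholders used only to show the setup.

```python
# Illustration of the observation model (1)-(2): the noisy STFT coefficients
# Y(l,k) are the sum of speech and noise coefficients, with the noise split
# into a quasi-stationary component d_s and a transient component d_t.
import numpy as np
from scipy.signal import stft

fs = 8000
rng = np.random.default_rng(0)
x = rng.standard_normal(4 * fs)                    # stand-in for clean speech x(n)
d_s = 0.3 * rng.standard_normal(4 * fs)            # quasi-stationary noise (e.g., engine)
d_t = np.zeros(4 * fs)
d_t[fs:2 * fs] = np.sin(2 * np.pi * 1200.0 * np.arange(fs) / fs)  # 1-s transient burst

y = x + d_s + d_t                                  # y(n) = x(n) + d(n)
_, _, Y = stft(y, fs=fs, nperseg=256, noverlap=128)   # Y[k, l]: frequency bin x frame
_, _, X = stft(x, fs=fs, nperseg=256, noverlap=128)

# Oracle speech-presence mask, for illustration only: H1 holds where the clean
# coefficient carries non-negligible energy, H0 (Y = D) elsewhere.
H1 = np.abs(X) ** 2 > 1e-3 * np.mean(np.abs(X) ** 2)
```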

The transient components are generally rare, but they may be of high energy and thus cause significant degradation of speech quality and intelligibility. In many applications, however, a reliable indicator for transient noise activity may be available in the system. For example, in an emergency vehicle (e.g., a police car or an ambulance) the engine noise may be considered quasi-stationary, but activating the siren results in a highly nonstationary noise which is perceptually very annoying. Given that a transient noise source is active, a detector for the transient noise in the STFT domain may be designed, and its spectrum can be estimated from training data.

The objective of a speech enhancement system is to reconstruct the spectral coefficients of the speech signal such that, under speech presence, a certain distortion measure between the spectral coefficient and its estimate, d_i(X, X̂), is minimized, and under speech absence a constant attenuation of the noisy coefficient is desired in order to maintain a natural background noise [6], [9]. Most classical speech enhancement algorithms try to estimate the spectral coefficients rather than detect their existence, or they design the detectors and estimators independently. The well-known spectral subtraction algorithm estimates the speech spectrum by subtracting the estimated noise spectrum from the squared absolute value of the noisy coefficients [1], [2], and by thresholding the result at some desired residual noise level. Thresholding the spectral coefficients is in fact a detection operation in the time-frequency domain, in the sense that speech coefficients are assumed absent in low-energy time-frequency bins and present in noisy coefficients whose energy is above the threshold. McAulay and Malpass were the first to propose a two-state model for the speech signal in the time-frequency domain [5]. The resulting estimator does not detect speech components; rather, a soft decision is performed to further attenuate the signal estimate by the a posteriori speech presence probability.

If an indicator for the presence of transient noise components is available in a highly nonstationary noise environment, then high-energy transients may be attenuated by using the OM-LSA estimator [8] and setting the a priori speech presence probability to a sufficiently small value. Unfortunately, an estimation-only approach under signal-presence uncertainty then produces larger speech degradation, since the optimal estimate is attenuated by the a posteriori speech presence probability. On the other hand, increasing the a priori speech presence probability prevents the estimator from sufficiently attenuating noise components. Integrating a joint detector and estimator into the speech enhancement system may therefore significantly improve the speech enhancement performance under nonstationary noise environments and allow further reduction of transient components without much degradation of the desired signal.

3. Proposed Approach for Speech Enhancement

Let C_η(X, X̂_η) denote the cost of making a decision η and choosing an estimate X̂_η, where X is the desired signal. The Bayes risk of the two operations associated with simultaneous detection and estimation is then defined by [10], [12]

$$R = \int_{\Omega_y}\int_{\Omega_x} C_\eta\big(X, \hat X_\eta\big)\, p(\eta \mid Y)\, p(Y \mid X)\, p(X)\, dX\, dY \qquad (3)$$

where Ω_x and Ω_y are the spaces of the speech and noisy signals, respectively.
The simultaneous detection and estimation approach aims at jointly minimizing the Bayes risk over both the decision rule and the corresponding signal estimate. Let q ≜ p(H_1) denote the a priori speech presence probability, and let X_R and X_I denote the real and imaginary parts of the expansion coefficient X. The a priori distribution of the speech expansion coefficient then follows

$$p(X) = q\, p(X \mid H_1) + (1-q)\, p(X \mid H_0) \qquad (4)$$

where p(X | H_0) = δ(X), and δ(X) ≜ δ(X_R, X_I) denotes the Dirac delta function.
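A draw from the two-state prior (4) can be sketched as follows: with probability q the coefficient is circularly symmetric complex Gaussian with variance λ_x (hypothesis H_1), and otherwise it equals zero, which is the Dirac component under H_0. The function name and defaults are illustrative.

```python
# Drawing spectral coefficients from the two-state prior (4): with probability q
# the bin contains speech and X is circularly-symmetric complex Gaussian with
# variance lambda_x (hypothesis H1); otherwise X = 0, the Dirac component under H0.
import numpy as np

def sample_prior(q, lambda_x, size, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    speech_present = rng.random(size) < q                        # H1 with probability q
    x = np.sqrt(lambda_x / 2.0) * (rng.standard_normal(size)
                                   + 1j * rng.standard_normal(size))
    return np.where(speech_present, x, 0.0 + 0.0j), speech_present
```

Samples drawn this way, together with a noise model, allow the Bayes risk (3) to be approximated by a sample average for any candidate detector/estimator pair.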

The cost function C(X, X̂) may be defined differently depending on whether H_1 or H_0 is true. Therefore, we let C_i(X, X̂) ≜ C(X, X̂ | H_i) denote the cost conditioned on the true hypothesis. The cost function depends on both the true signal value and its estimate under the decision, and therefore couples the operations of detection and estimation. The cost function associated with the pair {H_i, η_j} is generally defined by

$$C_{ij}\big(X, \hat X_j\big) = b_{ij}\, d_i\big(X, \hat X_j\big) \qquad (5)$$

where d_i(X, X̂_j) is an appropriate distortion measure and the cost parameters b_{ij} control the tradeoff between the costs associated with the pairs {H_i, η_j}. That is, a high value of b_{01} raises the cost of a false alarm (i.e., deciding that speech is present when it is actually absent), which may result in residual musical noise. Similarly, b_{10} is associated with the cost of missed detection of a signal component, which may cause perceptual signal distortion. Under correct classification, normalized cost parameters b_{00} = b_{11} = 1 are generally used. However, d_i(·,·) is not necessarily zero, since estimation errors are still possible even when there is no detection error. When speech is indeed absent, the distortion function is defined to allow some natural background-noise level, such that under H_0 the attenuation factor is lower bounded by a constant gain floor G_f, as proposed in [2], [8], [9]. The distortion measure of the QSA cost function is defined by

$$d_i\big(X, \hat X\big) = \begin{cases} \big(|\hat X| - |X|\big)^2, & i = 1 \\[4pt] \big(|\hat X| - G_f\,|Y|\big)^2, & i = 0 \end{cases} \qquad (6)$$

and is related to the STSA suppression rule of Ephraim and Malah [6]. Assume that X and D are statistically independent, zero-mean, complex-valued Gaussian random variables with variances λ_x and λ_d, respectively. Let ξ ≜ λ_x/λ_d denote the a priori SNR under hypothesis H_1, let γ ≜ |Y|²/λ_d denote the a posteriori SNR, and let υ ≜ γξ/(1+ξ). For evaluating the optimal detector and estimator under the QSA cost function, we denote by X = a e^{jα} and Y = R e^{jθ} the clean and noisy spectral coefficients, respectively, where a = |X| and R = |Y|. Accordingly, the pdf of the speech expansion coefficient under H_1 satisfies

$$p(a, \alpha \mid H_1) = \frac{a}{\pi \lambda_x}\, \exp\!\left(-\frac{a^2}{\lambda_x}\right) \qquad (7)$$

As proposed in [14], the optimal estimate under the decision η_1 is

$$\hat X_1 = \frac{b_{11}\,\Lambda(\xi,\gamma)\,G_{STSA}(\xi,\gamma) + b_{01}\,G_f\,\varphi(\xi,\gamma)}{b_{11}\,\Lambda(\xi,\gamma) + b_{01}\,\varphi(\xi,\gamma)}\, Y \;\triangleq\; G_1(\xi,\gamma)\, Y \qquad (8)$$

where G_STSA is the STSA gain of [6], and Λ(ξ,γ) and φ(ξ,γ) are the likelihood-ratio terms defined in [14].
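The following sketch computes the STSA gain of Ephraim and Malah [6] and a combined gain in the spirit of (8), namely a weighting of G_STSA and the gain floor G_f by a generalized speech-presence likelihood ratio. The specific form used for the likelihood ratio, Λ = q/(1−q)·exp(υ)/(1+ξ), and the omission of the φ term are simplifying assumptions of this sketch rather than the exact expression of [14].

```python
# STSA gain of Ephraim and Malah [6] and a combined gain in the spirit of (8):
# a likelihood-ratio-weighted mix of G_STSA and the gain floor G_f.
import numpy as np
from scipy.special import i0e, i1e

def stsa_gain(xi, gamma):
    """G_STSA(xi, gamma) of [6]; i0e/i1e are the exponentially scaled Bessel
    functions, so exp(-v/2)*I0(v/2) = i0e(v/2), which avoids overflow."""
    v = gamma * xi / (1.0 + xi)
    return (np.sqrt(np.pi) / 2.0) * (np.sqrt(v) / gamma) * (
        (1.0 + v) * i0e(v / 2.0) + v * i1e(v / 2.0))

def gain_under_eta1(xi, gamma, q, G_f, b11=1.0, b01=1.0):
    """Combined gain under decision eta_1 (illustrative variant of (8))."""
    v = gamma * xi / (1.0 + xi)
    lam = (q / (1.0 - q)) * np.exp(np.minimum(v, 500.0)) / (1.0 + xi)  # assumed form of the likelihood ratio
    return (b11 * lam * stsa_gain(xi, gamma) + b01 * G_f) / (b11 * lam + b01)
```

In practice, ξ would be obtained per bin, e.g., with the decision-directed estimator, and the gains applied elementwise to the STFT coefficients.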

The optimal estimator under decision η_0 is modified with a certain attenuation factor based on the noise variance λ_d and the noisy speech power S_{l,k}:

$$\hat X_0 = G_0\, Y \qquad (9)$$

where

$$G_0 = G_f\left(\frac{\lambda_d}{S_{l,k}} + \epsilon\right)^{1/2}$$

and S_{l,k} is obtained by recursive averaging,

$$S_{l,k} = \zeta_s\, S_{l-1,k} + (1-\zeta_s)\, |Y_{l,k}|^2$$

where ζ_s (0 < ζ_s < 1) is a smoothing parameter. This modification greatly reduces the nonstationary noise in the noisy speech, since it takes into account the noisy speech power along with the noise variance. The decision rule under the QSA cost function, following [14], is

$$\Lambda(\xi,\gamma)\left[\frac{\xi(1+\upsilon)(b_{10}-1)}{(1+\xi)\gamma} + b_{10}\big(G_0 - G_{STSA}\big)^2 - \big(G_1 - G_{STSA}\big)^2 - (b_{10}-1)\,G_{STSA}^2\right] \;\underset{\eta_0}{\overset{\eta_1}{\gtrless}}\; b_{01}\big(G_1 - G_f\big)^2 - \big(G_0 - G_f\big)^2 \qquad (10)$$

Fig. 1 shows a block diagram of the simultaneous detection and estimation system: the estimators are obtained from (8) and (9), and the interrelated decision rule (10) chooses the appropriate estimate so as to minimize the combined Bayes risk.

Fig. 1. Simultaneous detection and estimation system.
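A per-bin sketch of the proposed processing is given below, reusing stsa_gain and gain_under_eta1 from the previous sketch. The smoothed noisy-speech power and the modified gain G_0 follow (9); the decision, however, is implemented here by directly comparing the per-bin Bayes risks of the two decisions under the QSA cost (with b_00 = b_11 = 1), which is the logic that the closed-form rule (10) encodes, rather than by evaluating (10) itself. The constants ζ_s, ε, b_01, and b_10 are placeholder values.

```python
# Per-bin sketch of the proposed processing, reusing stsa_gain and
# gain_under_eta1 from the previous sketch. The decision is made by comparing
# the per-bin Bayes risks of eta_1 and eta_0 under the QSA cost (a schematic
# stand-in for the closed-form rule (10)).
import numpy as np

def enhance_bin(Y, lambda_d, S_prev, xi, gamma, q, G_f,
                zeta_s=0.9, eps=1e-6, b01=2.0, b10=2.0):
    # Recursive averaging of the noisy speech power (smoothing parameter zeta_s)
    S = zeta_s * S_prev + (1.0 - zeta_s) * abs(Y) ** 2

    # Gains under the two possible decisions: G1 from (8), G0 from (9)
    G1 = gain_under_eta1(xi, gamma, q, G_f)
    G0 = G_f * np.sqrt(lambda_d / S + eps)

    # Risk comparison under the QSA cost, with b00 = b11 = 1
    v = gamma * xi / (1.0 + xi)
    lam = (q / (1.0 - q)) * np.exp(np.minimum(v, 500.0)) / (1.0 + xi)
    g_stsa = stsa_gain(xi, gamma)
    var_a = xi * (1.0 + v) / ((1.0 + xi) * gamma) - g_stsa ** 2   # Var(a | Y, H1) / |Y|^2
    risk1 = lam * ((G1 - g_stsa) ** 2 + var_a) + b01 * (G1 - G_f) ** 2
    risk0 = b10 * lam * ((G0 - g_stsa) ** 2 + var_a) + (G0 - G_f) ** 2
    gain = G1 if risk1 <= risk0 else G0                           # decide eta_1 vs eta_0
    return gain * Y, S
```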

4. Experimental Results

In our experimental study we consider the problem of hands-free communication in an emergency vehicle and demonstrate the advantage of the simultaneous detection and estimation approach under stationary and nonstationary noise environments. Speech signals are recorded with a sampling frequency of 8 kHz and degraded by different stationary and nonstationary additive noise types; as a nonstationary example, siren noise is added to car noise at different input SNR levels. The test signals include speech utterances from different speakers, half male and half female. The noisy signals are transformed into the STFT domain using half-overlapping Hamming windows of 32-ms length, and the background-noise spectrum is estimated using the IMCRA algorithm [13]. The performance evaluation includes an objective quality measure (output SNR, in dB), a subjective study of spectrograms, and informal listening tests. The proposed approach is compared with the OM-LSA algorithm [8]. The speech presence probability required for the OM-LSA estimator, as well as for the simultaneous detection and estimation approach, is estimated as proposed in [8]. For the OM-LSA algorithm, the decision-directed a priori SNR estimator with α = 0.92 and the constant gain floor G_f are implemented as specified in [8].

Fig. 2 shows waveforms and spectrograms of the clean signal, the noisy signal, and the enhanced signals for speech degraded by car and siren noise with an SNR of 5 dB. The signals enhanced by the OM-LSA algorithm and by the simultaneous detection and estimation approach are shown in Fig. 2(c) and 2(d), respectively. Compared with OM-LSA, the simultaneous detection and estimation approach with the modified speech-absence estimate yields greater reduction of the transient noise without affecting the quality of the enhanced speech signal.

Fig. 2. Speech waveforms and spectrograms. (a) Clean speech signal: "kamal naman kar", in Marathi, uttered by a male subject. (b) Speech degraded by car noise and siren noise with an SNR of 5 dB. (c) Speech enhanced by the OM-LSA estimator. (d) Speech enhanced by the simultaneous detection and estimation approach with the modified speech-absence estimate, using cost parameters b_01 = b_10 as proposed by the authors.

Quality measures for the different input SNRs are shown in Table 1 and Table 2. The results in Table 1 demonstrate the improved speech quality obtained by the simultaneous detection and estimation approach for stationary noise environments.

TABLE 1. Output SNR (in dB) obtained with the OM-LSA estimator and with the proposed simultaneous detection and estimation approach, for stationary noise environments, with input SNR varying from 15 dB down to −5 dB.

  Noise                  Input SNR (dB)   OM-LSA estimator   Proposed approach
  White Gaussian noise   15               8.9                .937
                         10               5.9                8.7496
                         5                .983               9.787
                         0                8.579              7.7648
                         −5               5.9435             7.3687
  Car                    15               5.59               .857
                         10               .93                6.345
                         5                .566               3.8647
                         0                8.33               .3499
                         −5               6.857              7.6548

The results in Table 2 demonstrate the improved speech quality obtained by the simultaneous detection and estimation approach with the modified speech-absence estimate for nonstationary noise environments (car noise with siren noise).

TABLE 2. Output SNR (in dB) obtained with the OM-LSA estimator and with the proposed simultaneous detection and estimation approach, for nonstationary noise environments, with input SNR varying from 15 dB down to −5 dB.

  Noise                    Input SNR (dB)   OM-LSA estimator   Proposed approach
  Car (with siren noise)   15               6.6                9.796
                           10               .83                6.58
                           5                .369               .435
                           0                3.348              4.984
                           −5               −3.9558            −.346
  Train                    15               7.675              .6
                           10               5.557              7.45
                           5                4.9869             6.58
                           0                .86                3.468
                           −5               6.88               7.733

Subjective listening tests confirm the speech quality improvement achieved by the proposed method.
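The tables report an output SNR in dB as the objective quality measure. The paper does not spell out the exact definition, so the sketch below assumes the conventional time-domain ratio between the clean-reference energy and the residual-error energy.

```python
# Output SNR in dB between the clean reference and the enhanced signal,
# assuming the conventional time-domain definition (an assumption; the exact
# measure used for the tables is not specified in the text).
import numpy as np

def output_snr_db(x_clean, x_hat):
    n = min(len(x_clean), len(x_hat))
    err = x_clean[:n] - x_hat[:n]
    return 10.0 * np.log10(np.sum(x_clean[:n] ** 2) / np.sum(err ** 2))
```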

5. Conclusion

We have presented a single-channel speech enhancement approach in the time-frequency domain for nonstationary noise environments. A detector for the speech coefficients and a corresponding estimator with a modified speech-absence estimate for their values are jointly designed to minimize a combined Bayes risk. In addition, the cost parameters enable control of the tradeoff between speech quality, noise reduction, and residual musical noise. Experimental results show greater noise reduction with improved speech quality when compared with the OM-LSA suppression rules under both stationary and nonstationary noise. It is demonstrated that, under nonstationary noise environments, greater reduction of the nonstationary noise components may be achieved by exploiting reliable information within the simultaneous detection and estimation approach.

REFERENCES

[1] S. F. Boll, "Suppression of acoustic noise in speech using spectral subtraction," IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-27, no. 2, pp. 113-120, Apr. 1979.
[2] M. Berouti, R. Schwartz, and J. Makhoul, "Enhancement of speech corrupted by acoustic noise," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP '79), Apr. 1979, vol. 4, pp. 208-211.
[3] H. Lev-Ari and Y. Ephraim, "Extension of the signal subspace speech enhancement approach to colored noise," IEEE Signal Process. Lett., vol. 10, no. 4, pp. 104-106, Apr. 2003.
[4] Y. Hu and P. C. Loizou, "A generalized subspace approach for enhancing speech corrupted by colored noise," IEEE Trans. Speech Audio Process., vol. 11, no. 4, pp. 334-341, Jul. 2003.
[5] R. J. McAulay and M. L. Malpass, "Speech enhancement using a soft-decision noise suppression filter," IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-28, no. 2, pp. 137-145, Apr. 1980.
[6] Y. Ephraim and D. Malah, "Speech enhancement using a minimum mean-square error short-time spectral amplitude estimator," IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-32, no. 6, pp. 1109-1121, Dec. 1984.
[7] D. Malah, R. V. Cox, and A. J. Accardi, "Tracking speech-presence uncertainty to improve speech enhancement in nonstationary noise environments," in Proc. 24th IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP '99), Phoenix, AZ, Mar. 1999, pp. 789-792.
[8] I. Cohen and B. Berdugo, "Speech enhancement for non-stationary noise environments," Signal Process., vol. 81, pp. 2403-2418, Nov. 2001.
[9] O. Cappé, "Elimination of the musical noise phenomenon with the Ephraim and Malah noise suppressor," IEEE Trans. Speech Audio Process., vol. 2, no. 2, pp. 345-349, Apr. 1994.
[10] D. Middleton and F. Esposito, "Simultaneous optimum detection and estimation of signals in noise," IEEE Trans. Inf. Theory, vol. IT-14, no. 3, pp. 434-444, May 1968.
[11] Y. Ephraim and D. Malah, "Speech enhancement using a minimum mean-square error log-spectral amplitude estimator," IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-33, no. 2, pp. 443-445, Apr. 1985.
[12] A. Fredriksen, D. Middleton, and D. VandeLinde, "Simultaneous signal detection and estimation under multiple hypotheses," IEEE Trans. Inf. Theory, vol. IT-18, no. 5, pp. 607-614, 1972.
[13] I. Cohen, "Noise spectrum estimation in adverse environments: Improved minima controlled recursive averaging," IEEE Trans. Speech Audio Process., vol. 11, no. 5, pp. 466-475, Sep. 2003.
[14] A. Abramson and I. Cohen, "Simultaneous detection and estimation approach for speech enhancement," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 8, Nov. 2007.