
Adaptive Noise Reduction of Speech Signals

Wenqing Jiang and Henrique Malvar

July 2000

Technical Report MSR-TR-2000-86

Microsoft Research
Microsoft Corporation
One Microsoft Way
Redmond, WA 98052
http://www.research.microsoft.com

Abstract

We propose a new adaptive speech noise removal algorithm based on two-stage Wiener filtering. A first Wiener filter is used to produce a smoothed estimate of the a priori signal-to-noise ratio (SNR), aided by a classifier that separates speech from noise frames, and a second Wiener filter is used to generate the final output. Spectral analysis and synthesis are performed by a modulated complex lapped transform (MCLT). For noisy speech at a low 10 dB input SNR, for example, the proposed algorithm can achieve on average about 13 dB of noise-to-mask ratio (NMR) reduction, or about 6 dB of SNR improvement.

1 Introduction

Noise removal is a necessary preprocessing step for speech acquisition in computer telephony and other applications, such as speech-assisted human-computer interfaces. Office noise from fans and computers, as well as vehicle noise, not only degrades the subjective speech quality, but it also hinders the performance of speech coding and recognition systems. Many approaches have been reported in the literature for speech noise reduction, such as the short-time spectral amplitude estimator in [1, 2], the signal subspace approach in [3], and the human auditory system model-based approaches in [4] and [5].

In this paper, we focus our study on short-time spectrum attenuation techniques, which have been shown to be very effective and simple for low-cost implementations [1, 2, 6]. A typical spectrum attenuation technique, assuming an additive uncorrelated noise model, consists of two basic steps [7]: (i) estimation of the noise spectrum and (ii) filtering of the noisy speech to obtain the cleaned speech. In spectral subtraction systems, a noise spectral magnitude estimate is actually subtracted from the signal magnitude spectrum. That can lead to larger amounts of noise reduction. Both approaches are usually effective, but they can generate artifacts known as musical noise(*) [6], especially in spectral subtraction systems. Approaches to reduce musical noise include using sophisticated speech/noise classification mechanisms, such as the cepstral detector by Sovka et al. [8], the pitch-based detector by Tucker et al. [9], and the multiple features-based voice activity detector (VAD) in G.729 by Benyassine et al. [10].

(*) Musical noise: residual noise composed of sinusoidal components with random frequencies that come and go in each short-time frame. It is caused by the mismatch between the noise spectrum estimate and the actual noise spectrum in each short-time frame.

In particular, the system in [10] improves the probability of correct noise frame classification for improved noise spectrum estimation, and smooths the a priori SNR estimate over time, as in the minimum mean-square error short-time spectral magnitude estimator in [1, 2]. Time smoothing is effective in reducing musical noise, but it leads to reverberation artifacts.

In this paper we propose a two-stage Wiener filter system for speech noise removal. For simplicity, we use an adaptive energy-based speech/noise classification technique similar to [11]. To reduce the classification error, specifically the error of misclassifying speech frames as noise frames, we smooth the initial energy-based classification result over time. That is justified by the observation that speech frames tend to cluster together in time. In other words, both the energy measure and the classification results of neighboring frames are used to obtain the final classification result for each current frame, a context-adaptive classification idea that has been successfully used for reducing reconstruction noise in picture coding [12]. Driven by the frame classifier, we use a Wiener filter to estimate the speech and noise spectra, or equivalently the a priori SNR. Another Wiener filter then generates a minimum mean-square estimate of the speech signal. This two-stage Wiener filtering approach is simple to implement and performs closely to the best systems reported to date, but with a lower level of musical tones.

2 System Outline

A simplified block diagram of our proposed system is shown in Figure 1. The input signal is first transformed on a frame-by-frame basis using a modulated complex lapped transform (MCLT). The MCLT is similar to a windowed Fourier transform frequency analyzer, but with slightly different center frequencies [13]. Frame classification and Wiener filtering, as described in the next sections, are performed in the magnitude MCLT domain. The filtered magnitude information is combined with the original phase information and inverse transformed via the IMCLT.

[Figure 1: Basic block diagram of the proposed system. Blocks: MCLT (magnitude and phase paths), speech/noise classifier, noise spectrum estimator, Wiener filter 1, Wiener filter 2, IMCLT.]

Let x be the input signal, s the original speech signal, and n the uncorrelated noise. We assume as usual an additive noise model, that is,

    x = s + n    (1)

Let X(i,k) be the input spectrum of frame i at frequency bin k, computed via the MCLT:

    X(i,k) = \sum_{n=0}^{2N-1} x(iN + n) p_a(n,k)    (2)

where N is the frame length and p_a(n,k) is the MCLT analysis kernel [13].
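As an illustration of the flow in Figure 1, the following is a minimal Python sketch of the frame-by-frame analysis/filter/synthesis loop. It is not the report's implementation: a sine-windowed FFT with 50% overlap stands in for the MCLT/IMCLT pair, and process_magnitude is a hypothetical hook standing in for the classifier, noise estimator, and two-stage Wiener filter described in the next sections.

    import numpy as np

    def enhance(x, frame_len=512, process_magnitude=lambda mag: mag):
        """Overlap-add skeleton: filter the magnitude spectrum, keep the noisy phase."""
        hop = frame_len // 2
        # Sine window: gives perfect reconstruction for 50% overlap when applied
        # at both analysis and synthesis (sin^2 + shifted sin^2 = 1).
        win = np.sin(np.pi * (np.arange(frame_len) + 0.5) / frame_len)
        y = np.zeros(len(x))
        for start in range(0, len(x) - frame_len + 1, hop):
            frame = win * x[start:start + frame_len]
            X = np.fft.rfft(frame)                       # spectral analysis (MCLT stand-in)
            mag, phase = np.abs(X), np.angle(X)
            mag = process_magnitude(mag)                 # noise reduction in the magnitude domain
            y_frame = np.fft.irfft(mag * np.exp(1j * phase), frame_len)
            y[start:start + frame_len] += win * y_frame  # synthesis window + overlap-add
        return y

For example, enhance(x, process_magnitude=f) applies a per-frame magnitude filter f while reusing the noisy phase, mirroring the magnitude/phase split in Figure 1.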

3 Context-Adaptive Classification

Our classifier is based on an energy metric. The i-th frame energy E^2(i) is computed from the input spectrum as follows:

    E^2(i) = \frac{1}{k_1 - k_0} \sum_{k=k_0}^{k_1} [ |X(i,k)| - \bar{X}(i) ]^2    (3)

where the average frame magnitude \bar{X}(i) is given by

    \bar{X}(i) = \frac{1}{k_1 - k_0 + 1} \sum_{k=k_0}^{k_1} |X(i,k)|    (4)

We usually set k_0 = 300N/f_s and k_1 = 3000N/f_s (where f_s is the A/D sampling frequency). That choice is motivated by the fact that for human speech essentially all energy is concentrated in the 300 Hz-3000 Hz band.

Once the energy E^2(i) is computed, we make an initial decision by hard thresholding: if E(i) > T, then frame i is classified as speech; otherwise, it is labeled as noise. Since speech is nonstationary, we adapt the threshold T from past frames by the simple rule

    T = E_min + β (E_max - E_min)    (5)

where E_min = min{E(j)}, E_max = max{E(j)}, and j = i - W_e, i - W_e + 1, ..., i - 1, with W_e the window size (number of past frames) and β a relative thresholding constant. We can slow down the adaptation of T by increasing the window size W_e, and we can make it more robust to large energy fluctuations in noise frames by increasing β. Typical values in our experiments are W_e = 20 and β = 0.3.

A problem with this simple hard-thresholding technique is that it often misclassifies low-energy speech frames (e.g. for unvoiced speech) as noise frames. To reduce this error, we propose the following smoothing rule: if the energies of the current frame and the past W_s frames are all below the threshold, then the current frame is a noise frame; otherwise, the current frame is a speech frame. W_s is a smoothing length; in our experiments we set W_s = 5. The rule is justified because in practice low-energy unvoiced frames usually occur immediately before or after voiced frames. Figure 2 shows an example where we see that this smoothing process helps to reduce the error of misclassifying speech frames as noise frames.

[Figure 2: Comparison of energy-based classification results before smoothing (hard decision, dashed lines) and after smoothing (soft decision, solid lines), with W_s = 5, β = 0.2, W_e = 20. Vertical axis: frame energy; horizontal axis: frame index.]
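A minimal sketch of this context-adaptive classifier is given below, assuming a matrix of per-frame magnitude spectra (MCLT or FFT) is already available. The names beta, W_e, and W_s follow Eqn. (5) above, and the bin-index conversion for the 300 Hz-3000 Hz band assumes an FFT-style bin layout covering 0 to f_s/2.

    import numpy as np

    def classify_frames(mag, fs, W_e=20, W_s=5, beta=0.3):
        """mag: (num_frames, num_bins) magnitude spectra covering 0..fs/2.
        Returns a boolean array, True where a frame is classified as speech."""
        num_bins = mag.shape[1]
        # Approximate bin indices for the 300 Hz - 3000 Hz band (assumed bin layout).
        k0 = int(round(300.0 * (num_bins - 1) / (fs / 2.0)))
        k1 = int(round(3000.0 * (num_bins - 1) / (fs / 2.0)))
        band = mag[:, k0:k1 + 1]
        # Frame energy E(i) from Eqns (3)-(4): spread of in-band magnitudes about their mean.
        E = np.sqrt(np.mean((band - band.mean(axis=1, keepdims=True)) ** 2, axis=1))

        hard = np.zeros(len(E), dtype=bool)
        for i in range(len(E)):
            lo = max(0, i - W_e)
            past = E[lo:i] if i > lo else E[i:i + 1]            # fall back to current frame at start
            T = past.min() + beta * (past.max() - past.min())   # adaptive threshold, Eqn (5)
            hard[i] = E[i] > T

        # Smoothing rule: a frame is noise only if it and the past W_s frames
        # all fell below the threshold; otherwise keep it as speech.
        speech = np.array([hard[max(0, i - W_s):i + 1].any() for i in range(len(E))])
        return speech

In a frame loop, classify_frames(np.abs(frames), fs=16000) would return the per-frame speech/noise labels used to drive the noise spectrum estimator of the next section.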

4 Two-Stage Wiener Filtering

After classification, we use each noise frame to adapt the noise spectrum estimate |\hat{N}(i,k)| by

    |\hat{N}(i,k)| = α |\hat{N}(i-1,k)| + (1 - α) |X(i,k)|    (6)

where the parameter α controls the adaptation speed. In our experiments, we use α = 0.9.

A Wiener filter [14] is the optimal Bayesian linear filter that minimizes the expected mean-squared error E[|\hat{s} - s|^2] for the noise corruption model in Eqn. (1). In the frequency domain, the Wiener filter gain can be written as

    G(k) = \frac{|S(k)|^2}{|S(k)|^2 + |N(k)|^2} = \frac{P(k)}{1 + P(k)}    (7)

where S(k) and N(k) are respectively the frequency spectra of the signal and the noise, and P(k) = |S(k)|^2 / |N(k)|^2 is the a priori SNR. The output spectrum \hat{S}(k) is computed by \hat{S}(k) = G(k) X(k).

The Wiener filter is essentially an adaptive gain that gets smaller as the SNR P(k) gets smaller. Its efficiency is tied to the assumptions that both signal and noise are wide-sense stationary random processes and that the a priori SNR is known. In practice, many noise sources such as computers and fans are reasonably stationary, but speech certainly is not. Therefore, we have to replace the a priori statistics by spectral estimates. When frame-adaptive spectral estimates are used to compute the Wiener filter gains in Eqn. (7), low-level speech frames can make G(k) fluctuate rapidly, generating annoying musical noise in the filtered signal [6].
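The two pieces above translate directly into code. The sketch below assumes alpha stands for the adaptation constant of Eqn. (6) (the symbol is a reconstruction); the small gain floor is an added safeguard and not part of the report.

    import numpy as np

    def update_noise_estimate(noise_mag, frame_mag, alpha=0.9):
        """Recursive noise magnitude update of Eqn. (6); call only on frames classified as noise."""
        return alpha * noise_mag + (1.0 - alpha) * frame_mag

    def wiener_gain(prior_snr, floor=1e-3):
        """Wiener gain of Eqn. (7): G = P / (1 + P), clipped below by an assumed small floor."""
        prior_snr = np.maximum(prior_snr, 0.0)
        return np.maximum(prior_snr / (1.0 + prior_snr), floor)

In the proposed system, update_noise_estimate would be driven by the classifier of Section 3, so the noise estimate is frozen during speech frames.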

To improve the spectrum estimation of speech signals, we propose to use a two-step Wiener filtering algorithm. In the first stage, the input signal is Wiener filtered using an adjusted SNR estimate:

    P_0(i,k) = γ \hat{P}(i-1,k) + (1 - γ) P(i,k)    (8)

where

    P(i,k) = ( |X(i,k)|^2 - |\hat{N}(i,k)|^2 ) / |\hat{N}(i,k)|^2    (9)

and \hat{P}(i-1,k) is calculated, using the filtered signal from the previous frame, as

    \hat{P}(i-1,k) = |\hat{S}(i-1,k)|^2 / |\hat{N}(i-1,k)|^2    (10)

We see that P(i,k) is equivalent to the estimate that results from a spectral subtraction system [5, 11]. However, direct spectral subtraction leads to musical noise, while oversubtraction increases speech distortion. With the smoothed estimate P_0(i,k), we reduce variations in the Wiener gain G(i,k) over time. This helps to suppress the residual musical noise: the larger γ is, the lower the level of the residual musical noise. In Figure 3 we show different estimates of the SNR. It can be seen that isolated small-magnitude pulses (corresponding directly to the musical noise) are suppressed after the smoothing operation.

[Figure 3: Different SNR estimates as a function of the frame index (vertical axis: |S(i,k)|^2 / |N(i,k)|^2). Solid line: P(i,k) before smoothing; dotted line: P_0(i,k) (after smoothing) with γ = 0.97; dashed line: P_1(i,k), the final estimate.]

In Figure 3 we note that the smoothed SNR estimate P_0(i,k) is delayed with respect to P(i,k) for large γ (e.g. γ = 0.97). This time delay may lead to reverberation effects at the end of speech utterances. To avoid that kind of distortion, we propose the use of a second Wiener filter, which recomputes the SNR estimate by

    P_1(i,k) = γ \hat{P}(i-1,k) + (1 - γ) P_u(i,k)    (11)

where P_u(i,k) = |\hat{S}(i,k)|^2 / |\hat{N}(i,k)|^2, with \hat{S}(i,k) the filtered signal from the first Wiener filter. A typical plot of P_1(i,k) is also shown in Figure 3. We note that the new estimate P_1(i,k) is shifted back and synchronized with the original spectral-subtraction estimate P(i,k), while still suppressing the small-magnitude pulses that would cause musical noise.
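A per-frame sketch of Eqns. (8)-(11) follows, with gamma as the smoothing constant reconstructed above; the small regularizer and the half-wave rectification of P(i,k) are assumptions, added only to keep the arithmetic well defined.

    import numpy as np

    def two_stage_wiener(frame_mag, noise_mag, prev_clean_mag, gamma=0.97, eps=1e-12):
        """One frame of the two-stage filter.
        frame_mag:      |X(i,k)|, noisy magnitude spectrum of the current frame
        noise_mag:      |N^(i,k)|, current noise magnitude estimate
        prev_clean_mag: |S^(i-1,k)|, filtered magnitude of the previous frame
        Returns the filtered magnitude |S^(i,k)|, to be combined with the noisy phase."""
        P_hat_prev = prev_clean_mag ** 2 / (noise_mag ** 2 + eps)            # Eqn (10)
        P = (frame_mag ** 2 - noise_mag ** 2) / (noise_mag ** 2 + eps)       # Eqn (9)
        P = np.maximum(P, 0.0)                                               # assumed rectification

        # First stage: smoothed a priori SNR and first Wiener gain, Eqn (8)
        P0 = gamma * P_hat_prev + (1.0 - gamma) * P
        S1_mag = (P0 / (1.0 + P0)) * frame_mag

        # Second stage: recompute the SNR from the first-stage output, Eqn (11)
        P_u = S1_mag ** 2 / (noise_mag ** 2 + eps)
        P1 = gamma * P_hat_prev + (1.0 - gamma) * P_u
        return (P1 / (1.0 + P1)) * frame_mag

In an actual frame loop, the returned magnitude would be carried forward as prev_clean_mag for the next frame, which is what makes the estimate recursive.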

5 Experimental Results

To measure the performance of the proposed algorithm, we compute the sample SNR and the noise-to-mask ratio (NMR) for the filtered speech signals. The sample SNR is defined as

    SNR = 10 \log_{10} \frac{\sum_{n=0}^{N-1} s^2(n)}{\sum_{n=0}^{N-1} [y(n) - s(n)]^2}    (12)

where N is the length of the original signal s(n) and y(n) is the signal for which we want to compute the SNR (either the input speech x(n) or the filtered output from our system). The NMR is an objective measure based on the human auditory system; it indicates the ratio of audible noise components to the hearing threshold. Therefore, an NMR of 0 dB indicates noise at the threshold of audibility, whereas higher NMRs mean more noticeable noise. The NMR has been found to have a high degree of correlation with subjective tests. The NMR is defined as [5]

    NMR = \frac{10}{M} \sum_{i=0}^{M-1} \log_{10} \frac{1}{B} \sum_{b=0}^{B-1} \frac{1}{C_b} \frac{\sum_{k=k_l}^{k_h} |d(i,k)|^2}{T_b^2(i)}    (13)

where M is the total number of frames, B is the number of critical bands (CB), C_b is the number of frequency components in the b-th CB, and |d(i,k)|^2 is the power spectrum of the noise at frequency bin k and frame i. The indices k_l and k_h are respectively the low and high frequency bin indices corresponding to the b-th CB, and T_b is its masking threshold, which depends on the signal spectral magnitudes around the b-th band [5].

To generate noisy speech signals, we used Eqn. (1) with six noise patterns. Besides white and pink noise, for more realistic results we also used four noise patterns recorded in office and conference rooms, with a mixture of air conditioning and computer noise. The speech material consisted of short sentences recorded by a male and a female speaker. All signals were sampled at 16 kHz (which is characteristic of "wideband" teleconferencing systems). We adjusted the noise level to an equivalent a priori SNR of 10 dB.

The results are given in Table 1. The rows indicate the SNR and NMR results before (suffix "in") and after (suffix "out") noise reduction, for male and female speech ("M:" and "F:" prefixes), and the columns indicate the noise patterns: the four recorded room noises (a)-(d), and pink and white noise ("PN" and "WN"). We see that the proposed algorithm significantly improves the SNR, or equivalently reduces the NMR. The average SNR improvement is 5.8 dB, or equivalently a 12.9 dB NMR reduction. That level of SNR improvement is roughly the same as what is obtained with the best spectral subtraction systems [3], but our proposed algorithm leads to a significant reduction of the musical noise artifact, with low algorithmic complexity and low processing delay.

Table 1: SNR and NMR (in dB) before and after noise reduction.

                 (a)    (b)    (c)    (d)    PN     WN
    M: SNR in    9.9    9.8   10.0   10.0   10.1   10.0
    M: SNR out  13.1   12.9   12.6   19.2   14.3   15.6
    F: SNR in    9.9    9.9    9.9   10.0   10.2   10.0
    F: SNR out  17.7   17.6   16.0   20.7   16.2   15.9
    SNR Gain     5.5    5.4    4.4    9.9    4.1    5.7
    M: NMR in   11.7   15.0   16.3   11.9   21.7   28.5
    M: NMR out   2.7    4.0    4.9   -0.1    6.6   11.1
    F: NMR in   15.9   19.0   17.4   12.0   19.5   25.2
    F: NMR out   3.9    3.7    5.0    1.9    5.3    8.6
    NMR Gain    10.5   12.2   11.9   11.1   14.7   17.0
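For reference, the sample SNR of Eqn. (12) is straightforward to compute; a minimal sketch is given below. The NMR of Eqn. (13) is not sketched, since it requires a psychoacoustic masking model to obtain the critical-band thresholds T_b.

    import numpy as np

    def sample_snr_db(clean, test, eps=1e-12):
        """Sample SNR of Eqn. (12): 10*log10( sum s^2 / sum (y - s)^2 ), in dB."""
        clean = np.asarray(clean, dtype=float)
        test = np.asarray(test, dtype=float)
        noise_energy = np.sum((test - clean) ** 2)
        return 10.0 * np.log10(np.sum(clean ** 2) / (noise_energy + eps))

Computed on the noisy input and again on the filtered output, the difference between the two values gives the SNR gain reported in Table 1.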

6 Conclusion

We proposed an adaptive noise reduction algorithm based on Wiener filtering. It includes two main modifications compared to conventional approaches: (i) a smoothing rule for the energy-based speech/noise classification and (ii) a recursive two-stage Wiener filtering structure to reduce the signal distortion from "musical noise." Preliminary experimental results have shown an average SNR improvement of about 6 dB and an NMR reduction of about 13 dB, for noisy speech at 10 dB input SNR.

With speech input, the performance of our system could be enhanced by adding speech production models (e.g. linear prediction, LP) as part of the a priori spectral information. However, such a modification could hinder performance in handset-free telephony and similar applications, due to the mismatch of the LPC model to reverberant speech.

References

[1] Y. Ephraim and D. Malah, "Speech enhancement using a minimum mean-square error short-time spectral amplitude estimator," IEEE Trans. on ASSP, pp. 1109-1121, 1984.

[2] Y. Ephraim and D. Malah, "Speech enhancement using a minimum mean-square error log-spectral amplitude estimator," IEEE Trans. on ASSP, pp. 443-445, 1985.

[3] Y. Ephraim, "A signal subspace approach for speech enhancement," IEEE Trans. on Speech and Audio Processing, pp. 251-266, 1995.

[4] N. Virag, "Single channel speech enhancement based on masking properties of the human auditory system," IEEE Trans. on Speech and Audio Processing, pp. 126-137, 1999.

[5] D. E. Tsoukalas, J. N. Mourjopoulos, and G. Kokkinakis, "Speech enhancement based on audible noise suppression," IEEE Trans. on Speech and Audio Processing, pp. 497-514, 1997.

[6] O. Cappe, "Elimination of the musical noise phenomenon with the Ephraim and Malah noise suppressor," IEEE Trans. on Speech and Audio Processing, pp. 345-349, 1994.

[7] P. Vary, "Noise suppression by spectral magnitude estimation: mechanism and theoretical limits," Signal Processing, pp. 387-400, 1985.

[8] P. Sovka, V. Davidek, P. Pollak, and J. Uhlir, "Speech/pause detection for real-time implementation of spectral subtraction algorithm," in The 6th Intl. Conf. on Signal Proc. Applications and Technology, 1995, pp. 1955-1958.

[9] R. Tucker, "Voice activity detection using a periodicity measure," IEE Proceedings-I, pp. 377-380, 1992.

[10] A. Benyassine, E. Shlomot, and H. Y. Su, "ITU-T Recommendation G.729 Annex B: a silence compression scheme for use with G.729 optimized for V.70 digital simultaneous voice and data applications," IEEE Communications Magazine, pp. 64-73, 1997.

[11] G. S. Kang and L. J. Fransen, "Quality improvement of LPC-processed noisy speech by using spectral subtraction," IEEE Trans. on ASSP, pp. 939-942, 1989.

[12] C. Chrysafis and A. Ortega, "Efficient context-based entropy coding for lossy wavelet image compression," in Proc. of DCC'97, Snowbird, UT, Mar. 1997.

[13] H. Malvar, "A modulated complex lapped transform and its application to audio processing," in Proc. ICASSP, 1999, pp. 1421-1424.

[14] H. L. Van Trees, Detection, Estimation, and Modulation Theory, Part I, New York: Wiley, 1968.