IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING

Power-Normalized Cepstral Coefficients (PNCC) for Robust Speech Recognition

Chanwoo Kim, Member, IEEE, and Richard M. Stern, Fellow, IEEE

Abstract—This paper presents a new feature extraction algorithm called power-normalized cepstral coefficients (PNCC) that is motivated by auditory processing. Major new features of PNCC processing include the use of a power-law nonlinearity that replaces the traditional log nonlinearity used in MFCC coefficients, a noise-suppression algorithm based on asymmetric filtering that suppresses background excitation, and a module that accomplishes temporal masking. We also propose the use of medium-time power analysis, in which environmental parameters are estimated over a longer duration than is commonly used for speech, as well as frequency smoothing. Experimental results demonstrate that PNCC processing provides substantial improvements in recognition accuracy compared to MFCC and PLP processing for speech in the presence of various types of additive noise and in reverberant environments, with only slightly greater computational cost than conventional MFCC processing, and without degrading the recognition accuracy that is observed while training and testing using clean speech. PNCC processing also provides better recognition accuracy in noisy environments than techniques such as vector Taylor series (VTS) and the ETSI advanced front end (AFE) while requiring much less computation. We describe an implementation of PNCC using online processing that does not require future knowledge of the input.

Index Terms—Robust speech recognition, feature extraction, physiological modeling, rate-level curve, power function, asymmetric filtering, medium-time power estimation, spectral weight smoothing, temporal masking, modulation filtering, on-line speech processing.

Manuscript received July 14, 2015; revised March 13, 2016; accepted March 14, 2016. This work was supported in part by the National Science Foundation under Grants IIS-0420866 and IIS-0916918 and in part by the Defense Advanced Research Projects Agency (DARPA) under Contract D10PC20024. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Yunxin Zhao. C. Kim is with Google, Mountain View, CA 94043 USA (e-mail: chanwcom@google.com). R. M. Stern is with the Language Technologies Institute and the Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213 USA (e-mail: rms@cs.cmu.edu). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TASLP.2016.2545928

I. INTRODUCTION

In recent decades following the introduction of hidden Markov models (e.g. [1]) and statistical language models (e.g. [2]), the performance of speech recognition systems in benign acoustical environments has dramatically improved. Nevertheless, most speech recognition systems remain sensitive to the nature of the acoustical environments within which they are deployed, and their performance deteriorates sharply in the presence of sources of degradation such as additive noise, linear channel distortion, and reverberation.

One of the most challenging contemporary problems is that recognition accuracy degrades significantly if the test environment is different from the training environment and/or if the acoustical environment includes disturbances such as additive noise, channel distortion, speaker differences, reverberation, and so on. Over the years dozens if not hundreds of algorithms have been introduced to address these problems. Many of these conventional noise compensation algorithms have provided substantial improvements in accuracy for recognizing speech in the presence of quasi-stationary noise (e.g. [3]–[10]). Unfortunately, these same algorithms frequently do not provide significant improvements in more difficult environments with transitory disturbances such as a single interfering speaker or background music (e.g. [11]).

Many of the current systems developed for automatic speech recognition, speaker identification, and related tasks are based on variants of one of two types of features: mel-frequency cepstral coefficients (MFCC) [12] or perceptual linear prediction (PLP) coefficients [13]. Spectro-temporal features have also been introduced recently with promising results (e.g. [14]–[16]). It has been observed that two-dimensional Gabor filters provide a reasonable approximation to the spectro-temporal response fields of neurons in the auditory cortex, which has led to various approaches to extracting features for speech recognition (e.g. [17]–[20]).

In this paper we describe the development of an additional feature set for speech recognition which we refer to as power-normalized cepstral coefficients (PNCC). While previous implementations of PNCC processing [21], [22] appeared to be promising, they could not be easily implemented for online applications without look-ahead over an entire sentence. In addition, previous implementations of PNCC did not consider the effects of temporal masking. The implementation of PNCC processing in the present paper has been significantly revised to address these issues in a fashion that enables it to provide superior recognition accuracy over a broad range of noise and reverberation conditions, using features that are computable in real time with online algorithms that do not require extensive look-ahead, and with a computational complexity that is comparable to that of traditional MFCC and PLP features.

Previous versions of PNCC processing [21], [22] have been evaluated by various teams of researchers and compared to several different algorithms including zero-crossing peak amplitude (ZCPA) [23], RASTA-PLP [24], perceptual minimum variance distortionless response (PMVDR) [25], invariant-integration features (IIF) [26], and subband spectral centroid histograms (SSCH) [27].

Results from initial comparisons (e.g. [28]–[32]) tend to show that PNCC processing provides better speech recognition accuracy than the other algorithms cited above. The improvements provided by PNCC are typically greatest when the speech recognition system is trained on clean speech and noise and/or reverberation is present in the testing environment. For systems that are trained and tested using large databases of speech with a mixture of environmental conditions, PNCC processing also tends to outperform MFCC and PLP processing, but the differences are smaller. Portions of PNCC processing have also been incorporated into other feature extraction algorithms (e.g. [33], [34]).

In the subsequent subsections of this Introduction we discuss the broader motivations and overall structure of PNCC processing. We specify the key elements of the processing in some detail in Sec. II. In Sec. III we compare the recognition accuracy provided by PNCC processing under a variety of conditions with that of other processing schemes, and we consider the impact of various components of PNCC on these results. We compare the computational complexity of the MFCC, PLP, and PNCC feature extraction algorithms in Sec. IV, and we summarize our results in the final section.

A. Broader Motivation for the PNCC Algorithm

The development of PNCC feature extraction was motivated by a desire to obtain a set of practical features for speech recognition that are more robust with respect to acoustical variability in their native form, without loss of performance when the speech signal is undistorted, and with a degree of computational complexity that is comparable to that of MFCC and PLP coefficients. While many of the attributes of PNCC processing have been strongly influenced by consideration of various attributes of human auditory processing (cf. [35], [36]), we have favored approaches that provide pragmatic gains in robustness at small computational cost over approaches that are more faithful to auditory physiology in developing the specific processing that is performed.

Some of the innovations of PNCC processing that we consider to be the most important include:

- The use of medium-time processing with a duration of 50–120 ms to analyze the parameters characterizing environmental degradation, in combination with the traditional short-time Fourier analysis with frames of 20–30 ms used in conventional speech recognition systems. We believe that this approach enables us to estimate environmental degradation more accurately while maintaining the ability to respond to rapidly-changing speech signals, as discussed in Sec. II-B.
- The use of a form of asymmetric nonlinear filtering to estimate the level of the acoustical background noise for each time frame and frequency bin. We believe that this approach enables us to remove slowly-varying components easily without incurring many of the artifacts associated with over-correction in techniques such as spectral subtraction [37], as discussed in Sec. II-C.
- The development of a signal processing block that realizes temporal masking with a similar mechanism, as discussed in Sec. II-D.
- The replacement of the log nonlinearity in MFCC processing by a power-law nonlinearity that is carefully chosen to approximate the nonlinear relation between signal intensity and auditory-nerve firing rate, which physiologists consider to be a measure of short-time signal intensity at a given frequency. We believe that this nonlinearity provides superior robustness by suppressing small signals and their variability, as discussed in Sec. II-G.
- The development of computationally-efficient realizations of the algorithms above that support online real-time processing without requiring substantial non-causal look-ahead of the input signal to compute the PNCC coefficients. An analysis of computational complexity is provided in Sec. IV.

B. Structure of the PNCC Algorithm

Figure 1 compares the structure of conventional MFCC processing [12], PLP processing [13], [24], and the new PNCC approach which we introduce in this paper. As was noted above, the major innovations of PNCC processing include the redesigned nonlinear rate-intensity function, along with the series of processing elements that suppress the effects of background acoustical activity based on medium-time analysis.

As can be seen from Fig. 1, the initial processing stages of PNCC are quite similar to the corresponding stages of MFCC and PLP analysis, except that the frequency analysis is performed using gammatone filters [38]. This is followed by a series of nonlinear time-varying operations, performed using the longer-duration temporal analysis, that accomplish noise subtraction as well as a degree of robustness with respect to reverberation. The final stages of processing are also similar to MFCC and PLP processing, with the exception of the carefully-chosen power-law nonlinearity with exponent 1/15, which will be discussed in Sec. II-G below. Finally, we note that if the shaded blocks in Fig. 1 are omitted, the processing that remains is referred to as simple power-normalized cepstral coefficients (SPNCC). SPNCC processing has been employed in other studies on robust recognition (e.g. [34]).

II. COMPONENTS OF PNCC PROCESSING

In this section we describe and discuss the major components of PNCC processing in greater detail. While the detailed description below assumes a sampling rate of 16 kHz, the PNCC features are easily modified to accommodate other sampling frequencies.

A. Initial Processing

As in the case of MFCC features, a pre-emphasis filter of the form H(z) = 1 − 0.97 z⁻¹ is applied. A short-time Fourier transform (STFT) is then performed using Hamming windows of duration 25.6 ms, with 10 ms between frames and a DFT size of 1024.

Fig. 1. Comparison of the structure of the MFCC, PLP, and PNCC feature extraction algorithms. The modules of PNCC that function on the basis of medium-time analysis (with a temporal window of 65.6 ms) are plotted in the rightmost column. If the shaded blocks of PNCC are omitted, the remaining processing is referred to as simple power-normalized cepstral coefficients (SPNCC).

Spectral power in 40 analysis bands is obtained by weighting the magnitude-squared STFT outputs for positive frequencies by the frequency response associated with a 40-channel gammatone-shaped filterbank [38] whose center frequencies are linearly spaced in Equivalent Rectangular Bandwidth (ERB) [39] between 200 Hz and 8000 Hz, using the implementation of gammatone filters in Slaney's Auditory Toolbox [40]. In previous work [21] we observed that the use of gammatone frequency weighting provides slightly better ASR accuracy in white noise, but the differences compared to the traditional triangular weights in MFCC processing are small. The frequency response of the gammatone filterbank is shown in Fig. 2. In each channel the area under the squared transfer function is normalized to unity to satisfy the equation

$$\sum_{k=0}^{(K/2)-1} \left| H_l(e^{j\omega_k}) \right|^2 = 1 \qquad (1)$$

where H_l(e^{jω_k}) is the response of the l-th gammatone channel at frequency ω_k, and ω_k is the dimensionless discrete-time frequency 2πk/K, where K is the DFT size. The corresponding continuous-time frequencies are ν_k = k F_s / K, where ν_k is in Hz, F_s is the sampling frequency, and 0 ≤ k ≤ K/2. To reduce the amount of computation, we modified the gammatone filter responses slightly by setting H_l(e^{jω_k}) equal to zero for all values of ω_k for which the unmodified H_l(e^{jω_k}) would be less than 0.5 percent of its maximum value (corresponding to −46 dB).

We obtain the short-time spectral power P[m, l] using the squared gammatone summation

$$P[m,l] = \sum_{k=0}^{(K/2)-1} \left| X[m, e^{j\omega_k}]\, H_l(e^{j\omega_k}) \right|^2 \qquad (2)$$

where m and l represent the frame and channel indices, respectively.
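As an illustration of this initial analysis, the following sketch computes P[m, l] from a waveform under the stated framing parameters. It is a minimal reading of Eqs. (1)–(2), not the authors' reference implementation; the gammatone magnitude responses are assumed to be supplied as a precomputed matrix H (for example, from an auditory toolbox), already normalized as in Eq. (1), and the frame length is simply rounded to the nearest sample.

```python
import numpy as np

def gammatone_spectral_power(x, H, fs=16000, frame_len=0.0256, hop=0.010, nfft=1024):
    """Short-time spectral power P[m, l] of Eq. (2).

    x : 1-D waveform sampled at fs.
    H : (L, nfft // 2) array of gammatone magnitude responses, one row per
        channel, assumed normalized so that sum_k H[l, k]**2 == 1 (Eq. (1)).
    """
    # Pre-emphasis H(z) = 1 - 0.97 z^-1
    x = np.append(x[0], x[1:] - 0.97 * x[:-1])

    N = int(round(frame_len * fs))          # 25.6 ms -> 410 samples at 16 kHz
    step = int(round(hop * fs))             # 10 ms -> 160 samples
    window = np.hamming(N)

    n_frames = max(1 + (len(x) - N) // step, 0)
    P = np.zeros((n_frames, H.shape[0]))
    for m in range(n_frames):
        frame = x[m * step : m * step + N] * window
        spec = np.abs(np.fft.rfft(frame, nfft))[: nfft // 2] ** 2   # |X[m, e^{j w_k}]|^2
        P[m] = (H ** 2) @ spec              # Eq. (2), since |X H_l|^2 = |X|^2 |H_l|^2
    return P
```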

Fig. 2. The frequency response of a gammatone filterbank in which the area under each squared frequency response is normalized to unity. Characteristic frequencies are uniformly spaced between 200 and 8000 Hz according to the Equivalent Rectangular Bandwidth (ERB) scale [39].

B. Temporal Integration for Environmental Analysis

Most speech recognition and speech coding systems use analysis frames of duration between 20 ms and 30 ms. It is frequently observed that longer analysis windows provide better performance for noise modeling and/or environmental normalization (e.g. [22], [42]), because the power associated with most background noise conditions changes more slowly than the instantaneous power associated with speech. In addition, Hermansky and others have observed that the characterization and exploitation of information about the longer-term envelopes of each gammatone channel can provide complementary information that is useful for improving speech recognition accuracy, as in the TRAPS and FDLP algorithms (e.g. [43], [45]), and it is becoming common to combine features over a longer time span to improve recognition accuracy, even in baseline conditions (e.g. [46]).

In PNCC processing we estimate a quantity that we refer to as the medium-time power Q̃[m, l] by computing the running average of P[m, l], the power observed in a single analysis frame, according to the equation

$$\tilde{Q}[m,l] = \frac{1}{2M+1} \sum_{m'=m-M}^{m+M} P[m',l] \qquad (3)$$

where m represents the frame index and l is the channel index. We will apply the tilde symbol to all power estimates that are performed using medium-time analysis.

We observed experimentally that the choice of the temporal integration factor M has a substantial impact on performance in white noise (and presumably other types of broadband background noise). This factor has less impact on the accuracy observed in more dynamic interference or reverberation, although the longer temporal analysis window does provide some benefit in these environments as well [47]. We chose the value M = 2 (corresponding to five consecutive windows with a total net duration of 65.6 ms) on the basis of these observations, as described in [47]. Since Q̃[m, l] is the moving average of P[m, l], Q̃[m, l] is a low-pass function of m. If M = 2, the upper frequency is approximately 15 Hz.

Nevertheless, if we were to use features based on Q̃[m, l] directly for speech recognition, recognition accuracy would be degraded because onsets and offsets of the frequency components would become blurred. Hence in PNCC we use Q̃[m, l] only for noise estimation and compensation, which in turn modify the information based on the short-time power estimates P[m, l]. We also apply smoothing over the various frequency channels, which will be discussed in Sec. II-E below.
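A minimal sketch of the medium-time averaging of Eq. (3) is shown below. The handling of the first and last M frames, which Eq. (3) leaves unspecified, is an assumption of this sketch (the average is simply taken over the frames that exist); note also that the symmetric window implies a look-ahead of M frames, i.e. 20 ms when M = 2 with a 10 ms frame advance.

```python
import numpy as np

def medium_time_power(P, M=2):
    """Medium-time power Q~[m, l] of Eq. (3): a (2M + 1)-frame running
    average of the short-time power P[m, l] along the frame axis."""
    n_frames = P.shape[0]
    Q = np.zeros_like(P)
    for m in range(n_frames):
        lo = max(0, m - M)
        hi = min(n_frames, m + M + 1)
        Q[m] = P[lo:hi].mean(axis=0)   # average over the available neighbors
    return Q
```
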
C. Asymmetric Noise Suppression

In this section we discuss a new approach to noise compensation which we refer to as asymmetric noise suppression (ANS). This procedure is motivated by the observation mentioned above that the speech power in each channel usually changes more rapidly than the background noise power in the same channel. Alternately, we might say that speech usually has a higher-frequency modulation spectrum than noise. Motivated by this observation, many algorithms, including the widely-used RASTA-PLP processing, have been developed using either high-pass or band-pass filtering in the modulation spectrum domain, either explicitly or implicitly (e.g. [24], [48], [49], [50]). The simplest way to accomplish this objective is to perform high-pass filtering in each channel (e.g. [51], [52]), which has the effect of removing slowly-varying components that typically represent the effects of additive noise sources rather than the speech signal.

One significant problem with the application of conventional linear high-pass filtering in the power domain is that the filter output can become negative. Negative values for the power coefficients are problematic in the formal mathematical sense (in that power itself is positive). They also cause problems in the application of the compressive nonlinearity and in speech resynthesis unless a suitable floor value is applied to the power coefficients (e.g. [49], [52]). Rather than filtering in the power domain, we could perform filtering after applying the logarithmic nonlinearity, as is done with conventional cepstral mean normalization in MFCC processing. Nevertheless, as will be seen in Sec. III, this approach is not very helpful for environments with additive noise. Spectral subtraction is another way to reduce the effects of noise whose power changes slowly. In spectral subtraction techniques, the noise level is typically estimated from the power of non-speech segments (e.g. [37]) or through the use of a continuous-update approach (e.g. [51]). In the approach that we introduce, we obtain a running estimate of the time-varying noise floor using an asymmetric nonlinear filter and subtract that estimate from the instantaneous power.

Figure 3 is a block diagram of the complete asymmetric noise suppression processing with temporal masking. Let us begin by describing the general characteristics of the asymmetric nonlinear filter that constitutes the first stage of processing. This filter is represented by the following equation for arbitrary input and output Q̃_in[m, l] and Q̃_out[m, l], respectively:

$$\tilde{Q}_{out}[m,l] = \begin{cases} \lambda_a\,\tilde{Q}_{out}[m-1,l] + (1-\lambda_a)\,\tilde{Q}_{in}[m,l], & \tilde{Q}_{in}[m,l] \geq \tilde{Q}_{out}[m-1,l] \\[4pt] \lambda_b\,\tilde{Q}_{out}[m-1,l] + (1-\lambda_b)\,\tilde{Q}_{in}[m,l], & \tilde{Q}_{in}[m,l] < \tilde{Q}_{out}[m-1,l] \end{cases} \qquad (4)$$

Fig. 3. Functional block diagram of the modules for asymmetric noise suppression (ANS) and temporal masking in PNCC processing. All processing is performed on a channel-by-channel basis. Q̃[m, l] is the medium-time-averaged input power as defined by Eq. (3), R̃[m, l] is the speech output of the ANS module, and Q̃_tm[m, l] is the output after temporal masking (which is applied only to the speech frames). The block labelled "Temporal Masking" is depicted in detail in Fig. 5.

Fig. 4. Sample inputs (solid curves) and outputs (dashed curves) of the asymmetric nonlinear filter defined by Eq. (4) for the conditions (a) λ_a = λ_b, (b) λ_a < λ_b, and (c) λ_a > λ_b. In this example, the channel index l is 8.

In Eq. (4), m is the frame index, l is the channel index, and λ_a and λ_b are constants between zero and one.

If λ_a = λ_b, it is easy to verify that Eq. (4) reduces to a conventional first-order IIR filter (with a pole at z = λ in the z-plane) that is lowpass in nature because the values of the λ parameters are positive, as shown in Fig. 4(a). In contrast, if 1 > λ_b > λ_a > 0, the nonlinear filter functions as a conventional upper envelope detector, as illustrated in Fig. 4(b). Finally, and most usefully for our purposes, if 1 > λ_a > λ_b > 0, the filter output Q̃_out tends to follow the lower envelope of Q̃_in[m, l], as seen in Fig. 4(c). In our processing we use this slowly-varying lower envelope in Fig. 4(c) as a model for the estimated medium-time noise level, and the activity above this envelope is assumed to represent speech activity. Hence, subtracting this low-level envelope from the original input Q̃_in[m, l] removes a slowly varying non-speech component. We will use the notation

$$\tilde{Q}_{out}[m,l] = \mathcal{AF}_{\lambda_a,\lambda_b}\!\left[\tilde{Q}_{in}[m,l]\right] \qquad (5)$$

to represent the nonlinear filter described by Eq. (4). We note that this filter operates only over the frame indices m for each channel index l.

Keeping the characteristics of the asymmetric filter described above in mind, we may now consider the structure shown in Fig. 3. In the first stage, the lower envelope Q̃_le[m, l], which represents the average noise power, is obtained by ANS processing according to the equation

$$\tilde{Q}_{le}[m,l] = \mathcal{AF}_{0.999,\,0.5}\!\left[\tilde{Q}[m,l]\right] \qquad (6)$$

as depicted in Fig. 4(c); Q̃_le[0, l] is initialized to 0.9 Q̃[0, l]. Q̃_le[m, l] is subtracted from the input Q̃[m, l], effectively high-pass filtering the input, and the result is passed through an ideal half-wave linear rectifier to produce the rectified output Q̃_0[m, l]. The impact of the specific values of the forgetting factors λ_a and λ_b on speech recognition accuracy is discussed in [47].

The remaining elements of ANS processing in the right-hand side of Fig. 3 (other than the temporal masking block) are included to cope with problems that develop when the rectifier output Q̃_0[m, l] remains zero for an interval, or when the local variance of Q̃_0[m, l] becomes excessively small. Our approach to this problem is motivated by our previous work [22], in which it was noted that applying a well-motivated flooring level to power is very important for noise robustness. In PNCC processing we apply the asymmetric nonlinear filter a second time to obtain the lower envelope of the rectifier output, Q̃_f[m, l], and we use this envelope to establish the floor level.
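A compact sketch of the asymmetric filter of Eqs. (4)–(6) and the subsequent subtraction and half-wave rectification follows. It is written against arrays of shape (frames, channels) and is only an illustration of the recursion; the function and variable names are ours, not from a published implementation.

```python
import numpy as np

def asymmetric_filter(Q_in, lam_a, lam_b, init=None):
    """Asymmetric nonlinear filter AF_{lambda_a, lambda_b}[.] of Eqs. (4)-(5),
    applied along the frame axis of Q_in (shape: frames x channels).
    With 1 > lam_a > lam_b > 0 the output tracks the lower envelope of the input.
    """
    Q_out = np.zeros_like(Q_in)
    Q_out[0] = Q_in[0] if init is None else init
    for m in range(1, Q_in.shape[0]):
        rising = Q_in[m] >= Q_out[m - 1]
        Q_out[m] = np.where(
            rising,
            lam_a * Q_out[m - 1] + (1.0 - lam_a) * Q_in[m],
            lam_b * Q_out[m - 1] + (1.0 - lam_b) * Q_in[m],
        )
    return Q_out

def ans_first_stage(Q, lam_a=0.999, lam_b=0.5):
    """Noise-floor estimate and half-wave-rectified excess power (left half of Fig. 3)."""
    Q_le = asymmetric_filter(Q, lam_a, lam_b, init=0.9 * Q[0])   # Eq. (6)
    Q_0 = np.maximum(Q - Q_le, 0.0)                              # subtract floor, rectify
    return Q_le, Q_0
```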

The envelope Q̃_f[m, l] is obtained using asymmetric filtering as before:

$$\tilde{Q}_f[m,l] = \mathcal{AF}_{0.999,\,0.5}\!\left[\tilde{Q}_0[m,l]\right] \qquad (7)$$

with Q̃_f[0, l] initialized to Q̃_0[0, l]. As shown in Fig. 3, we use the lower envelope of the rectified signal, Q̃_f[m, l], as a floor level for Q̃_1[m, l] after temporal masking:

$$\tilde{Q}_1[m,l] = \max\!\left(\tilde{Q}_{tm}[m,l],\; \tilde{Q}_f[m,l]\right) \qquad (8)$$

where Q̃_tm[m, l] is the temporal masking output depicted in Fig. 3. Temporal masking for speech segments is discussed in Sec. II-D.

We have found that applying lowpass filtering to the signal segments that do not appear to be driven by a periodic excitation function (as in voiced speech) improves recognition accuracy in noise by a small amount. For this reason we use the lower envelope of the rectified signal Q̃_0[m, l] directly for these non-excitation segments. This operation, which is effectively a further lowpass filtering, is not performed for the speech segments because blurring the power coefficients for speech degrades recognition accuracy. Excitation/non-excitation decisions for this purpose are obtained for each value of m and l in a very simple fashion:

$$\text{excitation segment if } \tilde{Q}[m,l] \geq c\,\tilde{Q}_{le}[m,l] \qquad (9a)$$
$$\text{non-excitation segment if } \tilde{Q}[m,l] < c\,\tilde{Q}_{le}[m,l] \qquad (9b)$$

where Q̃_le[m, l] is the lower envelope of Q̃[m, l] as described above, and c is a fixed constant. In other words, a particular value of Q̃[m, l] is not considered to represent a sufficiently large excitation if it is less than a fixed multiple of its own lower envelope. Based on the excitation/non-excitation decision in (9), the final output of the ANS module in Fig. 3 is

$$\tilde{R}[m,l] = \tilde{Q}_1[m,l] \quad \text{for excitation segments} \qquad (10a)$$
$$\tilde{R}[m,l] = \tilde{Q}_f[m,l] \quad \text{for non-excitation segments} \qquad (10b)$$

We observed experimentally that while a broad range of values of λ_b between 0.25 and 0.75 appears to provide reasonable recognition accuracy, the choice λ_b = 0.5 appears to be best under most circumstances [47]. The parameter values used for the current standard implementation are λ_a = 0.999 and λ_b = 0.5, which were chosen in part to maximize the recognition accuracy in clean speech as well as performance in noise. We also observed (in experiments in which the temporal masking described below was bypassed) that the threshold value c = 2 provides the best performance for white noise (and presumably other types of broadband noise). The value of c has little impact on performance in background music and in the presence of reverberation, as discussed in [47].

D. Temporal Masking

Fig. 5. Block diagram of the components that accomplish temporal masking in Fig. 3.

We first obtain the online peak 401 power Q p [m, l] for each channel using the following equation: 402 ( Q p [m, l] =max λ t Qp [m 1,l], Q ) 0 [m, l] (11) where λ t is the forgetting factor for obtaining the online peak. 403 As before, m is the frame index and l is the channel index. 404 Temporal masking for speech segments is accomplished using 405 the following equation: 406 { Q0 [m, l], Q0 [m, l] λ t Qp [m 1,l] Q tm [m, l] = µ t Qp [m 1,l], Q0 [m, l] <λ t Qp [m 1,l] (12) We have found [47] that if the forgetting factor λ t is equal to 407 or less than 0.85 and if µ t 0.2, recognitionaccuracyremains 408 almost constant for clean speech and most additive noise con- 409 ditions, and if λ t increases beyond 0.85, performance degrades. 410 The value of λ t =0.85 also appears to be best in the reverberant 411 condition. For these reasons we use the values λ t =0.85 and 412 µ t =0.2 in the standard implementation of PNCC. We have 413 chosen a parameter value of λ t =0.85 to maximize recognition 414 accuracy [47]. This value of λ t corresponds to a time con- 415 stant of 28.2 ms, so the offset attenuation lasts approximately 416 100 ms. This characteristic is in accordance with observed data 417 for humans [58]. 418 Figure 6 illustrates the effect of this temporal masking. In 419 general, with temporal masking the response of the system is 420 inhibited for portions of the input signal Q[m, l] other than 421 rising attack transients. The difference between the signals 422 with and without masking is especially pronounced in reverber- 423 ant environments, for which the temporal processing module is 424 especially helpful. 425

Fig. 6. Demonstration of the effect of temporal masking in the ANS module for (a) clean speech and (b) speech in simulated reverberation with T60 = 0.5 s. In this example, the channel index l is 18.

The final output of the asymmetric noise suppression and temporal masking modules is R̃[m, l] = Q̃_tm[m, l] for the excitation segments and R̃[m, l] = Q̃_f[m, l] for the non-excitation segments, assuming Q̃_tm[m, l] > Q̃_f[m, l].

E. Spectral Weight Smoothing

In our previous research on speech enhancement and noise compensation techniques (e.g. [21], [22], [42], [59], [60]), it has frequently been observed that smoothing the response across channels is helpful. This is especially true in processing schemes such as PNCC in which nonlinearities and/or thresholds vary in their effect from channel to channel, as well as in processing schemes that are based on the inclusion of responses from only a subset of time frames and frequency channels (e.g. [59]) or in systems that rely on missing-feature approaches (e.g. [61]).

From the discussion above, we can represent the combined effects of asymmetric noise suppression and temporal masking for a specific time frame and frequency bin as the transfer function R̃[m, l]/Q̃[m, l]. Smoothing this transfer function across frequency is accomplished by computing its running average over the channel index l. Hence, the frequency-averaged weighting function S̃[m, l] (which had previously been subjected to temporal averaging) is given by

$$\tilde{S}[m,l] = \frac{1}{l_2 - l_1 + 1} \sum_{l'=l_1}^{l_2} \frac{\tilde{R}[m,l']}{\tilde{Q}[m,l']} \qquad (13)$$

where l_2 = min(l + N, L), l_1 = max(l − N, 1), and L is the total number of channels.

The time-averaged, frequency-averaged transfer function S̃[m, l] is used to modulate the original short-time power P[m, l]:

$$T[m,l] = P[m,l]\,\tilde{S}[m,l] \qquad (14)$$

In the present implementation of PNCC we use a value of N = 4 and a total of L = 40 gammatone channels, again based on empirical optimization from the results of pilot studies [47]. We note that if we were to use a different number of channels L, the optimal value of N would also be different.
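A minimal sketch of the frequency smoothing in Eqs. (13)–(14) follows, written for zero-based channel indices; the small epsilon that guards the division is a choice of the sketch, not part of the published equations.

```python
import numpy as np

def spectral_weight_smoothing(P, Q, R, N=4, eps=1e-20):
    """Modulate the short-time power by the channel-smoothed ANS transfer
    function: S~[m, l] of Eq. (13), then T[m, l] = P[m, l] * S~[m, l] (Eq. (14))."""
    ratio = R / np.maximum(Q, eps)                       # R~[m, l] / Q~[m, l]
    n_channels = P.shape[1]
    S = np.zeros_like(P)
    for l in range(n_channels):
        l1 = max(l - N, 0)
        l2 = min(l + N, n_channels - 1)
        S[:, l] = ratio[:, l1 : l2 + 1].mean(axis=1)     # Eq. (13)
    return P * S                                         # Eq. (14)
```
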
F. Mean Power Normalization

It is well known that auditory processing includes an automatic gain control that reduces the impact of changes in the amplitude of the incoming signal, and such processing is often an explicit component of physiologically-motivated models of signal processing (e.g. [49], [62], [63]). In conventional MFCC processing, multiplication of the input signal by a constant scale factor produces only an additive shift of the C0 coefficient, because a logarithmic nonlinearity is included in the processing, and this shift is easily removed by cepstral mean normalization. In PNCC processing, however, the replacement of the log nonlinearity by a power-law nonlinearity, as discussed below, causes the response of the processing to be affected by changes in absolute power, even though we have observed that this effect is usually small. In order to reduce the potential impact of amplitude scaling in PNCC further, we invoke a stage of mean power normalization.

While the easiest way to normalize power would be to divide the instantaneous power by the average power over the utterance, this is not feasible for real-time online processing because of the look-ahead that would be required. For this reason, in the present online implementation of PNCC we normalize input power by dividing the incoming power by a running average of the overall power. The mean power estimate µ[m] is computed from the simple difference equation

$$\mu[m] = \lambda_\mu\,\mu[m-1] + \frac{1-\lambda_\mu}{L}\sum_{l=0}^{L-1} T[m,l] \qquad (15)$$

where m and l are the frame and channel indices, as before, and L represents the number of frequency channels. We use a value of 0.999 for the forgetting factor λ_µ. For the initial value of µ[m] we use the value obtained from the training database. Since the time constant corresponding to λ_µ is around 4.6 seconds, we do not need to incorporate a formal voice activity detector (VAD) in conjunction with PNCC if the continuous non-speech portions of an utterance are no longer than 3 to 4 seconds. If silences of longer duration are interspersed with the speech, however, we recommend the use of an appropriate VAD in combination with PNCC processing.

The normalized power is obtained directly from the running power estimate µ[m]:

$$U[m,l] = k\,\frac{T[m,l]}{\mu[m]} \qquad (16)$$

where the value of the constant k is arbitrary.
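The running normalization of Eqs. (15)–(16) reduces to a few lines. In the sketch below the initial power estimate is exposed as a plain parameter, whereas the paper takes it from the training database.

```python
import numpy as np

def mean_power_normalization(T, lam_mu=0.999, mu_init=1.0, k=1.0):
    """Online mean power normalization, Eqs. (15)-(16)."""
    U = np.zeros_like(T)
    mu = mu_init
    for m in range(T.shape[0]):
        mu = lam_mu * mu + (1.0 - lam_mu) * T[m].mean()   # Eq. (15)
        U[m] = k * T[m] / mu                              # Eq. (16)
    return U
```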

In pilot experiments we found that the speech recognition accuracy obtained using the online power normalization described above is comparable to the accuracy that would be obtained by normalizing according to a power estimate computed over the entire utterance in offline fashion.

Fig. 7. Synapse output for a pure-tone input with a carrier frequency of 500 Hz at 60 dB SPL, obtained using the auditory model of Heinz et al. [64].

Fig. 8. Comparison of the onset rate (solid curve) and sustained rate (dashed curve) obtained using the model proposed by Heinz et al. [64]. The curves were obtained by averaging responses over seven frequencies. See text for details.

Fig. 9. Comparison between a human rate-intensity relation obtained using the auditory model developed by Heinz et al. [64], a cube-root power-law approximation, an MMSE power-law approximation, and a logarithmic approximation. Upper panel: comparison using pressure (Pa) as the x-axis. Lower panel: comparison using sound pressure level (SPL) in dB as the x-axis.

G. Rate-Level Nonlinearity

Several studies in our group (e.g. [21], [60]) have confirmed the critical importance of the nonlinear function that describes the relationship between the incoming signal amplitude in a given frequency channel and the corresponding response of the processing model. This rate-level nonlinearity is explicitly or implicitly a crucial part of every conceptual or physiological model of auditory processing (e.g. [62], [65], [66]). In this section we summarize our approach to the development of the rate-level nonlinearity used in PNCC processing.

It is well known that the nonlinear curve relating sound pressure level in decibels to the auditory-nerve firing rate is compressive (e.g. [64], [67]). It has also been observed that the average auditory-nerve firing rate exhibits an overshoot at the onset of an input signal. As an example, we compare in Fig. 8 the average onset firing rate and the sustained rate as predicted by the model of Heinz et al. [64]. The curves in this figure were obtained by averaging the rate-intensity values obtained from sinusoidal tone bursts at seven frequencies: 100, 200, 400, 800, 1600, 3200, and 6400 Hz. For the onset-rate results we partitioned the response into bins of length 2.5 ms and searched for the bin with the maximum rate during the initial 10 ms of the tone burst. To measure the sustained rate, we averaged the response rate between 50 and 100 ms after the onset of the signals. The curves were generated under the assumption that the spontaneous rate is 50 spikes/second. We observe in Fig. 8 that the sustained firing rate (broken curve) is S-shaped, with a threshold around 0 dB SPL and a saturating segment that begins at around 30 dB SPL. The onset rate (solid curve), on the other hand, increases continuously without apparent saturation over the conversational hearing range of 0 to 80 dB SPL. We choose to model the onset rate-intensity curve for PNCC processing because of the important role that it appears to play in auditory perception. Figure 9 compares the onset rate-intensity curve depicted in Fig. 8 with various analytical functions that approximate it. The curves are plotted as a function of dB SPL in the lower panel of the figure and as a function of absolute pressure in Pascals in the upper panel, and the putative spontaneous firing rate of 50 spikes per second is subtracted from the curves in both cases.

The most widely used current feature extraction algorithms are mel-frequency cepstral coefficients (MFCC) and perceptual linear prediction (PLP) coefficients. Both the MFCC and PLP procedures include an intrinsic nonlinearity, which is logarithmic in the case of MFCC and a cube-root power function in the case of PLP analysis. We plot these curves relating the power of the input pressure p to the response s in Fig. 9, using values of the arbitrary scaling parameters chosen to provide the best fit to the curve of the Heinz et al. model, resulting in the following equations:

$$s_{cube} = 4294.1\,p^{2/3} \qquad (17)$$
$$s_{log} = 120.2\,\log(p) + 1319.3 \qquad (18)$$

We note that the exponent of the power function is doubled because we are plotting power rather than pressure. Even though scaling and shifting by fixed constants in Eqs. (17) and (18) have no significance in speech recognition systems, we included them in the equations above to fit these curves to the rate-intensity curve in Fig. 9(a). The constants in Eqs. (17) and (18) were obtained using an MMSE criterion over the sound pressure range between 0 dB SPL (20 µPa) and 80 dB SPL (0.2 Pa) from the linear rate-intensity curve in the upper panel of Fig. 8.

We have also observed experimentally [47] that a power-law curve with an exponent of 1/15 for sound pressure provides a reasonably good fit to the physiological data while optimizing recognition accuracy in the presence of noise. We have observed that larger values of the pressure exponent such as 1/5 provide better performance in white noise, but they degrade the recognition accuracy obtained for clean speech [47]. We consider the value 1/15 for the pressure exponent to represent a pragmatic compromise that provides reasonable accuracy in white noise without sacrificing recognition accuracy for clean speech, producing the power-law nonlinearity

$$V[m,l] = U[m,l]^{1/15} \qquad (19)$$

where again U[m, l] and V[m, l] have the dimensions of power. This curve is closely approximated by the equation

$$s_{power} = 1389.6\,p^{0.1264} \qquad (20)$$

which is also plotted in Fig. 9. The exponent 0.1264 is the best fit to the data of Heinz et al. as depicted in the upper panel of Fig. 8; as before, this estimate was developed in the MMSE sense over the sound pressure range between 0 dB SPL (20 µPa) and 80 dB SPL (0.2 Pa).

The power-law function was chosen for PNCC processing for several reasons. First, it is a relationship whose form is not affected by multiplying the input by a constant. Second, it has the attractive property that its asymptotic response at very low intensities is zero rather than negative infinity, which reduces the variance of the response to low-level inputs such as spectral valleys or silence segments. Finally, the power law has been demonstrated to provide a good approximation to the psychophysical transfer functions observed in experiments relating the physical intensity of a sensation to its perceived intensity using direct magnitude-estimation procedures (e.g. [68]), although the exponent of the power function, 1/15, that provides the best fit to the onset rates in the model of Heinz et al. [64] is different from the exponent that provides the best fit to the perceptual data [68].
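To make the role of Eq. (19) concrete, the sketch below applies the power-law nonlinearity and then a DCT with cepstral mean normalization, mirroring the final MFCC-like stages shown in Fig. 1. The DCT, the choice of 13 coefficients, and the mean-normalization step are conventional assumptions of this sketch rather than values prescribed by the text above.

```python
import numpy as np

def power_law_and_cepstra(U, exponent=1.0 / 15.0, n_ceps=13):
    """Power-law nonlinearity of Eq. (19) followed by an (assumed) DCT and
    cepstral mean normalization, as in the final stages sketched in Fig. 1."""
    V = U ** exponent                          # Eq. (19): V[m, l] = U[m, l]^(1/15)

    L = V.shape[1]
    n = np.arange(L)
    # Orthonormal DCT-II basis, one row per cepstral coefficient
    basis = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * L))
    basis *= np.sqrt(2.0 / L)
    basis[0] /= np.sqrt(2.0)

    cepstra = V @ basis.T                      # cepstral coefficients per frame
    cepstra -= cepstra.mean(axis=0)            # cepstral mean normalization
    return cepstra
```
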
Fig. 10. The effects of the asymmetric noise suppression, temporal masking, and rate-level nonlinearity used in PNCC processing. Shown are the outputs of these stages of processing for clean speech and for speech corrupted by street noise at an SNR of 5 dB, when the logarithmic nonlinearity is used without ANS processing or temporal masking (upper panel) and when the power-law nonlinearity is used with ANS processing and temporal masking (lower panel). In this example, the channel index l is 8.

Figure 10 is a final comparison of the effects of the asymmetric noise suppression, temporal masking, channel weighting, and power-law nonlinearity modules discussed in Secs. II-C through II-G. The curves in both panels compare the response of the system in the channel with center frequency 490 Hz to clean speech and to speech in the presence of street noise at an SNR of 5 dB. The curves in the upper panel were obtained using conventional MFCC processing, including the logarithmic nonlinearity and without ANS processing or temporal masking. The curves in the lower panel were obtained using PNCC processing, which includes the power-law transformation described in this section as well as ANS processing and temporal masking. We note that the difference between the two curves representing clean and noisy speech is much greater with MFCC processing (upper panel), especially for times during which the signal is at a low level.

III. EXPERIMENTAL RESULTS

In this section we present experimental results that are intended to demonstrate the superiority of PNCC processing over competing approaches in a wide variety of acoustical environments. We begin in Sec. III-A with a review of the experimental procedures that were used. We then provide general results for PNCC processing and assess the contributions of its various components in Sec. III-B, and we compare PNCC to a small number of other approaches in Sec. III-C.

It should be noted that in general we selected an algorithm configuration and associated parameter values that provide very good performance over a wide variety of conditions using a single set of parameters and settings, without sacrificing word error rate in clean conditions relative to MFCC processing.

In previous work we had described slightly different feature extraction algorithms that provide even better performance for speech recognition in the presence of reverberation [22] and in background music [52], but those approaches do not perform as well as MFCC processing in clean speech. As noted in previous studies (e.g. [47], [69]) and above, we have observed that replacing the triangular frequency-weighting functions of MFCC processing by the gammatone filter response, and replacing the log nonlinearity by the power-law nonlinearity, provide improved recognition accuracy for virtually all types of degradation. The asymmetric noise suppression is especially useful in ameliorating the effects of additive noise, and the temporal masking component of the ANS module is useful for reducing the effects of reverberation.

We used five standard testing environments in our work: (1) digitally-added white noise, (2) digitally-added noise that had been recorded live on urban streets, (3) digitally-added single-speaker interference, (4) digitally-added background music, and (5) passage of the signal through simulated reverberation. The street noise was recorded by us on streets with steady but moderate traffic. The masking signal used for the single-speaker-interference experiments consisted of other utterances drawn from the same database as the target speech, and the background music was selected from music segments of the original DARPA Hub 4 Broadcast News database. The reverberation simulations were accomplished using the Room Impulse Response open-source software package [70], which is based on the image method [71]. The room size used was 3 × 4 × 5 meters, the microphone was placed in the center of the room, the spacing between the target speaker and the microphone was assumed to be 3 meters, and the reverberation time was manipulated by changing the assumed absorption coefficients of the room appropriately. These conditions were selected so that interfering additive noise sources of progressively greater difficulty were included, along with basic reverberation effects.

A. Experimental Configuration

The PNCC features described in this paper were evaluated by comparing the recognition accuracy obtained with the PNCC processing introduced in this paper to that obtained using MFCC and RASTA-PLP processing. We used the version of conventional MFCC processing implemented as part of sphinx_fe in sphinxbase 0.4.1, both from the CMU Sphinx open-source codebase [72], and the PLP-RASTA implementation that is available at [73]. In all cases decoding was performed using the publicly-available CMU Sphinx 3.8 system [72], with acoustic models trained using SphinxTrain 1.0. We also compared PNCC with the vector Taylor series (VTS) noise compensation algorithm [4] and with the ETSI Advanced Front End (AFE), which includes several noise suppression algorithms [8]. In the case of the ETSI AFE, we excluded the log energy element because doing so produced better results in our experiments. A bigram language model was used in all the experiments, and we used feature vectors of length 39 including delta and delta-delta features.

For experiments using the DARPA Resource Management (RM1) database we used subsets of 1600 utterances of clean speech for training and 600 utterances of clean or degraded speech for testing.
For experiments based on the DARPA Wall Street Journal (WSJ) 5000-word database we trained the system using the WSJ0 SI-84 training set and tested it on the WSJ0 5K test set. We typically plot word recognition accuracy, which is 100 percent minus the word error rate (WER), using the standard definition of WER as the number of insertions, deletions, and substitutions divided by the number of words spoken.

B. General Performance of PNCC in Noise and Reverberation

In this section we describe the recognition accuracy obtained using PNCC processing in the presence of various types of degradation of the incoming speech signals. Figures 11 and 12 describe the recognition accuracy obtained with PNCC processing in the presence of white noise, street noise, background music, and speech from a single interfering speaker as a function of SNR, as well as in the simulated reverberant environment as a function of reverberation time. These results are plotted for the DARPA RM database in Fig. 11 and for the DARPA WSJ database in Fig. 12. For the experiments conducted in noise we prefer to characterize the improvement in recognition accuracy by the amount of lateral shift of the curves provided by the processing, which corresponds to an increase of the effective SNR. For white noise using the RM task, PNCC provides an improvement of about 12 dB to 13 dB compared to MFCC processing, as shown in Fig. 11. In the presence of street noise, background music, and interfering speech, PNCC provides improvements of approximately 8 dB, 3.5 dB, and 3.5 dB, respectively. We also note that PNCC processing provides considerable improvement in reverberation, especially for longer reverberation times. PNCC processing exhibits similar performance trends for speech from the DARPA WSJ0 database in similar environments, as seen in Fig. 12, although the magnitude of the improvement is somewhat diminished, which is commonly observed as we move to larger databases.

The curves in Figs. 11 and 12 are also organized in a way that highlights the contributions of the major components. Beginning with baseline MFCC processing, the remaining curves show the effects of adding, in sequence, (1) the power-law nonlinearity (along with mean power normalization and the gammatone frequency integration), (2) the ANS processing including spectral smoothing, and finally (3) temporal masking. It can be seen from the curves that a substantial improvement is obtained by simply replacing the logarithmic nonlinearity of MFCC processing by the power-law rate-intensity function described in Sec. II-G. The addition of the ANS processing provides a substantial further improvement in recognition accuracy in noise. Although it is not explicitly shown in Figs. 11 and 12, temporal masking is particularly helpful in improving accuracy for reverberated speech and for speech in the presence of interfering speech.

C. Comparison With Other Algorithms

Figures 13 and 14 provide comparisons of PNCC processing to the baseline MFCC processing with cepstral mean normalization, to MFCC processing combined with the vector Taylor series (VTS) algorithm for noise robustness [4], and to RASTA-PLP feature extraction [24] and the ETSI Advanced Front End (AFE) [8].

Fig. 11. Recognition accuracy obtained using PNCC processing in various types of additive noise and reverberation. Curves are plotted separately to indicate the contributions of the power-law nonlinearity, asymmetric noise suppression, and temporal masking. Results are described for the DARPA RM1 database in the presence of (a) white noise, (b) street noise, (c) background music, (d) interfering speech, and (e) artificial reverberation.

Fig. 12. Recognition accuracy obtained using PNCC processing in various types of additive noise and reverberation. Curves are plotted separately to indicate the contributions of the power-law nonlinearity, asymmetric noise suppression, and temporal masking. Results are described for the DARPA WSJ0 database in the presence of (a) white noise, (b) street noise, (c) background music, (d) interfering speech, and (e) artificial reverberation.

C. Comparison With Other Algorithms

Figures 13 and 14 provide comparisons of PNCC processing to the baseline MFCC processing with cepstral mean normalization, MFCC processing combined with the vector Taylor series (VTS) algorithm for noise robustness [4], as well as RASTA-PLP feature extraction [24] and the ETSI Advanced Front End (AFE) [8]. We compare PNCC processing to MFCC and RASTA-PLP processing because these features are the most widely used in baseline systems, even though neither MFCC nor PLP features were designed to be robust in the presence of additive noise. The experimental conditions used were the same as those used to produce Figs. 11 and 12.

Fig. 13. Comparison of recognition accuracy for PNCC with processing using MFCC features, the ETSI AFE, MFCC with VTS, and RASTA-PLP features using the DARPA RM1 corpus. Environmental conditions are (a) white noise, (b) street noise, (c) background music, (d) interfering speech, and (e) reverberation.

Fig. 14. Comparison of recognition accuracy for PNCC with processing using MFCC features, the ETSI AFE, MFCC with VTS, and RASTA-PLP features using the DARPA WSJ0 corpus. Environmental conditions are (a) white noise, (b) street noise, (c) background music, (d) interfering speech, and (e) reverberation.

We note in Figs. 13 and 14 that PNCC provides substantially better recognition accuracy than both MFCC and RASTA-PLP processing for all conditions examined. (We remind the reader that neither MFCC nor PLP coefficients had been developed with the goal of robustness in the presence of noise or reverberation.) PNCC coefficients also provide recognition accuracy that is better than the combination of MFCC with VTS, and at a substantially lower computational cost than is incurred in implementing VTS.

We also note that the VTS algorithm provides little or no improvement over the baseline MFCC performance in difficult environments such as background music, a single interfering speaker, or reverberation. The ETSI Advanced Front End (AFE) [8] generally provides slightly better recognition accuracy than VTS in noisy environments, but the accuracy obtained with the AFE does not approach that obtained with PNCC processing in the most difficult noise conditions. Neither the ETSI AFE nor VTS improves recognition accuracy in reverberant environments compared to MFCC features, while PNCC provides measurable improvements in reverberation, and the closely-related SSF algorithm [52] provides even greater recognition accuracy in reverberation (at the expense of somewhat worse performance in clean speech).

IV. COMPUTATIONAL COMPLEXITY

Table I provides estimates of the computational demands of MFCC, PLP, and PNCC feature extraction. (RASTA processing is not included in these tabulations.) As before, we use the standard open source Sphinx code in sphinx_fe [72] for the implementation of MFCC, and the implementation in [73] for PLP. We assume that the window length is 25.6 ms, that the interval between successive windows is 10 ms, that the sampling rate is 16 kHz, and that a 1024-point FFT is used for each analysis frame.

TABLE I
NUMBER OF MULTIPLICATIONS AND DIVISIONS IN EACH FRAME

It can be seen in Table I that because all three algorithms use 1024-point FFTs, the greatest difference from algorithm to algorithm in the amount of computation required is associated with the spectral integration component. Specifically, the triangular weighting used in the MFCC calculation encompasses a narrower range of frequencies than the trapezoids used in PLP processing, which are in turn considerably narrower than the gammatone filter shapes, and the amount of computation needed for spectral integration is directly proportional to the effective bandwidth of the channels. For this reason, as mentioned in Sec. II-A, we limited the gammatone filter computation to those frequencies for which the filter transfer function is 0.5 percent or more of the maximum filter gain. In the interest of obtaining the most direct comparisons in Table I, we limited the spectral computation of the weight functions for MFCC and PLP processing in the same fashion.

As can be seen in Table I, PLP processing by this tabulation is about 32.9 percent more costly than baseline MFCC processing, while PNCC processing is approximately 34.6 percent more costly than MFCC processing and 1.31 percent more costly than PLP processing.
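The effect of the 0.5-percent truncation on the cost of spectral integration can be illustrated with a short sketch: truncate each channel's weighting function at 0.5 percent of its maximum value and count the surviving weights, each of which implies one multiplication per analysis frame. The weight matrix below is a generic placeholder and is not the gammatone, triangular, or trapezoidal weighting tabulated in Table I.

# Minimal sketch: truncate spectral-integration weights at a fraction of each
# channel's peak gain and count the multiplications per frame implied by the
# surviving (nonzero) weights.  The weights are placeholders for illustration.

import numpy as np

def truncate_weights(weights, threshold_fraction=0.005):
    """Zero out weights below threshold_fraction times each channel's maximum gain."""
    peak = weights.max(axis=1, keepdims=True)
    return np.where(weights >= threshold_fraction * peak, weights, 0.0)

def multiplications_per_frame(weights):
    """Each nonzero weight costs one multiplication when integrating one frame's spectrum."""
    return int(np.count_nonzero(weights))

num_channels, num_bins = 40, 513       # e.g., 40 channels over a 1024-point FFT
centers = np.linspace(20, num_bins - 20, num_channels)
bins = np.arange(num_bins)
# Placeholder channel shapes: broad Gaussian bumps across the FFT bins.
weights = np.exp(-0.5 * ((bins[None, :] - centers[:, None]) / 40.0) ** 2)

full_cost = multiplications_per_frame(weights)
trimmed_cost = multiplications_per_frame(truncate_weights(weights, 0.005))
print(f"Spectral integration: {full_cost} multiplications/frame untruncated, "
      f"{trimmed_cost} after 0.5% truncation")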
V. SUMMARY

In this paper we introduce power-normalized cepstral coefficients (PNCC), which we characterize as a feature set that provides better recognition accuracy than MFCC and RASTA-PLP processing in the presence of common types of additive noise and reverberation. PNCC processing is motivated by the desire to develop computationally efficient feature extraction for automatic speech recognition that is based on a pragmatic abstraction of various attributes of auditory processing, including the rate-level nonlinearity, temporal and spectral integration, and temporal masking. The processing also includes a component that implements suppression of various types of common additive noise. PNCC processing requires only about 33 percent more computation than MFCC.

Further details about the motivation for and implementation of PNCC processing are available in [47]. That reference also includes additional relevant experimental findings, including results obtained for PNCC processing using multi-style training and in combination with speaker-by-speaker MLLR.

Open-source MATLAB code for PNCC may be found at http://www.cs.cmu.edu/~robust/archive/algorithms/pncc_IEEETran. The code in this directory was used to obtain the results reported in this paper. Prof. Kazumasa Yamamoto more recently re-implemented PNCC in C; this code may be obtained at http://www.cs.cmu.edu/~robust/archive/algorithms/pncc_c.

ACKNOWLEDGMENT

The authors are grateful to Bhiksha Raj, Mark Harvilla, Kshitiz Kumar, and Kazumasa Yamamoto for many helpful discussions, and to Hynek Hermansky for very helpful comments on an earlier draft of the manuscript. A summary version of part of this paper was presented at [74].

REFERENCES

[1] L. R. Rabiner and B.-H. Juang, Fundamentals of Speech Recognition. Englewood Cliffs, NJ, USA: Prentice Hall, 1993.
[2] F. Jelinek, Statistical Methods for Speech Recognition (Language, Speech, and Communication). Cambridge, MA, USA: MIT Press, 1998.
[3] A. Acero and R. M. Stern, "Environmental robustness in automatic speech recognition," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., Albuquerque, NM, USA, Apr. 1990, vol. 2, pp. 849–852.
[4] P. J. Moreno, B. Raj, and R. M. Stern, "A vector Taylor series approach for environment-independent speech recognition," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., May 1996, pp. 733–736.
[5] P. Pujol, D. Macho, and C. Nadeu, "On real-time mean-and-variance normalization of speech recognition features," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., May 2006, vol. 1, pp. 773–776.

[6] R. M. Stern, B. Raj, and P. J. Moreno, "Compensation for environmental degradation in automatic speech recognition," in Proc. ESCA Tut. Res. Workshop Robust Speech Recognit. Unknown Commun. Channels, Apr. 1997, pp. 33–42.
[7] R. Singh, R. M. Stern, and B. Raj, "Signal and feature compensation methods for robust speech recognition," in Noise Reduction in Speech Applications, G. M. Davis, Ed. Boca Raton, FL, USA: CRC Press, 2002, pp. 219–244.
[8] Speech Processing, Transmission and Quality Aspects (STQ); Distributed Speech Recognition; Advanced Front-End Feature Extraction Algorithm; Compression Algorithms, European Telecommunications Standards Institute ES 202 050, Rev. 1.1.5, Jan. 2007.
[9] S. Molau, M. Pitz, and H. Ney, "Histogram based normalization in the acoustic feature space," in Proc. IEEE Workshop Autom. Speech Recognit. Understand., Nov. 2001, pp. 21–24.
[10] H. Misra, S. Ikbal, H. Bourlard, and H. Hermansky, "Spectral entropy based feature for robust ASR," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., May 2004, pp. 193–196.
[11] B. Raj, V. N. Parikh, and R. M. Stern, "The effects of background music on speech recognition accuracy," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., Apr. 1997, vol. 2, pp. 851–854.
[12] S. B. Davis and P. Mermelstein, "Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences," IEEE Trans. Acoust., Speech, Signal Process., vol. 28, no. 4, pp. 357–366, Aug. 1980.
[13] H. Hermansky, "Perceptual linear predictive (PLP) analysis of speech," J. Acoust. Soc. Amer., vol. 87, no. 4, pp. 1738–1752, Apr. 1990.
[14] S. Ganapathy, S. Thomas, and H. Hermansky, "Robust spectro-temporal features based on autoregressive models of Hilbert envelopes," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., Mar. 2010, pp. 4286–4289.
[15] M. Heckmann, X. Domont, F. Joublin, and C. Goerick, "A hierarchical framework for spectro-temporal feature extraction," Speech Commun., vol. 53, no. 5, pp. 736–752, May/Jun. 2011.
[16] B. T. Meyer and B. Kollmeier, "Robustness of spectro-temporal features against intrinsic and extrinsic variations in automatic speech recognition," Speech Commun., vol. 53, no. 5, pp. 753–767, May/Jun. 2011.
[17] N. Mesgarani, M. Slaney, and S. Shamma, "Discrimination of speech from nonspeech based on multiscale spectro-temporal modulations," IEEE Trans. Audio, Speech, Lang. Process., vol. 14, no. 3, pp. 920–929, May 2006.
[18] M. Kleinschmidt, "Localized spectro-temporal features for automatic speech recognition," in Proc. INTERSPEECH, Sep. 2003, pp. 2573–2576.
[19] H. Hermansky and F. Valente, "Hierarchical and parallel processing of modulation spectrum for ASR applications," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., Mar. 2008, pp. 4165–4168.
[20] S. Y. Zhao and N. Morgan, "Multi-stream spectro-temporal features for robust speech recognition," in Proc. INTERSPEECH, Sep. 2008, pp. 898–901.
[21] C. Kim and R. M. Stern, "Feature extraction for robust speech recognition using a power-law nonlinearity and power-bias subtraction," in Proc. INTERSPEECH, Sep. 2009, pp. 28–31.
[22] C. Kim and R. M. Stern, "Feature extraction for robust speech recognition based on maximizing the sharpness of the power distribution and on power flooring," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., Mar. 2010, pp. 4574–4577.
[23] D.-S. Kim, S.-Y. Lee, and R. M. Kil, "Auditory processing of speech signals for robust speech recognition in real-world noisy environments," IEEE Trans. Speech Audio Process., vol. 7, no. 1, pp. 55–69, Jan. 1999.
[24] H. Hermansky and N. Morgan, "RASTA processing of speech," IEEE Trans. Speech Audio Process., vol. 2, no. 4, pp. 578–589, Oct. 1994.
[25] U. H. Yapanel and J. H. L. Hansen, "A new perceptually motivated MVDR-based acoustic front-end (PMVDR) for robust automatic speech recognition," Speech Commun., vol. 50, no. 2, pp. 142–152, Feb. 2008.
[26] F. Müller and A. Mertins, "Contextual invariant-integration features for improved speaker-independent speech recognition," Speech Commun., vol. 53, no. 6, pp. 830–841, Jul. 2011.
[27] B. Gajic and K. K. Paliwal, "Robust parameters for speech recognition based on subband spectral centroid histograms," in Proc. Eurospeech, Sep. 2001, pp. 591–594.
[28] F. Kelly and N. Harte, "A comparison of auditory features for robust speech recognition," in Proc. EUSIPCO, Aug. 2010, pp. 1968–1972.
[29] F. Kelly and N. Harte, "Auditory features revisited for robust speech recognition," in Proc. Int. Conf. Pattern Recognit., Aug. 2010, pp. 4456–4459.
[30] J. K. Siqueira and A. Alcaim, "Comparação dos atributos MFCC, SSCH e PNCC para reconhecimento robusto de voz contínua" [Comparison of the MFCC, SSCH, and PNCC features for robust continuous speech recognition], in Proc. XXIX Simpósio Brasileiro de Telecomunicações, Oct. 2011.
[31] G. Sárosi, M. Mozsáry, B. Tarján, A. Balog, P. Mihajlik, and T. Fegyó, "Recognition of multiple language voice navigation queries in traffic situations," in Proc. COST Int. Conf., Sep. 2010, pp. 199–213.
[32] G. Sárosi, M. Mozsáry, P. Mihajlik, and T. Fegyó, "Comparison of feature extraction methods for speech recognition in noise-free and in traffic noise environment," in Proc. Speech Technol. Hum.-Comput. Dial. (SpeD), May 2011, pp. 1–8.
[33] A. Fazel and S. Chakrabartty, "Sparse auditory reproducing kernel (SPARK) features for noise-robust speech recognition," IEEE Trans. Audio, Speech, Lang. Process., vol. 20, no. 4, pp. 1362–1371, May 2012.
[34] M. J. Harvilla and R. M. Stern, "Histogram-based subband power warping and spectral averaging for robust speech recognition under matched and multistyle training," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., May 2012.
[35] R. M. Stern and N. Morgan, "Features based on auditory physiology and perception," in Techniques for Noise Robustness in Automatic Speech Recognition, T. Virtanen, B. Raj, and R. Singh, Eds. Hoboken, NJ, USA: Wiley, 2012.
[36] R. M. Stern and N. Morgan, "Hearing is believing: Biologically-inspired feature extraction for robust automatic speech recognition," IEEE Signal Process. Mag., 2012, submitted for publication.
[37] S. F. Boll, "Suppression of acoustic noise in speech using spectral subtraction," IEEE Trans. Acoust., Speech, Signal Process., vol. 27, no. 2, pp. 113–120, Apr. 1979.
[38] R. D. Patterson, K. Robinson, J. Holdsworth, D. McKeown, C. Zhang, and M. H. Allerhand, "Complex sounds and auditory images," in Auditory Physiology and Perception, Y. Cazals, L. Demany, and K. Horner, Eds. New York, NY, USA: Pergamon, 1992, pp. 429–446.
[39] B. C. J. Moore and B. R. Glasberg, "A revision of Zwicker's loudness model," Acustica–Acta Acustica, vol. 82, pp. 335–345, 1996.
[40] M. Slaney, "Auditory toolbox version 2," Interval Res. Corp., Tech. Rep. no. 10, 1998 [Online]. Available: http://cobweb.ecn.purdue.edu/~malcolm/interval/1998-010/
[41] D. Gelbart and N. Morgan, "Evaluating long-term spectral subtraction for reverberant ASR," in Proc. IEEE Workshop Autom. Speech Recognit. Understand., 2001, pp. 103–106.
[42] C. Kim and R. M. Stern, "Power function-based power distribution normalization algorithm for robust speech recognition," in Proc. IEEE Autom. Speech Recognit. Understand. Workshop, Dec. 2009, pp. 188–193.
[43] H. Hermansky and S. Sharma, "Temporal patterns (TRAPS) in ASR of noisy speech," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., 1999.
[44] M. Athineos, H. Hermansky, and D. P. W. Ellis, "LP-TRAP: Linear predictive temporal patterns," in Proc. Int. Conf. Spoken Lang. Process., 2004, pp. 949–952.
[45] S. Ganapathy, S. Thomas, and H. Hermansky, "Recognition of reverberant speech using frequency domain linear prediction," IEEE Signal Process. Lett., vol. 15, pp. 681–684, Nov. 2008.
[46] S. Rath, D. Povey, K. Veselý, and J. Černocký, "Improved feature processing for deep neural networks," in Proc. INTERSPEECH, Lyon, France, Aug. 2013.
[47] C. Kim, "Signal processing for robust speech recognition motivated by auditory processing," Ph.D. dissertation, Carnegie Mellon Univ., Pittsburgh, PA, USA, Dec. 2010 [Online]. Available: http://www.cs.cmu.edu/~robust/thesis/ChanwooKimPhDThesis.pdf
[48] B. E. D. Kingsbury, N. Morgan, and S. Greenberg, "Robust speech recognition using the modulation spectrogram," Speech Commun., vol. 25, nos. 1–3, pp. 117–132, Aug. 1998.
[49] T. Dau, D. Püschel, and A. Kohlrausch, "A quantitative model of the effective signal processing in the auditory system. I. Model structure," J. Acoust. Soc. Amer., vol. 99, no. 6, pp. 3615–3622, 1996.
[50] T. Chi, Y. Gao, M. C. Guyton, P. Ru, and S. A. Shamma, "Spectro-temporal modulation transfer functions and speech intelligibility," J. Acoust. Soc. Amer., vol. 106, pp. 719–732, 1999.
[51] H. G. Hirsch and C. Ehrlicher, "Noise estimation techniques for robust speech recognition," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., May 1995, pp. 153–156.
[52] C. Kim and R. M. Stern, "Nonlinear enhancement of onset for robust speech recognition," in Proc. INTERSPEECH, Sep. 2010, pp. 2058–2061.

[53] C. Lemyre, M. Jelinek, and R. Lefebvre, "New approach to voiced onset detection in speech signal and its application for frame error concealment," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., May 2008, pp. 4757–4760.
[54] S. R. M. Prasanna and P. Krishnamoorthy, "Vowel onset point detection using source, spectral peaks, and modulation spectrum energies," IEEE Trans. Audio, Speech, Lang. Process., vol. 17, no. 4, pp. 556–565, May 2009.
[55] K. D. Martin, "Echo suppression in a computational model of the precedence effect," in Proc. IEEE ASSP Workshop Appl. Signal Process. Audio Acoust., Oct. 1997.
[56] C. Kim, K. Kumar, and R. M. Stern, "Binaural sound source separation motivated by auditory processing," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., May 2011, pp. 5072–5075.
[57] T. S. Gunawan and E. Ambikairajah, "A new forward masking model and its application to speech enhancement," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., May 2006, pp. 149–152.
[58] W. Jesteadt, S. P. Bacon, and J. R. Lehman, "Forward masking as a function of frequency, masker level, and signal delay," J. Acoust. Soc. Amer., vol. 71, no. 4, pp. 950–962, Apr. 1982.
[59] C. Kim, K. Kumar, B. Raj, and R. M. Stern, "Signal separation for robust speech recognition based on phase difference information obtained in the frequency domain," in Proc. INTERSPEECH, Sep. 2009, pp. 2495–2498.
[60] C. Kim, K. Kumar, and R. M. Stern, "Robust speech recognition using small power boosting algorithm," in Proc. IEEE Autom. Speech Recognit. Understand. Workshop, Dec. 2009, pp. 243–248.
[61] B. Raj and R. M. Stern, "Missing-feature methods for robust automatic speech recognition," IEEE Signal Process. Mag., vol. 22, no. 5, pp. 101–116, Sep. 2005.
[62] S. Seneff, "A joint synchrony/mean-rate model of auditory speech processing," J. Phonetics, vol. 16, no. 1, pp. 55–76, Jan. 1988.
[63] K. Wang and S. A. Shamma, "Self-normalization and noise-robustness in early auditory representations," IEEE Trans. Speech Audio Process., vol. 2, no. 3, pp. 421–435, Jul. 1994.
[64] M. G. Heinz, X. Zhang, I. C. Bruce, and L. H. Carney, "Auditory-nerve model for predicting performance limits of normal and impaired listeners," Acoust. Res. Lett. Online, vol. 2, no. 3, pp. 91–96, Jul. 2001.
[65] J. Tchorz and B. Kollmeier, "A model of auditory perception as front end for automatic speech recognition," J. Acoust. Soc. Amer., vol. 106, no. 4, pp. 2040–2050, 1999.
[66] X. Zhang, M. G. Heinz, I. C. Bruce, and L. H. Carney, "A phenomenological model for the responses of auditory-nerve fibers: I. Nonlinear tuning with compression and suppression," J. Acoust. Soc. Amer., vol. 109, no. 2, pp. 648–670, Feb. 2001.
[67] X. Zhang, M. G. Heinz, I. C. Bruce, and L. H. Carney, "A phenomenological model for the responses of auditory-nerve fibers: I. Nonlinear tuning with compression and suppression," J. Acoust. Soc. Amer., vol. 109, no. 2, pp. 648–670, Feb. 2001.
[68] S. S. Stevens, "On the psychophysical law," Psychol. Rev., vol. 64, no. 3, pp. 153–181, 1957.
[69] C. Kim and R. M. Stern, "Power-normalized cepstral coefficients (PNCC) for robust speech recognition," IEEE Trans. Audio, Speech, Lang. Process., 2012, submitted for publication.
[70] S. G. McGovern, A Model for Room Acoustics [Online]. Available: http://2pi.us/rir.html
[71] J. Allen and D. Berkley, "Image method for efficiently simulating small-room acoustics," J. Acoust. Soc. Amer., vol. 65, no. 4, pp. 943–950, Apr. 1979.
[72] CMU Sphinx Consortium, CMU Sphinx Open Source Toolkit for Speech Recognition: Downloads [Online]. Available: http://cmusphinx.sourceforge.net/wiki/download/
[73] D. Ellis. (2006). PLP and RASTA (and MFCC, and inversion) in MATLAB Using melfcc.m and invmelfcc.m [Online]. Available: http://labrosa.ee.columbia.edu/matlab/rastamat/
[74] C. Kim and R. M. Stern, "Power-normalized cepstral coefficients (PNCC) for robust speech recognition," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., Mar. 2012.

Chanwoo Kim (M'xx) received the B.S. and M.S. degrees in electrical engineering from Seoul National University, Seoul, South Korea, in 1998 and 2001, respectively, and the Ph.D. degree from the Language Technologies Institute, Carnegie Mellon University School of Computer Science, Pittsburgh, PA, USA, in 2010. He has been a Software Engineer with Google, Inc., since 2013. He was a Speech Scientist and a Software Development Engineer with Microsoft from 2011 to 2013. His doctoral research was focused on enhancing the robustness of automatic speech recognition systems in noisy environments. Toward this end, he has developed a number of different algorithms for single-microphone, dual-microphone, and multiple-microphone applications, which have been applied to various real-world systems. Between 2003 and 2005, he was a Senior Research Engineer with LG Electronics, where he worked primarily on embedded signal processing and protocol stacks for multimedia systems. Prior to his employment at LG, he worked for EdumediaTek and SK Teletech as an R&D Engineer.

Richard M. Stern (F'xx) received the S.B. degree from the Massachusetts Institute of Technology, Cambridge, MA, USA, in 1970, the M.S. degree from the University of California, Berkeley, Berkeley, CA, USA, in 1972, and the Ph.D. degree from MIT in 1977, all in electrical engineering. He has been on the faculty of Carnegie Mellon University, Pittsburgh, PA, USA, since 1977, where he is currently a Professor with the Department of Electrical and Computer Engineering, the Department of Computer Science, and the Language Technologies Institute. He is also a Lecturer with the School of Music. His research interests include spoken language systems, where he is particularly concerned with the development of techniques with which automatic speech recognition can be made more robust with respect to changes in environment and acoustical ambience. He has also developed sentence parsing and speaker adaptation algorithms for earlier CMU speech systems. In addition to his work in speech recognition, he also maintains an active research program in psychoacoustics, where he is best known for theoretical work in binaural perception. He is a Fellow of the Acoustical Society of America and the International Speech Communication Association (ISCA). He is also a member of the Audio Engineering Society. He was the recipient of the Allen Newell Award for Research Excellence in 1992.
He served as the General Chair of Interspeech 2006 and as the 2008–2009 ISCA Distinguished Lecturer.