1734 IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 19, NO. 6, AUGUST 2011

On Regularization in Adaptive Filtering

Jacob Benesty, Constantin Paleologu, Member, IEEE, and Silviu Ciochină, Member, IEEE

Abstract: Regularization plays a fundamental role in adaptive filtering. An adaptive filter that is not properly regularized will perform very poorly. In spite of this, regularization is, in our opinion, underestimated and rarely discussed in the adaptive filtering literature. There are, very likely, many different ways to regularize an adaptive filter. In this paper, we propose one possible way to do it, based on a condition that intuitively makes sense. From this condition, we show how to regularize four important algorithms: the normalized least-mean-square (NLMS), the signed-regressor NLMS (SR-NLMS), the improved proportionate NLMS (IPNLMS), and the SR-IPNLMS.

Index Terms: Adaptive filters, echo cancellation, improved proportionate NLMS (IPNLMS), normalized least-mean-square (NLMS), regularization, signed-regressor NLMS (SR-NLMS), SR-IPNLMS.

I. INTRODUCTION

REGULARIZATION plays a fundamental role in all ill-posed problems, especially when the observation data are noisy, which is usually the case in practice. In adaptive filtering, we always have a linear system of equations (overdetermined or underdetermined) to solve, explicitly or implicitly, so that we face an ill-conditioned or a rank-deficient problem [1]. As a result, regularization is an important part of the design of any adaptive filter if we want it to behave properly.

Let us denote by \delta the regularization parameter. In many adaptive filters [2], [3], this regularization is chosen as

\delta = \beta \sigma_x^2,   (1)

where \sigma_x^2 = E[x^2(n)] is the variance of the zero-mean input signal x(n), E[\cdot] denotes mathematical expectation, and \beta is a positive constant. In practice, though, \beta is more a variable that depends on the level of the additive noise: the stronger the noise, the larger the value of \beta.
In the rest of this work, we will refer to \beta as the normalized (with respect to the variance of the input signal) regularization parameter. The regularization proposed in (1) seems to work well in practice (with, of course, a good choice of \beta), since the misalignment, which is a distance measure between the true impulse response and the one estimated with an adaptive algorithm, decreases smoothly with time and converges to a stable and small value. Without regularization, the misalignment of the adaptive filter may fluctuate significantly and may never converge. However, (1) was never really justified from a theoretical point of view. Even popular books [4], [5] discuss the regularization problem in a very superficial way and refer to \delta as a small positive number, which is not really true. Indeed, in our experience, the regularization parameter can vary from very small to very large, depending on the level of the additive noise. For the normalized least-mean-square (NLMS) algorithm, for example, we often take \beta = 20, but we also know from experience that if the noise is very strong, \beta should be taken much higher than 20. Many questions then arise: where does this value of \beta come from? Can it be justified? Can we find an optimal \beta, and in which sense?

Manuscript received September 14, 2010; revised November 22, 2010; accepted November 23, 2010. Date of publication December 06, 2010; date of current version June 01, 2011. This work was supported by the UEFISCSU Romania under Grant PN-II-RU-TE 7/05.08.2010. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Jingdong Chen. J. Benesty is with INRS-EMT, University of Quebec, Montreal, QC H5A 1K6, Canada (e-mail: benesty@emt.inrs.ca). C. Paleologu and S. Ciochină are with the Telecommunications Department, University Politehnica of Bucharest, Bucharest 060042, Romania (e-mail: pale@comm.pub.ro; silviu@comm.pub.ro). Digital Object Identifier 10.1109/TASL.2010.2097251
What about other adaptive filters? In this paper, we are not interested in a variable regularization parameter, as discussed in many publications [6]-[8], although we could proceed that way. We are mainly interested in a constant regularization that guarantees a stable behavior of the adaptive filter. As a consequence, with an appropriate \delta, we can fairly compare different adaptive algorithms. There are, very likely, many different ways to regularize an adaptive filter. In this study, we show how to derive a regularization parameter from a condition that intuitively makes sense. We discuss the regularization of four important algorithms: the NLMS [4], [5], the signed-regressor NLMS (SR-NLMS) [2], [9], the improved proportionate NLMS (IPNLMS) [10], which is an improved version of the PNLMS [11], and the SR-IPNLMS [2].

II. SIGNAL MODEL

We have the observed or desired signal

d(n) = h^T x(n) + w(n) = y(n) + w(n),   (2)

where n is the discrete-time index,

h = [ h_0  h_1  \cdots  h_{L-1} ]^T   (3)

is the impulse response (of length L) of the system that we need to identify, the superscript T denotes the transpose of a vector or a matrix,

x(n) = [ x(n)  x(n-1)  \cdots  x(n-L+1) ]^T   (4)

is a vector containing the L most recent samples of the zero-mean input signal x(n), and w(n) is a zero-mean additive noise signal, which is independent of x(n). The signal y(n) = h^T x(n) is called the echo in the context of echo cancellation.

1558-7916/$26.00 © 2010 IEEE
BENESTY et al.: ON REGULARIZATION IN ADAPTIVE FILTERING 1735

From (2), we define the echo-to-noise ratio (ENR) [2], which is also the signal-to-noise ratio (SNR), as

ENR = \sigma_y^2 / \sigma_w^2,   (5)

where \sigma_y^2 = E[y^2(n)] = h^T R h and \sigma_w^2 = E[w^2(n)] are the variances of y(n) and w(n), respectively, and R = E[x(n) x^T(n)] is the correlation matrix of x(n). Our objective is then to estimate or identify h with an adaptive filter \hat{h}(n) in such a way that, for a reasonable value of n, we have for the (normalized) misalignment

\| h - \hat{h}(n) \|_2^2 / \| h \|_2^2 \le \iota,   (6)

where \iota is a predetermined small positive number and \| \cdot \|_2 is the \ell_2 norm.

III. REGULARIZATION OF THE NLMS ALGORITHM

The classical NLMS algorithm is summarized by the following two expressions [2]-[5]:

e(n) = d(n) - x^T(n) \hat{h}(n-1),   (7)

\hat{h}(n) = \hat{h}(n-1) + \alpha x(n) e(n) / [ x^T(n) x(n) + \delta_{NLMS} ],   (8)

where \alpha is the normalized step-size parameter and \delta_{NLMS} is the regularization parameter of the NLMS. It is easy to see that the update equation can be rewritten as

\hat{h}(n) = { I_L - \alpha x(n) x^T(n) / [ x^T(n) x(n) + \delta_{NLMS} ] } \hat{h}(n-1) + \alpha h'(n),   (10)

where I_L is the identity matrix of size L x L and

h'(n) = x(n) d(n) / [ x^T(n) x(n) + \delta_{NLMS} ]   (12)

is the correction component of the NLMS algorithm, which depends on the new observation d(n). Note that the first vector in (10) does not depend on the noise signal. In fact, h'(n) is one of the solutions of the underdetermined linear system of one equation, x^T(n) h = d(n); clearly, it is not the optimal one. The regularized version of the minimum \ell_2-norm solution of that linear system is obtained by solving

\min_h { [ d(n) - x^T(n) h ]^2 + \delta_{NLMS} \| h \|_2^2 }.   (13)

So the other vector in (10), involving \hat{h}(n-1), can be seen as a good initialization of the adaptive filter. The question now is how to find \delta_{NLMS}. Since d(n) - x^T(n) h'(n) is the error signal between the desired signal and the estimated signal obtained from the filter optimized in (13), we should find \delta_{NLMS} in such a way that the expected value of the squared error is equal to the variance of the noise, i.e.,

E{ [ d(n) - x^T(n) h'(n) ]^2 } = \sigma_w^2.   (14)

This is reasonable if we want to attenuate the effects of the noise in the estimator. To derive the optimal \delta_{NLMS} according to (14), we assume in the rest that L >> 1 and x(n) is stationary. As a result,

x^T(n) x(n) \approx L \sigma_x^2.   (15)

Since d(n) - x^T(n) h'(n) = \delta_{NLMS} d(n) / [ x^T(n) x(n) + \delta_{NLMS} ] and E[d^2(n)] = (1 + ENR) \sigma_w^2, developing (14) and using (15), we easily derive the quadratic equation

ENR \cdot \delta_{NLMS}^2 - 2 L \sigma_x^2 \delta_{NLMS} - L^2 \sigma_x^4 = 0,   (16)

from which we deduce the obvious solution

\delta_{NLMS} = ( L \sigma_x^2 / ENR ) [ 1 + \sqrt{1 + ENR} ] = \beta_{NLMS} \sigma_x^2,   (17)

where

\beta_{NLMS} = L [ 1 + \sqrt{1 + ENR} ] / ENR   (18)

is the normalized regularization parameter of the NLMS.
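The closed-form solution above is straightforward to evaluate numerically. The short Python sketch below (function names are ours, not from the paper) computes the regularization of (17)-(18) and checks that it indeed solves the quadratic equation (16):

```python
import math

def beta_nlms(L, enr):
    """Normalized NLMS regularization (18): beta = L*(1 + sqrt(1 + ENR))/ENR."""
    return L * (1.0 + math.sqrt(1.0 + enr)) / enr

def delta_nlms(L, enr, sigma_x2):
    """Regularization parameter (17): delta = beta * sigma_x^2."""
    return beta_nlms(L, enr) * sigma_x2

# Example: L = 512, ENR = 30 dB (i.e., 1000 on a linear scale), unit input variance.
L, enr, sigma_x2 = 512, 10.0 ** (30 / 10), 1.0
delta = delta_nlms(L, enr, sigma_x2)

# The solution must satisfy the quadratic (16):
# ENR*delta^2 - 2*L*sigma_x^2*delta - L^2*sigma_x^4 = 0.
residual = enr * delta**2 - 2 * L * sigma_x2 * delta - (L * sigma_x2) ** 2
```

For these values, `beta_nlms(512, 1000)` is close to 16.7, which matches the discussion that follows.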
We see that \delta_{NLMS} depends on three elements: the length L of the adaptive filter, the variance \sigma_x^2 of the input signal, and the ENR. In both network and acoustic echo cancellation, the first two elements (L and \sigma_x^2) are known, while the ENR is often roughly known or can be estimated. Therefore, it is not hard to find a good value for \delta_{NLMS} in these applications. For example, we often take L = 512 in simulations and ENR = 30 dB. With these values, we find that \beta_{NLMS} \approx 16.7, which is very close to the value \beta = 20 discussed in the introduction; that ad hoc choice is therefore justified here, as well as in our simulations. Furthermore, we have

\lim_{ENR \to \infty} \beta_{NLMS} = 0   (19)

and

\lim_{ENR \to 0} \beta_{NLMS} = \infty,   (20)

which is what we desire. It is important to check the evolution of \beta_{NLMS} as a function of the ENR. In Fig. 1, the normalized regularization parameter (18) is plotted for L = 512 with different values of the ENR (between 0 and 50 dB). As expected, the importance of \beta_{NLMS} becomes more apparent for low ENRs. Also, as can be noticed from the detailed view in Fig. 2, the usual ad hoc choice \beta = 20 corresponds to an ENR close to 30 dB, which is also a common choice in many simulation scenarios related to echo cancellation.
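The trend of Fig. 1 can be reproduced directly from (18). The sketch below (our own, using the formula above) sweeps the ENR from 0 to 50 dB for L = 512:

```python
import math

def beta_nlms(L, enr):
    """Normalized NLMS regularization (18)."""
    return L * (1.0 + math.sqrt(1.0 + enr)) / enr

# beta_NLMS for L = 512 over ENR = 0, 10, ..., 50 dB, as in Fig. 1.
L = 512
betas = {enr_db: beta_nlms(L, 10.0 ** (enr_db / 10)) for enr_db in range(0, 51, 10)}
# beta decreases monotonically with the ENR; around 30 dB it is close to the
# classical ad hoc choice beta = 20, and at 0 dB it is three orders of
# magnitude larger.
```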
Fig. 1. Normalized regularization parameter \beta_{NLMS} as a function of the ENR, with L = 512. The ENR varies from 0 to 50 dB.

Fig. 2. Normalized regularization parameter \beta_{NLMS} as a function of the ENR, with L = 512. The ENR varies from 20 to 50 dB.

IV. REGULARIZATION OF THE SR-NLMS ALGORITHM

The equations of the SR-NLMS algorithm are [2], [9]

e(n) = d(n) - x^T(n) \hat{h}(n-1),   (21)

\hat{h}(n) = \hat{h}(n-1) + \alpha sgn[x(n)] e(n) / { x^T(n) sgn[x(n)] + \delta_{SR-NLMS} },   (22)

where sgn[x(n)] takes the sign of each component of x(n) and \delta_{SR-NLMS} is the regularization parameter of the SR-NLMS. This algorithm is very interesting from a practical point of view because its performance is equivalent to that of the NLMS while it requires fewer multiplications at each iteration, as can be noticed in (22). The SR-NLMS algorithm can be decomposed as in (10), but now

h'(n) = sgn[x(n)] d(n) / { x^T(n) sgn[x(n)] + \delta_{SR-NLMS} }   (24)

is the correction component of the SR-NLMS algorithm, which can be seen as an approximate solution of (13). For L >> 1 and a stationary signal x(n), we have

x^T(n) sgn[x(n)] = \| x(n) \|_1 \approx L E[ |x(n)| ].   (25)

If we further assume that x(n) is Gaussian, then using Price's theorem we find that

E[ |x(n)| ] = \sigma_x \sqrt{2/\pi},   (26)

so that

x^T(n) sgn[x(n)] \approx \lambda L \sigma_x,  with  \lambda = \sqrt{2/\pi}.   (27)

As a result,

d(n) - x^T(n) h'(n) \approx \delta_{SR-NLMS} d(n) / ( \lambda L \sigma_x + \delta_{SR-NLMS} ).   (28)

Using the condition (14) and (28), we easily derive the quadratic equation

ENR \cdot \delta_{SR-NLMS}^2 - 2 \lambda L \sigma_x \delta_{SR-NLMS} - \lambda^2 L^2 \sigma_x^2 = 0.   (29)

Our solution is then

\delta_{SR-NLMS} = ( \lambda L \sigma_x / ENR ) [ 1 + \sqrt{1 + ENR} ] = \beta_{SR-NLMS} \sigma_x,   (30)

where

\beta_{SR-NLMS} = \lambda L [ 1 + \sqrt{1 + ENR} ] / ENR   (31)

is the normalized regularization parameter of the SR-NLMS.

V. REGULARIZATION OF THE IPNLMS ALGORITHM

When the target impulse response is sparse, it is possible to take advantage of this sparsity to improve the performance of the classical adaptive filters. Duttweiler was one of the first researchers to come up with an elegant idea, more than a decade ago, by proposing the PNLMS algorithm [3], [11]. The idea behind the PNLMS is to update each coefficient of the filter independently of the others by adjusting the adaptation step size in proportion to the magnitude of the estimated filter coefficient. It redistributes the adaptation gains among all coefficients and emphasizes the large ones (in magnitude) in order to speed up their convergence and, consequently, achieve a fast initial convergence rate. The IPNLMS [3], [10] is an improved version of
the PNLMS and works very well even if the impulse response is not sparse, which is not the case for the PNLMS. The IPNLMS expressions are

e(n) = d(n) - x^T(n) \hat{h}(n-1),   (32)

\hat{h}(n) = \hat{h}(n-1) + \alpha Q(n-1) x(n) e(n) / [ x^T(n) Q(n-1) x(n) + \delta_{IPNLMS} ],   (33)

where \delta_{IPNLMS} is the regularization parameter of the IPNLMS,

Q(n-1) = diag[ q_0(n-1), q_1(n-1), ..., q_{L-1}(n-1) ]   (34)

is an L x L diagonal matrix, and

q_l(n-1) = (1 - \kappa) / (2L) + (1 + \kappa) |\hat{h}_l(n-1)| / [ 2 \| \hat{h}(n-1) \|_1 + \varepsilon ],   (35)

where \kappa (-1 <= \kappa < 1) is a parameter that controls the amount of proportionality in the IPNLMS, \varepsilon is a small positive number that avoids division by zero, and \| \cdot \|_1 is the \ell_1 norm. For \kappa = -1, it can be easily checked that the IPNLMS and NLMS algorithms are identical. For \kappa close to 1, the IPNLMS behaves like the PNLMS. In practice, good choices for \kappa are -0.5 and 0. In simulations, with these recommended choices of \kappa, the IPNLMS algorithm always performs better than the NLMS and the PNLMS.

The update equation of the IPNLMS can be rewritten as in (10), where now

h'(n) = Q(n-1) x(n) d(n) / [ x^T(n) Q(n-1) x(n) + \delta_{IPNLMS} ]   (38)

is the correction component of the IPNLMS algorithm, which depends on the new observation d(n). Note that the recursive term does not depend on the noise signal. It can be shown [12] that h'(n) is a good approximation of the solution of the optimization problem

\min_h { [ d(n) - x^T(n) h ]^2 + \delta_{IPNLMS} h^T Q^{-1}(n-1) h }.   (39)

The previous optimization is the regularized version of the minimum \ell_1-norm solution of the linear system of one equation. Therefore, we can use the condition (14) to derive \delta_{IPNLMS}. For L >> 1 and a stationary signal x(n), we have

x^T(n) Q(n-1) x(n) \approx \sigma_x^2 \sum_{l=0}^{L-1} q_l(n-1) \approx \sigma_x^2,   (40)

since the proportionate gains q_l(n-1) sum (approximately) to one. Using the condition (14) and (40), we easily derive the quadratic equation

ENR \cdot \delta_{IPNLMS}^2 - 2 \sigma_x^2 \delta_{IPNLMS} - \sigma_x^4 = 0,   (41)

from which the desired solution is

\delta_{IPNLMS} = ( \sigma_x^2 / ENR ) [ 1 + \sqrt{1 + ENR} ] = \beta_{IPNLMS} \sigma_x^2,   (42)

where

\beta_{IPNLMS} = [ 1 + \sqrt{1 + ENR} ] / ENR   (43)

is the normalized regularization parameter of the IPNLMS. It is interesting to observe that the regularization does not depend on the parameter \kappa. In fact, the regularization of the IPNLMS is equivalent to the regularization of the NLMS up to the scaling factor 1/L, which is due to the definition of Q(n-1) [see (35)].

VI. REGULARIZATION OF THE SR-IPNLMS ALGORITHM

The SR-PNLMS was proposed in [2]. The extension of the SR principle to the IPNLMS is straightforward. Therefore, the SR-IPNLMS is summarized by the following two equations:

e(n) = d(n) - x^T(n) \hat{h}(n-1),   (44)

\hat{h}(n) = \hat{h}(n-1) + \alpha Q(n-1) sgn[x(n)] e(n) / { x^T(n) Q(n-1) sgn[x(n)] + \delta_{SR-IPNLMS} },   (45)

where \delta_{SR-IPNLMS} is the regularization parameter of the SR-IPNLMS and Q(n-1) is defined in the previous section.
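The proportionate gains of (35) are simple to compute; the sketch below (function name and toy values are ours) also illustrates the two properties used above: the gains sum to approximately one, and \kappa = -1 recovers the uniform NLMS weighting 1/L.

```python
import numpy as np

def ipnlms_gains(h_hat, kappa, eps=1e-8):
    """Proportionate gains of (35):
    q_l = (1 - kappa)/(2L) + (1 + kappa)*|h_l| / (2*||h||_1 + eps)."""
    L = len(h_hat)
    return (1.0 - kappa) / (2.0 * L) + (1.0 + kappa) * np.abs(h_hat) / (
        2.0 * np.linalg.norm(h_hat, 1) + eps
    )

h_hat = np.array([0.9, 0.0, 0.0, -0.1])   # a toy sparse estimate
q_prop = ipnlms_gains(h_hat, kappa=0.0)   # proportionate mix
q_nlms = ipnlms_gains(h_hat, kappa=-1.0)  # kappa = -1 reduces to the NLMS
```

The large coefficient receives a much larger gain than the inactive ones, which is exactly the mechanism that speeds up initial convergence on sparse responses.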
Using the approximation

x^T(n) Q(n-1) sgn[x(n)] \approx \lambda \sigma_x,   (46)

where \lambda = \sqrt{2/\pi} is defined in (27), together with the condition (14), we easily find the regularization parameter

\delta_{SR-IPNLMS} = ( \lambda \sigma_x / ENR ) [ 1 + \sqrt{1 + ENR} ] = \beta_{SR-IPNLMS} \sigma_x,   (47)

where

\beta_{SR-IPNLMS} = \lambda [ 1 + \sqrt{1 + ENR} ] / ENR   (48)

is the normalized regularization parameter of the SR-IPNLMS.
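The four derived regularization parameters share the factor (1 + sqrt(1 + ENR))/ENR and differ only in the scaling in front of it. The sketch below (our own consolidation of (18), (31), (43), and (48)) makes the relations explicit:

```python
import math

LAMBDA = math.sqrt(2.0 / math.pi)  # the factor of (27)

def betas(L, enr):
    """Normalized regularizations (18), (31), (43), (48).
    Note that the NLMS/IPNLMS values scale sigma_x^2, while the
    signed-regressor variants scale sigma_x."""
    common = (1.0 + math.sqrt(1.0 + enr)) / enr
    return {
        "NLMS": L * common,                  # (18)
        "SR-NLMS": LAMBDA * L * common,      # (31)
        "IPNLMS": common,                    # (43)
        "SR-IPNLMS": LAMBDA * common,        # (48)
    }

b = betas(512, 10.0 ** (30 / 10))
# IPNLMS = NLMS / L, and each SR variant is sqrt(2/pi) times its counterpart.
```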
Fig. 3. Acoustic impulse response used in simulations.

VII. SIMULATIONS

Simulations were performed in the context of acoustic echo cancellation. This application is basically a system identification problem [4], where an adaptive filter is used to identify an unknown system, i.e., the acoustic echo path between the loudspeaker and the microphone. In this context, the level of the background noise (i.e., the noise that corrupts the microphone signal) can be high. As a result, low ENR values can be expected and, consequently, the importance of the regularization parameter becomes more apparent.

The measured acoustic impulse response used in simulations is depicted in Fig. 3. It has 512 coefficients, and the same length is used for the adaptive filters (i.e., L = 512); the sampling rate is 8 kHz. The far-end (input) signal x(n) is either a white Gaussian noise or a speech sequence. An independent white Gaussian noise w(n) is added to the echo signal y(n), with different values of the ENR. Only the single-talk case is considered, i.e., the near-end talker is absent. In order to evaluate the tracking capabilities of the algorithms, an echo path change scenario is simulated by shifting the impulse response to the right by 12 samples. The performance is evaluated in terms of the normalized misalignment (in dB), defined as

20 \log_{10} [ \| h - \hat{h}(n) \|_2 / \| h \|_2 ],   (49)

and the results are averaged over 20 independent trials. In order to outline the influence and the importance of the regularization parameter, the normalized step-size parameter of the adaptive algorithms is set to \alpha = 1 for most of the experiments (except when a speech sequence is used as input). In this way, we provide the fastest convergence rate for the adaptive filters, so that the difference between the algorithms (in terms of the misalignment level) is influenced only by the regularization parameter. In the first set of experiments, the performance of the NLMS algorithm is evaluated. Fig.
4 presents the misalignment of this algorithm using different values of the normalized regularization constant \beta [see (1)], as compared to the optimal normalized regularization \beta_{NLMS} given in (18).

Fig. 4. Misalignment of the NLMS algorithm using different values of the normalized regularization parameter, with \alpha = 1, L = 512, and ENR = 30 dB.

Fig. 5. Misalignment of the NLMS algorithm using different values of the normalized regularization parameter, with \alpha = 1, L = 512, and ENR = 10 dB.

The ENR is set to 30 dB and the input signal is white and Gaussian. According to this figure, it is clear that a lower misalignment level is achieved for a higher normalized regularization constant, but with a slower convergence rate and tracking. Also, it can be noticed that the performance obtained using the optimal normalized regularization \beta_{NLMS} is similar to that of the classical choice \beta = 20, and the convergence rate or tracking is not affected as compared to the case where there is little regularization. The same experiment is repeated in Fig. 5, but using a lower value of the ENR, i.e., 10 dB. It is clear that the importance of the optimal regularization becomes more apparent. In this case, a higher value of the normalized regularization constant is required. Fig. 6 also supports this fact; for this experiment, the ENR is set to 0 dB. In order to match the performance obtained with \beta_{NLMS}, the normalized regularization constant needs to be increased even further. All these results are consistent with Fig. 1, which provides the values of \beta_{NLMS} as a function of the ENR.
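A regularized NLMS experiment of this kind is easy to reproduce. The sketch below is our own minimal setup (a random unit-norm echo path rather than the paper's measured acoustic response, and smaller L and N for speed); it runs the NLMS of (7)-(8) with the regularization (17) at ENR = 30 dB and reports the final normalized misalignment (49):

```python
import math
import numpy as np

rng = np.random.default_rng(1)
L, N, alpha = 64, 20000, 1.0
h = rng.normal(0.0, 1.0, L)
h /= np.linalg.norm(h)                    # toy echo path, ||h||_2 = 1

enr = 10.0 ** (30 / 10)                   # ENR = 30 dB
sigma_x2 = 1.0
delta = L * sigma_x2 * (1.0 + math.sqrt(1.0 + enr)) / enr  # (17)

x_sig = rng.normal(0.0, math.sqrt(sigma_x2), N + L)
y = np.convolve(x_sig, h)[: N + L]        # echo signal
sigma_w = math.sqrt(np.var(y[L:]) / enr)  # noise level matching the ENR
d = y + rng.normal(0.0, sigma_w, N + L)   # desired signal

h_hat = np.zeros(L)
for n in range(L, N + L):
    x = x_sig[n : n - L : -1]             # L most recent samples
    e = d[n] - x @ h_hat                  # a priori error, (7)
    h_hat += alpha * x * e / (x @ x + delta)  # regularized update, (8)

mis_db = 20.0 * np.log10(np.linalg.norm(h - h_hat) / np.linalg.norm(h))
```

With this setup the misalignment settles well below the initial 0 dB, consistent with the smooth convergence reported for the optimal regularization.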
Fig. 6. Misalignment of the NLMS algorithm using different values of the normalized regularization parameter, with \alpha = 1, L = 512, and ENR = 0 dB.

Fig. 7. Misalignment of the NLMS algorithm using different values of the normalized regularization parameter. The input signal is speech, \alpha = 0.5, L = 512, and ENR = 10 dB.

Fig. 8. Misalignment of the SR-NLMS algorithm using different values of the normalized regularization parameter, with \alpha = 1, L = 512, and ENR = 10 dB.

Fig. 9. Misalignment of the SR-NLMS algorithm using different values of the normalized regularization parameter, with \alpha = 1, L = 512, and ENR = 0 dB.

The character of the input signal significantly influences the performance of the adaptive filters. Fig. 7 presents the behavior of the NLMS with the speech sequence as input, ENR = 10 dB, and \alpha = 0.5. It can be noticed that the regularization process is critical in this case. For a low value of the normalized regularization constant, the misalignment of the adaptive filter fluctuates a lot and never converges. Also, the classical value \beta = 20 does not lead to a satisfactory performance. It is clear that the NLMS algorithm using the optimal value \beta_{NLMS} performs much better in this scenario.

Commonly, the SR-NLMS algorithm uses a regularization similar to that of the NLMS algorithm [see (1)]. However, as was proved in Section IV, the regularization parameters of these two algorithms differ by the factor \lambda = \sqrt{2/\pi} given in (27), i.e., \delta_{SR-NLMS} = \lambda \delta_{NLMS} / \sigma_x. Fig. 8 presents the misalignment of the SR-NLMS algorithm with different values of \beta from (1), as compared to the optimal normalized regularization \beta_{SR-NLMS} given in (31). The input signal is white and Gaussian, and ENR = 10 dB. It can be noticed that the classical normalized regularization is not appropriate in this case. The SR-NLMS algorithm with the optimal \beta_{SR-NLMS} performs much better in terms of both fast convergence/tracking and misalignment. However, for lower values of the ENR, the normalized regularization constant needs to be further increased. The experiment reported in Fig. 9 is performed with ENR = 0 dB.
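The factor \lambda = \sqrt{2/\pi} that separates the SR regularizations from their NLMS counterparts rests on the Gaussian identity E|x| = \sigma_x \sqrt{2/\pi} of (26). A quick Monte Carlo sanity check (our own sketch, assuming white Gaussian input) confirms the approximation (27):

```python
import math
import numpy as np

# Check that x^T(n) sgn[x(n)] = ||x(n)||_1 concentrates around
# lambda * L * sigma_x with lambda = sqrt(2/pi), for Gaussian input.
rng = np.random.default_rng(0)
L, sigma_x, trials = 512, 2.0, 2000
x = rng.normal(0.0, sigma_x, size=(trials, L))
empirical = np.abs(x).sum(axis=1).mean()          # average of ||x||_1
predicted = L * sigma_x * math.sqrt(2.0 / math.pi)
rel_err = abs(empirical - predicted) / predicted
```

With roughly a million samples, the relative error is well under one percent, which supports using (27) inside the derivation of (30)-(31).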
Again, the SR-NLMS algorithm with the optimal \beta_{SR-NLMS} gives the best performance. The IPNLMS algorithm is very useful when we need to identify sparse impulse responses, which is often the case in network and acoustic echo cancellation. In [10], it was intuitively suggested that the regularization parameter of this algorithm be taken as \delta_{IPNLMS} = [ (1 - \kappa) / (2L) ] \delta_{NLMS}. However, as was proved in Section V, the regularization of the IPNLMS algorithm does not depend on the parameter \kappa (that controls the amount of proportionality in the algorithm). The optimal regularization of the IPNLMS algorithm is given in (42) and it is
Fig. 10. Misalignment of the IPNLMS algorithm using different values of the normalized regularization parameter, with \alpha = 1, \kappa = 0, L = 512, and ENR = 30 dB.

Fig. 11. Misalignment of the IPNLMS algorithm using different values of the normalized regularization parameter, with \alpha = 1, \kappa = 0, L = 512, and ENR = 10 dB.

Fig. 12. Misalignment of the IPNLMS algorithm using different values of the normalized regularization parameter, with \alpha = 1, \kappa = 0, L = 512, and ENR = 0 dB.

Fig. 13. Misalignment of the NLMS and IPNLMS algorithms using different values of the normalized regularization parameter. The input signal is speech, \alpha = 0.4, \kappa = 0, L = 512, and ENR = 10 dB.

based on the parameter \beta_{IPNLMS} from (43). In fact, it is equivalent to the regularization of the NLMS up to the scaling factor 1/L, i.e., \beta_{IPNLMS} = \beta_{NLMS} / L. The next set of experiments evaluates the performance of the IPNLMS algorithm. The proportionality parameter is set to \kappa = 0. Fig. 10 presents the misalignment of this algorithm using the classical normalized regularization constant, as compared to the optimal normalized regularization \beta_{IPNLMS}. The input signal is white and Gaussian, and ENR = 30 dB. It can be noticed that the performance of the algorithms is very similar. However, this is not the case for lower ENRs. Indeed, the previous experiment is repeated in Fig. 11, but with ENR = 10 dB. In this case, a much higher value of the normalized regularization constant is required in order to match the performance obtained using \beta_{IPNLMS}. This fact is also supported in Fig. 12, where ENR = 0 dB, so that the normalized regularization constant needs to be increased even further in order for the IPNLMS to perform similarly to when the optimal choice is used.

In Fig. 13, the speech signal is used as input, ENR = 10 dB, and \alpha = 0.4. The NLMS and IPNLMS algorithms using the classical normalized regularizations are compared to their counterparts that use the optimal normalized regularizations, i.e., \beta_{NLMS} from (18) and \beta_{IPNLMS} from (43), respectively.
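The IPNLMS experiment can also be sketched compactly. The code below is our own toy setup (a short synthetic sparse path rather than the paper's measured response): it runs the IPNLMS of (33)-(35) with \kappa = 0 and the derived regularization (42) at ENR = 30 dB.

```python
import math
import numpy as np

rng = np.random.default_rng(2)
L, N, alpha, kappa = 64, 20000, 1.0, 0.0
h = np.zeros(L)
h[3], h[10], h[30] = 1.0, -0.5, 0.1       # sparse toy echo path

enr = 10.0 ** (30 / 10)                   # ENR = 30 dB
delta = (1.0 + math.sqrt(1.0 + enr)) / enr  # (42), with sigma_x^2 = 1

x_sig = rng.normal(0.0, 1.0, N + L)
y = np.convolve(x_sig, h)[: N + L]
sigma_w = math.sqrt(np.var(y[L:]) / enr)
d = y + rng.normal(0.0, sigma_w, N + L)

h_hat = np.zeros(L)
for n in range(L, N + L):
    x = x_sig[n : n - L : -1]
    q = (1 - kappa) / (2 * L) + (1 + kappa) * np.abs(h_hat) / (
        2 * np.linalg.norm(h_hat, 1) + 1e-8
    )                                     # proportionate gains, (35)
    e = d[n] - x @ h_hat
    h_hat += alpha * q * x * e / (x @ (q * x) + delta)  # update, (33)

mis_db = 20.0 * np.log10(np.linalg.norm(h - h_hat) / np.linalg.norm(h))
```

The dominant tap of the sparse path is identified first, and the final misalignment is far below 0 dB, in line with the behavior described for Fig. 10.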
As expected, the IPNLMS algorithm outperforms the NLMS in terms of convergence rate and tracking when using the optimal regularization parameters. Besides, both algorithms outperform, and by far, their counterparts that use the classical normalized regularizations. Finally, the performance of the SR-IPNLMS algorithm is evaluated. Usually, the regularization of this algorithm is taken to be identical to that of the IPNLMS. However, as was shown in Section VI, the relation between the regularization parameters of the SR-IPNLMS and IPNLMS algorithms is similar to the one between the SR-NLMS and NLMS algorithms, i.e., \delta_{SR-IPNLMS} = \lambda \delta_{IPNLMS} / \sigma_x, where \lambda = \sqrt{2/\pi} is defined in (27). In Fig. 14, the input signal is white and Gaussian, and
ENR = 10 dB. According to this figure, it is clear that the SR-IPNLMS algorithm using the optimal value \beta_{SR-IPNLMS} performs better as compared to the regular normalized regularization. Also, it can be noticed that a lower misalignment level can be obtained by using a higher normalized regularization parameter. However, that value is no longer appropriate when the ENR decreases. Indeed, in Fig. 15, we consider ENR = 0 dB. It is clear that a higher normalized regularization parameter is now required to match the performance obtained with \beta_{SR-IPNLMS}.

Fig. 14. Misalignment of the SR-IPNLMS algorithm using different values of the normalized regularization parameter, with \alpha = 1, \kappa = 0, L = 512, and ENR = 10 dB.

Fig. 15. Misalignment of the SR-IPNLMS algorithm using different values of the normalized regularization parameter, with \alpha = 1, \kappa = 0, L = 512, and ENR = 0 dB.

VIII. CONCLUSION

Regularization is an important component of any adaptive filter; it is as important as the step-size parameter that controls the stability and convergence of the algorithm. All adaptive filters must be regularized in a noisy environment. With no regularization, the adaptive filter may behave very poorly and may not perform as expected. Until now, an ad hoc regularization proportional to the variance of the input signal has usually been used. Unfortunately, this regularization is far from consistent and is hard to tune at low ENRs. In this paper, we have proposed a simple condition, which intuitively makes sense, for the derivation of an optimal regularization parameter. From this condition, we have derived the optimal regularization parameters of four algorithms: the NLMS, the SR-NLMS, the IPNLMS, and the SR-IPNLMS. Extensive simulations have shown that, with the proposed regularization, the adaptive algorithms behave very well at all ENR levels.

ACKNOWLEDGMENT

The authors would like to thank the Associate Editor and the reviewers for their valuable comments and suggestions.

REFERENCES

[1] P. C.
Hansen, Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion. Philadelphia, PA: SIAM, 1998.
[2] J. Benesty, T. Gaensler, D. R. Morgan, M. M. Sondhi, and S. L. Gay, Advances in Network and Acoustic Echo Cancellation. Berlin, Germany: Springer-Verlag, 2001.
[3] C. Paleologu, J. Benesty, and S. Ciochină, Sparse Adaptive Filters for Echo Cancellation. San Rafael, CA: Morgan & Claypool, 2010.
[4] S. Haykin, Adaptive Filter Theory, 4th ed. Upper Saddle River, NJ: Prentice-Hall, 2002.
[5] A. H. Sayed, Fundamentals of Adaptive Filtering. Hoboken, NJ: Wiley, 2003.
[6] E. Hänsler and G. Schmidt, Acoustic Echo and Noise Control: A Practical Approach. Hoboken, NJ: Wiley, 2004.
[7] H. Rey, L. Rey Vega, S. Tressens, and J. Benesty, "Variable explicit regularization in affine projection algorithm: Robustness issues and optimal choice," IEEE Trans. Signal Process., vol. 55, no. 5, pp. 2096-2108, May 2007.
[8] J. Benesty, H. Rey, L. Rey Vega, and S. Tressens, "A non-parametric VSS-NLMS algorithm," IEEE Signal Process. Lett., vol. 13, no. 10, pp. 581-584, Oct. 2006.
[9] P. M. Clarkson, Optimal and Adaptive Signal Processing. London, U.K.: CRC, 1993.
[10] J. Benesty and S. L. Gay, "An improved PNLMS algorithm," in Proc. IEEE ICASSP, 2002, pp. 1881-1884.
[11] D. L. Duttweiler, "Proportionate normalized least-mean-squares adaptation in echo cancelers," IEEE Trans. Speech Audio Process., vol. 8, no. 5, pp. 508-518, Sep. 2000.
[12] J. Benesty, C. Paleologu, and S. Ciochină, "Proportionate adaptive filters from a basis pursuit perspective," IEEE Signal Process. Lett., vol. 17, no. 12, pp. 985-988, Dec. 2010.

Jacob Benesty was born in 1963. He received the M.S. degree in microwaves from Pierre and Marie Curie University, Paris, France, in 1987, and the Ph.D. degree in control and signal processing from Orsay University, Paris, France, in April 1991. During the Ph.D.
degree (from November 1989 to April 1991), he worked on adaptive filters and fast algorithms at the Centre National d'Études des Télécommunications (CNET), Paris. From January 1994 to July 1995, he worked at Telecom Paris University on multichannel adaptive filters and acoustic echo cancellation. From October 1995 to May 2003, he was first a Consultant and then a Member of the Technical Staff at Bell Laboratories, Murray Hill,
NJ. In May 2003, he joined INRS-EMT, University of Quebec, Montreal, QC, Canada, as a Professor. His research interests are in signal processing, acoustic signal processing, and multimedia communications. He coauthored the books Sparse Adaptive Filters for Echo Cancellation (Morgan and Claypool, 2010), Noise Reduction in Speech Processing (Springer-Verlag, 2009), Microphone Array Signal Processing (Springer-Verlag, 2008), Acoustic MIMO Signal Processing (Springer-Verlag, 2006), and Advances in Network and Acoustic Echo Cancellation (Springer-Verlag, 2001). He is the editor-in-chief of the reference Springer Handbook of Speech Processing (Springer-Verlag, 2007). He is also a coeditor/coauthor of the books Speech Processing in Modern Communication: Challenges and Perspectives (Springer-Verlag, 2010), Speech Enhancement (Springer-Verlag, 2005), Audio Signal Processing for Next Generation Multimedia Communication Systems (Kluwer, 2004), Adaptive Signal Processing: Applications to Real-World Problems (Springer-Verlag, 2003), and Acoustic Signal Processing for Telecommunication (Kluwer, 2000). Dr. Benesty received the 2001 and 2008 Best Paper Awards from the IEEE Signal Processing Society. He was a member of the editorial board of the EURASIP Journal on Applied Signal Processing, a member of the IEEE Audio and Electroacoustics Technical Committee, the co-chair of the 1999 International Workshop on Acoustic Echo and Noise Control (IWAENC), and the general co-chair of the 2009 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA).

Silviu Ciochină (M'92) received the M.S. degree in electronics and telecommunications and the Ph.D. degree in communications from the University Politehnica of Bucharest, Bucharest, Romania, in 1971 and 1978, respectively.
From 1979 to 1995, he was a Lecturer at the Faculty of Electronics, Telecommunications, and Information Technology, University Politehnica of Bucharest. Since 1995, he has been a Professor at the same university. Since 2004, he has been the Head of the Telecommunications Department. His main research interests are in the areas of signal processing and wireless communications, including adaptive algorithms, spectrum estimation, fast algorithms, channel estimation, multi-antenna systems, and broadband wireless technologies. He coauthored the books Sparse Adaptive Filters for Echo Cancellation (Morgan and Claypool, 2010) and Speech Processing in Modern Communication: Challenges and Perspectives (Springer-Verlag, 2010). Dr. Ciochină received the Traian Vuia award in 1981 and the Gheorghe Cartianu award in 1997, both from the Romanian Academy.

Constantin Paleologu (M'07) was born in Romania in 1975. He received the M.S. degree in telecommunications networks, the M.S. degree in digital signal processing, and the Ph.D. degree in adaptive signal processing from the Faculty of Electronics, Telecommunications, and Information Technology, University Politehnica of Bucharest, Bucharest, Romania, in 1998, 1999, and 2003, respectively. During the Ph.D. studies (from December 1999 to July 2003), he worked on adaptive filters and echo cancellation. Since October 1998, he has been with the Telecommunications Department, University Politehnica of Bucharest, where he is currently an Associate Professor. His research interests include adaptive signal processing, speech enhancement, and multimedia communications. He coauthored the books Sparse Adaptive Filters for Echo Cancellation (Morgan and Claypool, 2010) and Speech Processing in Modern Communication: Challenges and Perspectives (Springer-Verlag, 2010). Dr. Paleologu received the IN HOC SIGNO VINCES award from the Romanian National Research Council in 2009 and the IN TEMPORE OPPORTUNO award from University Politehnica of Bucharest in 2010.
He serves as the Editor-in-Chief of the International Journal on Advances in Systems and Measurements. He has been a Fellow of the International Academy, Research, and Industry Association (IARIA) since 2008.