Project due. Final exam: two hours, closed book/notes; mainly covers Part 2 and Part 3, and may involve basic multirate concepts from Part 1. Office hours.

Transcription:

End of Semester Logistics / Further Discussions and Beyond
ENEE630, Electrical & Computer Engineering, University of Maryland, College Park
Acknowledgment: The ENEE630 slides here were made by Prof. Min Wu. Contact: minwu@umd.edu
- Project due
- Final exam: two hours, closed book/notes. Mainly covers Part 2 and Part 3; may involve basic multirate concepts from Part 1 (decimation, expansion, basic filter banks).
- Office hours

Higher-Order Signal Analysis: Brief Introduction
Information contained in the power spectrum:
- The power spectrum reflects the 2nd-order statistics of a signal (i.e., its autocorrelation). It is therefore sufficient for a complete statistical description of a Gaussian process, but not for many other processes.
Motivation for higher-order statistics:
- Higher-order statistics contain additional information that measures how far a non-Gaussian process deviates from normality.
- They help suppress Gaussian noise of unknown spectral characteristics; the higher-order spectra may become high-SNR domains in which one can perform detection, parameter estimation, or signal reconstruction.
- They help identify a nonlinear system, or detect and characterize nonlinearities in a time series.

m-th Order Moments of a Random Variable
Moments: $m_k = E[X^k]$. Central moments subtract the mean: $\gamma_k = E[(X - \mu_X)^k]$.
- Mean: $\mu_X = m_1 = E[X]$, the statistical centroid ("center of gravity") of the p.d.f.
- Variance: $\sigma_X^2 = \gamma_2 = E[(X - \mu_X)^2]$, describing the spread/dispersion of the p.d.f.
- 3rd moment: normalized into $K_3 = \gamma_3 / \sigma_X^3$, representing the skewness of the p.d.f.; zero for a symmetric p.d.f.
- 4th moment: normalized into $K_4 = \gamma_4 / \sigma_X^4 - 3$, the kurtosis, measuring the flatness/peakedness deviation from a Gaussian p.d.f. (for which it is zero).
See Manolakis Sec. 3.1.2 for further discussion.
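As a quick numerical illustration of these normalized moments, the hedged sketch below (a helper written for this note, not part of the slides) estimates the skewness $K_3$ and excess kurtosis $K_4$ from samples: a Gaussian sample gives values near zero, while an exponential sample is clearly skewed and heavy-tailed.

```python
# Minimal sketch: estimate the normalized 3rd and 4th order moments
# (skewness K3 and excess kurtosis K4) from a sample vector.
import numpy as np

def skewness_kurtosis(x):
    x = np.asarray(x, dtype=float)
    mu = x.mean()                      # m1: statistical centroid
    c = x - mu                         # remove the mean -> central moments
    sigma = np.sqrt(np.mean(c**2))     # sqrt of 2nd central moment (std. dev.)
    k3 = np.mean(c**3) / sigma**3      # skewness: zero for a symmetric p.d.f.
    k4 = np.mean(c**4) / sigma**4 - 3  # excess kurtosis: zero for a Gaussian
    return k3, k4

rng = np.random.default_rng(0)
print(skewness_kurtosis(rng.standard_normal(100_000)))   # ~ (0, 0) for Gaussian
print(skewness_kurtosis(rng.exponential(size=100_000)))  # skewed, heavy-tailed: ~ (2, 6)
```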

First Five Cumulants for a Zero-Mean r.v.; Relations Among 3+ Samples of a Random Process
- Generalize from the autocorrelation function, defined between a pair of samples of a zero-mean stationary random process:
- Triplets of samples give the 3rd-order cumulant; quadruplets of samples give the 4th-order cumulant.
(Figures/equations are from the Manolakis book, Section 3.1. Note that moments of 3rd order and above for a Gaussian process can be expressed in terms of its mean and variance.)

High-Order Spectra
- Multi-variable DTFT of the cumulant functions. [Eq. from the Manolakis book, Section 12.1]
- Bispectrum & trispectrum: may exhibit patterns in magnitude & phase.
- Properties under LTI systems extend to higher-order statistics.
See Manolakis et al., "Statistical & Adaptive Signal Processing" (McGraw-Hill), Sec. 12.1 on higher-order statistics for further discussion.
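The slide's equations were taken from figures and not transcribed; for reference, the standard definitions for a real-valued, zero-mean, stationary process (generic notation, not copied from the book) are sketched below.

```latex
% Standard definitions for a real-valued, zero-mean, stationary process x(n);
% generic notation, not transcribed from the slide's figures.
\begin{align*}
  c_2^x(k)   &= E\{\, x(n)\, x(n+k) \,\}            && \text{autocorrelation (2nd-order cumulant)} \\
  c_3^x(k,l) &= E\{\, x(n)\, x(n+k)\, x(n+l) \,\}   && \text{3rd-order cumulant (triplets of samples)} \\
  C_3^x(\omega_1,\omega_2)
             &= \sum_{k=-\infty}^{\infty} \sum_{l=-\infty}^{\infty}
                c_3^x(k,l)\, e^{-j(\omega_1 k + \omega_2 l)}
             && \text{bispectrum: 2-D DTFT of } c_3^x
\end{align*}
```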

Resources on Signal Processing
- IEEE Signal Processing Magazine: e-copy on IEEE Xplore; hard copy with student membership
- IEEE Inside Signal Processing eNewsletter: http://signalprocessingsociety.org/newsletter/
- Signal processing related journals/transactions
- Related conferences: ICASSP, ICIP, etc.
Additional 2 cents beyond courses:
- Attend talks/seminars to broaden your vision
- Oral communications (oral exams, presentations, etc.)

Related Courses Beyond ENEE630
- Adaptive and space-time signal processing: ENEE634*
- Image/video & audio/speech processing: ENEE631*, 632
- Detection/estimation & information theory: ENEE621*, 627*; see also SP for digital communication in ENEE623
- Pattern recognition and machine learning: ENEE633
- Special-topic courses and seminars in signal processing: occasionally offered, e.g., on information forensics & multimedia security, compressive sensing, etc.
- See also related applied math and statistics courses

Digital Image and Video Processing (ENEE631)
- Human visual perception; color vision
- Image enhancement
- Image restoration
- Image transform, quantization and coding
- Motion analysis and video coding
- Feature extraction and analysis
- Security and forensic issues
(Figure from slides at the Gonzalez/Woods DIP book website, Chapter 8: a previous-pixel predictor; the difference image has mid-range gray representing zero and an amplifying factor of 8.)

Forensic Questions on Time and Place: Ubiquitous Forensic Fingerprints from the Power Grid
- When was the video actually shot? And where?
- Was the sound track captured at the same time as the picture, or superimposed afterward?
- Explore the fingerprint imposed by the power grid onto sensor recordings.
[Spectrogram figures: time (in seconds) vs. frequency (in Hz), around 10 Hz and around 50 Hz, plus the normalized correlation coefficient vs. time-frame lag between the video ENF signal and the power ENF signal. The ENF matching result demonstrates similar variations in the ENF signal extracted from video and from a power signal recorded in India.]

Electric Network Frequency (ENF): 50/60 Hz nominal
- Varies slightly over time; the main trends are consistent within the same grid
- Can be seen or heard in sensor recordings
- Helps determine the recording time, detect tampering, etc.
- Other potential applications in smart grid & media management
Ref: Garg et al., ACM Multimedia 2011, CCS 2012, and APSIPA 2012

Tampering Detection Using ENF
[Figures: ENF signal from a video containing an inserted clip vs. the ground-truth ENF signal; frequency (in Hz) vs. time (in seconds). The ENF matching result demonstrates the detection of video tampering based on the ENF traces.]
- Inserting a clip into the original video leads to a discontinuity in the ENF signal extracted from the video.
- Clip insertion can also be detected by comparing the video ENF signal with the power ENF signal at the corresponding time.

Aliasing Revisited: Downsampling a Sinusoid
"If the RF signal [white] is not sampled at least twice per cycle, aliasing will occur. But by properly adjusting the sampling interval [indicated by vertical lines], you can down-convert the RF to whatever lower frequency is desired [blue and yellow]."
IEEE Spectrum Magazine, April 2009, "The Universal Handset" (aliasing harnessed for software-defined radio):
http://spectrum.ieee.org/computing/embedded-systems/the-universal-handset/0/cellsb01
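The slides do not spell out an extraction algorithm; as a hedged illustration of the idea (track the spectral peak near the nominal grid frequency frame by frame, then compare traces by normalized correlation), the sketch below assumes a mono signal x sampled at fs, with the frame length and search band chosen arbitrarily for illustration.

```python
# Illustrative ENF sketch (not from the slides): per-frame peak picking near the
# nominal grid frequency, then normalized correlation between two ENF traces.
import numpy as np

def enf_trace(x, fs, nominal=50.0, band=1.0, frame_sec=2.0):
    """Return one ENF estimate (Hz) per non-overlapping frame of the signal x."""
    n = int(frame_sec * fs)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    sel = (freqs > nominal - band) & (freqs < nominal + band)   # search band
    trace = []
    for start in range(0, len(x) - n + 1, n):
        spec = np.abs(np.fft.rfft(x[start:start + n] * np.hanning(n)))
        trace.append(freqs[sel][np.argmax(spec[sel])])          # peak near nominal
    return np.array(trace)

def enf_similarity(video_enf, power_enf):
    """Normalized correlation coefficient between two equal-length ENF traces."""
    return np.corrcoef(video_enf, power_enf)[0, 1]
```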

ENEE630 Look Ahead: Introduction to Adaptive Filtering
Electrical & Computer Engineering, University of Maryland, College Park
Acknowledgment: the additional overview/introductory slides beyond ENEE630 were made by Prof. Min Wu and FFP Teaching Fellow Mr. Wei-Hong Chuang, with reference to the textbooks by Hayes and Haykin and the ENEE634 class notes by Prof. Ray Liu. Contact: minwu@umd.edu

Stationarity Assumption in Wiener Filtering
- Wiener filtering is optimum in a stationary environment. Unfortunately, most real signals are non-stationary.
- One remedy: process the non-stationary signal in blocks, within which the signal is assumed to be stationary. This is not always effective:
- For rapidly varying signals, the block length may be too small to estimate the relevant parameters.
- It cannot accommodate step changes within the analysis intervals.
- The solution imposes an incorrect data model, i.e., piecewise stationarity.
=> Instead, start from the non-stationarity to develop solutions.

Recursive Update of Filter Coefficients
- Wiener filtering solves the normal equation $R_x w = r_{dx}$.
- If the environment is non-stationary, the optimal filter coefficients depend on the time n: $R_x(n)\, w_n = r_{dx}(n)$. Solving this at every n is not always feasible (e.g., high computational complexity).
- This can be much simplified with adaptive filtering: form $w_{n+1}$ by adding a correction $\Delta w_n$ to $w_n$ at each iteration, $w_{n+1} = w_n + \Delta w_n$.
- Measure the error e(n) at each time n and determine how to update the filter coefficients accordingly.
(General structure of adaptive filtering: Fig. from Hayes book, p. 495)
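For reference, the reasoning behind the normal equation quoted above follows the standard development (cf. Hayes); a sketch, written for real-valued signals, with w the FIR coefficient vector and x(n) the tap-input vector:

```latex
% Sketch of the standard development behind the normal equation, written for
% real-valued signals; w is the FIR coefficient vector, x(n) the tap-input
% vector, d(n) the desired response.
\begin{align*}
  e(n) &= d(n) - w^{T} x(n), \qquad \xi = E\{ e^2(n) \} \\
  \nabla_{w}\, \xi &= -2\, E\{ x(n)\, e(n) \} \;=\; -2\,\bigl( r_{dx} - R_x\, w \bigr),
      \quad R_x = E\{ x(n)\, x^{T}(n) \},\; r_{dx} = E\{ d(n)\, x(n) \} \\
  \nabla_{w}\, \xi = 0 \;&\Longrightarrow\; R_x\, w = r_{dx}
      \qquad \text{(Wiener--Hopf normal equations)}
\end{align*}
```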

FIR Adaptive Filter (Fig. from Hayes book, Chapter 9)

Coefficient Update: Desired Properties
- Corrections should reduce the mean-square error $\xi(n) = E\{|e(n)|^2\}$.
- In a stationary environment, $w_n$ should converge to the Wiener-Hopf solution: $\lim_{n \to \infty} w_n = R_x^{-1} r_{dx}$.
- Simple & efficient algorithms for coefficient adjustment that often perform well enough.
- Stability is easily controlled; performance analysis is feasible.
- Avoid explicit signal statistics when forming $w_n$ if possible; rely on built-in estimation of the statistics.
- If the environment is non-stationary, the filter should track the (time-varying) solution.

Steepest-Descent Adaptive Filter / Method of Steepest Descent (Figs. from Hayes book, Chapter 9)
- Recall the direct approach: minimize the MSE by setting the partial derivatives to zero (this may involve a matrix inverse).
- Alternative: search for the solution iteratively using the numerical method of steepest descent, i.e., find the filter coefficients that minimize the error by descending the error surface.
- The steepest direction is given by the (negative) gradient: at every iteration, move along the direction of steepest descent of the error (update equation sketched below).
- μ: the step size, which controls the rate at which the coefficients move.
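The update equation referenced on the slide was not transcribed; one standard form (factor-of-two conventions vary between texts), consistent with the gradient sketched earlier for real-valued signals, is:

```latex
% Steepest-descent coefficient update, one standard form (conventions on the
% factor of 2 vary between texts); real-valued signals assumed, mu = step size.
\begin{align*}
  w_{n+1} &= w_n - \tfrac{\mu}{2}\, \nabla_{w}\, \xi(n)
           \;=\; w_n + \mu\, E\{ x(n)\, e(n) \}
           \;=\; w_n + \mu\,\bigl( r_{dx} - R_x\, w_n \bigr)
\end{align*}
```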

Method of Steepest Descent / Stability of Steepest Descent
- It can be shown that, if x(n) and d(n) are w.s.s., the correction term is zero when $R_x w_n = r_{dx}$, i.e., the Wiener solution is a fixed point of the update.
- Does the coefficient update converge?

Convergence of Steepest Descent / Effects of Condition Number on Convergence (error-surface figures from Hayes book, Chapter 9)
- For w.s.s. d(n) and x(n), the steepest-descent adaptive filter converges to the Wiener-Hopf solution if the step size satisfies $0 < \mu < 2/\lambda_{\max}$, where $\lambda_{\max}$ is the maximal eigenvalue of $R_x$. This can be shown by diagonalizing $R_x$.
- The convergence rate (how fast the update converges) is determined by the spread of the eigenvalues of $R_x$: a small eigenvalue spread gives fast convergence, while a large eigenvalue spread slows it down.
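A minimal numerical sketch of this behavior, using a small synthetic $R_x$ and $r_{dx}$ chosen here for illustration (not from the slides): the recursion reaches the Wiener-Hopf solution whenever $0 < \mu < 2/\lambda_{\max}$.

```python
# Steepest descent on a synthetic 2-tap problem: w <- w + mu * (r_dx - R_x w).
import numpy as np

R_x = np.array([[1.0, 0.8],
                [0.8, 1.0]])          # input autocorrelation matrix (eigenvalues 0.2, 1.8)
r_dx = np.array([0.5, 0.2])           # cross-correlation vector
w_opt = np.linalg.solve(R_x, r_dx)    # Wiener-Hopf solution

lam_max = np.linalg.eigvalsh(R_x).max()
mu = 1.0 / lam_max                    # any 0 < mu < 2/lam_max keeps the recursion stable

w = np.zeros(2)
for n in range(200):
    w = w + mu * (r_dx - R_x @ w)     # steepest-descent update (exact gradient)

print("steepest descent:", w, "  Wiener-Hopf:", w_opt)
# The eigenvalue spread of R_x (here 1.8/0.2 = 9) governs how quickly the slowest
# mode converges; a larger spread forces the slow mode to converge more slowly.
```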

Least Mean Squares (LMS) Algorithm / LMS Algorithm for a pth-Order FIR Adaptive Filter
- Recall the update in steepest descent: the practical challenge is that the expectation may be unknown or difficult to estimate on the fly.
- The LMS algorithm replaces the expectation with a one-shot estimate. It is a very crude estimate, but often performs well in practice. (Algorithm from Hayes book, Chapter 9; a sketch is given below.)

Randomness of the LMS Algorithm (Fig. from Hayes book, Chapter 9) / Example: Adaptive Linear Prediction
- The one-shot estimate approximates the steepest-descent direction (i.e., the statistical average).
- The one-shot nature makes $w_n$ move randomly in a neighborhood of the solution, even if it is initialized from the Wiener solution. (Fig.: Hayes book, p. 509)
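A minimal LMS sketch for real-valued signals, using the standard update $w_{n+1} = w_n + \mu\, e(n)\, x(n)$ with the expectation replaced by the one-shot estimate; the system-identification usage example at the bottom is an assumed setup, not taken from the slides.

```python
# LMS for a (p+1)-tap FIR adaptive filter (real-valued signals).
import numpy as np

def lms(x, d, p, mu):
    """Adapt a (p+1)-tap FIR filter so that w^T [x(n),...,x(n-p)] tracks d(n)."""
    w = np.zeros(p + 1)
    y = np.zeros(len(d))
    e = np.zeros(len(d))
    for n in range(p, len(x)):
        x_vec = x[n - p:n + 1][::-1]      # [x(n), x(n-1), ..., x(n-p)]
        y[n] = w @ x_vec                  # filter output
        e[n] = d[n] - y[n]                # error signal
        w = w + mu * e[n] * x_vec         # one-shot gradient estimate
    return w, y, e

# Usage sketch: identify an unknown 3-tap system from noisy observations.
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
h_true = np.array([1.0, -0.5, 0.25])
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat, _, _ = lms(x, d, p=2, mu=0.01)
print(w_hat)   # should be close to h_true
```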

Example: Adaptive Linear Prediction
[Figures from Hayes book: LMS coefficient trajectories converging toward the true predictor coefficients, about 1.2728 and -0.81, for two step sizes.]
- μ = 0.02: faster convergence, less stable; μ = 0.004: slower convergence, more stable.

Convergence of the LMS Algorithm
- Examine the convergence properties of LMS under a statistical framework.
- For w.s.s. d(n) and x(n), the LMS adaptive filter converges in the mean sense if $0 < \mu < 2/\lambda_{\max}$.
- A more stringent condition is required for convergence in the mean-square sense.

Typical Learning Curves: MSE vs. Time
- μ = 0.004: slower convergence, smaller misadjustment; μ = 0.02: faster convergence, larger misadjustment.

Recursive Least Squares (RLS) Algorithm
- Mean-square error vs. least-squares error: the mean-square error does not depend on the incoming data, only on their ensemble statistics; the least-squares error depends explicitly on x(n) and d(n).
- RLS minimizes the least-squares error, where old data are gradually forgotten; 0 < λ < 1 is the forgetting factor.
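The exponentially weighted least-squares cost that RLS minimizes was not written out in the transcription; in a standard form (cf. Hayes, Chapter 9, real-valued case) it is:

```latex
% Exponentially weighted least-squares cost minimized by RLS at time n;
% lambda in (0,1) is the forgetting factor, so older errors are discounted.
\begin{equation*}
  \mathcal{E}(n) \;=\; \sum_{i=0}^{n} \lambda^{\,n-i}\, \bigl| e(i) \bigr|^{2},
  \qquad e(i) = d(i) - w_n^{T}\, x(i)
\end{equation*}
```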

Recursive Least Squares (RLS) Algorithm / Learning Rates of RLS and LMS
- RLS solves the least-squares normal equation; $R_x(n)$ and $r_{dx}(n)$ can be calculated recursively.
- $R_x^{-1}(n)$ can also be calculated recursively using the matrix inversion formula (see the sketch below).
- Learning-rate comparison of RLS and LMS: http://www.mathworks.com/matlabcentral/fileexchange/32498-performance-of-rls-and-lms-in-system-identification

Characteristics of the RLS Algorithm
- The convergence rate is an order of magnitude faster than that of LMS, at the cost of higher complexity.
- The convergence rate is insensitive to the eigenvalue spread.
- In theory, RLS produces zero excess error (misadjustment).
- RLS can be understood under the unifying framework of Kalman filtering.

Summary
- Adaptive filters: address non-stationary signal processing with low computational complexity via recursive updates of the filter coefficients.
- Method of steepest descent: moves in the negative gradient direction; converges to the Wiener-Hopf solution if the environment is stationary.
- LMS algorithm: crude one-shot gradient estimation; reasonable practical performance.
- RLS algorithm: recursively minimizes the least-squares error.
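A minimal exponentially weighted RLS sketch for real-valued signals, using the matrix inversion lemma so that the inverse correlation matrix is updated recursively rather than re-inverted each step; the system-identification usage at the bottom is an assumed setup mirroring the LMS example, not taken from the slides.

```python
# Exponentially weighted RLS for a (p+1)-tap FIR adaptive filter (real-valued).
import numpy as np

def rls(x, d, p, lam=0.99, delta=100.0):
    """Adapt a (p+1)-tap FIR filter with forgetting factor lam (0 < lam < 1)."""
    w = np.zeros(p + 1)
    P = delta * np.eye(p + 1)                    # P(n) tracks the inverse of R_x(n)
    for n in range(p, len(x)):
        x_vec = x[n - p:n + 1][::-1]             # [x(n), x(n-1), ..., x(n-p)]
        k = P @ x_vec / (lam + x_vec @ P @ x_vec)   # gain vector
        e = d[n] - w @ x_vec                     # a priori error
        w = w + k * e                            # coefficient update
        P = (P - np.outer(k, x_vec) @ P) / lam   # inverse-correlation update (inversion lemma)
    return w

# Usage sketch: same system-identification setup as the LMS example; RLS converges
# in far fewer samples and is insensitive to the eigenvalue spread of the input.
rng = np.random.default_rng(2)
x = rng.standard_normal(500)
h_true = np.array([1.0, -0.5, 0.25])
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(rls(x, d, p=2))    # should be close to h_true after a few hundred samples
```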

References for Further Exploration
- M. Hayes, Statistical Digital Signal Processing and Modeling, Wiley, 1996, Chapter 9. (All figures but one used in this lecture are from this book.)
- S. Haykin, Adaptive Filter Theory, 4th ed., Prentice-Hall, 2002, Chapters 4 & 5.
=> See the more detailed development in ENEE634 (offered in alternating spring semesters).