System Identification and CDMA Communication

A (partial) sample report by Nathan A. Goodman

Abstract

This (sample) report describes theory and simulations associated with a class project on system identification and code-division multiple access (CDMA) communication. The goals of the project are to gain experience applying principles of digital signal processing, to develop an understanding of how signal correlation is used in signal processing applications, and to practice technical writing and scientific communication.

I. Introduction

Correlation is a fundamental quantity used to analyze and quantify the content of information-bearing signals. Correlation quantifies the similarity between two signals; therefore, it can be used to find hidden structure embedded well below noise and distortion levels. In electrical and computer engineering, for example, signal correlation can be used to detect communication symbols or radar pulses reflected by a target, to estimate channel properties, or to estimate signal parameters. In many cases, correlation is even the optimum processing operation; in fact, the well-known Fourier transform is essentially a correlation operator.

In this report, we study and demonstrate the usefulness of signal correlation through two application examples. The first is system identification (ID), where the cross-correlation between an unknown system's input and output signals is used to estimate the system's impulse response. We examine system ID performance as a function of the input signal's structure and energy. The second is the detection of digital communication symbols buried in noise and multi-user interference.

II. System Identification

A. Background

Suppose that a signal can be measured only after it passes through an unknown linear, time-invariant (LTI) system. It is sometimes important to characterize the unknown system so that distortions introduced by the system can be removed from the measured signal. In some cases, knowledge of the system can even be used to modify the input to the system in an advantageous way. For example, if a communication channel can be characterized, the transmitted waveform can be pre-conditioned so that it arrives at the receiver in the desired shape, reducing the need for adaptive equalization.

The system ID technique tested in this project is based on cross-correlation between the measured output signal and a known input signal. We know that the output y(n) of an LTI system is

y(n) = x(n) * h(n) = \sum_{k} h(k) x(n - k),    (1)

where x(n) is the input signal and h(n) is the system's impulse response. We also know that the cross-correlation between two sequences can be expressed as the convolution of the first sequence with a time-reversed copy of the second. Thus, the cross-correlation between the output and input signals is

r_{yx}(n) = y(n) * x(-n).    (2)

When (1) is substituted into (2), the result is

r_{yx}(n) = h(n) * x(n) * x(-n) = h(n) * r_{xx}(n),    (3)

where r_{xx}(n) is the autocorrelation of the input signal. Taking the discrete-time Fourier transform (DTFT) of both sides of (3) yields

S_{yx}(f) = H(f) S_{xx}(f) = H(f) |X(f)|^2,    (4)

where S_{yx}(f) is the DTFT of r_{yx}(n), H(f) is the system's transfer function, and S_{xx}(f) = |X(f)|^2 is the input signal's energy spectrum. From (4), the system transfer function is

H(f) = S_{yx}(f) / |X(f)|^2,    (5)

and h(n) can be found from the inverse DTFT of (5).

In practical situations, however, it is impossible to measure a noise-free version of the system's output signal. Hence, a better model for the output signal is

y(n) = x(n) * h(n) + w(n),    (6)

where w(n) is a noise process. Substituting (6) (rather than (1)) into (2) results in an additional noise term not accounted for in the preceding analysis. The output of the cross-correlation is then

r_{yx}(n) = h(n) * r_{xx}(n) + w(n) * x(-n).    (7)
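To make the identity in (3) and the extra noise term in (7) concrete, here is a minimal MATLAB sketch (the FIR system and input below are illustrative stand-ins, not the signals used in the report):

    % Numerical check of Eq. (3): cross-correlating the output with the input
    % gives h(n) convolved with the input autocorrelation (noise-free case).
    h   = [0.05 -0.1 0.2 0.4 0.2 -0.1 0.05];   % stand-in "unknown" FIR system
    x   = randn(1, 500);                       % example input waveform
    y   = conv(x, h);                          % noise-free output, Eq. (1)
    ryx = conv(y, fliplr(x));                  % y(n) * x(-n), Eq. (2)
    rxx = conv(x, fliplr(x));                  % input autocorrelation r_xx(n)
    rhs = conv(h, rxx);                        % h(n) * r_xx(n), Eq. (3)
    max(abs(ryx - rhs))                        % agreement up to round-off error

With the noisy model (6), the same computation produces the additional w(n) * x(-n) term of (7).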

Taking the DTFT of (7) now yields

S_{yx}(f) = H(f) |X(f)|^2 + W(f) X^*(f).    (8)

Finally, solving for H(f) shows the corruption that occurs due to the presence of noise. The estimate of the system transfer function is now

\hat{H}(f) = S_{yx}(f) / |X(f)|^2 = H(f) + W(f) X^*(f) / |X(f)|^2 = H(f) + W(f) / X(f).    (9)

B. Simulation Results

We have developed a simulation to test and quantify the performance of the system ID technique described above as a function of input signal structure and signal-to-noise ratio (SNR). We focus on a finite impulse response (FIR) system and assume that the order (and, therefore, the length) of the system is known. In practice, the system order would usually need to be estimated as well. We began with simple signals and with the noise power set to zero to verify that the simulation was working properly and to find the correct location of the impulse response in the final output sequence. Once this was accomplished, we studied the effects of signal structure and noise power as described below.

The first signal studied was an impulse with power equal to 1, i.e., the input signal was x(n) = δ(n), where δ(n) is the standard unit impulse function for discrete-time signals. The autocorrelation sequence of this input signal is simply

r_{xx}(n) = δ(n) * δ(-n) = δ(n),    (10)

and the signal's energy spectrum is

|X(f)|^2 = 1,  -0.5 ≤ f ≤ 0.5.    (11)

The noise power was set to one, and the impulse response of a bandpass filter was generated using the optimum equiripple design. Since the simulated noise is different each time the simulation is executed, the estimated impulse response also varies. One example of the estimated impulse response is shown in Fig. 1 along with the true impulse response. Figure 1 shows that the impulse response has been estimated well, but not perfectly. By comparison, if the noise power is increased to P_n = 10, a sample result is shown in Fig. 2. The estimation error seen in Fig. 2 is clearly larger than that seen in Fig. 1; this behavior is expected since the noise power has increased.
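As a rough illustration of how Eq. (9) can be applied numerically, the following MATLAB sketch uses an FFT in place of the DTFT (the system, signal lengths, and noise power are example values, not those of the report):

    % Estimate h(n) from noisy input/output data by dividing the cross-spectrum
    % by the input energy spectrum, as in Eq. (9).
    h    = [0.05 -0.1 0.2 0.4 0.2 -0.1 0.05];        % stand-in true system
    x    = [1, zeros(1, 49)];                        % unit-impulse input (zero-padded)
    Pn   = 0.01;                                     % noise power (small, for illustration)
    y    = conv(x, h) + sqrt(Pn)*randn(1, numel(x)+numel(h)-1);   % Eq. (6)
    ryx  = conv(y, fliplr(x));                       % cross-correlation, Eq. (7)
    rxx  = conv(x, fliplr(x));                       % input autocorrelation
    N    = 2^nextpow2(numel(ryx));                   % FFT length approximating the DTFT
    Hf   = fft(ryx, N) ./ fft(rxx, N);               % Eq. (9); the common lag offset cancels
    hhat = real(ifft(Hf));                           % inverse transform
    hhat = hhat(1:numel(h));                         % keep the known number of taps
    stem(0:numel(h)-1, h, 'filled'); hold on;
    stem(0:numel(h)-1, hhat, '--');                  % compare true and estimated responses
    legend('true h(n)', 'estimated h(n)'); xlabel('Sample Index, n');

Replacing the impulse input by a pseudorandom x(n) of the same power reproduces the kind of comparison made below.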

Figure 1. Sample result for the estimated system impulse response (true and estimated h(n) versus sample index n) when the input signal is x(n) = δ(n) and the noise power is P_n = 1.

Figure 2. Sample result for the estimated system impulse response when the input signal is x(n) = δ(n) and the noise power is P_n = 10.

The second signal studied is a pseudonoise random waveform of length L created using Matlab's Gaussian random number generator. The advantage of a noise waveform is that, on average, its autocorrelation function is again a unit impulse. In other words,

r_{xx}(n) = E[x(l) x(l - n)] = P_x δ(n),    (12)

where E[·] is the expected-value operator and P_x is the random waveform's average power. Of course, only a single realization of the random waveform can be transmitted; hence, the autocorrelation of the actual input waveform will not be a perfect impulse. Figure 3a shows a sample realization of a noise waveform with P_x = 1. Figure 3b shows the autocorrelation function for the realization shown in Fig. 3a, and Fig. 3c shows the autocorrelation function that results from averaging the autocorrelation over 5 waveform realizations.

Figure 3. Top panel: a sample realization of a pseudorandom waveform. Middle panel: the autocorrelation of the waveform realization. Bottom panel: the autocorrelation averaged over 5 waveform realizations.
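A small MATLAB sketch of the averaging experiment behind Eq. (12) and Fig. 3 (the length, power, and number of realizations below are example values, not necessarily those used in the report):

    % Average the autocorrelation of several Gaussian pseudonoise realizations.
    L    = 500;  Px = 1;  K = 50;             % waveform length, power, realizations (examples)
    ravg = zeros(1, 2*L - 1);
    for k = 1:K
        x    = sqrt(Px) * randn(1, L);        % one pseudonoise realization
        ravg = ravg + conv(x, fliplr(x));     % accumulate the (unnormalized) autocorrelation
    end
    ravg = ravg / K;                          % average over realizations
    plot(-(L-1):(L-1), ravg);                 % zero lag ~ L*Px (waveform energy); other lags -> 0
    xlabel('Lag Index, n'); ylabel('Avg. Autocorrelation');

As the number of realizations grows, the off-peak lags average toward zero, which is the flat-spectrum behavior noted next.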

The average power spectrum of the random waveform is the DTFT of the average autocorrelation function. Since the average autocorrelation is the unit impulse seen in Fig. 3c, the average power spectrum is flat, which is the optimum scenario for system ID.

A pseudorandom input waveform therefore has approximately the same autocorrelation function as a unit impulse. However, a single realization of the random waveform has larger energy than the unit impulse, so we expect improved performance over that obtained with a unit impulse of the same power. We also note that, for best performance, the system ID algorithm should have knowledge of the particular waveform realization. Figure 4 shows an example of estimation performance when a pseudorandom waveform of length 5 is used and the noise power is P_n = 1. The impulse response estimate in Fig. 4 is clearly improved over the estimate obtained with the unit impulse at the same noise power.

In comparing Figs. 1, 2, and 4, it becomes apparent that both the noise power and the length of the input waveform affect system ID performance. We hypothesize that SNR determines the relative performance between input waveforms having the same power spectrum (average power spectrum in the case of pseudorandom waveforms). Thus, we define SNR as

SNR = P_x / P_n,    (13)

where P_x = 1 for the unit impulse, P_x = E[x^2(n)] = 1 for the pseudorandom waveform, and P_n is the noise power.
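For reference, a one-step sketch of how a target SNR in dB maps to a noise power under the definition in (13) (example numbers only):

    Px    = 1;                         % waveform power (unit impulse or PN), per Eq. (13)
    SNRdB = 20;                        % example target SNR in dB
    Pn    = Px / 10^(SNRdB/10);        % noise power that yields the target SNR
    w     = sqrt(Pn) * randn(1, 1000); % example block of white Gaussian noise at that power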

Figure 4. Sample result for the estimated system impulse response when the input signal is a pseudonoise waveform with P_x = 1 and the noise power is P_n = 1.

To quantify mean-squared error (MSE) performance, we repeated the system ID simulation 5,000 times for the unit impulse and for pseudorandom waveforms of varying length. We also varied the SNR through control of the noise power. We then calculated the average squared error over the 5,000 trials according to

MSE = (1/5000) \sum_{k=1}^{5000} (h - \hat{h}_k)^T (h - \hat{h}_k),    (14)

where h is a column vector containing the values of the true impulse response, \hat{h}_k is a vector containing the estimated impulse response from the k-th trial, and (·)^T is the transpose operator.

The results of our Monte Carlo simulations were very interesting. The squared-error values for the unit impulse were well behaved, but the values for the pseudorandom waveforms were sometimes extremely large. Figure 5 shows a histogram of the squared-error values for the length-5 pseudorandom waveform at an SNR of 4 dB. Despite using 100 bins for the histogram, 99.8% of the squared-error values fall into the lowest bin, while the remaining values vary significantly but occur rarely. In fact, these other values occur so rarely that they cannot be seen on the vertical scale appropriate for the lowest bin; Figure 6 shows the same histogram with a reduced vertical scale.

Unfortunately, the outlying squared-error values seen in Fig. 6 de-stabilized the average squared-error results. In other words, average squared-error plots using all 5,000 trials produced curves that were not as smooth as expected or desired.
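The Monte Carlo procedure and Eq. (14) might be sketched as follows (illustrative settings; the true system, waveform length, and noise power here are stand-ins, not the report's values):

    % Per-trial squared error of the cross-correlation estimator, averaged per Eq. (14).
    Ntrials = 5000;  L = 50;  Px = 1;  Pn = 0.01;       % example settings
    h  = [0.05 -0.1 0.2 0.4 0.2 -0.1 0.05];             % stand-in true impulse response
    se = zeros(Ntrials, 1);                             % per-trial squared errors
    for k = 1:Ntrials
        x    = sqrt(Px) * randn(1, L);                  % new pseudonoise realization each trial
        y    = conv(x, h) + sqrt(Pn) * randn(1, L + numel(h) - 1);
        ryx  = conv(y, fliplr(x));  rxx = conv(x, fliplr(x));
        N    = 2^nextpow2(numel(ryx));
        hh   = real(ifft(fft(ryx, N) ./ fft(rxx, N)));  % Eq. (9) estimate
        e    = h(:) - hh(1:numel(h)).';                 % error vector (h - hhat)
        se(k) = e' * e;                                 % squared error for this trial
    end
    MSE = mean(se);                                     % Eq. (14)
    histogram(se, 100);                                 % distribution of squared errors (cf. Fig. 5)

In runs like this, a handful of poorly conditioned realizations can dominate the raw mean, which motivates the outlier handling described next.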

Figure 5. 100-bin histogram of squared-error values for SNR = 4 dB and a pseudorandom waveform with L = 5.

Figure 6. Vertically rescaled version of Figure 5 showing the rarely occurring values.

Therefore, we decided to exclude the outliers by using only the lowest 4,900 squared-error values in the mean calculation for a given waveform and SNR. The result of this procedure is shown in Fig. 7.

In Fig. 7, it is seen that the unit impulse and the pseudorandom waveform of length five (labeled PN, for pseudonoise) perform nearly the same. This was unexpected, since most of the pseudorandom waveform realizations have higher energy than the unit impulse. Our explanation for the unexpected result is that the autocorrelation properties of a given random waveform realization strongly affect performance. We believe that in extreme cases, poor autocorrelation properties produce the outlying results seen in Fig. 6. In less extreme cases, waveform realizations with weak autocorrelation properties still produce worse estimates than can be achieved with a unit impulse. Hence, on average and excluding the extreme outliers, the two waveforms perform nearly the same.
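The outlier-exclusion step described above amounts to a trimmed mean; assuming the per-trial squared errors se from the previous sketch, it could look like:

    seSorted = sort(se);                   % order the 5,000 per-trial squared errors
    keep     = 4900;                       % retain only the lowest 4,900 values
    MSEtrim  = mean(seSorted(1:keep));     % trimmed mean used for curves like those in Fig. 7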

Figure 7. (Approximate) MSE performance versus SNR, parameterized by waveform (unit impulse; PN sequences with L = 5, 50, and 500).

Figure 8. The power spectrum of a poorly performing pseudorandom waveform in (a), and a satisfactorily performing pseudorandom waveform in (b).

Another interesting aspect seen in Fig. 7 is the shift between the curves associated with pseudorandom waveforms of different lengths. The lengths of the waveforms differ by factors of 10 when going from L = 5, to L = 50, to L = 500. Likewise, the performance curves are shifted by approximately 10 log10(10) = 10 dB relative to each other. Since the average energy in the pseudorandom waveforms is E_x = L P_x, it appears that the relative shift in performance is directly proportional to the pseudorandom waveform's average energy.

Finally, we test our explanation that the correlation properties of an individual waveform strongly affect squared-error performance. To test this hypothesis, we generated noise waveforms until we found two waveforms that performed quite differently. The power spectra of these two waveforms are shown in Figs. 8a and 8b. Figure 8a, which shows the spectrum of a poorly performing waveform, has a noticeable spectral null. Since the estimated transfer function is obtained by dividing S_{yx}(f) by the waveform's power spectrum, very low values in the power spectrum correspond to dividing by a small number, and in the presence of noise, division by a small number can produce unstable results. Further investigation confirmed that poorly performing waveforms consistently had a deep null in their spectrum, while waveforms that performed well did not.
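One way to inspect an individual realization for the deep spectral nulls blamed above is to evaluate its power spectrum on a dense grid (a diagnostic sketch, not part of the original simulation):

    % Power spectrum of a single pseudonoise realization and the depth of its deepest null.
    x    = randn(1, 50);                          % one candidate waveform realization
    N    = 1024;                                  % dense frequency grid approximating the DTFT
    Sxx  = abs(fft(x, N)).^2;                     % |X(f)|^2 on the grid
    f    = (0:N-1)/N - 0.5;                       % normalized frequency axis (after fftshift)
    plot(f, 10*log10(fftshift(Sxx)));             % spectrum in dB (cf. Fig. 8)
    xlabel('Normalized Frequency'); ylabel('|X(f)|^2 (dB)');
    nullDepth = 10*log10(min(Sxx) / max(Sxx));    % deepest null relative to the peak, in dB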

C. Conclusions

We have successfully implemented a system identification algorithm based on cross-correlation between the input and output sequences of the system, and we have analyzed its performance in terms of waveform structure and SNR. This performance analysis yielded a surprising result: the L = 5 pseudorandom waveform performed approximately the same as the unit impulse despite having higher average energy. This behavior was investigated further, and we discovered that pseudorandom waveforms with a deep null in their spectrum produced unstable estimates of the system's impulse response. This instability creates a much wider performance range for pseudorandom waveforms than for the unit impulse waveform. To faithfully reflect the performance of pseudorandom waveforms, we excluded the simulation trials with large estimation errors deemed to be outliers. This approach produced stable performance results that we feel are representative of the true relative performance of the different waveforms.

III. CDMA Communication

This section's background, results, and conclusions.

IV. Final Summary and Conclusions

Potentially a section giving the conclusions from the entire project, but there is no need to rehash conclusions from the individual sections.