Theory of Telecommunications Networks

Theory of Telecommunications Networks Anton Čižmár Ján Papaj Department of Electronics and Multimedia Telecommunications

CONTENTS

Preface
1 Introduction
  1.1 Mathematical models for communication channels
  1.2 Channel capacity for digital communication
    1.2.1 Shannon Capacity and Interpretation
    1.2.2 Hartley Channel Capacity
    1.2.3 Solved Problems
  1.3 Noise in digital communication system
    1.3.1 White Noise
    1.3.2 Thermal Noise
    1.3.3 Solved Problems
  1.4 Summary
  1.5 Exercises
2 Signal and Spectra
  2.1 Deterministic and random signals
  2.2 Periodic and nonperiodic signals
  2.3 Analog and discrete signals
  2.4 Energy and power signals
  2.5 Spectral Density
    2.5.1 Energy Spectral Density
    2.5.2 Power Spectral Density
    2.5.3 Solved Problems
  2.6 Autocorrelation
    2.6.1 Autocorrelation of an Energy Signal
    2.6.2 Autocorrelation of a Periodic Signal
  2.7 Baseband versus Bandpass
  2.8 Summary
  2.9 Exercises
3 Probability and stochastic processes
  3.1 Probability
    3.1.1 Joint Events and Joint Probabilities
    3.1.2 Conditional Probabilities
    3.1.3 Statistical Independence
    3.1.4 Solved Problems
  3.2 Random Variables, Probability Distributions, and Probability Densities
    3.2.1 Statistically Independent Random Variables
    3.2.2 Statistical Averages of Random Variables
    3.2.3 Some Useful Probability Distributions
  3.3 Stochastic processes
    3.3.1 Stationary Stochastic Processes
    3.3.2 Statistical Averages
    3.3.3 Power Density Spectrum
    3.3.4 Response of a Linear Time-Invariant System (channel) to a Random Input Signal
    3.3.5 Sampling Theorem for Band-Limited Stochastic Processes
    3.3.6 Discrete-Time Stochastic Signals and Systems
    3.3.7 Cyclostationary Processes
    3.3.8 Solved Problems
  3.4 Summary
  3.5 Exercises
4 Signal space concept
  4.1 Representation of Band-Pass Signals and Systems
    4.1.1 Representation of Band-Pass Signals
    4.1.2 Representation of Band-Pass Stationary Stochastic Processes
  4.2 Introduction of the Hilbert transform
  4.3 Different look at the Hilbert transform
    4.3.1 Hilbert Transform, Analytic Signal and the Complex Envelope
    4.3.2 Hilbert Transform in Frequency Domain
    4.3.3 Hilbert Transform in Time Domain
    4.3.4 Analytic Signal
    4.3.5 Solved Problems
  4.4 Signal Space Representation
    4.4.1 Vector Space Concepts
    4.4.2 Signal Space Concepts
    4.4.3 Orthogonal Expansions of Signals
    4.4.4 Gram-Schmidt Procedure
    4.4.5 Solved Problems
    4.4.6 Summary
  4.5 Exercises
5 Digital modulation schemes
  5.1 Signal Space Representation
  5.2 Memoryless Modulation Methods
    5.2.1 Pulse-amplitude-modulated (PAM) signals (ASK)
    5.2.2 Phase-modulated signals (PSK)
    5.2.3 Quadrature Amplitude Modulation (QAM)
  5.3 Multidimensional Signals
    5.3.1 Orthogonal multidimensional signals
    5.3.2 Linear Modulation with Memory
    5.3.3 Non-Linear Modulation Methods with Memory
  5.4 Spectral Characteristics of Digitally Modulated Signals
    5.4.1 Power Spectra of Linearly Modulated Signals
    5.4.2 Power Spectra of CPFSK and CPM Signals
    5.4.3 Solved Problems
  5.5 Summary
  5.6 Exercises
6 Optimum Receivers for the AWGN Channel
  6.1 Optimum Receivers for Signals Corrupted by AWGN
    6.1.1 Correlation Demodulator
    6.1.2 Matched-Filter Demodulator
    6.1.3 The Optimum Detector
    6.1.4 The Maximum-Likelihood Sequence Detector
  6.2 Performance of the Optimum Receiver for Memoryless Modulation
    6.2.1 Probability of Error for Binary Modulation
    6.2.2 Probability of Error for M-ary Orthogonal Signals
    6.2.3 Probability of Error for M-ary Biorthogonal Signals
    6.2.4 Probability of Error for Simplex Signals
    6.2.5 Probability of Error for M-ary Binary-Coded Signals
    6.2.6 Probability of Error for M-ary PAM
    6.2.7 Probability of Error for M-ary PSK
    6.2.8 Probability of Error for QAM
  6.3 Solved Problems
  6.4 Summary
  6.5 Exercises
7 Performance analysis of digital modulations
  7.1 Goals of the Communications System Designer
  7.2 Error Probability Plane
  7.3 Nyquist Minimum Bandwidth
  7.4 Shannon-Hartley Capacity Theorem
    7.4.1 Shannon Limit
  7.5 Bandwidth-Efficiency Plane
    7.5.1 Bandwidth Efficiency of MPSK and MFSK Modulation
    7.5.2 Analogies Between Bandwidth-Efficiency and Error-Probability Planes
  7.6 Modulation and Coding Trade-Offs
  7.7 Defining, Designing, and Evaluating Digital Communication Systems
    7.7.1 M-ary Signaling
    7.7.2 Bandwidth-Limited Systems
    7.7.3 Power-Limited Systems
    7.7.4 Requirements for MPSK and MFSK Signaling
    7.7.5 Bandwidth-Limited Uncoded System Example
    7.7.6 Power-Limited Uncoded System Example
  7.8 Solved Problems
  7.9 Summary
  7.10 Exercises
8 Why use error-correction coding
  8.1 Trade-Off 1: Error Performance versus Bandwidth
  8.2 Trade-Off 2: Power versus Bandwidth
  8.3 Coding Gain
  8.4 Trade-Off 3: Data Rate versus Bandwidth
  8.5 Trade-Off 4: Capacity versus Bandwidth
  8.6 Code Performance at Low Values of E_b/N_0
  8.7 Solved Problem
  8.8 Exercise
Appendix A
  The Q-function
  The Error Function
Appendix B
  Comparison of M-ary signaling techniques
  Error performance of M-ary signaling techniques
References

PREFACE

Providing the theory of digital communication systems, this textbook prepares senior undergraduate and graduate students for the engineering practices required in the real world. With this textbook, students can understand how digital communication systems operate in practice, learn how to design subsystems, and evaluate end-to-end performance. The book contains many examples to help students achieve an understanding of the subject. The problems at the end of each chapter follow closely the order of the sections. The entire book is suitable for a one-semester course in digital communication. All materials for the teaching texts were drawn from the sources listed in the References.

4 SIGNAL SPACE CONCEPT

4.1 REPRESENTATION OF BAND-PASS SIGNALS AND SYSTEMS

Many digital information-bearing signals are transmitted by some type of carrier modulation. Signals and channels (systems) that satisfy the condition that their bandwidth is much smaller than the carrier frequency are termed narrowband band-pass signals and channels (systems). With no loss of generality and for mathematical convenience, it is desirable to reduce all band-pass signals and channels to equivalent low-pass signals and channels. As a consequence, the results on the performance of the various modulation and demodulation techniques presented in the subsequent chapters are independent of carrier frequencies and channel frequency bands.

4.1.1 Representation of Band-Pass Signals

Suppose that a real-valued signal s(t) has a frequency content concentrated in a narrow band of frequencies in the vicinity of a frequency f_c, as shown in Figure 4.1.

Figure 4.1 Spectrum S(f) of a band-pass signal, concentrated around -f_c and f_c

Our objective is to develop a mathematical representation of such signals. First, we consider a signal that contains only the positive frequencies of s(t). Such a signal may be expressed as

    S_+(f) = 2 u(f) S(f)    (4.1)

where S(f) is the Fourier transform of s(t) and u(f) is the unit step function. The equivalent time-domain expression is

    s_+(t) = \int_{-\infty}^{\infty} S_+(f) e^{j2\pi ft} df = F^{-1}[2u(f)] \star F^{-1}[S(f)]    (4.2)

The signal s_+(t) is called the analytic signal or pre-envelope of s(t). We note that

    F^{-1}[2u(f)] = \delta(t) + j\frac{1}{\pi t}    (4.3)

Hence

    s_+(t) = \left[\delta(t) + j\frac{1}{\pi t}\right] \star s(t) = s(t) + j\frac{1}{\pi t} \star s(t)    (4.4)

We define

    \hat{s}(t) = \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{s(\tau)}{t - \tau} d\tau    (4.5)

The signal \hat{s}(t) may be viewed as the output of the filter with impulse response

    h(t) = \frac{1}{\pi t}, \quad -\infty < t < \infty    (4.6)

when excited by the input signal s(t). Such a filter is called a Hilbert transformer. The frequency response of this filter is simply

    H(f) = -j\,\text{sgn}(f)    (4.7)

We observe that |H(f)| = 1 and that the phase response is \Theta(f) = -\pi/2 for f > 0 and \Theta(f) = \pi/2 for f < 0. Therefore, this filter is basically a 90° phase shifter for all frequencies in the input signal.

The analytic signal s_+(t) is a band-pass signal. We may obtain an equivalent low-pass representation by performing a frequency translation of s_+(t). Thus, we define s_l(t) by

    S_l(f) = S_+(f + f_c)    (4.8)

or, equivalently,

    s_l(t) = s_+(t) e^{-j2\pi f_c t} = [s(t) + j\hat{s}(t)] e^{-j2\pi f_c t}    (4.9)

The equivalent time-domain relation is

    s(t) + j\hat{s}(t) = s_l(t) e^{j2\pi f_c t}    (4.10)

In general, the signal s_l(t) is complex-valued and may be expressed as

    s_l(t) = x(t) + jy(t)    (4.11)

After substitution and equating real and imaginary parts,

    s(t) = x(t) \cos 2\pi f_c t - y(t) \sin 2\pi f_c t    (4.12)

    \hat{s}(t) = x(t) \sin 2\pi f_c t + y(t) \cos 2\pi f_c t    (4.13)

The expression (4.12) is the desired form for the representation of a band-pass signal. The low-frequency signal components x(t) and y(t) may be viewed as amplitude modulations impressed on the carrier components \cos 2\pi f_c t and \sin 2\pi f_c t. Since these carrier components are in phase quadrature, x(t) and y(t) are called the quadrature components of the band-pass signal.
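As a concrete check of Equations (4.9)-(4.12), the sketch below (Python with NumPy; all signal parameters are illustrative assumptions, not values from the text) builds a band-pass signal from known quadrature components, forms the analytic signal by zeroing the negative frequencies of the spectrum, frequency-translates it to obtain the equivalent low-pass signal, and recovers x(t) and y(t):

```python
import numpy as np

fs, fc = 1000.0, 100.0                 # sample rate and carrier (Hz), illustrative
t = np.arange(0, 1.0, 1 / fs)
x_true = np.cos(2 * np.pi * 3 * t)     # slowly varying quadrature components
y_true = 0.5 * np.sin(2 * np.pi * 2 * t)
s = x_true * np.cos(2 * np.pi * fc * t) - y_true * np.sin(2 * np.pi * fc * t)

# Analytic signal s_+(t): double the positive frequencies, zero the negative ones
S = np.fft.fft(s)
w = np.zeros(len(S))
w[0] = 1.0                             # keep DC as-is
w[1:len(S) // 2] = 2.0                 # double positive frequencies (Eq. 4.1)
w[len(S) // 2] = 1.0                   # keep the Nyquist bin as-is
s_plus = np.fft.ifft(S * w)

# Equivalent low-pass signal s_l(t) = s_+(t) e^{-j 2 pi fc t}  (Eq. 4.9)
s_l = s_plus * np.exp(-2j * np.pi * fc * t)
x_est, y_est = s_l.real, s_l.imag      # quadrature components (Eq. 4.11)
```

Because every tone completes an integer number of cycles in the one-second window, the FFT-based analytic signal is exact here and the recovered quadrature components match the originals.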

Another representation of the signal is

    s(t) = Re[s_l(t) e^{j2\pi f_c t}]    (4.14)

A third possible representation is obtained by expressing s_l(t) as

    s_l(t) = a(t) e^{j\Theta(t)}    (4.15)

where

    a(t) = \sqrt{x^2(t) + y^2(t)}    (4.16)

    \Theta(t) = \tan^{-1}\frac{y(t)}{x(t)}    (4.17)

Then

    s(t) = a(t) \cos[2\pi f_c t + \Theta(t)]    (4.18)

Therefore, (4.12), (4.14) and (4.18) are equivalent representations of band-pass signals. The Fourier transform of s(t) is

    S(f) = \int_{-\infty}^{\infty} s(t) e^{-j2\pi ft} dt = \int_{-\infty}^{\infty} Re[s_l(t) e^{j2\pi f_c t}] e^{-j2\pi ft} dt    (4.19)

Use of the identity

    Re(\xi) = \frac{1}{2}(\xi + \xi^*)    (4.20)

in (4.19) yields the result

    S(f) = \frac{1}{2}[S_l(f - f_c) + S_l^*(-f - f_c)]    (4.21)

This is the basic relationship between the spectrum S(f) of the real band-pass signal and the spectrum S_l(f) of the equivalent low-pass signal.

The energy in the signal s(t) is defined as

    E = \int_{-\infty}^{\infty} s^2(t) dt = \int_{-\infty}^{\infty} \{Re[s_l(t) e^{j2\pi f_c t}]\}^2 dt    (4.22)

When the identity in Equation (4.20) is used in Equation (4.22), we obtain the following result:

    E = \frac{1}{2}\int_{-\infty}^{\infty} |s_l(t)|^2 dt + \frac{1}{2}\int_{-\infty}^{\infty} |s_l(t)|^2 \cos[4\pi f_c t + 2\Theta(t)] dt    (4.23)

Consider the second integral in Equation (4.23). Since the signal s(t) is narrowband, the real envelope a(t) = |s_l(t)| or, equivalently, a^2(t) varies slowly relative to the rapid variations exhibited by the cosine function. A graphical illustration of the integrand in the second integral of Equation (4.23) is shown in Figure 4.2. The value of the integral is just the net area under the cosine function modulated by a^2(t). Since the modulating waveform varies slowly relative to the cosine function, the net area contributed by the second integral is very small relative to the value of the first integral in Equation (4.23) and, hence, it can be neglected. Thus, for all practical purposes, the energy in the band-pass signal s(t), expressed in terms of the equivalent low-pass signal s_l(t), is

    E = \frac{1}{2}\int_{-\infty}^{\infty} |s_l(t)|^2 dt    (4.24)

where |s_l(t)| is just the envelope a(t) of s(t).

Figure 4.2 The signal a^2(t) \cos[4\pi f_c t + 2\Theta(t)]

4.1.2 Representation of Band-Pass Stationary Stochastic Processes

Suppose that n(t) is a sample function of a wide-sense stationary stochastic process with zero mean and power spectral density \Phi_{nn}(f). The power spectral density is assumed to be zero outside of an interval of frequencies centered around f_c. The stochastic process n(t) is said to be a narrowband band-pass process if the width of the spectral density is much smaller than f_c. Under this condition, a sample function of n(t) can be represented by any of the three equivalent forms

    n(t) = a(t) \cos[2\pi f_c t + \Theta(t)]    (4.25)

    n(t) = x(t) \cos 2\pi f_c t - y(t) \sin 2\pi f_c t    (4.26)

    n(t) = Re[z(t) e^{j2\pi f_c t}]    (4.27)

where a(t) is the envelope and \Theta(t) is the phase of the real-valued signal, x(t) and y(t) are the quadrature components of n(t), and z(t) is called the complex envelope of n(t).

Let us consider (4.26) in more detail. First, we observe that if n(t) is zero mean, then x(t) and y(t) must also have zero mean values. In addition, the stationarity of n(t) implies that the autocorrelation and cross-correlation functions of x(t) and y(t) satisfy the following properties:

    \phi_{xx}(\tau) = \phi_{yy}(\tau)    (4.28)

    \phi_{xy}(\tau) = -\phi_{yx}(\tau)    (4.29)

The autocorrelation function of the band-pass stochastic process n(t) is

    \phi_{nn}(\tau) = \phi_{xx}(\tau) \cos 2\pi f_c \tau - \phi_{yx}(\tau) \sin 2\pi f_c \tau    (4.30)

The power density spectrum of the stochastic process n(t) is the Fourier transform of \phi_{nn}(\tau):

    \Phi_{nn}(f) = \frac{1}{2}[\Phi_{zz}(f - f_c) + \Phi_{zz}(-f - f_c)]    (4.31)

4.2 INTRODUCTION OF THE HILBERT TRANSFORM

Signal processing is a fast-growing area today, and the desired effectiveness in the utilization of bandwidth and energy makes the progress even faster. Special signal processors have been developed to make it possible to implement the theoretical knowledge in an efficient way. Signal processors are nowadays frequently used in equipment for radio, transportation, medicine, production, etc.

In 1743 the famous Swiss mathematician Leonhard Euler (1707-1783) derived the formula

    e^{jx} = \cos x + j \sin x    (4.32)

150 years later the physicist Arthur E. Kennelly and the scientist Charles P. Steinmetz used this formula to introduce the complex notation of harmonic waveforms in electrical engineering, that is,

    e^{j\omega t} = \cos \omega t + j \sin \omega t    (4.33)

Later on, at the beginning of the 20th century, the German scientist David Hilbert (1862-1943) finally showed that the function \sin \omega t is the Hilbert transform of \cos \omega t. This gave us the \pm\pi/2 phase-shift operator, which is a basic property of the Hilbert transform.

4.3 DIFFERENT LOOK AT THE HILBERT TRANSFORM

A real function and its Hilbert transform are related to each other in such a way that together they create a so-called strong analytic signal. The strong analytic signal can be written with an amplitude and a phase, where the derivative of the phase can be identified as the instantaneous frequency. The Fourier transform of the strong analytic signal gives us a one-sided spectrum in the frequency domain. It is not hard to see that a function and its Hilbert transform are also orthogonal. This orthogonality is not always realized in applications because of truncations in numerical calculations. However, a function and its Hilbert transform have the same energy, and therefore the energy can be used to measure the calculation accuracy of the approximated Hilbert transform. The Hilbert transform defined in the time domain is a convolution between the Hilbert transformer 1/(\pi t) and a function f(t).

4.3.1 Hilbert Transform, Analytic Signal and the Complex Envelope

In Digital Signal Processing we often need to look at relationships between the real and imaginary parts of a complex signal. These relationships are generally described by Hilbert transforms. The Hilbert transform not only helps us relate the I and Q components, but it is also used to create a special class of causal signals called analytic signals, which are especially important in simulation. Analytic signals help us represent bandpass signals as complex signals which have especially attractive properties for signal processing.
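The claim above that the derivative of the phase of the analytic signal is the instantaneous frequency can be illustrated numerically. The sketch below (Python with NumPy; the chirp parameters are assumptions chosen for illustration) builds a linear chirp, forms its analytic signal by keeping only the positive frequencies, and differentiates the unwrapped phase:

```python
import numpy as np

fs = 8000.0                                   # sample rate (Hz), illustrative
t = np.arange(0, 1.0, 1 / fs)
f0, f1 = 100.0, 300.0                         # chirp sweeping 100 Hz -> 300 Hz
phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) * t ** 2)
s = np.cos(phase)                             # real chirp signal

# analytic signal: one-sided spectrum (positive frequencies doubled)
f = np.fft.fftfreq(len(s), 1 / fs)
z = np.fft.ifft(np.fft.fft(s) * (1 + np.sign(f)))

# instantaneous frequency = (1/2pi) * d/dt of the unwrapped analytic phase
inst_freq = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)
expected = f0 + (f1 - f0) * t[:-1]            # true instantaneous frequency
```

Away from the window edges (where the FFT-based analytic signal has small artifacts because the chirp is not periodic), the estimated instantaneous frequency tracks the linear sweep closely.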

Figure 4.3 Role of the Hilbert transform in modulation: the input g(t) is multiplied by the oscillator output \cos 2\pi f_c t and, through a Hilbert transformer applied to that carrier, by \sin 2\pi f_c t; low-pass filtering of the two products yields the I and Q components.

The Hilbert transform is not a particularly complex concept and can be much better understood if we take an intuitive approach first, before delving into its formula, which is related to convolution and is hard to grasp. The diagram in Figure 4.3, often seen in textbooks describing modulation, gives us a clue as to what a Hilbert transform does. The role of the Hilbert transform, as we can guess here, is to take the carrier, which is a cosine wave, and create a sine wave out of it. So let's take a closer look at a cosine wave to see how this is done by the Hilbert transformer.

Figure 4.4 a) shows the amplitude and the phase spectrum of a cosine wave. Now recall that the Fourier series is written as

    f(t) = \sum_n a_n \cos n\omega t + \sum_n b_n \sin n\omega t    (4.34)

where a_n and b_n are the spectral amplitudes of the cosine and sine waves. Now take a look at the phase spectrum. The phase spectrum is computed by

    \phi_n = \tan^{-1}\frac{a_n}{b_n}    (4.35)

A cosine wave has no sine spectral content, so b_n is zero. The phase calculated from the above formula is 90° for both the positive and the negative frequency. The wave has two spectral components, each of magnitude A/2, both positive and lying in the real plane. (The real plane is described as the one passing vertically through the real axis (R-V plane), and the imaginary plane as the one passing horizontally through the imaginary axis (R-I plane).)
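The amplitude bookkeeping above can be made concrete. In this sketch (Python with NumPy; the harmonic number and sample count are arbitrary choices), a cosine is projected onto cosine and sine basis functions over one fundamental period, confirming that its sine amplitude b_n is zero:

```python
import numpy as np

n_samples = 1000
t = np.arange(n_samples) / n_samples          # one period of the fundamental
v = np.cos(2 * np.pi * 5 * t)                 # cosine at the 5th harmonic

def fourier_coeffs(signal, n):
    """Spectral amplitudes a_n and b_n of Eq. (4.34), by projection
    onto the cosine and sine basis functions."""
    a_n = 2 * np.mean(signal * np.cos(2 * np.pi * n * t))
    b_n = 2 * np.mean(signal * np.sin(2 * np.pi * n * t))
    return a_n, b_n

a5, b5 = fourier_coeffs(v, 5)                 # expect a5 = 1, b5 = 0
```

Because the harmonic completes an integer number of cycles over the window, the discrete projections are exact: all the content of the cosine lands in a_n, none in b_n.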

Figure 4.4 a) Cosine wave properties: spectral amplitudes of magnitude A/2 at -f and f, both in the real plane, and the magnitude spectrum. b) Sine wave properties: spectral amplitudes of magnitude A/2 at -f and f, lying in the imaginary plane with opposite signs, and the magnitude spectrum.

Figure 4.4 b) shows the same two spectra for a sine wave. The sine wave phase is not symmetric because the amplitude spectrum is not symmetric. The quantity a_n is zero, and b_n has either a positive or a negative value. The phase is -90° for the positive frequency and +90° for the negative frequency.

Now we wish to convert the cosine wave to a sine wave. There are two ways of doing that, one in the time domain and the other in the frequency domain.

4.3.2 Hilbert Transform in Frequency Domain

Now compare Figure 4.4 a) and b), in particular the spectral amplitudes. The cosine spectral amplitudes are both positive and lie in the real plane. The sine wave has spectral components that lie in the imaginary plane and are of opposite sign. To turn the cosine into a sine, as shown in Figure 4.5, we need to rotate the negative-frequency component of the cosine by +90° and the positive-frequency component by -90°. In other words, we need to rotate the positive-frequency phasor by -90°, i.e. multiply it by -j, and rotate the negative-frequency phasor by +90°, i.e. multiply it by +j.

Figure 4.5 Rotating the phasors (+90° at the negative frequency, -90° at the positive frequency) to create a sine wave out of a cosine

We can describe this transformation process, called the Hilbert transform, as follows:

All negative frequencies of a signal get a +90° phase shift and all positive frequencies get a -90° phase shift. If we put a cosine wave through this transformer, we get a sine wave. This phase-rotation process holds for all signals put through the Hilbert transform, not just the cosine. For any signal g(t), its Hilbert transform \hat{g}(t) has the following property:

    \hat{G}(f) = -j G(f) for f > 0, \quad \hat{G}(f) = +j G(f) for f < 0    (4.36)

(Putting a little hat over the capital letter representing the time-domain signal is the typical way a Hilbert transform is written.) A sine wave through a Hilbert transformer will come out as a negative cosine. A negative cosine will come out as a negative sine wave, and one more transformation will return it to the original cosine wave, each time its phase being changed by -90°:

    \cos \omega t \to \sin \omega t \to -\cos \omega t \to -\sin \omega t \to \cos \omega t

For this reason the Hilbert transform is also called a quadrature filter. We can draw this filter as shown below in Figure 4.6.

Figure 4.6 The Hilbert transform shifts the phase of positive frequencies by -90° and of negative frequencies by +90°.

So here are two things we can say about the Hilbert transform:

1. It is a peculiar sort of filter that changes the phase of the spectral components depending on the sign of their frequency.
2. It only affects the phase of the signal. It has no effect on the amplitude at all.

4.3.3 Hilbert Transform in Time Domain

Now look at the signal in the time domain. Given a signal s(t), the Hilbert transform of this signal is defined as

    \hat{s}(t) = \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{s(\tau)}{t - \tau} d\tau    (4.37)

Another way to write this definition is to recognize that the Hilbert transform is also the convolution of the function 1/(\pi t) with the signal s(t). So we can write the above equation as

    \hat{s}(t) = \frac{1}{\pi t} \star s(t)    (4.38)
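The rule in (4.36) translates directly into a few lines of code. The sketch below (Python with NumPy; it assumes a signal that is periodic over the window so the DFT bins line up exactly) multiplies the spectrum by -j sgn(f) and confirms that a cosine comes out as a sine and a sine as a negative cosine:

```python
import numpy as np

def hilbert_transform(s):
    """Frequency-domain Hilbert transform: multiply the spectrum by
    -j*sgn(f), i.e. -90 deg for f > 0 and +90 deg for f < 0 (Eq. 4.36)."""
    f = np.fft.fftfreq(len(s))
    return np.fft.ifft(np.fft.fft(s) * (-1j) * np.sign(f)).real

t = np.arange(1024) / 1024.0
cosine = np.cos(2 * np.pi * 8 * t)
sine = np.sin(2 * np.pi * 8 * t)
```

Chaining the function matches the cycle described above: applying it twice to a cosine yields a negative cosine, and four applications return the original signal.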

Achieving a Hilbert transform in the time domain means convolving the signal with the function 1/(\pi t). Why this function; what is its significance? Let's look at the Fourier transform of this function. Given in Equation (4.39), the transform looks a lot like the Hilbert transform we talked about before:

    F\left[\frac{1}{\pi t}\right] = -j\,\text{sgn}(f)    (4.39)

The term sgn in Equation (4.39), called signum, is simpler than it seems. Here is the way we could have written it, which would have been more understandable:

    \text{sgn}(f) = 1 for f > 0, \quad 0 for f = 0, \quad -1 for f < 0    (4.40)

Figure 4.7 The signum function decomposed into a unit step function and a constant: sgn(f) = 2u(f) - 1

As a shortcut, writing sgn is useful, but it is better understood as the sum of the two much simpler functions shown in Figure 4.7. (We will use this relationship later.)

    \text{sgn}(f) = 2u(f) - 1    (4.41)

We see in Figure 4.8 that although 1/(\pi t) is a real function, it has a Fourier transform that lies strictly in the imaginary plane. Do you recall what this means in terms of Fourier series coefficients? What does it tell us about a function if it has no real components in its Fourier transform? It says that this function can be represented completely by a sum of sine waves. It has no cosine component at all.

Figure 4.8 The function 1/(\pi t) and its Fourier transform -j sgn(f)

In Figure 4.9, we see a function composed of a sum of 50 sine waves. We see the similarity of this function with 1/(\pi t). Now you can see that although the function 1/(\pi t) looks nothing at all like a sinusoid, we can still approximate it with a sum of sinusoids.

The function 1/(\pi t) gives us a spectrum that explains the Hilbert transform in the time domain, albeit this way of looking at the Hilbert transform is indeed very hard to grasp. We limit our discussion of the Hilbert transform to the frequency domain due to this difficulty.

Figure 4.9 Approximating the function 1/(\pi t) with a sum of 50 sine waves

We can add the following to our list of observations about the Hilbert transform:

3. The signal and its Hilbert transform are orthogonal. This is because by rotating the signal 90° we have made it orthogonal to the original signal, that being the definition of orthogonality.
4. The signal and its Hilbert transform have identical energy, because a phase shift does not change the energy of the signal; only amplitude changes can do that.

4.3.4 Analytic Signal

The Hilbert transform has other interesting properties. One of these comes in handy in the formulation of an analytic signal. Analytic signals are used in double- and single-sideband processing (more about SSB and DSB later) as well as in creating the I and Q components of a real signal. An analytic signal is defined as follows:

    s_+(t) = s(t) + j\hat{s}(t)    (4.42)

An analytic signal is a complex signal created by taking a signal and then adding, in quadrature, its Hilbert transform. It is also called the pre-envelope of the real signal. So what is the analytic signal of a cosine? Substituting \cos \omega t for s(t) in Equation (4.42), and knowing that its Hilbert transform is a sine, we get

    s_+(t) = \cos \omega t + j \sin \omega t = e^{j\omega t}    (4.43)

The analytic function of a cosine is the now familiar phasor, the complex exponential e^{j\omega t}. What is the analytic signal of a sine? Now substitute \sin \omega t for s(t) in Equation (4.42); knowing that its Hilbert transform is a negative cosine, we once again get a complex exponential:

    s_+(t) = \sin \omega t - j \cos \omega t = -j e^{j\omega t}    (4.44)
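Observations 3 and 4 above (orthogonality and equal energy) can be verified numerically. The sketch below (Python with NumPy; the random band-limited test signal is an arbitrary choice for illustration) applies the frequency-domain Hilbert transform to a zero-mean signal and checks both properties:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
bins = np.arange(1, 200)                       # occupied DFT bins (no DC)
spec = np.zeros(n, dtype=complex)
spec[bins] = rng.normal(size=bins.size) + 1j * rng.normal(size=bins.size)
spec[-bins] = np.conj(spec[bins])              # Hermitian symmetry -> real s(t)
s = np.fft.ifft(spec).real

# frequency-domain Hilbert transform: multiply the spectrum by -j*sgn(f)
f = np.fft.fftfreq(n)
s_hat = np.fft.ifft(np.fft.fft(s) * (-1j) * np.sign(f)).real

inner = np.dot(s, s_hat)                       # observation 3: should be ~0
energy_ratio = np.sum(s_hat ** 2) / np.sum(s ** 2)   # observation 4: ~1
```

For a zero-mean signal the inner product vanishes (up to floating-point error) and the energies agree exactly, since the transform only rotates phases.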

Do you remember what the spectrum of a complex exponential looks like? To remind you, the figure is repeated here.

Figure 4.10 Fourier transform of a complex exponential: a single spectral line at the positive frequency f

We can see from the figure above that whereas the spectrum of a sine or cosine spans both the negative and positive frequencies, the spectrum of the analytic signal, in this case the complex exponential, is present only in the positive domain. This is true for both sine and cosine, and in fact for all real signals. Restating the result: the analytic signal of both the sine and the cosine is the complex exponential. Even though both sine and cosine have a two-sided spectrum, as we saw in the figures above, the complex exponential, which is the analytic signal of a sinusoid, has a one-sided spectrum. We can generalize from this:

An analytic signal (composed of a real signal and its Hilbert transform) has a spectrum that exists only in the positive frequency domain.

Let's take a look at the analytic signal again. The conjugate of this signal, s(t) - j\hat{s}(t), is also a useful quantity. It has components only in the negative frequencies and can be used to separate out the lower sidebands.

Now back to the analytic signal. Let's extend our understanding by taking the Fourier transform of both sides of Equation (4.42). We get

    S_+(f) = S(f) + j[-j\,\text{sgn}(f)]S(f) = S(f) + \text{sgn}(f)S(f)    (4.45)

The first term is the Fourier transform of the signal, and the second term is the Fourier transform of j times its Hilbert transform. Using the signum relationship (4.41), we can rewrite Equation (4.45) as

    S_+(f) = [1 + \text{sgn}(f)]S(f) = 2u(f)S(f)    (4.46)

One more simplification gives us

    S_+(f) = 2S(f) for f > 0, \quad S(0) for f = 0, \quad 0 for f < 0    (4.47)
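Equations (4.45)-(4.47) can be checked directly. The following sketch (Python with NumPy; the two-tone test signal is an assumption for illustration) forms S_+(f) = [1 + sgn(f)]S(f) and confirms that the result has doubled positive-frequency components, no negative-frequency components, and the original signal as its real part:

```python
import numpy as np

n = 1024
t = np.arange(n) / n
s = np.cos(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 25 * t)

S = np.fft.fft(s)
f = np.fft.fftfreq(n)
S_plus = (1 + np.sign(f)) * S                 # Eq. (4.46): S_+(f) = 2u(f)S(f)
s_plus = np.fft.ifft(S_plus)                  # analytic signal s(t) + j s_hat(t)
```

The real part of the inverse transform reproduces s(t), so the one-sided spectrum loses no information about the original real signal.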

This is a very important result, and it is applicable to both lowpass and modulated signals. For a modulated or bandpass signal, the net effect is to double the spectral magnitudes and chop off all negative components.

Complex Envelope

The complex envelope is defined as

    s_l(t) = s_+(t) e^{-j2\pi f_c t} = [s(t) + j\hat{s}(t)] e^{-j2\pi f_c t}    (4.48)

We now see clearly that the complex envelope is just the frequency-shifted version of the analytic signal. Recognizing that multiplication by a complex exponential in the time domain results in a frequency shift in the frequency domain, and using the Fourier transform results for the analytic signal above, we get

    S_l(f) = S_+(f + f_c) = 2S(f + f_c) for f > -f_c, \quad 0 otherwise    (4.49)

So here is what we have been trying to get at all this time. This result says that the Fourier transform of the complex envelope is just the one-sided spectrum translated down to baseband. The carrier drops out entirely and the spectrum is no longer symmetrical. This property is very valuable in simulation. We no longer have to do the simulation at carrier frequencies, but only at the highest frequency of the baseband signal. The process applies equally to other transformations such as filters, which are also down-shifted. It even works when non-linearities are present in the channel and result in additional frequencies. There are other uses of the complex representation, which we will discuss as we explore these topics; however, its main use is in simulation.

4.3.5 Solved Problems

Let's do an example. Here is a real baseband signal (the factor 2\pi has been left out of the arguments for purposes of simplification):

    s(t) = 4 \cos 2t + 6 \sin 3t

Figure 4.11 The baseband signal s(t)

The spectrum of this signal is shown in Figure 4.12, both as individual spectral amplitudes and as a magnitude spectrum. The magnitude spectrum shows one spectral component of magnitude 2 at f = 2 and -2 and another one of magnitude 3 at f = 3 and -3.

Figure 4.12 Spectral amplitudes and the magnitude spectrum of s(t)

Now let's multiply s(t) with a carrier signal \cos 100t to modulate it and to create a bandpass signal:

    s_m(t) = s(t) \cos 100t = 4 \cos 2t \cos 100t + 6 \sin 3t \cos 100t

Figure 4.13 The modulated signal and its envelope

Let's take the Hilbert transform of this signal. But before we do that, we need to simplify the above so that we have only sinusoids and not their products. This step will make it easy to compute the Hilbert transform. Using the trigonometric relationships

    \sin A \cos B = \frac{1}{2}[\sin(A + B) + \sin(A - B)]

    \cos A \cos B = \frac{1}{2}[\cos(A + B) + \cos(A - B)]

we rewrite the signal as

    s_m(t) = 2 \cos 98t + 2 \cos 102t - 3 \sin 97t + 3 \sin 103t

The envelope of the modulated signal is the information signal s(t). Now we take the Hilbert transform of each term (a cosine becomes a sine; a sine becomes a negative cosine) and get

    \hat{s}_m(t) = 2 \sin 98t + 2 \sin 102t + 3 \cos 97t - 3 \cos 103t

Now create the analytic signal by adding the original signal and its Hilbert Transform. 2cos2 00 2cos2 00 3sin3 00 3sin3 00 2sin2 00 2sin2 00 3cos3 00 3cos3 00 Let s once again rearrange the terms in the above signal 2cos2 00 2sin2 00 2 cos2 00 2sin2 00 3 sin3 00 3cos3 00 3 sin3 00 3cos3 00 Recognizing that each pair of terms is the Euler s representation of a sinusoid, we can now rewrite the analytic signal as 4cos26sin3 But wait a minute, isn t this the original signal and the carrier written in the complex exponential? So why all the calculations just to get the original signal back? Now let s take the Fourier Transform of the analytic signal and the complex envelope we have computed to show the real advantage of the complex envelope representation of signals. 6 4 2 3 The magnitude Spectrum of the Complex Envelope Frequency Frequency 02 03 The Analytic Signal Fig.4.4 The Magnitude Spectrum of the Complex Envelope vs. The Analytic Signal Although this was a passband signal, we see that its complex envelope spectrum is centered around zero and not the carrier frequency. Also the spectral components are double those in Figure 4.2 and they are only on the positive side. If you think the result looks suspiciously like a one-sided Fourier transform, then you would be right. We do all this because of something Nyquist said. He said that in order to properly reconstruct a signal, any signal, baseband or passband, needs to be sampled at least two times its highest spectral frequency. That requires that we sample at frequency of 200. But we just showed that if we take a modulated signal and go through all this math and create an analytic signal (which by the way does not require any knowledge of the original signal) we can separate the information signal the baseband signal s(t)) from the carrier. We do this by dividing the analytic signal by the carrier. Now all we have left is the baseband signal. 
All processing can then be done at a sampling frequency of 6 (two times the maximum baseband frequency of 3) instead of 200.
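The construction above can be checked numerically. Below is a minimal sketch, assuming the example signal s(t) = 4 cos 2t + 6 sin 3t and the carrier cos 100t (frequencies in rad/s, as in the text), using SciPy's FFT-based `hilbert`, which returns the analytic signal directly:

```python
import numpy as np
from scipy.signal import hilbert

# Worked example: s(t) = 4 cos 2t + 6 sin 3t modulated by a carrier cos 100t.
N = 4096
t = np.arange(N) * 2 * np.pi / N            # one full period of every tone
s = 4 * np.cos(2 * t) + 6 * np.sin(3 * t)   # baseband information signal
x = s * np.cos(100 * t)                     # modulated passband signal

z = hilbert(x)                              # analytic signal z = x + j*x_hat
envelope = z * np.exp(-1j * 100 * t)        # divide out the carrier

print(np.max(np.abs(envelope.real - s)))    # ~0: the envelope recovers s(t)
print(np.max(np.abs(envelope.imag)))        # ~0: no quadrature component
```

Because every tone completes an integer number of cycles in the chosen window, the FFT-based analytic signal is essentially exact here; with arbitrary record lengths, `hilbert` shows edge effects at the ends of the window.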

The point here is that this mathematical concept helps us get around Nyquist's signal processing requirements for the sampling of bandpass systems. The complex envelope is useful primarily for passband signals. For a lowpass signal, the complex envelope of the signal is the signal itself. For a passband signal, however, the complex envelope representation allows us to easily separate out the carrier. Take a look at the analytic signal for this example again:

z(t) = (4 cos 2t + 6 sin 3t) e^{j100t}

We see the advantage of this form right away. The complex envelope, 4 cos 2t + 6 sin 3t, is just the low-pass part of the analytic signal; the analytic signal is this low-pass signal multiplied by the complex exponential at the carrier frequency. The Fourier transform of this representation translates the signal back down to baseband (doubled, with no negative-frequency components), making it possible to get around the Nyquist sampling requirement and reduce the computational load.

4.4 SIGNAL SPACE REPRESENTATION

In this section, we demonstrate that signals have characteristics that are similar to vectors and develop a vector representation for signal waveforms.

4.4.1 Vector Space Concepts

A vector v in n-dimensional space is characterized by its n components v_1, v_2, ..., v_n. It may also be represented as a linear combination of unit vectors or basis vectors e_i:

v = Σ_{i=1}^{n} v_i e_i   (4.50)

The inner product of two n-dimensional vectors v_1 = (v_11, v_12, ..., v_1n) and v_2 = (v_21, v_22, ..., v_2n) is

v_1 · v_2 = Σ_{i=1}^{n} v_1i v_2i   (4.51)

Two vectors are orthogonal if

v_1 · v_2 = 0   (4.52)

The norm of a vector (simply its length) is

‖v‖ = (Σ_{i=1}^{n} v_i²)^{1/2}   (4.53)

A set of m vectors is said to be orthonormal if the vectors are mutually orthogonal and each vector has unit norm. A set of m vectors is said to be linearly independent if no one vector can be represented as a linear combination of the remaining vectors. The norm square of the sum of two vectors may be expressed as

‖v_1 + v_2‖² = ‖v_1‖² + ‖v_2‖² + 2 v_1 · v_2   (4.54)

If v_1 and v_2 are orthogonal, then

‖v_1 + v_2‖² = ‖v_1‖² + ‖v_2‖²   (4.55)

This is the Pythagorean relation for two orthogonal n-dimensional vectors. Finally, let us review the Gram-Schmidt procedure for constructing a set of orthonormal vectors from a set of n-dimensional vectors v_i, 1 ≤ i ≤ m. We begin by arbitrarily selecting a vector from the set, say v_1. By normalizing its length, we obtain the first vector:

u_1 = v_1 / ‖v_1‖   (4.56)

Next, we may select v_2 and, first, subtract the projection of v_2 onto u_1. Thus, we obtain

u_2' = v_2 - (v_2 · u_1) u_1   (4.57)

Then, we normalize the vector u_2' to unit length. This yields

u_2 = u_2' / ‖u_2'‖   (4.58)

The procedure continues by selecting v_3 and subtracting the projections of v_3 onto u_1 and u_2. Thus, we have

u_3' = v_3 - (v_3 · u_1) u_1 - (v_3 · u_2) u_2   (4.59)

Then, the orthonormal vector u_3 is

u_3 = u_3' / ‖u_3'‖   (4.60)

By continuing this procedure, we construct a set of n_1 orthonormal vectors, where, in general, n_1 ≤ min(m, n).

4.4.2 Signal Space Concepts

The inner product of two generally complex-valued signals s_1(t) and s_2(t) is

⟨s_1, s_2⟩ = ∫ s_1(t) s_2*(t) dt   (4.61)

The norm of a signal is

‖s‖ = (∫ |s(t)|² dt)^{1/2}   (4.62)

The triangle inequality is

‖s_1 + s_2‖ ≤ ‖s_1‖ + ‖s_2‖   (4.63)

The Cauchy-Schwarz inequality is

|⟨s_1, s_2⟩| ≤ ‖s_1‖ · ‖s_2‖   (4.64)

4.4.3 Orthogonal Expansions of Signals

In this section, we develop a vector representation for signal waveforms, and, thus, we demonstrate an equivalence between a signal waveform and its vector representation.
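The vector Gram-Schmidt recursion above translates directly into code. A minimal NumPy sketch (the helper name and the test vectors are illustrative choices, not from the text); dependent inputs are dropped, so the number of output vectors is n_1 ≤ min(m, n):

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-10):
    """Orthonormalize a list of n-dimensional vectors (Eqs. 4.56-4.60)."""
    basis = []
    for v in vectors:
        w = np.asarray(v, dtype=float).copy()
        for u in basis:
            w -= (w @ u) * u          # subtract projection onto earlier u_i
        norm = np.linalg.norm(w)
        if norm > tol:                # keep only independent directions
            basis.append(w / norm)
    return np.array(basis)

# v3 = v1 + v2 is linearly dependent, so only two orthonormal vectors result.
U = gram_schmidt([[1.0, 1, 0], [1.0, 0, 1], [2.0, 1, 1]])
print(U.shape)                        # (2, 3)
print(np.round(U @ U.T, 12))          # 2x2 identity: an orthonormal set
```

Subtracting the projection from the running residual `w` (rather than from the original vector) is the numerically more stable "modified" form of the same procedure.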

Suppose that s(t) is a deterministic real-valued signal with finite energy

E_s = ∫ s²(t) dt   (4.65)

Suppose further that there exists a set of orthonormal functions f_n(t), n = 1, 2, ..., K. We may approximate the signal s(t) by a weighted linear combination of these functions, i.e.,

ŝ(t) = Σ_{k=1}^{K} s_k f_k(t)   (4.66)

where the s_k are the coefficients in the approximation of s(t). The approximation error is

e(t) = s(t) - ŝ(t)   (4.67)

Let us select the coefficients so as to minimize the energy of the approximation error,

E_e = ∫ [s(t) - Σ_{k=1}^{K} s_k f_k(t)]² dt   (4.68)

The optimum coefficients in the series expansion of s(t) may be found by differentiating Equation 4.68 with respect to each of the coefficients and setting the first derivatives to zero; this yields s_k = ∫ s(t) f_k(t) dt. Under the condition that the minimum error energy is zero, we may express s(t) as

s(t) = Σ_{k=1}^{K} s_k f_k(t)   (4.69)

When every finite-energy signal can be represented by a series expansion of the form (4.69) with zero error energy, the set of orthonormal functions {f_n(t)} is said to be complete.

4.4.4 Gram-Schmidt procedure

We have a set of finite-energy signal waveforms s_i(t), i = 1, 2, ..., M, and we wish to construct a set of orthonormal waveforms. The first orthonormal waveform is simply constructed as

f_1(t) = s_1(t) / √E_1   (4.70)

Thus, f_1(t) is simply s_1(t) normalized to unit energy. The second waveform is constructed from s_2(t) by first computing the projection of s_2(t) onto f_1(t), which is

c_12 = ∫ s_2(t) f_1(t) dt   (4.71)

Then

f_2'(t) = s_2(t) - c_12 f_1(t)   (4.72)

f_2(t) = f_2'(t) / √E_2   (4.73)

where E_2 denotes the energy of f_2'(t). And again, in general,

f_k(t) = f_k'(t) / √E_k   (4.74)

where

f_k'(t) = s_k(t) - Σ_{i=1}^{k-1} c_ik f_i(t)   (4.75)

and

c_ik = ∫ s_k(t) f_i(t) dt   (4.76)

Thus, the orthogonalization process is continued until all the M signal waveforms have been exhausted and N ≤ M orthonormal waveforms have been constructed. The dimensionality N of the signal space will be equal to M if all the signal waveforms are linearly independent, i.e., none of the signal waveforms is a linear combination of the other signal waveforms. Once we have constructed the set of orthonormal waveforms {f_n(t)}, we can express the M signals as linear combinations of the f_n(t). Thus we may write

s_k(t) = Σ_{n=1}^{N} s_kn f_n(t)   (4.77)

and

E_k = ∫ s_k²(t) dt = Σ_{n=1}^{N} s_kn²   (4.78)

Based on the expression in Equation 4.77, each signal may be represented by the vector

s_k = (s_k1, s_k2, ..., s_kN)   (4.79)

or, equivalently, as a point in N-dimensional signal space with coordinates s_kn, n = 1, 2, ..., N. The energy in the k-th signal is simply the square of the length of the vector or, equivalently, the square of the Euclidean distance from the origin to that point in the N-dimensional space. Thus, any signal can be represented geometrically as a point in the signal space spanned by the orthonormal functions.

We have demonstrated that a set of M finite-energy waveforms can be represented by a weighted linear combination of orthonormal functions of dimensionality N ≤ M. The functions {f_n(t)} are obtained by applying the Gram-Schmidt orthogonalization procedure to {s_i(t)}. It should be emphasized, however, that the functions obtained from the Gram-Schmidt procedure are not unique. If we alter the order in which the orthogonalization of the signals is performed, the orthonormal waveforms will be different and the corresponding vector representation of the signals will depend on the choice of the orthonormal functions. Nevertheless, the vectors will retain their geometrical configuration and their lengths will be invariant to the choice of orthonormal functions.

The orthogonal expansions described above were developed for real-valued signal waveforms. Finally, let us consider the case in which the signal waveforms are band-pass and represented as

s_m(t) = Re[ s_lm(t) e^{j2πf_c t} ],  m = 1, 2, ..., M   (4.80)

where s_lm(t) denote the equivalent low-pass signals. Signal energy may be expressed either in terms of s_m(t) or s_lm(t), as

E_m = ∫ s_m²(t) dt = ½ ∫ |s_lm(t)|² dt   (4.81)

The similarity between any pair of signal waveforms, say s_m(t) and s_k(t), is measured by the normalized cross-correlation

(1/√(E_m E_k)) ∫ s_m(t) s_k(t) dt   (4.82)

We define the complex-valued cross-correlation coefficient ρ_km as

ρ_km = (1 / (2√(E_m E_k))) ∫ s_lm(t) s_lk*(t) dt   (4.83)

Then

Re(ρ_km) = (1/√(E_m E_k)) ∫ s_m(t) s_k(t) dt   (4.84)

or, equivalently,

Re(ρ_km) = s_m · s_k / (‖s_m‖ ‖s_k‖) = s_m · s_k / √(E_m E_k)   (4.85)

The cross-correlation coefficients between pairs of signal waveforms or signal vectors comprise one set of parameters that characterize the similarity of a set of signals. Another related parameter is the Euclidean distance between a pair of signals,

d_km = ‖s_m - s_k‖ = {E_m + E_k - 2√(E_m E_k) Re(ρ_km)}^{1/2}   (4.86)

When E_m = E_k = E for all m and k, this expression simplifies to

d_km = {2E (1 - Re(ρ_km))}^{1/2}   (4.87)

Thus, the Euclidean distance is an alternative measure of the similarity (or dissimilarity) of the set of signal waveforms or the corresponding signal vectors.

In the following section, we describe digitally modulated signals and make use of the signal space representation for such signals. We shall observe that digitally modulated signals, which are classified as linear, are conveniently expanded in terms of two orthonormal basis functions of the form

f_1(t) = √(2/T) cos 2πf_c t
f_2(t) = -√(2/T) sin 2πf_c t   (4.88)

Hence, if s_lm(t) is expressed as s_lm(t) = x_m(t) + j y_m(t), it follows that s_m(t) in Equation 4.80 may be expressed as

s_m(t) = x_m(t) cos 2πf_c t - y_m(t) sin 2πf_c t   (4.89)

where x_m(t) and y_m(t) represent the signal modulations.

4.4.5 Solved Problems

Problem 1

Determine the autocorrelation function of the stochastic process X(t) = A sin(2πf_c t + θ), where A is a constant and θ is a uniformly distributed phase, i.e., p(θ) = 1/2π, 0 ≤ θ ≤ 2π.

Solution

R_X(τ) = E[X(t) X(t + τ)] = A² E[sin(2πf_c t + θ) sin(2πf_c (t + τ) + θ)]
       = (A²/2) cos 2πf_c τ - (A²/2) E[cos(4πf_c t + 2πf_c τ + 2θ)]

where the last equality follows from the trigonometric identity

sin A sin B = ½ [cos(A - B) - cos(A + B)]

But

E[cos(4πf_c t + 2πf_c τ + 2θ)] = (1/2π) ∫_0^{2π} cos(4πf_c t + 2πf_c τ + 2θ) dθ = 0

Hence

R_X(τ) = (A²/2) cos 2πf_c τ

Problem 2

Let us apply the Gram-Schmidt procedure to the set of four waveforms illustrated in the figure: s_1(t) = 1 for 0 ≤ t ≤ 2; s_2(t) = 1 for 0 ≤ t < 1 and -1 for 1 ≤ t ≤ 2; s_3(t) = 1 for 0 ≤ t ≤ 3; s_4(t) = 1 for 0 ≤ t < 2 and -1 for 2 ≤ t ≤ 3.

Solution

The waveform s_1(t) has energy E_1 = 2, so that

f_1(t) = s_1(t) / √E_1 = s_1(t) / √2

Next we observe that c_12 = 0; hence, s_2(t) and f_1(t) are orthogonal. Therefore

f_2(t) = s_2(t) / √E_2 = s_2(t) / √2

To obtain f_3(t), we compute c_13 and c_23, which are c_13 = √2 and c_23 = 0. Thus,

f_3'(t) = s_3(t) - √2 f_1(t) - 0 · f_2(t) = { 1, 2 ≤ t ≤ 3; 0, otherwise }

Since f_3'(t) has unit energy, it follows that f_3(t) = f_3'(t). In determining f_4(t), we find that c_14 = √2, c_24 = 0, and c_34 = -1. Hence

s_4(t) - √2 f_1(t) + f_3(t) = 0

Consequently, s_4(t) is a linear combination of f_1(t) and f_3(t) and, hence, f_4(t) = 0. The three orthonormal functions are illustrated in the figure: f_1(t) = 1/√2 on [0, 2]; f_2(t) = 1/√2 on [0, 1] and -1/√2 on [1, 2]; f_3(t) = 1 on [2, 3].

Problem 3

Let us obtain the vector representation of the four signals from Problem 2.

Solution

Since the dimensionality of the signal space is N = 3, each signal is described by three components. The signal s_1(t) is characterized by the vector s_1 = (√2, 0, 0). Similarly, the signals s_2(t), s_3(t) and s_4(t) are characterized by the vectors s_2 = (0, √2, 0), s_3 = (√2, 0, 1) and s_4 = (√2, 0, -1), respectively. These vectors are shown in the figure. Their lengths are ‖s_1‖ = √2, ‖s_2‖ = √2, ‖s_3‖ = √3, ‖s_4‖ = √3, and the corresponding signal energies are E_k = ‖s_k‖², k = 1, 2, 3, 4.
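Problems 2 and 3 (and, anticipating Problem 4 below, the pairwise correlations and distances) can be checked numerically by sampling the waveforms and running the Gram-Schmidt recursion of Equations 4.70 to 4.76 with the integrals replaced by sums. A sketch, with the waveform shapes as read from the figure and an arbitrary grid size:

```python
import numpy as np

# s1 = 1 on [0,2];  s2 = 1 on [0,1], -1 on [1,2];
# s3 = 1 on [0,3];  s4 = 1 on [0,2], -1 on [2,3].
n = 3000
dt = 3.0 / n
t = (np.arange(n) + 0.5) * dt                  # midpoint sampling grid
s1 = np.where(t < 2, 1.0, 0.0)
s2 = np.where(t < 1, 1.0, np.where(t < 2, -1.0, 0.0))
s3 = np.ones_like(t)
s4 = np.where(t < 2, 1.0, -1.0)
signals = [s1, s2, s3, s4]

# Gram-Schmidt recursion of Eqs. (4.70)-(4.76).
basis = []
for s in signals:
    r = s.copy()
    for f in basis:
        r = r - np.sum(r * f) * dt * f         # subtract projection c_ik f_i
    energy = np.sum(r**2) * dt
    if energy > 1e-9:                          # drop dependent waveforms
        basis.append(r / np.sqrt(energy))

print(len(basis))                              # 3: s4 lies in span{f1, f3}

# Vector representations (Problem 3) and energies.
vec = np.array([[np.sum(s * f) * dt for f in basis] for s in signals])
print(np.round(vec, 6))       # (sqrt2,0,0), (0,sqrt2,0), (sqrt2,0,1), (sqrt2,0,-1)
print(np.round(np.sum(vec**2, axis=1), 6))     # energies 2, 2, 3, 3

# Correlation coefficients and Euclidean distances (Problem 4).
E = np.sum(vec**2, axis=1)
for m in range(4):
    for k in range(m + 1, 4):
        rho = vec[m] @ vec[k] / np.sqrt(E[m] * E[k])
        d = np.linalg.norm(vec[m] - vec[k])
        print(f"rho_{m+1}{k+1} = {rho:.4f}, d_{m+1}{k+1} = {d:.4f}")
```

Because the breakpoints at t = 1 and t = 2 fall on grid-cell boundaries, the discrete sums reproduce the continuous integrals essentially exactly for these piecewise-constant waveforms.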

Problem 4

Determine the correlation coefficients among the four signal waveforms shown in the figure in Problem 2, and the corresponding Euclidean distances.

Solution

For real-valued signals the correlation coefficients are given by

ρ_km = (1/√(E_k E_m)) ∫ s_k(t) s_m(t) dt = s_k · s_m / (‖s_k‖ ‖s_m‖)

and the Euclidean distances by

d_km = {E_k + E_m - 2√(E_k E_m) ρ_km}^{1/2}

For the signals in this problem, ‖s_1‖ = √2, ‖s_2‖ = √2, ‖s_3‖ = √3, ‖s_4‖ = √3, and

ρ_12 = 0,  ρ_13 = 2/√6,  ρ_14 = 2/√6,  ρ_23 = 0,  ρ_24 = 0,  ρ_34 = 1/3

Hence

d_12 = √(2 + 2) = 2
d_13 = √(2 + 3 - 2√6 · 2/√6) = √(5 - 4) = 1
d_14 = 1
d_23 = √(2 + 3) = √5
d_24 = √5
d_34 = √(3 + 3 - 2 · 3 · 1/3) = √(6 - 2) = 2

Problem 5

Carry out the Gram-Schmidt orthogonalization of the signals in Problem 2 in the order s_4(t), s_3(t), s_2(t), s_1(t), and, thus, obtain a set of orthonormal functions. Then, determine the vector representations of the signals and determine the signal energies.

Solution

The first basis function is

f_4(t) = s_4(t) / √E_4 = s_4(t) / √3 = { 1/√3, 0 ≤ t < 2; -1/√3, 2 ≤ t ≤ 3 }

Then, for the second basis function, we compute

c_43 = ∫ s_3(t) f_4(t) dt = 1/√3

and form

f_3'(t) = s_3(t) - c_43 f_4(t) = { 2/3, 0 ≤ t < 2; 4/3, 2 ≤ t ≤ 3 }

where E_3' denotes the energy of f_3'(t): E_3' = (2/3)² · 2 + (4/3)² · 1 = 8/3. Normalizing yields

f_3(t) = f_3'(t) / √E_3' = { 1/√6, 0 ≤ t < 2; 2/√6, 2 ≤ t ≤ 3 }

For the third basis function we find

c_42 = ∫ s_2(t) f_4(t) dt = 0,  c_32 = ∫ s_2(t) f_3(t) dt = 0

so that

f_2(t) = s_2(t) / √E_2 = { 1/√2, 0 ≤ t < 1; -1/√2, 1 ≤ t ≤ 2 }

Finally, for the fourth basis function we compute

c_41 = ∫ s_1(t) f_4(t) dt = 2/√3,  c_31 = ∫ s_1(t) f_3(t) dt = 2/√6,  c_21 = ∫ s_1(t) f_2(t) dt = 0

Hence

f_1'(t) = s_1(t) - c_41 f_4(t) - c_31 f_3(t) - c_21 f_2(t) = 0

The last result is expected, since the dimensionality of the vector space generated by these signals is 3. Based on the basis functions (f_2(t), f_3(t), f_4(t)), the representations of the signals are

s_4 = (0, 0, √3),  E_4 = 3
s_3 = (0, √(8/3), 1/√3),  E_3 = 3
s_2 = (√2, 0, 0),  E_2 = 2
s_1 = (0, 2/√6, 2/√3),  E_1 = 2

4.4.6 Summary

Suppose we further impose the constraint that the complex baseband signal s(t) is approximately band-limited to W/2 Hz (and time-limited to an interval of length T, say), and impose no other constraints on the signal space. Then the appropriate basis functions for the signal space are the prolate spheroidal wave functions (PSWFs); see the papers by Slepian, Landau and Pollak for a description of PSWFs. This basis is optimum in the sense that, although there are a countably infinite number of functions in the set, at most WT of these are enough to capture most of the energy of any signal in this signal space. So the signal space of complex signals that are approximately band-limited to W/2 Hz and time-limited to an interval of length T is approximately finite-dimensional.

More typically in communication systems, s(t) is one of M possible signals s_1(t), ..., s_M(t). If we let S = span{s_1, ..., s_M}, then n = dim S ≤ M. The signal can then be considered to belong to the n-dimensional space S. One can find an orthonormal basis for S by the standard Gram-Schmidt procedure. The energy of a signal s_m(t) is denoted by E_m and is given by

E_m = ∫ |s_m(t)|² dt

The correlation between two signals s_m(t) and s_k(t), which is a measure of the similarity between these two signals, is given by

⟨s_m, s_k⟩ / √(E_m E_k) = (1/√(E_m E_k)) ∫ s_m(t) s_k*(t) dt

The distance between two signals s_m(t) and s_k(t), which is also a measure of the similarity between these two signals, is given by

d_mk = ‖s_m - s_k‖

4.5 EXERCISES

1. Prove the following properties of Hilbert transforms:
a. If s(t) is even, then ŝ(t) is odd.
b. If s(t) is odd, then ŝ(t) is even.
c. If s(t) = cos ω₀t, then ŝ(t) = sin ω₀t.
d. If s(t) = sin ω₀t, then ŝ(t) = -cos ω₀t.

2. Find a set of orthonormal basis functions for the three signals given below that are defined on the interval [0, T].

3. Use the Gram-Schmidt procedure to find an orthonormal basis for the signal set given below (four signals defined on 0 ≤ t ≤ 2: a constant, a cosine, and two sines). Express each signal in terms of the orthonormal basis set found.

4. Use the Gram-Schmidt procedure to find a set of orthonormal basis functions corresponding to the signals given in the figure:

Figure: three waveforms s_1(t), s_2(t), s_3(t), each defined on the interval 0 ≤ t ≤ 3

5. Determine the correlation coefficient ρ among the signals shown in the figure, and the corresponding Euclidean distances.

Suppose that s(t) is either a real- or complex-valued signal that is represented as a linear combination of orthonormal functions f_n(t), i.e.,

ŝ(t) = Σ_{n=1}^{K} s_n f_n(t)

where

∫ f_n(t) f_m*(t) dt = { 1, n = m; 0, n ≠ m }

Determine the expressions for the coefficients s_n in the expansion that minimize the energy of the error s(t) - ŝ(t), and the corresponding residual error.

Figure: signal constellations plotted in the signal space spanned by f_1(t) and f_2(t)

6. Suppose that a set of M signal waveforms is complex-valued. Derive the equations for the Gram-Schmidt procedure that will result in a set of orthonormal signal waveforms.

7. Consider the three waveforms f_n(t) shown in the figure.
a. Show that these waveforms are orthonormal.
b. Express the waveform x(t) shown in the figure as a linear combination of f_n(t), n = 1, 2, 3, and determine the weighting coefficients.

8. Consider the four waveforms shown in the figure.
a. Determine the dimensionality of the waveforms and a set of basis functions.
b. Use the basis functions to represent the four waveforms by vectors s_1, s_2, s_3, s_4.
c. Determine the minimum distance between any pair of vectors.

9. Determine a set of orthonormal functions for the four signals shown in the figure.

Department of electronics and multimedia telecommunications