Theory of Telecommunications Networks
Anton Čižmár, Ján Papaj
Department of Electronics and Multimedia Telecommunications

CONTENTS

Preface
1 Introduction
   1.1 Mathematical Models for Communication Channels
   1.2 Channel Capacity for Digital Communication
      1.2.1 Shannon Capacity and Interpretation
      1.2.2 Hartley Channel Capacity
      1.2.3 Solved Problems
   1.3 Noise in Digital Communication Systems
      1.3.1 White Noise
      1.3.2 Thermal Noise
      1.3.3 Solved Problems
   1.4 Summary
   1.5 Exercises
2 Signals and Spectra
   2.1 Deterministic and Random Signals
   2.2 Periodic and Nonperiodic Signals
   2.3 Analog and Discrete Signals
   2.4 Energy and Power Signals
   2.5 Spectral Density
      2.5.1 Energy Spectral Density
      2.5.2 Power Spectral Density
      2.5.3 Solved Problems
   2.6 Autocorrelation
      2.6.1 Autocorrelation of an Energy Signal
      2.6.2 Autocorrelation of a Periodic Signal
   2.7 Baseband versus Bandpass
   2.8 Summary
   2.9 Exercises
3 Probability and Stochastic Processes
   3.1 Probability
      3.1.1 Joint Events and Joint Probabilities
      3.1.2 Conditional Probabilities
      3.1.3 Statistical Independence
      3.1.4 Solved Problems
   3.2 Random Variables, Probability Distributions, and Probability Densities
      3.2.1 Statistically Independent Random Variables
      3.2.2 Statistical Averages of Random Variables
      3.2.3 Some Useful Probability Distributions
   3.3 Stochastic Processes
      3.3.1 Stationary Stochastic Processes
      3.3.2 Statistical Averages
      3.3.3 Power Density Spectrum
      3.3.4 Response of a Linear Time-Invariant System (Channel) to a Random Input Signal
      3.3.5 Sampling Theorem for Band-Limited Stochastic Processes
      3.3.6 Discrete-Time Stochastic Signals and Systems
      3.3.7 Cyclostationary Processes
      3.3.8 Solved Problems
   3.4 Summary
   3.5 Exercises
4 Signal Space Concept
   4.1 Representation of Band-Pass Signals and Systems
      4.1.1 Representation of Band-Pass Signals
      4.1.2 Representation of Band-Pass Stationary Stochastic Processes
   4.2 Introduction of the Hilbert Transform
   4.3 A Different Look at the Hilbert Transform
      4.3.1 Hilbert Transform, Analytic Signal and the Complex Envelope
      4.3.2 Hilbert Transform in the Frequency Domain
      4.3.3 Hilbert Transform in the Time Domain
      4.3.4 Analytic Signal
      4.3.5 Solved Problems
   4.4 Signal Space Representation
      4.4.1 Vector Space Concepts
      4.4.2 Signal Space Concepts
      4.4.3 Orthogonal Expansions of Signals
      4.4.4 Gram-Schmidt Procedure
      4.4.5 Solved Problems
      4.4.6 Summary
   4.5 Exercises
5 Digital Modulation Schemes
   5.1 Signal Space Representation
   5.2 Memoryless Modulation Methods
      5.2.1 Pulse-Amplitude-Modulated (PAM) Signals (ASK)
      5.2.2 Phase-Modulated Signals (PSK)
      5.2.3 Quadrature Amplitude Modulation (QAM)
   5.3 Multidimensional Signals
      5.3.1 Orthogonal Multidimensional Signals
      5.3.2 Linear Modulation with Memory
      5.3.3 Non-Linear Modulation Methods with Memory
   5.4 Spectral Characteristics of Digitally Modulated Signals
      5.4.1 Power Spectra of Linearly Modulated Signals
      5.4.2 Power Spectra of CPFSK and CPM Signals
      5.4.3 Solved Problems
   5.5 Summary
   5.6 Exercises
6 Optimum Receivers for the AWGN Channel
   6.1 Optimum Receivers for Signals Corrupted by AWGN
      6.1.1 Correlation Demodulator
      6.1.2 Matched-Filter Demodulator
      6.1.3 The Optimum Detector
      6.1.4 The Maximum-Likelihood Sequence Detector
   6.2 Performance of the Optimum Receiver for Memoryless Modulation
      6.2.1 Probability of Error for Binary Modulation
      6.2.2 Probability of Error for M-ary Orthogonal Signals
      6.2.3 Probability of Error for M-ary Biorthogonal Signals
      6.2.4 Probability of Error for Simplex Signals
      6.2.5 Probability of Error for M-ary Binary-Coded Signals
      6.2.6 Probability of Error for M-ary PAM
      6.2.7 Probability of Error for M-ary PSK
      6.2.8 Probability of Error for QAM
   6.3 Solved Problems
   6.4 Summary
   6.5 Exercises
7 Performance Analysis of Digital Modulations
   7.1 Goals of the Communications System Designer
   7.2 Error Probability Plane
   7.3 Nyquist Minimum Bandwidth
   7.4 Shannon-Hartley Capacity Theorem
      7.4.1 Shannon Limit
   7.5 Bandwidth-Efficiency Plane
      7.5.1 Bandwidth Efficiency of MPSK and MFSK Modulation
      7.5.2 Analogies Between Bandwidth-Efficiency and Error-Probability Planes
   7.6 Modulation and Coding Trade-Offs
   7.7 Defining, Designing, and Evaluating Digital Communication Systems
      7.7.1 M-ary Signaling
      7.7.2 Bandwidth-Limited Systems
      7.7.3 Power-Limited Systems
      7.7.4 Requirements for MPSK and MFSK Signaling
      7.7.5 Bandwidth-Limited Uncoded System Example
      7.7.6 Power-Limited Uncoded System Example
   7.8 Solved Problems
   7.9 Summary
   7.10 Exercises
8 Why Use Error-Correction Coding
   8.1 Trade-Off 1: Error Performance versus Bandwidth
   8.2 Trade-Off 2: Power versus Bandwidth
   8.3 Coding Gain
   8.4 Trade-Off 3: Data Rate versus Bandwidth
   8.5 Trade-Off 4: Capacity versus Bandwidth
   8.6 Code Performance at Low Values of Eb/N0
   8.7 Solved Problem
   8.8 Exercises
Appendix A
   The Q-function
   The Error Function
Appendix B
   Comparison of M-ary Signaling Techniques
   Error Performance of M-ary Signaling Techniques
References

PREFACE

Providing the theory of digital communication systems, this textbook prepares senior undergraduate and graduate students for the engineering practice required in the real world. With this textbook, students can understand how digital communication systems operate in practice, learn how to design subsystems, and evaluate end-to-end performance. The book contains many examples to help students achieve an understanding of the subject. The problems at the end of each chapter follow closely the order of the sections. The entire book is suitable for a one-semester course in digital communication. All materials for these teaching texts were drawn from the sources listed in the References.

1 INTRODUCTION

We present the basic principles that underlie the analysis and design of digital communication systems. The subject of digital communications involves the transmission of information in digital form from a source that generates the information to one or more destinations. Of particular importance in the analysis and design of communication systems are the characteristics of the physical channels through which the information is transmitted. The characteristics of the channel generally affect the design of the basic building blocks of the communication system. Figure 1.1 illustrates the functional diagram and the basic elements of a digital communication system.

Figure 1.1 Basic elements of a digital communication system: an information source (an analog signal such as audio or video, or a digital signal such as teletype machine data) feeds a source encoder, channel encoder, and digital modulator; the waveforms pass through the channel to the digital demodulator, channel decoder, and source decoder, which delivers an estimate of the source output to the analog or digital information sink.

The elements of Figure 1.1 perform the following functions:

Source encoder (digitizer and/or digital compression): the message is converted into a sequence of binary digits with little or no redundancy; this is source encoding, or data compression. The source bits leave the encoder at a rate Rs [b/s]. At the receiver, the source decoder performs analog signal reconstruction and/or digital decompression.

Channel encoder: introduces controlled redundancy into the binary information sequence and expands the rate by n/k, so the channel-coded bits appear at a rate (n/k)Rs [b/s]. The trivial form repeats each binary digit m times; more sophisticated encoding uses an (n,k) code of rate k/n.

Digital modulator: the interface to the communication channel; it maps the binary information sequence into signal waveforms. In binary modulation, bit 0 is mapped to the waveform s0(t) and bit 1 to s1(t). In M-ary modulation, M = 2^b distinct waveforms si(t) are used, where b is the number of bits per symbol; each waveform encodes b bits, and the waveforms leave the modulator at a rate (n/kb)Rs [bauds] (symbols/s).

Communication channel: the physical medium, which adds (thermal) noise originating in electronic devices, automobile ignition, the atmosphere, and so on.

Digital demodulator and channel decoder: the measure of their performance is the average probability of a bit error at the output of the decoder, which is a function of the code characteristics, the type of waveform, the transmitted power, the channel characteristics, and the method of demodulation and decoding.

Source decoder output: an approximation to the original source output. The difference between the original signal and the reconstructed signal is a measure of the distortion introduced by the digital communication system.

Basic terms of a digital communication system:

Data rate: the measure of the speed of a digital communication system. Its unit is usually bits per second (bit/s).

Bit error rate: the quality of a digital communication system is measured by the bit error probability or, equivalently, the bit error rate (BER), defined as

BER = (number of bits received in error) / (total number of transmitted bits)

Bandwidth: the range of the frequency spectrum used by a communication system. It is usually controlled by the government, and one has to pay for its usage.

Power: power is a serious concern for any communication system. One reason for this is that any electronic device can only handle a limited power level. Another issue is that the signal of one user can be unwanted interference to another user; minimizing the power used by every user can minimize the interference in a system. This is particularly important for a mobile cellular system.

Signal-to-noise ratio (S/N): we will see later that the relative value of the signal-to-noise power ratio (S/N) is usually more important than the absolute value of the power level.

Channel capacity: for the AWGN channel,

C = B log2(1 + S/(N0B)) or, equivalently, C/B = log2(1 + S/(N0B)) (1.1)

where B is the channel bandwidth, S is the average transmitted power, and N0 is the power spectral density of the additive noise. The significance of the channel capacity is as follows: if the information rate R from the source is less than C (R < C), then it is theoretically possible to achieve reliable (error-free) transmission through the channel by appropriate coding. On the other hand, if R > C, reliable transmission is not possible regardless of the amount of signal processing performed at the transmitter and receiver.

Criteria of a good digital communication system. An ideal digital communication system should meet the following requirements:

- High data rate (high speed)
- Low error rate (high quality)
- Small bandwidth (low bandwidth cost)
- Low power (low power cost)
- Low hardware and software complexity (low equipment cost)

However, it is impossible to meet all these requirements without limitation. In fact, Shannon established some fundamental limits for communication systems.
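Formula (1.1) is easy to evaluate numerically. A minimal Python sketch follows; the function name and the example bandwidth, noise density, and power are illustrative assumptions, not values from the text:

```python
import numpy as np

def shannon_capacity(B, S, N0):
    """Capacity (bit/s) of an AWGN channel, Eq. (1.1): C = B*log2(1 + S/(N0*B))."""
    return B * np.log2(1.0 + S / (N0 * B))

# Illustrative values (assumed): a 3 kHz channel with S/N = 30 dB,
# expressed through N0 so that S/(N0*B) = 1000.
B = 3e3            # bandwidth [Hz]
N0 = 1e-9          # single-sided noise PSD [W/Hz]
S = 1000 * N0 * B  # signal power chosen to give S/N = 30 dB

print(f"C = {shannon_capacity(B, S, N0):.0f} bit/s")  # ~29.9 kbit/s
```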

1.1 MATHEMATICAL MODELS FOR COMMUNICATION CHANNELS

In the design of communication systems for transmitting information through physical channels, we find it convenient to construct mathematical models that reflect the most important characteristics of the transmission medium. The mathematical model for the channel is then used in the design of the channel encoder and modulator at the transmitter and of the demodulator and channel decoder at the receiver. Below, we provide a brief description of the channel models that are frequently used to characterize many of the physical channels encountered in practice.

The Additive Noise Channel

The simplest mathematical model for a communication channel is the additive noise channel, illustrated in Figure 1.2. In this model, the transmitted signal s(t) is corrupted by an additive random noise process n(t), so that the received signal is r(t) = s(t) + n(t). Physically, the additive noise process may arise from electronic components and amplifiers at the receiver of the communication system or from interference encountered in transmission (as in the case of radio signal transmission).

Figure 1.2 The additive noise channel

If the noise is introduced primarily by electronic components and amplifiers at the receiver, it may be characterized as thermal noise. This type of noise is characterized statistically as a Gaussian noise process. Hence, the resulting mathematical model for the channel is usually called the additive white Gaussian noise (AWGN) channel. Because this channel model applies to a broad class of physical communication channels and because of its mathematical tractability, it is the predominant channel model used in our communication system analysis and design. Channel attenuation is easily incorporated into the model: when the signal undergoes attenuation in transmission through the channel, the received signal is

r(t) = α s(t) + n(t) (1.2)

where α is the attenuation factor.

The Linear Filter Channel

Filters are used to ensure that the transmitted signals do not exceed specified bandwidth limitations and thus do not interfere with one another. Such a channel (Figure 1.3) produces the output

r(t) = s(t) * h(t) + n(t) = ∫ h(τ) s(t − τ) dτ + n(t) (1.3)

where h(t) is the impulse response of the linear filter and * denotes convolution.

Figure 1.3 The linear filter channel with additive noise

The Linear Time-Variant Filter Channel

Physical channels such as underwater acoustic channels and ionospheric radio channels result in time-variant multipath propagation of the transmitted signals. The channel output is

r(t) = s(t) * c(τ; t) + n(t) = ∫ c(τ; t) s(t − τ) dτ + n(t) (1.4)

where c(τ; t) is the time-variant impulse response of the channel.

Figure 1.4 Linear time-variant filter channel with additive noise

A good model for multipath signal propagation through physical channels, such as the ionosphere (at frequencies below 30 MHz) and mobile cellular radio channels, is a special case of (1.4) in which the time-variant impulse response has the form

c(τ; t) = Σ_{k=1..L} ak(t) δ(τ − τk) (1.5)

where the {ak(t)} represent the possibly time-variant attenuation factors of the L multipath propagation paths and the {τk} are the corresponding time delays. If (1.5) is substituted into (1.4), the received signal has the form

r(t) = Σ_{k=1..L} ak(t) s(t − τk) + n(t) (1.6)
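To make the multipath model (1.6) concrete, the following minimal Python sketch simulates a static two-path channel with additive Gaussian noise; the sampling rate, path gains ak, delays τk, and noise level are all assumed example values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Transmitted signal: a few cycles of a sinusoid, sampled at fs.
fs = 1e6                              # sampling rate [Hz]
t = np.arange(0, 1e-3, 1 / fs)
s = np.cos(2 * np.pi * 10e3 * t)

# Two-path channel, Eq. (1.6) with L = 2 and time-invariant gains (assumed values):
a = [1.0, 0.4]                        # path attenuations a_k
tau = [0.0, 20e-6]                    # path delays tau_k [s]

r = np.zeros_like(s)
for a_k, tau_k in zip(a, tau):
    d = int(round(tau_k * fs))        # delay in samples
    r[d:] += a_k * s[:len(s) - d]     # a_k * s(t - tau_k)

# Additive Gaussian noise n(t)
r += rng.normal(scale=0.05, size=r.shape)
```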

1.2 CHANNEL CAPACITY FOR DIGITAL COMMUNICATION

1.2.1 Shannon Capacity and Interpretation

The amount of noise present in the receiver can be represented in terms of its power N = vn²/R0, where R0 is the characteristic impedance of the channel as seen by the receiver and vn is the rms noise voltage. Similarly, the message-bearing signal can be represented in terms of its average power S = vs²/R0, where vs is the rms voltage of the signal. Now, it is reasonable to assume that the signal and noise are uncorrelated, i.e., they are not related in any way and we cannot predict one from the other. If Pr is the total power received due to the combination of signal and noise, which are uncorrelated random processes, we can write Pr = S + N. The Shannon-Hartley channel capacity is

C = B log2(1 + S/N) (1.7)

Interpretation of the Shannon-Hartley channel capacity:

a) We observe that the capacity of a channel can be increased by increasing the channel bandwidth, increasing the signal power, reducing the in-band noise power, or any judicious combination of the three. Each approach has, in practice, its own merits and demerits. It is indeed interesting to note that all practical digital communication systems designed so far operate far below the capacity promised by the Shannon-Hartley equation and utilize only a fraction of that capacity. There are multiple yet interesting reasons for this; one of the overriding requirements in a practical system is sustained and reliable performance within the regulations in force. However, advances in coding theory (especially turbo coding), signal processing techniques, and VLSI techniques are now making it feasible to push the operating point closer to the Shannon limit.

b) If B → ∞, we apparently have infinite capacity, but this is not true. As B → ∞, the in-band noise power N = N0B also tends to infinity (N0 is the single-sided noise power spectral density, a constant for AWGN), and hence, for any finite signal power S, S/N tends to zero. So the result needs some more careful interpretation, and we can expect an asymptotic limit. At capacity, the bit rate of transmission is Rb = C and the duration of a bit is Tb = 1/Rb = 1/C. If the energy received per information bit is Eb, the signal power S can be expressed as the energy received per unit time, S = Eb Rb = Eb C. So the signal-to-noise ratio can be expressed as

S/N = Eb C / (N0 B) (1.8)

Now we can write

C/B = log2(1 + (Eb/N0)(C/B)) (1.9)

which implies

Eb/N0 = (2^(C/B) − 1) / (C/B) (1.10)

As B → ∞, C/B → 0 and

Eb/N0 → lim_{x→0} (2^x − 1)/x (1.11)

so that

Eb/N0 = ln 2 = 0.693 (1.12)

or, in decibels,

Eb/N0 = −1.6 dB (1.13)

So the limiting Eb/N0 is −1.6 dB. Ideally, a system designer can expect to achieve almost errorless transmission only when Eb/N0 is more than −1.6 dB and there is no constraint on bandwidth.

c) In the above observation we set Rb = C to appreciate the limit, and we also saw that if Rb > C, the noise vn is capable of distorting the group of b information bits. We say that the bit rate has exceeded the capacity of the channel, and hence the errors are not controllable by any means.
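A quick numerical check of (1.10)-(1.13) in Python (a sketch with illustrative C/B values) shows the required Eb/N0 approaching ln 2, i.e. about −1.6 dB, as C/B shrinks:

```python
import numpy as np

def ebn0_required(cb):
    """Eq. (1.10): minimum Eb/N0 for a given spectral efficiency C/B."""
    return (2.0**cb - 1.0) / cb

for cb in [4.0, 1.0, 0.1, 0.01]:
    lin = ebn0_required(cb)
    print(f"C/B = {cb:5.2f}: Eb/N0 = {lin:.4f} = {10 * np.log10(lin):6.2f} dB")

print(f"ln 2 = {np.log(2):.4f} = {10 * np.log10(np.log(2)):.2f} dB")  # -1.59 dB
```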

Figure 1.5 Interpretation of the Shannon-Hartley channel capacity

To reiterate, all practical systems obey the inequality Rb ≤ C, and most civilian digital transmission systems utilize the available bandwidth efficiently, which means that B (in Hz) and C (in bits per second) are comparable. For bandwidth-efficient transmission, the strategy is to increase the spectral-efficiency factor C/B while keeping Eb/N0 as low as practicable. This is achieved by adopting suitable modulation and reception strategies.

1.2.2 Hartley Channel Capacity

Consider a noise-free channel where the limitation on the data rate is simply the bandwidth of the signal. Nyquist states that if the rate of signal transmission is 2B, then a signal with frequencies no greater than B is sufficient to carry that signaling rate. Conversely, given a bandwidth of B, the highest signaling rate that can be carried is 2B. This limitation is due to the effect of intersymbol interference, such as that produced by delay distortion. If the signals to be transmitted are binary (two voltage levels), then the data rate that can be supported by B Hz is 2B [bps]. However, signals with more than two levels can be used; that is, each signal element can represent more than one bit. For example, if four possible voltage levels are used as signals, then each signal element can represent two bits. With multilevel signaling, the Nyquist formulation becomes

C = 2B log2 M (1.14)

where M is the number of discrete signal or voltage levels.
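The Nyquist formulation (1.14) in code form (a minimal Python sketch; the bandwidth and the set of level counts M are assumed example values):

```python
import math

def nyquist_rate(B, M):
    """Eq. (1.14): maximum data rate C = 2*B*log2(M) of a noiseless channel."""
    return 2 * B * math.log2(M)

B = 3000  # bandwidth [Hz], assumed example
for M in (2, 4, 8, 16):
    print(f"M = {M:2d} levels -> C = {nyquist_rate(B, M):7.0f} bps")
```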

1.2.3 Solved Problems

Problem 1

If B = 3 kHz and S/N is maintained at 30 dB for a typical telephone channel, the channel capacity C is about 30 kbit/s.

The theorem implies that error-free transmission is possible if we do not send information at a rate greater than the channel capacity. Thus, the information capacity theorem defines the fundamental limit on the rate of error-free transmission for a power-limited, band-limited Gaussian channel. Figure 1.6 shows the general form of the encoding scheme suggested by Shannon: a binary sequence arriving at Rb bits per second is encoded into binary sequences of length Rb Tb bits, each transmitted in Tb seconds. However, the design of the encoder and decoder is left unspecified.

Figure 1.6 Error-free transmission system model (binary source at Rb bit/s, channel encoder operating on blocks of Rb Tb bits in Tb seconds, band-limited AWGN channel)

It can be seen that the encoding time is Tb seconds. There is an encoding delay of Tb seconds in transmission and a decoding delay of Tb seconds at the receiver, so a total delay of 2Tb seconds is entailed. We can reduce the delay by decreasing the value of Tb, but then we require more channel bandwidth for transmission. In the case of no bandwidth limitation, it can be shown that the channel capacity approaches a limiting value C∞ given by

C∞ = (S/N0) log2 e = 1.44 (S/N0)

The channel capacity variation with bandwidth is shown in Figure 1.7.

Figure 1.7 Channel capacity variation with bandwidth (C [bit/s] rises with W [Hz] and saturates at C∞)

Proof:

C = B log2(1 + S/(N0B)) = (S/N0)(N0B/S) log2(1 + S/(N0B)) = (S/N0) log2(1 + S/(N0B))^(N0B/S)

Since lim_{x→0} (1 + x)^(1/x) = e, we have

C∞ = lim_{B→∞} C = (S/N0) log2 e = 1.44 (S/N0)
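The saturation behaviour sketched in Figure 1.7 can be reproduced numerically. In the following Python sketch (the S/N0 value is an assumed example), C = B log2(1 + S/(N0B)) visibly approaches C∞ = 1.44 S/N0 as B grows:

```python
import numpy as np

S_over_N0 = 1e4  # S/N0 [Hz], assumed example value

for B in [1e3, 1e4, 1e5, 1e6, 1e7]:
    C = B * np.log2(1.0 + S_over_N0 / B)
    print(f"B = {B:8.0f} Hz: C = {C:9.0f} bit/s")

# Limiting value C_inf = (S/N0)*log2(e) = 1.44*(S/N0)
print(f"C_inf = {np.log2(np.e) * S_over_N0:.0f} bit/s")
```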

= ( )log (1 ) = log (1 ) Since lim (1) =, we have Problem 2 = log =1.44 Given a channel with an intended capacity of 20 Mbps. The bandwidth of the channel is 3MHz. What signal-to-noise ratio is required in order to achieve this capacity? According to Shannon s Capacity formula the maximum channel capacity (in bps) is given by the equation: =log 1. Where B is the bandwidth and S/N is the signal-to-noise ratio. Given B = 3 MHz = 3.10 6 Hz, and C = 20 Mbps = 20.10 6 bps, So, Hence, S/N = 101, and Problem 3 20. 10 =3.10 log (1 ) log 1 20 = 3 6,667 1 =102 =10 20,04 A digital signaling system is required to operate at 9600 bps. (a) If a signal element encodes a 4-bit word, what is the minimum required bandwidth of the channel? Repeat part (a) for the case of 8-bit words. Solution By Nyquist s formula, the channel capacity is related to bandwidth and signaling levels by the equation C =2Blog 2 M, where N is the number of discrete signal or voltage levels. Here C = 9600 bps, log 2 M = 4 (because a signal element encodes a 4-bit word). 14

B = C / (2 log2 M) = 9600/8 = 1200 Hz

(b) For this case, take log2 M = 8 (because a signal element encodes an 8-bit word). Proceeding in a similar way, we get B = 600 Hz.

Problem 4

Television channels are 6 MHz wide. How many bits per second can be sent if four-level digital signals are used? Assume a noiseless channel.

Solution

Bandwidth B = 6 MHz = 6×10⁶ Hz. Using Nyquist's theorem,

C = 2B log2 M = 2 × 6×10⁶ × log2 4 = 24 Mbps

Problem 5

What is the channel capacity for a teleprinter channel with a 300 Hz bandwidth and a signal-to-noise ratio of 3 dB?

Solution

Using Shannon's equation C = B log2(1 + S/N): since (S/N)dB = 10 log10(S/N) = 3 dB, we have S/N = 10^0.3 ≈ 1.995, and

C = 300 log2(1 + 1.995) = 300 log2(2.995) ≈ 474 bps
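The arithmetic of Problems 2-5 can be verified in a few lines of Python (a checking sketch, not part of the original solutions):

```python
import math

# Problem 2: S/N needed for C = 20 Mbps in B = 3 MHz
sn = 2**(20e6 / 3e6) - 1
print(f"Problem 2: S/N = {sn:.1f} = {10 * math.log10(sn):.2f} dB")  # ~100.6 -> ~20.0 dB

# Problem 3: bandwidth for 9600 bps with 4-bit and 8-bit symbols
for bits in (4, 8):
    print(f"Problem 3: B = {9600 / (2 * bits):.0f} Hz for {bits}-bit words")

# Problem 4: four-level signaling in a 6 MHz channel
print(f"Problem 4: C = {2 * 6e6 * math.log2(4) / 1e6:.0f} Mbps")

# Problem 5: 300 Hz channel at S/N = 3 dB
print(f"Problem 5: C = {300 * math.log2(1 + 10**0.3):.1f} bps")    # ~474.8 bps
```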

1.3 NOISE IN DIGITAL COMMUNICATION SYSTEMS

The term noise refers to unwanted electrical signals that are always present in electrical systems. The presence of noise superimposed on a signal tends to obscure or mask the signal; it limits the receiver's ability to make correct symbol decisions and thereby limits the rate of information transmission. Noise arises from a variety of sources, both man-made and natural. Man-made noise includes such sources as spark-plug ignition noise, switching transients, and other radiating electromagnetic signals. Natural noise includes such elements as the atmosphere, the sun, and other galactic sources.

Good engineering design can eliminate much of the noise or its undesirable effect through filtering, shielding, the choice of modulation, and the selection of an optimum receiver site. For example, sensitive radio astronomy measurements are typically made at remote desert locations, far from man-made noise sources. However, there is one natural source of noise, called thermal or Johnson noise, that cannot be eliminated. Thermal noise is caused by the thermal motion of electrons in all dissipative components (resistors, wires, and so on). The same electrons that are responsible for electrical conduction are also responsible for thermal noise.

We can describe thermal noise as a zero-mean Gaussian random process. A Gaussian process n(t) is a random function whose value n at any arbitrary time t is statistically characterized by the Gaussian probability density function

p(n) = (1/(σ√(2π))) exp(−n²/(2σ²)) (1.15)

where σ² is the variance of n. The normalized or standardized Gaussian density function of a zero-mean process is obtained by assuming that σ = 1; this normalized pdf is sketched in Figure 1.8.

We will often represent a random signal as the sum of a Gaussian noise random variable and a dc signal, that is,

z = a + n (1.16)

where z is the random signal, a is the dc component, and n is the Gaussian noise random variable. The pdf of z is then expressed as

p(z) = (1/(σ√(2π))) exp(−(z − a)²/(2σ²)) (1.17)

where, as before, σ² is the variance of n. The Gaussian distribution is often used as the system noise model because of a theorem, called the central limit theorem, which states that under very general conditions the probability distribution of the sum of j statistically independent random variables approaches the Gaussian distribution as j → ∞, no matter what the individual distribution functions may be. Therefore, even though individual noise mechanisms might have other than Gaussian distributions, the aggregate of many such mechanisms tends toward the Gaussian distribution.

Figure 1.8 Normalized (σ = 1) Gaussian probability density function (p(0) = 0.399, p(±σ) = 0.242, p(±2σ) = 0.054)
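The central limit theorem argument can be illustrated with a short numerical experiment (a Python sketch with assumed parameters): sums of j independent, distinctly non-Gaussian (uniform) random variables already have a nearly Gaussian density for moderate j.

```python
import numpy as np

rng = np.random.default_rng(1)

j = 12                                   # number of summed independent variables
samples = rng.uniform(-0.5, 0.5, size=(100_000, j)).sum(axis=1)

# Compare the empirical density with the Gaussian pdf of Eq. (1.15);
# each uniform term has variance 1/12, so sigma^2 = j/12 = 1 here.
sigma = np.sqrt(j / 12.0)
hist, edges = np.histogram(samples, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
gauss = np.exp(-centers**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

print(f"max |empirical - Gaussian| = {np.abs(hist - gauss).max():.4f}")  # small
```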

1.3.1 White Noise

The primary spectral characteristic of thermal noise is that its power spectral density is the same for all frequencies of interest in most communication systems; in other words, a thermal noise source emanates an equal amount of noise power per unit bandwidth at all frequencies, from dc to about 10¹² Hz. Therefore, a simple model for thermal noise assumes that its power spectral density Φn(f) is flat for all frequencies, as shown in Figure 1.9 a):

Φn(f) = N0/2 for all f (1.18)

where the factor of 2 is included to indicate that Φn(f) is a two-sided power spectral density. When the noise power has such a uniform spectral density, we refer to it as white noise. The adjective "white" is used in the same sense as it is with white light, which contains equal amounts of all frequencies within the visible band of electromagnetic radiation.

The autocorrelation function of white noise is given by the inverse Fourier transform of the noise power spectral density:

φn(τ) = F⁻¹{Φn(f)} = ∫ Φn(f) e^(j2πfτ) df = (N0/2) δ(τ) (1.19)

Thus the autocorrelation of white noise is a delta function weighted by the factor N0/2 and occurring at τ = 0, as seen in Figure 1.9 b). Note that φn(τ) is zero for τ ≠ 0; that is, any two different samples of white noise, no matter how close together in time they are taken, are uncorrelated. The average power Pn of white noise is infinite because its bandwidth is infinite:

Pn = ∫ (N0/2) df = ∞ (1.20)

Figure 1.9 a) Power spectral density of white noise (level N0/2 for all f), b) autocorrelation function of white noise ((N0/2)δ(τ))

Although white noise is a useful abstraction, no noise process can truly be white; however, the noise encountered in many real systems can be assumed to be approximately white. We can only observe such noise after it has passed through a real system, which has a finite bandwidth. Thus, as long as the bandwidth of the noise is appreciably larger than that of the system, the noise can be considered to have an infinite bandwidth.

The delta function in Equation (1.19) means that the noise signal n(t) is totally decorrelated from its time-shifted version for any τ ≠ 0; Equation (1.19) indicates that any two different samples of a white noise process are uncorrelated. Since thermal noise is a Gaussian process and the samples are uncorrelated, the noise samples are also independent. Therefore, the effect on the detection process of a channel with additive white Gaussian noise (AWGN) is that the noise affects each transmitted symbol independently. Such a channel is called a memoryless channel.
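In discrete time, "white" means that distinct samples are uncorrelated, so an estimated autocorrelation shows a single spike at zero lag, mirroring Equation (1.19). A minimal Python sketch (sample size and PSD level are assumed example values):

```python
import numpy as np

rng = np.random.default_rng(2)

N0 = 2.0                                             # assumed level, so N0/2 = 1
n = rng.normal(scale=np.sqrt(N0 / 2), size=100_000)  # white Gaussian samples

def autocorr(x, max_lag):
    """Biased sample autocorrelation estimate for lags 0..max_lag."""
    return np.array([np.mean(x[:len(x) - k] * x[k:]) for k in range(max_lag + 1)])

phi = autocorr(n, 5)
print(np.round(phi, 3))   # ~[1.0, 0, 0, 0, 0, 0]: a spike at lag 0 only
```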

The term "additive" means that the noise is simply superimposed on, or added to, the signal; there are no multiplicative mechanisms at work. Since thermal noise is present in all communication systems and is the prominent noise source for most systems, the thermal noise characteristics (additive, white, and Gaussian) are most often used to model the noise in communication systems. Since zero-mean Gaussian noise is completely characterized by its variance, this model is particularly simple to use in the detection of signals and in the design of optimum receivers. In this book we shall assume, unless otherwise stated, that the system is corrupted by additive zero-mean white Gaussian noise, even though this is sometimes an oversimplification.

1.3.2 Thermal Noise

Thermal noise is caused by the thermal motion of electrons in all conductors. It is generated in the lossy coupling between an antenna and receiver and in the first stages of the receiver. The noise power spectral density is constant at all frequencies up to about 10¹² Hz, giving rise to the name white noise. The thermal noise process in communication receivers is modeled as an additive white Gaussian noise (AWGN) process.

The physical model for thermal or Johnson noise is a noise generator with an open-circuit mean-square voltage of 4κTBR, where:

κ = Boltzmann's constant = 1.38×10⁻²³ [J/K] or [W/(K·Hz)] = −228.6 [dBW/(K·Hz)]
T = temperature [K]
B = bandwidth [Hz]
R = resistance [Ω]

The maximum thermal noise power N that could be coupled from the noise generator into the front end of an amplifier is

N = κTB (1.21)

Thus, the maximum single-sided noise power spectral density N0 (noise power in a 1 Hz bandwidth) available at the amplifier input is

N0 = N/B = κT [W/Hz] (1.22)

It might seem that the noise power should depend on the magnitude of the resistance, but it does not. Consider an intuitive argument to verify this. Electrically connect a large resistance to a small one, such that they form a closed path and such that their physical temperatures are the same. If noise power were a function of resistance, there would be a net power flow from the large resistance to the small one, and the small one would become warmer. This violates our experience, not to mention the second law of thermodynamics. Therefore, the power delivered from the large resistance to the small one must be equal to the power it receives.

Figure 1.10 Equivalent circuits for a noisy resistor: (a) Thevenin, (b) Norton
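Equations (1.21) and (1.22) in code form (a minimal Python sketch; the temperature and bandwidth are assumed example values):

```python
import math

k = 1.38e-23          # Boltzmann's constant [J/K]

def thermal_noise_power(T, B):
    """Eq. (1.21): maximum available thermal noise power N = k*T*B [W]."""
    return k * T * B

T = 290.0             # room temperature [K]
B = 1e6               # 1 MHz bandwidth (assumed example)

N = thermal_noise_power(T, B)
N0 = N / B            # Eq. (1.22): available noise PSD [W/Hz]
print(f"N  = {N:.3e} W = {10 * math.log10(N / 1e-3):.1f} dBm")     # ~ -114 dBm in 1 MHz
print(f"N0 = {N0:.3e} W/Hz = {10 * math.log10(N0):.1f} dBW/Hz")    # ~ -204 dBW/Hz
```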

The available power from a thermal noise source depends on the ambient temperature of the source (the noise temperature), as seen in Equation (1.21). This leads to the useful concept of an effective noise temperature for noise sources that are not necessarily thermal in origin (e.g., galactic, atmospheric, interfering signals) but that can be introduced into the receiving antenna. The effective noise temperature of such a noise source is defined as the temperature of a hypothetical thermal noise source that would give rise to an equivalent amount of interfering power.

1.3.3 Solved Problems

Problem 1

Using a noise generator with mean-square voltage equal to 4κTBR, demonstrate that the maximum amount of noise power that can be coupled from this source into an amplifier is N = κTB.

Solution

A theorem from network theory states that maximum power is delivered to a load when the value of the load impedance is made equal to the complex conjugate of the generator impedance. In this case the generator impedance is a pure resistance R; therefore, the condition for maximum power transfer is fulfilled when the input resistance of the amplifier equals R. Figure 1.11 illustrates such a network. The input thermal noise source is represented by an electrically equivalent model consisting of a noiseless source resistor in series with an ideal voltage generator whose rms noise voltage is √(4κTBR). The input resistance of the amplifier is made equal to R. The noise voltage delivered to the amplifier input is just one half of the generator voltage, following basic circuit principles. The noise power delivered to the amplifier input can accordingly be expressed as

N = (√(4κTBR)/2)² / R = 4κTBR/(4R) = κTB

Figure 1.11 Electrical model of maximum available thermal noise power at amplifier input

Problem 2

Consider the resistor network shown in Figure 1.12. Assuming a room temperature of T = 290 K, find the rms noise voltage appearing at the output terminals in a 100 kHz bandwidth.

Solution

Figure 1.12 Circuits for noise calculation: (a) resistor network, (b) noise equivalent circuit

We use voltage division to find the noise voltage due to each resistor across the output terminals. Then, since powers due to independent sources add, we find the rms output voltage v0 by summing the squares of the voltages due to each resistor (proportional to power), which gives the total mean-square voltage, and take the square root to give the rms voltage. The calculation takes the form

v0² = Σ v0i²

where v0i = Hi √(4κTBRi) is the contribution of resistor Ri at the output, Hi being the voltage-division factor from Ri to the output terminals and √(4κTBRi) the rms voltage across resistor Ri. With T = 290 K and B = 100 kHz,

4κTB = 4 × 1.38×10⁻²³ × 290 × 10⁵ = 1.60×10⁻¹⁵ V²/Ω

and, with the resistor values of Figure 1.12, the numerical evaluation yields

v0² = 8.39×10⁻¹³ V², so v0 = 9.16×10⁻⁷ V

1.4 SUMMARY

The AWGN model is frequently used in the analysis of communication systems. However, the AWGN assumption is only valid over a certain bandwidth, and this bandwidth is a function of temperature. At a temperature of 3 K this bandwidth is approximately 10 GHz. If the temperature increases, the bandwidth over which the white noise assumption is valid also increases. At standard temperature (290 K), the white noise assumption is valid to bandwidths exceeding 1000 GHz.

Thermal noise results from the combined effect of many charge carriers. The Gaussian assumption follows from the central limit theorem.

The SNR at the output of a baseband communication system operating in an additive Gaussian noise environment is S/(N0B), where S is the signal power, N0 is the single-sided power spectral density of the noise (N0/2 is the two-sided power spectral density), and B is the signal bandwidth.

The information associated with the occurrence of an event is defined as the logarithm of the probability of the event. If a base-2 logarithm is used, the unit of information is the bit.

Error-free transmission on a noisy channel can be accomplished if the source rate is less than the channel capacity; this is accomplished using channel codes. The capacity of an AWGN channel is C = B log2(1 + S/N), where B is the channel bandwidth and S/N is the signal-to-noise ratio. This is known as the Shannon-Hartley law.

Use of the Shannon-Hartley law yields the concept of optimum modulation for a system operating in an AWGN environment. The result is the performance of an optimum system in terms of predetection and postdetection bandwidth. The trade-off between bandwidth and SNR is easily seen.

1.5 EXERCISES

1. Suppose that the spectrum of a channel is between 10 MHz and 12 MHz, and the intended capacity is 8 Mbps.
(1) What should the S/N be in order to obtain this capacity?
(2) How many signaling levels are required to obtain this capacity?
(3) What would the capacity be if the environment starts suffering from less noise and the S/N goes up to 27 dB?
(4) Same question as (2), but for the capacity in (3).

2. Consider an AWGN channel with bandwidth 50 MHz, received signal power 10 mW, and noise power spectral density N0 = 10⁻⁹ W/Hz. How much does the capacity increase by doubling the received power? How much does the capacity increase by doubling the channel bandwidth?

3. Each sample of a Gaussian memoryless source has a variance equal to 4, and the source produces 8000 samples per second. The source is to be transmitted via an additive white Gaussian noise channel with a bandwidth equal to 4000 Hz, and it is desirable to have a distortion per sample not exceeding 1 at the destination (assume squared-error distortion).
(1) What is the minimum required signal-to-noise ratio of the channel?
(2) If it is further assumed that, on the same channel, a BPSK scheme is employed with hard-decision decoding, what will be the minimum required channel signal-to-noise ratio? Note: the signal-to-noise ratio of the channel is defined by S/(N0B).

4. For binary phase-shift keying, Eb/N0 = 8.4 dB is required for a bit error rate of 10⁻⁴ (probability of error = 10⁻⁴). If the effective noise temperature is 290 K (room temperature) and the data rate is 2400 bps, what received signal level is required?

5. Find the increase in the required signal power (at the receiving end of the channel) needed to keep the channel capacity unchanged if the output of a baseband channel with a cutoff frequency of 1 GHz is lowpass filtered with fc = 500 MHz.

6. Channel C1 is an additive white Gaussian noise channel with a bandwidth B, average transmitter power P, and noise power spectral density N0/2. Channel C2 is an additive Gaussian noise channel with the same bandwidth and power as channel C1 but with noise power spectral density
