Communication Systems


1 Unit-I: Amplitude Modulation

2 Contents: time-domain and frequency-domain descriptions of continuous-wave modulation; amplitude modulation (AM, DSB-SC, SSB, VSB); angle modulation (FM, PM); noise performance of the modulation schemes.

3 2.1 Introduction. Figure: components of a continuous-wave modulation system: (a) transmitter, (b) receiver. In addition to the signal received from the transmitter, the receiver input includes channel noise. The degradation in receiver performance due to channel noise is determined by the type of modulation used, so it is necessary to study the various modulation types and their noise performance.

4 Basic concepts of modulation. Message signal = information-bearing signal = baseband signal = modulating signal/wave. Carrier: a sinusoidal wave. Modulated signal/wave: the result of modulation.

5 Modulation and demodulation. Modulation refers to the process by which some characteristic of a carrier is varied in accordance with a modulating signal; it also shifts the message to a different frequency range. Demodulation is the reverse of the modulation process.

6 Demo of AM and FM signals: (a) carrier wave, (b) sinusoidal modulating signal, (c) amplitude-modulated signal, (d) frequency-modulated signal.

7 2.2 Amplitude Modulation (AM). Carrier wave: c(t) = A_c cos(2πf_c t), where A_c is the carrier amplitude and f_c the carrier frequency. AM is defined as a process in which the amplitude of the carrier wave c(t) is varied about a mean value, linearly with the baseband signal. The amplitude-modulated wave is s(t) = A_c[1 + k_a m(t)] cos(2πf_c t), where k_a is the amplitude sensitivity.
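As a quick, hedged illustration of these definitions, the following numpy sketch generates an AM wave; the sample rate, tone and carrier frequencies, and k_a below are illustrative assumptions, not values from the notes:

```python
import numpy as np

# Illustrative parameters (assumed): 1 kHz tone, 20 kHz carrier.
fs, fc, fm = 200_000, 20_000, 1_000   # sample rate, carrier, message freq (Hz)
Ac, ka = 1.0, 0.5                     # carrier amplitude, amplitude sensitivity
t = np.arange(0, 10e-3, 1 / fs)       # 10 ms of signal

m = np.cos(2 * np.pi * fm * t)                       # baseband message m(t)
c = Ac * np.cos(2 * np.pi * fc * t)                  # carrier c(t)
s = Ac * (1 + ka * m) * np.cos(2 * np.pi * fc * t)   # AM wave s(t)

# |ka*m(t)| < 1 everywhere, so the envelope follows 1 + ka*m(t) (no overmodulation).
print("max |ka*m(t)| =", np.max(np.abs(ka * m)))
```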

8 Figure: illustrating the amplitude modulation process: baseband signal m(t); AM wave for |k_a m(t)| < 1 for all t; AM wave for |k_a m(t)| > 1 for some t.

9 Observations from the figure. If |k_a m(t)| < 1 for all t, the envelope of the modulated signal s(t) is linear with the modulating signal m(t); therefore an envelope detector can recover the message signal in the receiver. If |k_a m(t)| > 1 for some t, carrier phase reversals occur; this is called overmodulation. In that case the envelope of s(t) is no longer linear with m(t), and the message signal cannot be recovered by an envelope detector.

10 In AM, two requirements must be satisfied: 1. f_c >> W, where W is the message bandwidth, so that the envelope of s(t) can be visualized satisfactorily; 2. |k_a m(t)| < 1 for all t, so that overmodulation is avoided.

11 Descriptions of the AM signal. Time domain: s(t) = A_c[1 + k_a m(t)] cos(2πf_c t). Frequency domain: S(f) = (A_c/2)[δ(f − f_c) + δ(f + f_c)] + (k_a A_c/2)[M(f − f_c) + M(f + f_c)], where m(t) ⇌ M(f) is a Fourier-transform pair.

12 (a) Spectrum of the baseband signal; (b) spectrum of the AM wave, showing the negative-frequency image, the upper and lower sidebands, and the transmission bandwidth B_T = 2W.

13 AM practical circuits: in the transmitter, modulation is accomplished using a nonlinear device; in the receiver, demodulation is likewise accomplished using a nonlinear device.

14 Virtues and limitations of AM. Virtue: simplicity of implementation. Limitations: 1. AM is wasteful of power; 2. AM is wasteful of bandwidth. (Compare the spectrum of the baseband signal with the spectrum of the AM wave.)

15 How to overcome these limitations? Step 1: suppress the carrier (DSB-SC). Step 2: modify the sidebands (SSB, VSB). DSB-SC: double sideband-suppressed carrier, where only the upper and lower sidebands are transmitted; there is no carrier-frequency component. SSB: single sideband, where only one sideband (the lower or the upper) is transmitted. VSB: vestigial sideband, where a vestige of one of the sidebands and a correspondingly modified version of the other sideband are transmitted.

16 2.3 Linear Modulation Schemes. Linear modulation is defined by the (narrowband) signal s(t) = s_I(t) cos(2πf_c t) − s_Q(t) sin(2πf_c t), where s_I(t) is the in-phase component of s(t) and s_Q(t) is the quadrature component. In linear modulation, both s_I(t) and s_Q(t) are low-pass signals that are linearly related to the message signal m(t).

17 Linear modulation is defined by s(t) = s_I(t) cos(2πf_c t) − s_Q(t) sin(2πf_c t).

18 Table: different forms of linear modulation.

Type of modulation | s_I(t) | s_Q(t)
AM | 1 + k_a m(t) | 0
DSB-SC | m(t) | 0
SSB, USB | (1/2) m(t) | +(1/2) m̂(t)
SSB, LSB | (1/2) m(t) | −(1/2) m̂(t)
VSB, V-LSB | (1/2) m(t) | +(1/2) m′(t)
VSB, V-USB | (1/2) m(t) | −(1/2) m′(t)

Here m(t) is the message signal, m̂(t) is the Hilbert transform of m(t), and m′(t) is the response of the VSB shaping filter to m(t).

19 Descriptions of the modulated signals.
AM: s(t) = A_c[1 + k_a m(t)] cos(2πf_c t)
DSB-SC: s(t) = A_c m(t) cos(2πf_c t)
SSB: s(t) = (A_c/2) m(t) cos(2πf_c t) ∓ (A_c/2) m̂(t) sin(2πf_c t), with − when the upper sideband is transmitted and + when the lower sideband is transmitted
VSB: s(t) = (A_c/2) m(t) cos(2πf_c t) ± (A_c/2) m′(t) sin(2πf_c t), with + for a vestige of the upper sideband and − for a vestige of the lower sideband

20 Two important points from the table. 1. The in-phase component s_I(t) is solely dependent on the message signal m(t). 2. The quadrature component s_Q(t) is a filtered version of m(t); the spectral modification of the modulated wave s(t) is solely due to s_Q(t). More specifically, the role of the quadrature component is merely to interfere with the in-phase component so as to reduce or eliminate power in one of the sidebands of s(t), depending on how the quadrature component is defined.

21 DSB-SC: Double Sideband-Suppressed Carrier modulation.
AM: s(t) = A_c[1 + k_a m(t)] cos(2πf_c t), S(f) = (A_c/2)[δ(f − f_c) + δ(f + f_c)] + (k_a A_c/2)[M(f − f_c) + M(f + f_c)]
DSB-SC: s(t) = A_c m(t) cos(2πf_c t), S(f) = (A_c/2)[M(f − f_c) + M(f + f_c)]

22 Figure: (a) block diagram of the product modulator; (b) baseband signal; (c) DSB-SC modulated wave. The DSB-SC signal undergoes a phase reversal whenever the message signal crosses zero; consequently the envelope of the DSB-SC signal is different from the message signal, unlike the case of the AM wave.

23 Comparison of spectra: (a) spectrum of the baseband signal; (b) spectrum of the DSB-SC wave versus the spectrum of the AM wave.

24 Demodulation of DSB-SC. Can we use an envelope detector to demodulate DSB-SC signals? No: the envelope of a DSB-SC signal is no longer linear with the modulating signal. How, then, are DSB-SC signals demodulated? The baseband signal m(t) can be recovered from a DSB-SC wave by coherent detection.

25 Coherent detection = synchronous demodulation. Why is it called coherent detection? The local oscillator signal is exactly synchronized with the carrier in both frequency and phase.

26 Coherent detection process. Suppose the local oscillator signal is A_c′ cos(2πf_c t + φ) and the DSB-SC signal is s(t) = A_c m(t) cos(2πf_c t). The output of the product modulator is
v(t) = A_c′ cos(2πf_c t + φ) s(t) = A_c A_c′ cos(2πf_c t) cos(2πf_c t + φ) m(t) = (1/2) A_c A_c′ cos(4πf_c t + φ) m(t) + (1/2) A_c A_c′ cos(φ) m(t)

27 v(t) = (1/2) A_c A_c′ cos(4πf_c t + φ) m(t) + (1/2) A_c A_c′ cos(φ) m(t). Output of the low-pass filter: v_0(t) = (1/2) A_c A_c′ cos(φ) m(t).

28 Discussion: phase difference φ, with v_0(t) = (1/2) A_c A_c′ cos(φ) m(t). Case 1: φ constant, so v_0(t) = k m(t) and m(t) can be recovered without any distortion. For φ = 0, v_0(t) = (1/2) A_c A_c′ m(t), the maximum output; for φ = ±90°, v_0(t) = 0, the minimum. This is the quadrature null effect.
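A short simulation makes the quadrature null effect concrete. The sketch below (all parameter values assumed for illustration) coherently detects a DSB-SC wave with a local-oscillator phase error φ; the recovered amplitude should follow (1/2)cos φ:

```python
import numpy as np

fs, fc, fm = 200_000, 20_000, 500
t = np.arange(0, 20e-3, 1 / fs)
m = np.cos(2 * np.pi * fm * t)
s = m * np.cos(2 * np.pi * fc * t)            # DSB-SC wave (Ac = 1)

def lowpass(x, cutoff, fs, ntaps=501):
    """Generic FIR low-pass via windowed sinc (stand-in for the LPF block)."""
    n = np.arange(ntaps) - (ntaps - 1) / 2
    h = np.sinc(2 * cutoff / fs * n) * np.hamming(ntaps)
    h /= h.sum()
    return np.convolve(x, h, mode="same")

for phi_deg in (0, 45, 90):
    phi = np.deg2rad(phi_deg)
    v = s * np.cos(2 * np.pi * fc * t + phi)  # product modulator with phase error
    v0 = lowpass(v, 2 * fm, fs)               # keep only the baseband term
    # Theory: v0(t) = (1/2) cos(phi) m(t) -> amplitudes 0.5, 0.354, ~0
    print(f"phi = {phi_deg:2d} deg, recovered amplitude ~ {np.max(np.abs(v0)):.3f}")
```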

29 Case 2: in practice, φ varies randomly with time, so synchronism must be ensured in both frequency and phase. How is synchronization maintained? With a phase-locked loop, e.g. a squaring loop or a Costas loop.

30 Virtues and limitations of DSB-SC. Virtue: saving of transmitted power. Limitations: complexity and waste of bandwidth. The resulting system complexity is the price that must be paid for suppressing the carrier wave to save transmitted power.

31 Quadrature-Carrier Multiplexing = Quadrature Amplitude Modulation (QAM). Theoretical basis: the quadrature null effect. The quadrature null effect of the coherent detector may be put to good use in the construction of quadrature-carrier multiplexing.

32 QAM is a bandwidth-conservation scheme. Why? This scheme enables two DSB-SC signals to occupy the same channel bandwidth, and yet it allows for the separation of the two message signals at the receiver output.

33 Figure: quadrature-carrier multiplexing system (transmitter and receiver).

34 Modulation and demodulation process. The transmitted signal s(t) consists of the sum of two product-modulator outputs: s(t) = A_c m_1(t) cos(2πf_c t) + A_c m_2(t) sin(2πf_c t), where m_1(t) and m_2(t) are two different message signals. Demodulation: multiplying s(t) by cos(2πf_c t) and low-pass filtering yields (1/2) A_c m_1(t); multiplying s(t) by sin(2πf_c t) and low-pass filtering yields (1/2) A_c m_2(t).
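A minimal sketch of this QAM transmitter/receiver pair, under assumed parameter values; the FIR low-pass here is a generic windowed-sinc stand-in for the LPF blocks above:

```python
import numpy as np

fs, fc = 200_000, 20_000
t = np.arange(0, 20e-3, 1 / fs)
m1 = np.cos(2 * np.pi * 300 * t)   # first message (assumed test tone)
m2 = np.cos(2 * np.pi * 700 * t)   # second message (assumed test tone)

# QAM: two DSB-SC signals on quadrature carriers share the same bandwidth.
s = m1 * np.cos(2 * np.pi * fc * t) + m2 * np.sin(2 * np.pi * fc * t)

def lowpass(x, cutoff, fs, ntaps=501):
    n = np.arange(ntaps) - (ntaps - 1) / 2
    h = np.sinc(2 * cutoff / fs * n) * np.hamming(ntaps)
    h /= h.sum()
    return np.convolve(x, h, mode="same")

# Coherent detection with synchronized quadrature carriers:
r1 = 2 * lowpass(s * np.cos(2 * np.pi * fc * t), 2_000, fs)  # recovers m1
r2 = 2 * lowpass(s * np.sin(2 * np.pi * fc * t), 2_000, fs)  # recovers m2
print("m1 recovery error:", np.max(np.abs(r1 - m1)[1000:-1000]))
print("m2 recovery error:", np.max(np.abs(r2 - m2)[1000:-1000]))
```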

35 Single-Sideband modulation (SSB): only one sideband (the lower sideband or the upper sideband) is transmitted. s_SSB(t) = Re{[m(t) + j m̂(t)] e^(j2πf_c t)} = m(t) cos(2πf_c t) − m̂(t) sin(2πf_c t) (upper-sideband case).

36 Hilbert transform. Definition: x̂(t) = x(t) * 1/(πt) = (1/π) ∫ x(τ)/(t − τ) dτ, i.e. x̂(t) is the output of a filter with impulse response h(t) = 1/(πt), whose frequency response is H(ω) = −j sgn(ω): −j for ω > 0 and +j for ω < 0.
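scipy's hilbert helper returns the analytic signal m(t) + j m̂(t), which gives a compact way to sketch the phase-shift SSB modulator of the previous slide; the frequencies below are assumptions for illustration:

```python
import numpy as np
from scipy.signal import hilbert

fs, fc, fm = 100_000, 10_000, 1_000
t = np.arange(0, 50e-3, 1 / fs)
m = np.cos(2 * np.pi * fm * t)

m_hat = np.imag(hilbert(m))     # Hilbert transform (analytic signal = m + j*m_hat)
# Phase-shift SSB modulator: the minus sign keeps the upper sideband.
s_usb = m * np.cos(2 * np.pi * fc * t) - m_hat * np.sin(2 * np.pi * fc * t)

# Check the spectrum: energy should sit at fc + fm, not at fc - fm.
S = np.abs(np.fft.rfft(s_usb)) / len(t)
f = np.fft.rfftfreq(len(t), 1 / fs)
print("peak at", f[np.argmax(S)], "Hz   (expected", fc + fm, "Hz)")
```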

37 Methods to generate SSB signals: 1) the frequency-discrimination method; 2) the phase-shift method; 3) Weaver's method.

38 Energy gap. Figure: (a) spectrum of a message signal m(t) with an energy gap of width 2f_a centered on the origin; (b) spectrum of the corresponding SSB signal containing the upper sideband. Typical example: telephone voice occupies roughly 300 Hz to 3100 Hz, giving an energy gap of 600 Hz.

39 SSB coherent detection: multiplying s_SSB(t) by a locally generated carrier A_c′ cos(2πf_c t) and low-pass filtering recovers m(t). How is synchronization between the local oscillator and the carrier in the transmitter maintained? 1. A low-power pilot carrier, or 2. a highly stable oscillator.

40 Vestigial Sideband modulation (VSB): one of the sidebands is partially suppressed, and a vestige of the other sideband is transmitted to compensate for that suppression. Either a vestige of the lower sideband or a vestige of the upper sideband may be transmitted.

41 Frequency-discrimination method to generate VSB. Key: the design of the band-pass shaping filter.

42 Magnitude response of the VSB filter: odd symmetry around f_c, and B_T = W + f_v. 1. The sum of the values of the magnitude response H(f) at any two frequencies equally displaced above and below f_c is unity: H(f − f_c) + H(f + f_c) = 1 for −W ≤ f ≤ W. 2. The phase response arg(H(f)) is linear.

43 VSB description in the time domain: s(t) = (A_c/2) m(t) cos(2πf_c t) ± (A_c/2) m′(t) sin(2πf_c t), with + for a vestige of the upper sideband and − for a vestige of the lower sideband. Here m′(t) is the response of a filter H_Q(f) to m(t), and a −π/2 phase-shift method can thus be used to generate s_VSB(t).

44 2.4 Frequency Translation. The basic operation involved in SSB is in fact a form of frequency translation, which is why SSB modulation is sometimes called frequency changing, mixing, or heterodyning.

45 Mixer: a device that consists of a product modulator followed by a band-pass filter. Multiplying a modulated wave centered at f_1 by a local carrier cos(2πf_l t) produces a sum frequency f_2 = f_1 + f_l and a difference frequency f_2 = f_1 − f_l.

46 Up conversion: f_2 = f_1 + f_l, so the required local-oscillator frequency is f_l = f_2 − f_1. Down conversion: f_2 = f_1 − f_l, so f_l = f_1 − f_2.

47 Figure: mixer operation: spectrum of the modulated signal s_1(t) at the mixer input, and spectrum of the corresponding signal s_2(t) at the output of the product modulator in the mixer; the band-pass filter selects the desired translated band.

48 2.5 Frequency-Division Multiplexing (FDM). Multiplexing is the process by which a number of independent signals are combined into a composite signal suitable for transmission over a common channel.

49 Types of multiplexing. FDM: separate the signals according to frequency. TDM: separate the signals according to time. CDM: separate the signals according to code.

50 Block diagram of an FDM system.

51 Example: carrier telephone system (SSB/FDM). Figure: illustrating the modulation steps in an FDM system.

52 2.9 Superheterodyne Receiver (superhet). Since Armstrong invented the superheterodyne radio receiver in 1918, almost all radio and TV receivers have been of the superheterodyne type.

53 A receiver in a broadcasting system performs the following functions: carrier-frequency tuning, filtering, and amplification. The superhet is a special type of receiver that fulfills all three functions in an elegant and practical fashion.

54 Image interference. The image frequency is f_image = f_c + 2f_IF if f_LO > f_c, and f_image = f_c − 2f_IF if f_LO < f_c.

55 Figure: basic elements of an AM radio receiver of the superheterodyne type; f_IF = f_LO − f_RF.

56 Unit-II: Angle Modulation

57 2.6 Angle Modulation. Definition: the angle of the carrier wave is varied according to the baseband signal, while the amplitude of the carrier is maintained constant. Angle modulation comprises PM and FM. An important feature: it provides better discrimination against noise and interference than amplitude modulation.

58 Tradeoff. This improvement in performance is achieved at the expense of increased transmission bandwidth; that is, angle modulation provides a practical means of exchanging channel bandwidth for improved noise performance. Such a tradeoff is not possible with amplitude modulation, regardless of its form.

59 Basic definition of angle modulation. Let θ_i(t) denote the angle of the modulated carrier; the angle-modulated wave is s(t) = A_c cos[θ_i(t)], where θ_i(t) is a function of m(t). If θ_i(t) increases monotonically with time, the average frequency in hertz over an interval from t to t + Δt is f_Δt(t) = [θ_i(t + Δt) − θ_i(t)] / (2πΔt).

60 Instantaneous frequency: f_i(t) = lim_{Δt→0} f_Δt(t) = lim_{Δt→0} [θ_i(t + Δt) − θ_i(t)]/(2πΔt) = (1/2π) dθ_i(t)/dt, so θ_i(t) = 2π ∫ f_i(t) dt. In the simple case of an unmodulated carrier, θ_i(t) = 2πf_c t + φ_c. The constant φ_c is the value of θ_i(t) at t = 0; it is usually assumed to be zero for convenience.

61 Angle modulation is defined as the angle of the carrier wave varying with the modulating signal: θ_i(t) is a function of m(t). There are an infinite number of ways in which the angle may be varied with the message signal; however, we shall consider only the two commonly used methods, FM and PM.

62 Phase Modulation (PM): the angle θ_i(t) is varied linearly with the message signal m(t), θ_i(t) = 2πf_c t + k_p m(t), where k_p is the phase sensitivity. The PM signal is described in the time domain by s_PM(t) = A_c cos[θ_i(t)] = A_c cos[2πf_c t + k_p m(t)].

63 Frequency Modulation (FM): the instantaneous frequency f_i(t) is varied linearly with the message signal m(t), f_i(t) = f_c + k_f m(t), where k_f is the frequency sensitivity. Then θ_i(t) = 2π ∫_0^t f_i(τ) dτ = 2πf_c t + 2πk_f ∫_0^t m(τ) dτ, and the FM signal is described in the time domain by s_FM(t) = A_c cos[2πf_c t + 2πk_f ∫_0^t m(τ) dτ].

64 Relationship between FM and PM: s_PM(t) = A_c cos[2πf_c t + k_p m(t)] and s_FM(t) = A_c cos[2πf_c t + 2πk_f ∫_0^t m(τ) dτ], with f_i(t) = (1/2π) dθ_i(t)/dt and θ_i(t) = 2π ∫ f_i(t) dt. So we may deduce all the properties of PM signals from those of FM signals and vice versa; hence we concentrate our attention on FM signals.
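The FM/PM relationship can be checked numerically: integrating the message and phase-modulating yields FM, exactly as in the indirect method. A sketch with assumed sensitivities k_f and k_p:

```python
import numpy as np

fs, fc, fm = 200_000, 20_000, 100
Ac, kf, kp = 1.0, 1_000.0, 2.0        # assumed sensitivities (Hz/V and rad/V)
t = np.arange(0, 40e-3, 1 / fs)
m = np.cos(2 * np.pi * fm * t)

# FM: integrate m(t), then phase-modulate (the indirect route).
m_int = np.cumsum(m) / fs                                   # approx. integral of m
s_fm = Ac * np.cos(2 * np.pi * fc * t + 2 * np.pi * kf * m_int)

# PM: phase is directly proportional to m(t).
s_pm = Ac * np.cos(2 * np.pi * fc * t + kp * m)

# Sanity check: instantaneous frequency of the FM wave should swing fc +/- kf.
phase = np.unwrap(np.angle(np.exp(1j * (2 * np.pi * fc * t + 2 * np.pi * kf * m_int))))
fi = np.diff(phase) * fs / (2 * np.pi)
print("instantaneous frequency range:", fi.min().round(1), "to", fi.max().round(1), "Hz")
```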

65 Direct method to generate an FM signal: modulating signal → frequency modulator (carrier A_c cos(2πf_c t)) → FM signal. Indirect method: modulating signal → integrator → phase modulator (carrier A_c cos(2πf_c t)) → FM signal.

66 Direct method to generate a PM signal: modulating signal → phase modulator (carrier A_c cos(2πf_c t)) → PM signal. Indirect method: modulating signal → differentiator → frequency modulator (carrier A_c cos(2πf_c t)) → PM signal.

67 2.7 Frequency Modulation (FM). The FM signal s(t) is a nonlinear function of the modulating signal m(t), which makes frequency modulation a nonlinear modulation process. Consequently, unlike amplitude modulation, the spectrum of an FM signal is not related in a simple manner to that of the modulating signal; its analysis is much more difficult than that of an AM signal.

68 How can we tackle the spectral analysis of an FM signal? We provide an empirical answer by proceeding as follows: 1. first consider the simplest case, single-tone modulation with narrowband FM; 2. then consider the more general case, single-tone modulation with wideband FM. Objective: to establish an empirical formula relating the transmission bandwidth of an FM signal to the bandwidth of the message signal.

69 Single-tone frequency modulation. A single-tone modulating signal is m(t) = A_m cos(2πf_m t), where f_m is the modulation frequency. The instantaneous frequency of the FM signal is f_i(t) = f_c + k_f A_m cos(2πf_m t) = f_c + Δf cos(2πf_m t), where Δf = k_f A_m is the frequency deviation.

70 θ_i(t) = 2π ∫_0^t f_i(τ) dτ = 2πf_c t + (Δf/f_m) sin(2πf_m t). We define the modulation index β = Δf/f_m, so θ_i(t) = 2πf_c t + β sin(2πf_m t), and the FM signal for a single-tone modulating signal is s(t) = A_c cos[2πf_c t + β sin(2πf_m t)].

71 Two important concepts in FM. Δf, the frequency deviation: the maximum departure of the instantaneous frequency of the FM signal from the carrier frequency f_c; it is proportional to the amplitude of the modulating signal and independent of the modulation frequency f_m. β, the modulation index: the phase deviation, i.e. the maximum departure of the angle θ_i(t) from the angle 2πf_c t of the unmodulated carrier.

72 Narrowband and wideband FM. Depending on the value of the modulation index β, there are two cases of frequency modulation: narrowband FM (β << 1) and wideband FM (otherwise).

73 2.7.1 Narrowband Frequency Modulation (NBFM). The FM signal for single-tone m(t) is s(t) = A_c cos[2πf_c t + β sin(2πf_m t)]. Expanding: s(t) = A_c cos(2πf_c t) cos[β sin(2πf_m t)] − A_c sin(2πf_c t) sin[β sin(2πf_m t)]. When β << 1, cos[β sin(2πf_m t)] ≈ 1 and sin[β sin(2πf_m t)] ≈ β sin(2πf_m t); hence the NBFM signal can be expressed as s_NBFM(t) ≈ A_c cos(2πf_c t) − βA_c sin(2πf_m t) sin(2πf_c t).

74 Narrowband frequency modulator: s_NBFM(t) = A_c cos(2πf_c t) − βA_c sin(2πf_m t) sin(2πf_c t). The modulating signal passes through an integrator into an NBPM modulator built from a product modulator, a −π/2 phase-shifter on the carrier wave A_c cos(2πf_c t), and a summer.

75 NBFM is different from ideal FM. Why? 1. The envelope contains a residual amplitude modulation. 2. The angle contains harmonic distortion. If β < 0.3 radians, the effects of residual AM and harmonic PM are limited to negligible levels.

76 NBFM is similar to AM:
s_NBFM(t) ≈ A_c cos(2πf_c t) − βA_c sin(2πf_c t) sin(2πf_m t) = A_c cos(2πf_c t) + (1/2)βA_c {cos[2π(f_c + f_m)t] − cos[2π(f_c − f_m)t]}
s_AM(t) = A_c[1 + k_a m(t)] cos(2πf_c t) = A_c[1 + k_a A_m cos(2πf_m t)] cos(2πf_c t) = A_c cos(2πf_c t) + (1/2)k_a A_m A_c {cos[2π(f_c + f_m)t] + cos[2π(f_c − f_m)t]}
Basic difference: the algebraic sign of the lower side frequency.

77 2.7.2 Wideband Frequency Modulation (WBFM). We have seen that the single-tone FM signal is s(t) = A_c cos[2πf_c t + β sin(2πf_m t)]. For convenience we use the complex form of a band-pass signal: s(t) = Re{A_c exp[j2πf_c t + jβ sin(2πf_m t)]} = Re{s̃(t) exp(j2πf_c t)}.

78 Complex envelope: s̃(t) = A_c exp[jβ sin(2πf_m t)]. Since s̃(t) is periodic with period 1/f_m, it has the Fourier series s̃(t) = Σ_n c_n exp(j2πnf_m t), with Fourier coefficients c_n = f_m ∫_{−1/(2f_m)}^{1/(2f_m)} s̃(t) exp(−j2πnf_m t) dt = f_m A_c ∫_{−1/(2f_m)}^{1/(2f_m)} exp[jβ sin(2πf_m t) − j2πnf_m t] dt.

79 Substituting x = 2πf_m t, we may rewrite the coefficient in the new form c_n = (A_c/2π) ∫_{−π}^{π} exp[j(β sin x − nx)] dx. The Bessel function of the first kind of order n is J_n(β) = (1/2π) ∫_{−π}^{π} exp[j(β sin x − nx)] dx, so we may reduce c_n to c_n = A_c J_n(β).

80 Substituting c_n into s̃(t), we get s̃(t) = A_c Σ_n J_n(β) exp(j2πnf_m t). Therefore s(t) = A_c Re{Σ_n J_n(β) exp[j2π(f_c + nf_m)t]} = A_c Σ_n J_n(β) cos[2π(f_c + nf_m)t], and the Fourier transform gives S(f) = (A_c/2) Σ_n J_n(β) [δ(f − f_c − nf_m) + δ(f + f_c + nf_m)].

81 Figure: plots of Bessel functions of the first kind J_n(β) for varying order n.

82 Bessel function properties. 1. J_{−n}(β) = (−1)^n J_n(β) for all n, both positive and negative. 2. For small values of the modulation index β: J_0(β) ≈ 1, J_1(β) ≈ β/2, and J_n(β) ≈ 0 for n > 1. 3. Σ_{n=−∞}^{∞} J_n²(β) = 1.

83 Observations about FM. 1. The spectrum of an FM signal contains a carrier component and an infinite set of side frequencies. 2. If β << 1, the FM signal is effectively composed of a carrier and a single pair of side frequencies at f_c ± f_m; this situation corresponds to NBFM. 3. The average power of an FM signal is constant: P = A_c²/2.
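These observations are easy to verify numerically. The sketch below compares FFT line amplitudes of a single-tone FM wave against |J_n(β)| from scipy, and checks that the average power stays at A_c²/2 (all parameter values are illustrative assumptions):

```python
import numpy as np
from scipy.special import jv

Ac, fc, fm, beta = 1.0, 20_000.0, 1_000.0, 2.0
fs = 200_000
t = np.arange(0, 1.0, 1 / fs)                 # 1 s -> 1 Hz FFT resolution
s = Ac * np.cos(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))

S = 2 * np.abs(np.fft.rfft(s)) / len(t)       # one-sided amplitude spectrum
f = np.fft.rfftfreq(len(t), 1 / fs)

for n in range(4):                            # carrier and first few side freqs
    amp = S[int(fc + n * fm)]                 # line at fc + n*fm (1 Hz bins)
    print(f"n={n}: FFT amplitude {amp:.4f},  |Jn(beta)| = {abs(jv(n, beta)):.4f}")

# Average power is constant, independent of beta: P = Ac^2 / 2.
print("measured power:", np.mean(s**2).round(4), " (theory: 0.5)")
```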

84 How do we estimate the transmission bandwidth of FM signals? Two approaches: the 1-percent (universal-curve) method, and Carson's rule.

85 1-percent method: B_T = 2 n_max f_m, where n_max is the largest n for which |J_n(β)| exceeds 1 percent of the unmodulated carrier amplitude. Carson's rule: B_T = 2Δf + 2f_m = 2Δf(1 + 1/β) = 2f_m(1 + β).

86 From a single-tone signal to an arbitrary signal: define the deviation ratio D = Δf/W, where Δf is the frequency deviation and W the highest modulation frequency. D plays the same role as β, and Carson's rule is modified to B_T = 2Δf + 2W = 2W(1 + D).

87 Example: FM radio broadcasting, with W = 15 kHz and Δf = 75 kHz; B_T = ? The deviation ratio is D = 75/15 = 5. 1) According to Carson's rule, the transmission bandwidth is B_T = 2(75 + 15) = 180 kHz. 2) According to the 1-percent method, the universal curve gives B_T = 3.2Δf = 3.2 × 75 = 240 kHz. In practice, a bandwidth of 200 kHz is allocated to each FM transmitter.

88 Generation of FM signals, two basic methods. 1. Direct FM: m(t) → voltage-controlled oscillator → s_FM(t). 2. Indirect FM: m(t) → NBFM modulator → frequency multiplier → s_FM(t).

89 Figure: block diagram of the indirect method of generating a wideband FM signal (NBFM modulator followed by frequency multiplication).

90 Why can a frequency multiplier change NBFM into WBFM? Figure: block diagram of a frequency multiplier. The input-output relation of its memoryless nonlinear device may be expressed in the general form v(t) = a_1 s(t) + a_2 s²(t) + … + a_n sⁿ(t).

91 The input is an FM signal s(t) = A_c cos[2πf_c t + 2πk_f ∫_0^t m(τ) dτ] with instantaneous frequency f_i(t) = f_c + k_f m(t). After band-pass filtering the nth-order term, the output signal is s′(t) = A_c′ cos[2πnf_c t + 2πnk_f ∫_0^t m(τ) dτ], with instantaneous frequency f_i′(t) = nf_c + nk_f m(t). So Δf′ = nΔf and the carrier moves to nf_c: NBFM becomes WBFM.

92 Demodulation of FM signals, two basic methods. 1. Indirect method: the phase-locked loop. 2. Direct method: the frequency discriminator, i.e. FM wave → slope circuit → envelope detector → baseband signal.

93 Figure: balanced frequency discriminator.

94 Unit-III: Random Process

95 Review of the last lecture. The points worth noting are: the source coding algorithm plays an important role in achieving a higher code rate (compressing data); the channel encoder introduces redundancy into the data; the modulation scheme plays an important role in deciding the data rate and the immunity of the signal to the errors introduced by the channel; the channel can introduce many types of errors.

96 Review: layering of source coding. Source coding includes sampling, quantization, symbols-to-bits mapping, and compression. Decoding includes decompression, bits-to-symbols mapping, symbols to a sequence of numbers, and sequence to waveform (reconstruction).

97 Review: layering of source coding (block diagram).

98 Review: layering of channel coding. Channel coding is divided into: a discrete encoder/decoder, used to correct channel errors; and modulation/demodulation, used to map bits to waveforms for transmission.

99 Review: layering of channel coding (block diagram).

100 Review: resources of a communication system. Transmitted power: the average power of the transmitted signal. Bandwidth (spectrum): the band of frequencies allocated for the signal. Types of communication system: power-limited systems (e.g. space communication links) and band-limited systems (e.g. telephone systems).

101 Review: digital communication system. Important features of a DCS: the transmitter sends a waveform from a finite set of possible waveforms during a limited time; the channel distorts and attenuates the transmitted signal and adds noise to it; the receiver decides which waveform was transmitted from the noisy received signal; the probability of an erroneous decision is an important measure of system performance.

102 Review of Probability

103 Sample space and probability. Random experiment: its outcome, for some reason, cannot be predicted with certainty. Examples: throwing a die, flipping a coin, and drawing a card from a deck. Sample space: the set of all possible outcomes, denoted by S. Outcomes are denoted by E's, and each E lies in S, i.e., E ∈ S. A sample space can be discrete or continuous.

104 Three axioms of probability. For a discrete sample space S, define a probability measure P as a set function that assigns nonnegative values to all events E in S such that the following conditions are satisfied. Axiom 1: 0 ≤ P(E) ≤ 1 for all E ⊆ S. Axiom 2: P(S) = 1 (when an experiment is conducted there has to be an outcome). Axiom 3: for mutually exclusive events E_1, E_2, E_3, …, we have P(E_1 ∪ E_2 ∪ E_3 ∪ …) = P(E_1) + P(E_2) + P(E_3) + …

105 Conditional probability. We observe or are told that event E_1 has occurred but are actually interested in event E_2: knowledge that E_1 has occurred changes the probability of E_2 occurring. If it was P(E_2) before, it now becomes P(E_2|E_1), the probability of E_2 occurring given that E_1 has occurred: P(E_2|E_1) = P(E_1 E_2)/P(E_1). If P(E_2|E_1) = P(E_2), or equivalently P(E_1 E_2) = P(E_1)P(E_2), then E_1 and E_2 are said to be statistically independent. Bayes' rule: P(E_2|E_1) = P(E_1|E_2)P(E_2)/P(E_1).

106 Mathematical models for signals: deterministic or stochastic. Deterministic signal: no uncertainty with respect to the signal value at any time; deterministic signals or waveforms are modeled by explicit mathematical expressions, such as x(t) = 5 cos(2πf_0 t). This is often inappropriate for real-world problems. Stochastic/random signal: some degree of uncertainty in signal values before they actually occur; for a random waveform it is not possible to write such an explicit expression. A random waveform/random process may, however, exhibit certain regularities that can be described in terms of probabilities and statistical averages, e.g. thermal noise in electronic circuits due to the random movement of electrons.

107 Energy and power signals. The performance of a communication system depends on the received signal energy: higher-energy signals are detected more reliably (with fewer errors) than lower-energy signals. An electrical signal can be represented as a voltage v(t) or a current i(t) with instantaneous power p(t) across a resistor R defined by p(t) = v²(t)/R or p(t) = i²(t)R.

108 Energy and power signals. In communication systems, power is often normalized by assuming R to be 1 Ω; this convention allows us to express the instantaneous power as p(t) = x²(t), where x(t) is either a voltage or a current signal. The energy dissipated during the time interval (−T/2, T/2) by a real signal with this instantaneous power is E_x^T = ∫_{−T/2}^{T/2} x²(t) dt, and the average power dissipated by the signal during the interval is P_x^T = E_x^T / T = (1/T) ∫_{−T/2}^{T/2} x²(t) dt.

109 Energy and power signals. We classify x(t) as an energy signal if, and only if, it has nonzero but finite energy (0 < E_x < ∞) for all time, where E_x = lim_{T→∞} ∫_{−T/2}^{T/2} x²(t) dt. An energy signal has finite energy but zero average power. Signals that are both deterministic and non-periodic are termed energy signals.

110 Energy and power signals. Power is the rate at which energy is delivered. We classify x(t) as a power signal if, and only if, it has nonzero but finite power (0 < P_x < ∞) for all time, where P_x = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x²(t) dt. A power signal has finite power but infinite energy. Signals that are random or periodic are termed power signals.

111 Random variable: a function whose domain is a sample space and whose range is some set of real numbers. Types of RVs: discrete (e.g. the outcomes of flipping a coin) and continuous (e.g. the amplitude of a noise voltage at a particular instant of time).

112 Random variables. All useful signals are random, i.e. the receiver does not know a priori which waveform is going to be sent by the transmitter. Let a random variable X(A) represent the functional relationship between a random event A and a real number. The distribution function F_X(x) of the random variable X is given by F_X(x) = P(X ≤ x).

113 A random variable is a mapping from the sample space to the set of real numbers. We shall denote random variables by boldface, i.e., x, y, etc., while individual or specific values of the mapping x are denoted by x(w).

114 Random process: a collection of time functions, or signals, corresponding to various outcomes of a random experiment. For each outcome there exists a deterministic time function, which is called a sample function or a realization; at any fixed time, the values across realizations form a random variable.

115 Random process: a mapping from a sample space to a set of time functions.

116 Random process, contd. Ensemble: the set of possible time functions that one sees; denote this set by x(t), where the time functions x(t, w_1), x(t, w_2), x(t, w_3), … are specific members of the ensemble. At any time instant t = t_k we have a random variable x(t_k). At any two time instants, say t_1 and t_2, we have two different random variables x(t_1) and x(t_2); the relationship between any two such random variables is described by their joint PDF.

117 Classification of random processes. Based on whether its statistics change with time, a process is non-stationary or stationary. Different levels of stationarity: strictly stationary, where the joint pdf of any order is independent of a shift in time; Nth-order stationary, where the joint pdf does not depend on the time shift but does depend on the time spacings.

118 Cumulative distribution function (cdf): gives a complete description of the random variable. It is defined as F_X(x) = P(E ∈ S : X(E) ≤ x) = P(X ≤ x). The cdf has the following properties: 0 ≤ F_X(x) ≤ 1 (this follows from Axiom 1 of the probability measure); F_X(x) is non-decreasing, F_X(x_1) ≤ F_X(x_2) if x_1 ≤ x_2 (because the event X(E) ≤ x_1 is contained in the event X(E) ≤ x_2); F_X(−∞) = 0 and F_X(+∞) = 1 (X(E) ≤ −∞ is the empty set, hence an impossible event, while X(E) ≤ +∞ is the whole sample space, i.e. a certain event); P(a < X ≤ b) = F_X(b) − F_X(a).

119 Probability density function (pdf): defined as the derivative of the cdf, f_X(x) = dF_X(x)/dx. It follows that P(a < X ≤ b) = ∫_a^b f_X(x) dx. For a discrete random variable with probabilities p_i, one has p_i ≥ 0 for all i and Σ_i p_i = 1.

120 Joint cdf and joint pdf: often encountered when dealing with combined experiments or repeated trials of a single experiment. Multiple random variables are basically multidimensional functions defined on the sample space of a combined experiment. Experiment 1: S_1 = {x_1, x_2, …, x_M}; experiment 2: S_2 = {y_1, y_2, …, y_N}. If we take any one element from S_1 and one from S_2, then 0 ≤ P(x_i, y_j) ≤ 1 (the joint probability of two or more outcomes). Marginal probability distributions: Σ_j P(x_i, y_j) = P(x_i) and Σ_i P(x_i, y_j) = P(y_j).

121 Expectation of random variables (statistical averages). Statistical averages, or moments, play an important role in the characterization of a random variable. The first moment of the probability distribution of a random variable X is called the mean value m_X, or expected value, of X: m_X = E[X] = ∫ x f_X(x) dx. The second moment is the mean-square value of X: E[X²] = ∫ x² f_X(x) dx. Central moments are the moments of the difference between X and m_X; the second central moment is the variance of X, σ_X² = E[(X − m_X)²], which equals the difference between the mean-square value and the square of the mean: σ_X² = E[X²] − m_X².

122 Contd. The variance provides a measure of the variable's randomness. The mean and variance of a random variable give a partial description of its pdf.

123 Time averaging and ergodicity. An ergodic process is one where any member of the ensemble exhibits the same statistical behavior as the whole ensemble. For an ergodic process, to measure various statistical averages it is sufficient to look at only one realization of the process and find the corresponding time average. For a process to be ergodic it must be stationary; the converse is not true.

124 Gaussian (or normal) random variable: a continuous random variable whose pdf is f_X(x) = (1/√(2πσ²)) exp[−(x − μ)²/(2σ²)], where μ and σ² are parameters; usually denoted N(μ, σ²). It is the most important and most frequently encountered random variable in communications.

125 Central limit theorem. The CLT provides justification for using a Gaussian process as a model when: the random variables are statistically independent, and the random variables have the same probability distribution, with the same mean and variance.

126 CLT. The central limit theorem states that the probability distribution of the normalized sum V_n approaches a normalized Gaussian distribution N(0, 1) in the limit as the number of random variables approaches infinity. When N is finite, it may provide a poor approximation of the actual probability distribution.
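A quick Monte-Carlo sketch of the CLT: normalized sums of independent uniform random variables approach N(0, 1). The choice of uniform variables and N = 30 is an arbitrary illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 30, 100_000            # N i.i.d. variables per sum

# Uniform(0,1): mean 1/2, variance 1/12. Normalize the sum to zero mean, unit variance.
x = rng.uniform(0, 1, size=(trials, N))
v = (x.sum(axis=1) - N * 0.5) / np.sqrt(N / 12.0)

print("mean ~", v.mean().round(3), "  var ~", v.var().round(3))
# Compare a tail probability with the Gaussian value Q(2) = 0.0228.
print("P(V > 2) ~", (v > 2).mean().round(4), " (Gaussian: 0.0228)")
```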

127 Autocorrelation of energy signals. Correlation is a matching process; autocorrelation refers to the matching of a signal with a delayed version of itself. The autocorrelation function of a real-valued energy signal x(t) is defined as R_x(τ) = ∫_{−∞}^{∞} x(t) x(t + τ) dt. The autocorrelation function R_x(τ) provides a measure of how closely the signal matches a copy of itself as the copy is shifted τ units in time. R_x(τ) is not a function of time; it is only a function of the time difference τ between the waveform and its shifted copy.

128 Autocorrelation properties (energy signals): symmetrical in τ about zero, R_x(τ) = R_x(−τ); the maximum value occurs at the origin, |R_x(τ)| ≤ R_x(0); autocorrelation and ESD form a Fourier-transform pair; the value at the origin is equal to the energy of the signal, R_x(0) = E_x.
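These properties can be checked numerically; the sketch below uses a decaying exponential as an assumed example energy signal:

```python
import numpy as np

fs = 1_000
t = np.arange(0, 1, 1 / fs)
x = np.exp(-5 * t)                                # a simple energy signal

# Autocorrelation Rx(tau) = integral of x(t) x(t+tau) dt (discrete approximation).
Rx = np.correlate(x, x, mode="full") / fs
mid = len(x) - 1                                  # index of zero lag

print("symmetry check:", np.allclose(Rx[mid + 10], Rx[mid - 10]))  # Rx(t) = Rx(-t)
print("max at origin:", np.argmax(Rx) == mid)
print("Rx(0) =", Rx[mid].round(4), "  signal energy =", (np.sum(x**2) / fs).round(4))
```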

129 Autocorrelation of a periodic (power) signal. The autocorrelation function of a real-valued power signal x(t) is defined as R_x(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t) x(t + τ) dt. When the power signal x(t) is periodic with period T_0, the autocorrelation function can be expressed as R_x(τ) = (1/T_0) ∫_{−T_0/2}^{T_0/2} x(t) x(t + τ) dt.

130 Autocorrelation of power signals. The autocorrelation function of a real-valued periodic signal has properties similar to those of an energy signal: symmetrical in τ about zero, R_x(τ) = R_x(−τ); the maximum value occurs at the origin, |R_x(τ)| ≤ R_x(0); autocorrelation and PSD form a Fourier-transform pair; the value at the origin is equal to the average power of the signal, R_x(0) = P_x.


133 Spectral Density

134 Spectral density. The spectral density of a signal characterizes the distribution of the signal's energy or power in the frequency domain. This concept is particularly important when considering filtering in communication systems, for evaluating the signal and noise at the filter output. The energy spectral density (ESD) or the power spectral density (PSD) is used in the evaluation; we need to determine how the average power or energy of the process is distributed in frequency.

135 Spectral density. Taking the Fourier transform of a random process directly does not work, since its sample functions are power signals that are not absolutely integrable; spectral densities are used instead.

136 Energy spectral density: describes the signal energy per unit bandwidth, measured in joules/hertz, and represented as the squared magnitude spectrum ψ_x(f) = |X(f)|². According to Parseval's relation, E_x = ∫_{−∞}^{∞} x²(t) dt = ∫_{−∞}^{∞} |X(f)|² df. The energy spectral density is symmetrical in frequency about the origin, and the total energy of the signal x(t) can be expressed as E_x = ∫_{−∞}^{∞} ψ_x(f) df.

137 Power spectral density. The power spectral density G_x(f) of a periodic signal x(t) is a real, even, and nonnegative function of frequency that gives the distribution of the power of x(t) in the frequency domain. For a periodic signal it is the line spectrum G_x(f) = Σ_n |c_n|² δ(f − nf_0) built from the Fourier-series coefficients c_n; for non-periodic power signals it is defined as a limit over a truncated observation window. The average power of a periodic signal x(t) is P_x = ∫ G_x(f) df = Σ_n |c_n|².
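A sketch of PSD estimation using scipy's periodogram (the test tone is an assumed example); integrating the estimated density over frequency should recover the average power:

```python
import numpy as np
from scipy.signal import periodogram

fs = 10_000
t = np.arange(0, 1, 1 / fs)
x = 3 * np.cos(2 * np.pi * 500 * t)          # average power = 3^2 / 2 = 4.5 W

f, Pxx = periodogram(x, fs)                  # one-sided PSD estimate, W/Hz
print("peak at", f[np.argmax(Pxx)], "Hz")
print("total power ~", (np.sum(Pxx) * (f[1] - f[0])).round(3), " (theory: 4.5)")
```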

138 Noise in the communication system. The term noise refers to unwanted electrical signals that are always present in electrical systems, e.g. spark-plug ignition noise, switching transients, and other electromagnetic signals, or atmospheric sources such as the sun and other galactic sources. Thermal noise can be described as a zero-mean Gaussian random process: a Gaussian process n(t) is a random function whose value n at any arbitrary time t is statistically characterized by the Gaussian probability density function.

139 White noise. The primary spectral characteristic of thermal noise is that its power spectral density is the same for all frequencies of interest in most communication systems: a thermal noise source emanates an equal amount of noise power per unit bandwidth at all frequencies from dc to about 10^12 Hz. Power spectral density: G_w(f) = N_0/2 for all f. The autocorrelation function of white noise is R_w(τ) = (N_0/2) δ(τ). The average power P of white noise is infinite.

140 White noise (PSD and autocorrelation sketches).

141 White noise. Since R_w(τ) = 0 for τ ≠ 0, any two different samples of white noise, no matter how close in time they are taken, are uncorrelated. Since the noise samples of white noise are uncorrelated, if the noise is both white and Gaussian (for example, thermal noise), then the noise samples are also independent.

142 Additive white Gaussian noise (AWGN). The effect on the detection process of a channel with AWGN is that the noise affects each transmitted symbol independently; such a channel is called a memoryless channel. The term additive means that the noise is simply superimposed or added to the signal, and that there are no multiplicative mechanisms at work.
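A common way to sketch an AWGN channel in simulation is to add zero-mean Gaussian noise scaled for a target SNR; the helper name add_awgn below is our own invention, not a library function:

```python
import numpy as np

rng = np.random.default_rng(1)

def add_awgn(signal, snr_db):
    """Add zero-mean white Gaussian noise sized for a target SNR in dB."""
    p_signal = np.mean(signal**2)
    p_noise = p_signal / 10 ** (snr_db / 10)
    noise = rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
    return signal + noise

t = np.arange(0, 1, 1e-4)
s = np.cos(2 * np.pi * 50 * t)
r = add_awgn(s, snr_db=10)
measured = 10 * np.log10(np.mean(s**2) / np.mean((r - s) ** 2))
print("measured SNR ~", measured.round(2), "dB")   # close to the 10 dB target
```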

143 Random processes and linear systems. If a random process forms the input to a time-invariant linear system, the output will also be a random process.

144 Distortionless transmission. (Recall linear and non-linear group delays in DSP.)

145 Distortionless transmission. What is required of a network for it to behave like an ideal transmission line? The output signal from an ideal transmission line may have some time delay and a different amplitude compared with the input, but it must have no distortion: it must have the same shape as the input. For ideal distortionless transmission, the output is a scaled, delayed replica of the input, y(t) = K x(t − t_0).

146 Ideal distortionless transmission: the overall system response must have a constant magnitude response; the phase shift must be linear with frequency; all of the signal's frequency components must arrive with identical time delay in order to add up correctly. The time delay t_0 is related to the phase shift θ(ω) and the radian frequency ω = 2πf by t_0 = −θ(ω)/ω. A characteristic often used to measure delay distortion of a signal is the envelope delay, or group delay, defined as τ_g(ω) = −dθ(ω)/dω.

147 Bandwidth of digital data. Baseband signals: signals containing frequencies ranging from 0 up to some frequency f_1. Bandpass (passband) signals: signals containing frequencies ranging from some f_1 up to some higher frequency f_2.

148 Unit-IV: Noise Characterization

149 2.10 Noise in CW Modulation Systems. We formulate two models: 1. the channel model: AWGN (additive white Gaussian noise); 2. the receiver model: modulated signal s(t), plus noise w(t), → band-pass filter → x(t) → demodulator → output signal.

150 Signal-to-noise ratio, basic definitions: SNR = (average power of signal)/(average power of noise). Power spectral density of the white noise w(t): S_W(f) = N_0/2 for f ∈ (−∞, ∞), where N_0 is the average noise power per unit bandwidth.

151 Ideal band-pass filtered noise: the average noise power is N_0 B_T. Furthermore, the output noise of the filter can be regarded as narrowband noise: n(t) = n_I(t) cos(2πf_c t) − n_Q(t) sin(2πf_c t).

152 Receiver model: modulated signal s(t), plus noise w(t), → band-pass filter → x(t) → demodulator → output signal, defining (SNR)_I at the demodulator input and (SNR)_O at its output. The input of the demodulator is x(t) = s(t) + n(t), where s(t) is the useful modulated signal and n(t) is narrowband noise.

153 Requirements for a fair comparison: 1. the modulated signal s(t) has the same average power in each system compared; 2. the channel noise w(t) has the same average power measured in the message bandwidth W.

154 (SNR)_C. Figure: the baseband transmission model, assuming a message signal of bandwidth W, used for calculating the channel signal-to-noise ratio. Figure of merit = (SNR)_O / (SNR)_C.

155 2.11 Noise in Linear Receivers Using Coherent Detection. Linear receivers: the DSB-SC and SSB coherent detectors. Nonlinear receiver: the AM envelope detector. We take DSB-SC as the example for analyzing the noise performance of coherent detection.

156 Model of a DSB-SC receiver using coherent detection: DSB-SC signal s(t), plus noise w(t), → BPF → x(t) → product modulator with cos(2πf_c t) → v(t) → LPF → y(t). s_DSB(t) = C A_c m(t) cos(2πf_c t), where C is a system-dependent scaling factor. Let the power spectral density of the message signal m(t) be S_M(f); its average power is P = ∫_{−W}^{W} S_M(f) df.

157 Average power of the DSB-SC signal: S_DSB = C²A_c²P/2. Average noise power in the message bandwidth: WN_0. Channel SNR: (SNR)_C,DSB = C²A_c²P/(2WN_0). The band-pass filtered noise power is N_i = 2WN_0, so (SNR)_I = C²A_c²P/(4WN_0).

158 (SNR)_O. The input of the product modulator is x(t) = s(t) + n(t) = CA_c m(t) cos(2πf_c t) + n_I(t) cos(2πf_c t) − n_Q(t) sin(2πf_c t). The output of the product modulator is v(t) = x(t) cos(2πf_c t) = (1/2) CA_c m(t) + (1/2) n_I(t) + (1/2)[CA_c m(t) + n_I(t)] cos(4πf_c t) − (1/2) n_Q(t) sin(4πf_c t). The output of the LPF is y(t) = (1/2) CA_c m(t) + (1/2) n_I(t).

159 y(t) = (1/2) CA_c m(t) + (1/2) n_I(t) at the receiver output. Output message signal: s_o(t) = (1/2) CA_c m(t), with power S_o = C²A_c²P/4. The average power of the filtered noise n(t) is N_i = 2WN_0, and the noise output is n_o(t) = (1/2) n_I(t). The average power of the in-phase noise component n_I(t) is the same as that of the filtered noise n(t), so the output noise power is N_o = (1/2)²(2WN_0) = WN_0/2.

160 The output SNR for a DSB-SC receiver using coherent detection is therefore (SNR)_O,DSB-SC = (C²A_c²P/4)/(WN_0/2) = C²A_c²P/(2WN_0). Therefore the figure of merit (SNR)_O/(SNR)_C = 1 for DSB-SC. A similar analysis of the SSB demodulator shows that (SNR)_O/(SNR)_C = 1 for SSB as well: SSB has the same figure of merit as DSB.

161 2.12 Noise in AM Receivers Using Envelope Detection. Figure: model of the AM receiver. AM signal: s(t) = A_c[1 + k_a m(t)] cos(2πf_c t). Power of the AM signal: S_AM = A_c²(1 + k_a²P)/2. Noise power in the message bandwidth: WN_0.

162 With N_I = 2WN_0, the input SNR is (SNR)_I,AM = S_AM/N_I = A_c²(1 + k_a²P)/(4WN_0). The channel SNR is (SNR)_C,AM = A_c²(1 + k_a²P)/(2WN_0).

163 x(t) = s(t) + n(t) = [A_c + A_c k_a m(t) + n_I(t)] cos(2πf_c t) − n_Q(t) sin(2πf_c t), so y(t) = envelope of x(t) = {[A_c + A_c k_a m(t) + n_I(t)]² + n_Q²(t)}^(1/2). It is difficult to obtain the relationship between signal and noise from this expression in general, so we discuss it under different conditions.

164 1) When signal >> noise, i.e. A_c[1 + k_a m(t)] >> [n_I²(t) + n_Q²(t)]^(1/2): the n_Q² term under the square root is negligible, and using (1 + x)^(1/2) ≈ 1 + x/2 for x << 1 we get y(t) ≈ A_c + A_c k_a m(t) + n_I(t). Discarding the dc term, y(t) ≈ A_c k_a m(t) + n_I(t).

165 y(t) ≈ A_c k_a m(t) + n_I(t). Output signal power: S_O = A_c²k_a²P. Output noise power: N_O = 2WN_0. Output SNR: (SNR)_O,AM = A_c²k_a²P/(2WN_0). Figure of merit: (SNR)_O/(SNR)_C = k_a²P/(1 + k_a²P); the demodulation gain is G_AM = (SNR)_O/(SNR)_I = 2k_a²P/(1 + k_a²P).

166 Comparison of DSB, SSB, and AM: the figure of merit of DSB and SSB receivers using coherent detection is always unity, whereas the corresponding figure of merit of an AM receiver using envelope detection is always less than unity. In other words, the noise performance of a full AM receiver is always inferior to that of a DSB or SSB receiver. This is due to the wastage of transmitted power that results from transmitting the carrier as a component of the AM wave.

167 Example: single-tone modulation. Single-tone modulating signal: m(t) = A_m cos(2πf_m t). The corresponding AM wave is s_AM(t) = A_c[1 + μ cos(2πf_m t)] cos(2πf_c t), with modulation factor μ = k_a A_m. The average power of m(t) is P = A_m²/2, so (SNR)_O/(SNR)_C for AM becomes μ²/(2 + μ²).

168 Discussion: (SNR)_O/(SNR)_C for AM is μ²/(2 + μ²). When μ = 1, which corresponds to 100% modulation, we get a figure of merit equal to 1/3. This means that, other factors being equal, an AM system must transmit three times as much average power as a suppressed-carrier system to achieve the same quality of noise performance.

169 Figure: model of the AM receiver. y(t) = envelope of x(t) = {[A_c + A_c k_a m(t) + n_I(t)]² + n_Q²(t)}^(1/2).

170 2) When signal << noise, i.e. A_c[1 + k_a m(t)] << [n_I²(t) + n_Q²(t)]^(1/2): write the noise in envelope-phase form, n(t) = r(t) cos[2πf_c t + ψ(t)]. Then y(t) ≈ r(t) + A_c cos[ψ(t)] + A_c k_a m(t) cos[ψ(t)]. In this case the detector output has no component strictly proportional to the message signal m(t).

171 Threshold effect: the loss of the message in an envelope detector that operates at a low SNR is referred to as the threshold effect. By threshold we mean a value of the SNR below which the noise performance of a detector deteriorates much more rapidly than proportionately to the SNR.

172 Unit-IV: Noise Characterization

173 2.13 Noise in FM Receivers. Model of an FM receiver: s_FM(t), plus noise w(t), → BPF → x(t) → limiter → discriminator → LPF → y(t). In theory the discriminator consists of two parts (a slope circuit and an envelope detector); in practice the two parts are usually implemented as integral parts of a single physical unit.

174 The filtered noise is n(t) = n_I(t) cos(2πf_c t) − n_Q(t) sin(2πf_c t). In terms of its envelope and phase, n(t) = r(t) cos[2πf_c t + ψ(t)], where r(t) = [n_I²(t) + n_Q²(t)]^(1/2) and ψ(t) = tan⁻¹[n_Q(t)/n_I(t)]. r(t) is Rayleigh distributed, and ψ(t) is uniformly distributed.

175 The incoming FM signal is s(t) = A_c cos[2πf_c t + 2πk_f ∫_0^t m(τ) dτ]. We define φ(t) = 2πk_f ∫_0^t m(τ) dτ, so s(t) = A_c cos[2πf_c t + φ(t)]. The noisy signal at the band-pass filter output is x(t) = s(t) + n(t) = A_c cos[2πf_c t + φ(t)] + r(t) cos[2πf_c t + ψ(t)].

176 Figure: phasor diagram for the FM wave plus narrowband noise for the case of high carrier-to-noise ratio. The phase of x(t) is θ(t) = φ(t) + tan⁻¹{ r(t) sin[ψ(t) − φ(t)] / (A_c + r(t) cos[ψ(t) − φ(t)]) }. The envelope of x(t) is of no interest to us, because any envelope variations at the band-pass filter output are removed by the limiter; we focus only on the phase of x(t).

177 When CNR >> 1 (CNR: carrier-to-noise ratio), r(t) << A_c, so θ(t) ≈ φ(t) + [r(t)/A_c] sin[ψ(t) − φ(t)] = 2πk_f ∫_0^t m(τ) dτ + [r(t)/A_c] sin[ψ(t) − φ(t)]. The discriminator output is v(t) = (1/2π) dθ(t)/dt = k_f m(t) + n_d(t), with noise term n_d(t) = (1/2π) d/dt { [r(t)/A_c] sin[ψ(t) − φ(t)] }. This shows that if the CNR is large, the output of the discriminator consists of the message signal plus additive noise.

178 ψ(t) is uniformly distributed over 2π radians. If the CNR is high, it can be shown that ψ(t) − φ(t) is also uniformly distributed over 2π radians; we may then simplify the noise term to n_d(t) ≈ (1/2πA_c) d/dt {r(t) sin[ψ(t)]}. Because n_Q(t) = r(t) sin[ψ(t)], we have n_d(t) ≈ (1/2πA_c) dn_Q(t)/dt. This means that the additive noise n_d(t) is determined by the carrier amplitude A_c and the quadrature component n_Q(t) of the narrowband noise n(t).

179 To calculate (SNR)_O = (average power of the demodulated signal)/(average power of the noise) at the receiver output: the output is v(t) = k_f m(t) + n_d(t), so the output signal is k_f m(t) with average power k_f²P, where P is the power of m(t). The noise n_d(t) is obtained by passing n_Q(t) through a differentiator with transfer function H(f) = j2πf/(2πA_c) = jf/A_c.

180 To calculate the output noise power, note that the power spectral density of n_d(t) is S_Nd(f) = (f²/A_c²) S_NQ(f), because S_Y(f) = |H(f)|² S_X(f) for a linear system. Since n_Q(t) is low-pass noise with PSD N_0 over |f| ≤ B_T/2, S_Nd(f) = N_0 f²/A_c² for |f| ≤ B_T/2, and 0 otherwise. After the low-pass filter at the receiver output, S_No(f) = N_0 f²/A_c² for |f| ≤ W, and 0 otherwise.

181 Figure: noise analysis of the FM receiver: (a) power spectral density of the quadrature component n_Q(t) of the narrowband noise n(t); (b) power spectral density of the noise n_d(t) at the discriminator output; (c) power spectral density of the noise n_o(t) at the receiver output.

182 Average power of the output noise: (N_0/A_c²) ∫_{−W}^{W} f² df = 2N_0W³/(3A_c²). Average output signal power: k_f²P. Therefore (SNR)_O,FM = 3A_c²k_f²P/(2N_0W³). The average power of the FM signal is S_FM = A_c²/2 and the average noise power in the message bandwidth is WN_0, so (SNR)_C,FM = A_c²/(2WN_0).

183 Figure of merit: (SNR)_O,FM / (SNR)_C,FM = [3A_c²k_f²P/(2N_0W³)] / [A_c²/(2WN_0)] = 3k_f²P/W². Deviation ratio: D = Δf/W, where Δf is the frequency deviation and W the highest modulation frequency; D plays the same role as the modulation index β, and B_T = 2Δf + 2W = 2W(1 + D). Since Δf = k_f A_m, a message of unit amplitude gives k_f = DW, so the figure of merit becomes 3PD²: it grows as the square of the deviation ratio.

184 Figure of merit ≈ 3PD². Increasing D improves the figure of merit (good!), but it also increases the transmission bandwidth B_T (bad!). Conclusion: when the carrier-to-noise ratio is high, an increase in the transmission bandwidth B_T provides a corresponding quadratic increase in the output signal-to-noise ratio (figure of merit). FM improves noise performance at the cost of transmission bandwidth.
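For the single-tone case the figure of merit 3k_f²P/W² reduces to (3/2)β², using P = A_m²/2 and Δf = k_f A_m with W = f_m, so the bandwidth-for-SNR exchange can be tabulated directly. A small sketch with an assumed highest modulation frequency:

```python
# Single-tone FM sketch: figure of merit = (3/2) * beta^2,
# Carson bandwidth BT = 2 * fm * (1 + beta).
fm = 15e3                      # assumed highest modulation frequency (Hz)
for beta in (1, 2, 5, 10):
    merit = 1.5 * beta**2
    bt = 2 * fm * (1 + beta)
    print(f"beta={beta:2d}:  BT = {bt/1e3:6.0f} kHz,  figure of merit = {merit:6.1f}")
```

The quadratic growth of the merit against the merely linear growth of B_T is the bandwidth/noise tradeoff stated above.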

185 FM threshold reduction. Threshold reduction in FM receivers may be achieved by using an FM demodulator with negative feedback, or by using a phase-locked-loop demodulator. Figure: FM demodulator with negative feedback.

186 Pre-emphasis and de-emphasis in FM. Figure: (a) power spectral density of the noise at the FM receiver output; (b) power spectral density of a typical message signal.

187 System: m(t) → pre-emphasis filter H_pe(f) → FM transmitter → channel (noise w(t)) → FM receiver → de-emphasis filter H_de(f) → recovered signal. Figure: use of pre-emphasis and de-emphasis in an FM system.

188 Pre-emphasize the high-frequency components of the message signal in the transmitter only; de-emphasize the high-frequency components of the message signal and the noise in the receiver. This effectively increases the output SNR. The frequency response H_pe(f) of the pre-emphasis filter and the frequency response H_de(f) of the de-emphasis filter satisfy H_de(f) = 1/H_pe(f) for −W ≤ f ≤ W.

189 Without pre- and de-emphasis: S_Nd(f) = N_0 f²/A_c² for |f| ≤ B_T/2, and 0 otherwise. With de-emphasis the noise PSD becomes |H_de(f)|² S_Nd(f). After the low-pass filter (−W, W), the average output noise power with de-emphasis is (N_0/A_c²) ∫_{−W}^{W} f² |H_de(f)|² df.

190 Improvement factor: I = (average output noise power without pre-emphasis and de-emphasis)/(average output noise power with pre-emphasis and de-emphasis) = (2W³/3) / ∫_{−W}^{W} f² |H_de(f)|² df.

191 Example: (a) pre-emphasis filter H_pe(f) = 1 + jf/f_0; (b) de-emphasis filter H_de(f) = 1/(1 + jf/f_0).

192 I = (2W³/3) / ∫_{−W}^{W} [f²/(1 + (f/f_0)²)] df = (W/f_0)³ / {3[(W/f_0) − tan⁻¹(W/f_0)]}. In commercial FM broadcasting, f_0 ≈ 2.1 kHz and W = 15 kHz, giving I ≈ 21, i.e. about 13 dB. The improvement is remarkable.
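The improvement factor is easy to verify numerically; the sketch below evaluates both the closed form above and the defining integral, for the assumed commercial-FM values f_0 = 2.1 kHz and W = 15 kHz:

```python
import numpy as np

W, f0 = 15e3, 2.1e3                      # message bandwidth, filter corner (Hz)

# Closed form: I = (W/f0)^3 / (3 * [(W/f0) - arctan(W/f0)])
r = W / f0
I_closed = r**3 / (3 * (r - np.arctan(r)))

# Numerical check: I = (2*W^3/3) / integral over (-W, W) of f^2 |Hde(f)|^2 df,
# with |Hde(f)|^2 = 1 / (1 + (f/f0)^2).
f = np.linspace(-W, W, 200_001)
den = np.sum(f**2 / (1 + (f / f0) ** 2)) * (f[1] - f[0])
I_num = (2 * W**3 / 3) / den

print("I =", round(I_closed, 2), "(closed form),", round(I_num, 2), "(numeric)")
print("improvement ~", round(10 * np.log10(I_closed), 1), "dB")   # about 13 dB
```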

193 Unit-V: Information Theory. Introduction: communication involves explicitly the transmission of information from one point to another, through a succession of processes. Basic elements of every communication system: transmitter, channel, and receiver. Block diagram: source of information → transmitter → channel → receiver → user of information, with the message signal, transmitted signal, received signal, and estimate of the message signal at the successive interfaces. Information sources are classified as analog or discrete. Analog: emits a continuous-amplitude, continuous-time electrical waveform. Discrete: emits a sequence of letters or symbols; the output of a discrete information source is a string or sequence of symbols. Measuring information: to measure the information content of a message quantitatively, we are required to arrive at an intuitive concept of the amount of information. Consider the following example: a trip to Mercara (Coorg) in the winter time during evening hours, with the forecasts 1. It is a cold day; 2. It is a cloudy day; 3. Possible snow flurries.

194 The amount of information received is obviously different for these messages. Message (1) contains very little information, since the weather in Coorg is cold for most of the winter season. The forecast of a cloudy day contains more information, since it is not an event that occurs often. In contrast, the forecast of snow flurries conveys even more information, since the occurrence of snow in Coorg is a rare event. On an intuitive basis, then, the amount of information conveyed by a message is related to the probability of occurrence of the event: the message associated with the event least likely to occur contains the most information. This can be formalized in terms of probabilities as follows. Say an information source emits one of q possible messages m_1, m_2, …, m_q with probabilities of occurrence p_1, p_2, …, p_q. Based on the above intuition, the information content of the k-th message should satisfy I(m_k) ∝ 1/p_k, together with: I(m_k) → 0 as p_k → 1; I(m_k) > I(m_j) if p_k < p_j; I(m_k) = I(m_j) if p_k = p_j; and I(m_k) ≥ 0 for 0 ≤ p_k ≤ 1. (I) Another requirement is that when two independent messages are received, the total information content is the sum of the information conveyed by each: I(m_k & m_q) = I(m_k) + I(m_q). (II) We can therefore define the measure of information as I(m_k) = log(1/p_k). (III)

195 Unit of information measure: the base of the logarithm determines the unit assigned to the information content. Natural logarithm (base e): nat. Base 10: Hartley or decit. Base 2: bit. Why use the binary digit as the unit of information? Because if two possible binary digits occur with equal probability (p_1 = p_2 = ½), then the correct identification of the binary digit conveys I(m_1) = I(m_2) = −log_2(½) = 1 bit. One bit is the amount of information that we gain when one of two possible and equally likely events occurs. Illustrative example: a source puts out one of five possible messages during each message interval, with probabilities p_1 = ½, p_2 = ¼, p_3 = 1/8, p_4 = 1/16, p_5 = 1/16. What is the information content of these messages? I(m_1) = −log_2(½) = 1 bit; I(m_2) = −log_2(¼) = 2 bits; I(m_3) = −log_2(1/8) = 3 bits; I(m_4) = −log_2(1/16) = 4 bits; I(m_5) = −log_2(1/16) = 4 bits. HW: calculate I for the above messages in nats and Hartleys.
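A direct way to answer the homework question is to evaluate I(m) = log(1/p) in all three bases; a small sketch using the probabilities of the example above:

```python
import math

# I(m) = log(1/p): bits (base 2), nats (base e), Hartleys (base 10).
probs = {"m1": 1/2, "m2": 1/4, "m3": 1/8, "m4": 1/16, "m5": 1/16}
for name, p in probs.items():
    bits = math.log2(1 / p)
    nats = math.log(1 / p)
    hartleys = math.log10(1 / p)
    print(f"{name}: {bits:.0f} bits = {nats:.3f} nats = {hartleys:.3f} Hartleys")
```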

196 Digital communication system: source of information → source encoder (source codeword) → channel encoder (channel codeword) → modulator → waveform channel → demodulator → channel decoder (estimate of channel codeword) → source decoder (estimate of source codeword) → user of information (estimate of the message signal). Entropy and rate of information of an information source; model of a Markoff source. Average information content of symbols in long independent sequences: suppose that a source is emitting one of M possible symbols s_1, s_2, …, s_M in a statistically independent sequence, and let p_1, p_2, …, p_M be the probabilities of occurrence of the M symbols respectively. Suppose further that during a long period of transmission a sequence of N symbols has been generated. On average, s_1 will occur Np_1 times, s_2 will occur Np_2 times, …, s_i will occur Np_i times. The information content of the i-th symbol is I(s_i) = log(1/p_i) bits, so the Np_i occurrences of s_i contribute an information content of Np_i log(1/p_i) bits. The total information content of the message is the sum of the contributions due to each of the symbols.

197 That is, I_total = Σ_{i=1}^{M} Np_i log(1/p_i) bits. The average information content per symbol is given by H = I_total/N = Σ_{i=1}^{M} p_i log(1/p_i) bits per symbol. This is the equation used by Shannon; the average information content per symbol is also called the source entropy.
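A minimal sketch of this entropy definition; the second probability vector anticipates the quantizer example worked later in this unit:

```python
import math

def entropy(probs):
    """H = sum of p_i * log2(1/p_i), in bits per symbol."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# A = {s0, s1, s2} with p = (1/4, 1/4, 1/2)  ->  H = 1.5 bits/symbol
print("H =", entropy([0.25, 0.25, 0.5]))

# Four quantizer levels with p = (1/8, 3/8, 3/8, 1/8), Nyquist rate 2B:
H = entropy([1/8, 3/8, 3/8, 1/8])
print("H =", round(H, 2), "bits/message;  R = 2B *", round(H, 2),
      "=", round(2 * H, 2), "B bits/s")
```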

198 The average information associated with an extremely unlikely message, with an extremely likely message, and the dependence of H on the probabilities of messages: consider the situation where you have just two messages with probabilities p and (1 − p). The average information per message is H = p log_2(1/p) + (1 − p) log_2(1/(1 − p)). At p = 0, H = 0, and at p = 1, H = 0 again. The maximum value of H is easily obtained as H_max = ½ log_2 2 + ½ log_2 2 = log_2 2 = 1 bit/message, occurring at p = ½; the plot of H versus p rises from 0 at p = 0 to this maximum at p = ½ and falls back to 0 at p = 1. The above observation can be generalized for a source with an alphabet of M symbols: entropy attains its maximum value when the symbol probabilities are equal, i.e., when p_1 = p_2 = p_3 = … = p_M = 1/M, giving H_max = Σ (1/M) log_2 M.

199 H_max = Σ_{i=1}^{M} (1/M) log_2 M = log_2 M bits/symbol. Information rate: if the source is emitting symbols at a fixed rate of r_s symbols/sec, the average source information rate R is defined as R = r_s H bits/sec. Illustrative examples: 1. Consider a discrete memoryless source with source alphabet A = {s_0, s_1, s_2} and respective probabilities p_0 = ¼, p_1 = ¼, p_2 = ½. Find the entropy of the source. Solution: by definition, the entropy of a source is H = Σ_i p_i log_2(1/p_i) bits/symbol; for this example H(A) = ¼ log_2 4 + ¼ log_2 4 + ½ log_2 2 = 1.5 bits. If r_s = 1 symbol per second, then R = r_s H(A) = 1.5 bits/sec. 2. An analog signal is band limited to B Hz, sampled at the Nyquist rate, and the samples are quantized into 4 levels. The quantization levels Q_1, Q_2, Q_3, and Q_4 (messages) are assumed independent and occur with probabilities p_1 = p_4 = 1/8 and p_2 = p_3 = 3/8. Find the information rate of the source. Solution: by definition, the average information is H = p_1 log_2(1/p_1) + p_2 log_2(1/p_2) + p_3 log_2(1/p_3) + p_4 log_2(1/p_4). Substituting the values given, we get

200 H = (1/8) log_2 8 + (3/8) log_2(8/3) + (3/8) log_2(8/3) + (1/8) log_2 8 = 1.8 bits/message. The information rate of the source is, by definition, R = r_s H = 2B × 1.8 = 3.6B bits/sec. 3. Compute the values of H and R if, in the above example, the quantization levels are so chosen that they are equally likely to occur. Solution: the average information per message is H = 4 × (¼ log_2 4) = 2 bits/message, and R = r_s H = 2B × 2 = 4B bits/sec. Markoff model for information sources. Assumption: a source puts out symbols belonging to a finite alphabet according to certain probabilities depending on preceding symbols as well as on the particular symbol in question. Defining a random process: a statistical model of a system that produces a sequence of symbols as stated above, governed by a set of probabilities, is known as a random process; therefore we may consider a discrete source as a random process. The converse is also true: a random process that produces a discrete sequence of symbols chosen from a finite set may be considered a discrete source. A discrete stationary Markoff process provides a statistical model for the symbol sequences emitted by a discrete source. A general description of the model: 1. At the beginning of each symbol interval, the source is in one of n possible states 1, 2, …, n, where n is defined as

o Transition probabilities and the symbols emitted corresponding to each transition are marked along the lines of the graph. A typical example of such a source is given below.

[State diagram: three states 1, 2, 3 with P1(1) = P2(1) = P3(1) = 1/3. From each state the source emits its own symbol with probability 1/2 and returns to the same state, or emits one of the other two symbols with probability 1/4 each and moves to the corresponding state; e.g. from state 1: A (1/2, stay in 1), C (1/4, go to 2), B (1/4, go to 3).]

o It is an example of a source emitting one of three symbols A, B, and C.
o The probability of occurrence of a symbol depends on the particular symbol in question and the symbol immediately preceding it.
o The residual or past influence lasts only for a duration of one symbol.

Last symbol emitted by this source
o The last symbol emitted by the source can be A or B or C. Hence the past history can be represented by three states, one for each of the three symbols of the alphabet.

Nodes of the source
o Suppose that the system is in state (1) and the last symbol emitted by the source was A.
o The source now emits symbol A with probability 1/2 and returns to state (1), OR
o the source emits symbol B with probability 1/4 and goes to state (3), OR
o the source emits symbol C with probability 1/4 and goes to state (2).

State transition and symbol generation can also be illustrated using a tree diagram.

Tree diagram
A tree diagram is a planar graph where the nodes correspond to states and the branches correspond to transitions. Transitions between states occur once every Ts seconds. Along the branches of the tree, the transition probabilities and the symbols emitted are indicated. The tree diagram for the source considered is shown below.

[Tree diagram for the three-state source: the initial state (probability 1/3 each) branches through two symbol intervals; each branch is labeled with its transition probability (1/2 or 1/4) and the emitted symbol, and the leaves list the two-symbol sequences AA, AC, AB, CA, CC, CB, BA, BC, BB together with the state at the end of the first and second symbol intervals.]

Use of the tree diagram
The tree diagram can be used to obtain the probabilities of generating various symbol sequences.

Generation of a symbol sequence, say AB
AB can be generated by any one of the following state transitions:
    1 → 1 → 3   OR   2 → 1 → 3   OR   3 → 1 → 3
Therefore the probability of the source emitting the two-symbol sequence AB is given by
    P(AB) = P(S1 = 1, S2 = 1, S3 = 3)
         or P(S1 = 2, S2 = 1, S3 = 3)                              (1)
         or P(S1 = 3, S2 = 1, S3 = 3)
Note that the three transition paths are disjoint. Therefore
    P(AB) = P(S1 = 1, S2 = 1, S3 = 3) + P(S1 = 2, S2 = 1, S3 = 3) + P(S1 = 3, S2 = 1, S3 = 3)   (2)
The first term on the RHS of equation (2) can be written as
    P(S1 = 1, S2 = 1, S3 = 3) = P(S1 = 1) P(S2 = 1 / S1 = 1) P(S3 = 3 / S1 = 1, S2 = 1)
                              = P(S1 = 1) P(S2 = 1 / S1 = 1) P(S3 = 3 / S2 = 1)

Recall the Markoff property: the transition probability to S3 depends on S2, but not on how the system got to S2. Therefore
    P(S1 = 1, S2 = 1, S3 = 3) = 1/3 × 1/2 × 1/4
Similarly, the other terms on the RHS of equation (2) can be evaluated, giving
    P(AB) = 1/3 × 1/2 × 1/4 + 1/3 × 1/4 × 1/4 + 1/3 × 1/4 × 1/4 = 4/48 = 1/12
Similarly the probabilities of occurrence of the other symbol sequences can be computed. Therefore, in general, the probability of the source emitting a particular symbol sequence can be computed by summing the product of probabilities in the tree diagram along all the paths that yield the particular sequence of interest.
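The path-summing rule lends itself to a short program. The sketch below assumes the symmetric three-state diagram described earlier (each state re-emits its own symbol with probability 1/2 and emits one of the other two symbols with probability 1/4 each); the dictionary layout and function name are our own:

    # Each state maps to a list of (emitted symbol, next state, probability).
    trans = {
        1: [('A', 1, 1/2), ('C', 2, 1/4), ('B', 3, 1/4)],
        2: [('C', 2, 1/2), ('A', 1, 1/4), ('B', 3, 1/4)],
        3: [('B', 3, 1/2), ('A', 1, 1/4), ('C', 2, 1/4)],
    }
    p0 = {1: 1/3, 2: 1/3, 3: 1/3}   # initial state probabilities

    def seq_prob(seq):
        """P(seq): sum over all state paths of the product of branch probabilities."""
        alpha = dict(p0)            # alpha[s] = P(prefix emitted, now in state s)
        for sym in seq:
            nxt = {}
            for s, a in alpha.items():
                for em, s2, q in trans[s]:
                    if em == sym:
                        nxt[s2] = nxt.get(s2, 0.0) + a * q
            alpha = nxt
        return sum(alpha.values())

    print(seq_prob('AB'))           # 4/48 = 1/12 ≈ 0.08333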

Illustrative Example:
1. For the information source given, draw the tree diagram and find the probabilities of messages of lengths 1, 2 and 3.

[State diagram: two states with p1 = p2 = 1/2; state 1 emits A with probability 3/4 (staying in state 1) or C with probability 1/4 (going to state 2); state 2 emits B with probability 3/4 (staying in state 2) or C with probability 1/4 (going to state 1).]

The source emits one of three symbols A, B and C. The tree diagram for the source outputs can be easily drawn as shown.

[Tree diagram: starting from states 1 and 2 (probability 1/2 each), each branch is labeled 3/4 or 1/4 together with the emitted symbol; the leaves after three intervals are AAA, AAC, ACC, ACB, CCA, CCC, CBC, CBB, CAA, CAC, CCB, BCA, BCC, BBC, BBB.]

Messages of length (1) and their probabilities:
    P(A) = 1/2 × 3/4 = 3/8
    P(B) = 1/2 × 3/4 = 3/8
    P(C) = 1/2 × 1/4 + 1/2 × 1/4 = 1/8 + 1/8 = 1/4

Messages of length (2)
How many such messages are there? Seven.
Which are they? AA, AC, CB, CC, BB, BC and CA.
What are their probabilities?
    Message AA: 1/2 × 3/4 × 3/4 = 9/32
    Message AC: 1/2 × 3/4 × 1/4 = 3/32
and so on.

Messages of Length (1)    Messages of Length (2)    Messages of Length (3)
    A   3/8                   AA  9/32                  AAA 27/128
    B   3/8                   BB  9/32                  BBB 27/128
    C   1/4                   AC  3/32                  AAC  9/128
                              CB  3/32                  ACB  9/128
                              BC  3/32                  BBC  9/128
                              CA  3/32                  BCA  9/128
                              CC  2/32                  CAA  9/128
                                                        CBB  9/128
                                                        ACC  3/128
                                                        BCC  3/128
                                                        CCA  3/128
                                                        CCB  3/128
                                                        CBC  3/128
                                                        CAC  3/128
                                                        CCC  2/128

A second-order Markoff source
The model shown is an example of a source where the probability of occurrence of a symbol depends not only on the particular symbol in question, but also on the two symbols preceding it.

[State diagram: four states (AA), (AB), (BA), (BB) for the binary alphabet {A, B}; the transition probabilities (values of 7/8, 1/8, 3/4 and 1/4) and the emitted symbols are marked along the transitions, with the stationary state probabilities as shown in the figure.]

Number of states: n ≤ M^m = 2^2 = 4, where
    m = number of symbols for which the residual influence lasts (a duration of two symbols), and
    M = number of letters/symbols in the alphabet.
Say the system is in state 3 at the beginning of a symbol interval; then the two symbols previously emitted by the source were B followed by A. Similar comments apply for the other states.

1.6 Entropy and Information Rate of Markoff Sources
Definition of the entropy of the source
Assume that the probability of being in state i at the beginning of the first symbol interval is the same as the probability of being in state i at the beginning of the second symbol interval, and so on. The probability of going from state i to j also doesn't depend on time.
The entropy of state i is defined as the average information content of the symbols emitted from the i-th state:
    Hi = Σ_{j=1}^{n} pij log2(1/pij) bits/symbol                   (1)
The entropy of the source is defined as the average of the entropies of the states:
    H = E(Hi) = Σ_{i=1}^{n} pi Hi                                  (2)
where pi = the probability that the source is in state i. Using equation (1), equation (2) becomes

    H = Σ_{i=1}^{n} pi Σ_{j=1}^{n} pij log2(1/pij) bits/symbol     (3)
The average information rate for the source is defined as
    R = r_s · H bits/sec
where r_s is the number of state transitions per second, i.e. the symbol rate of the source.
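Equations (1)-(3) are straightforward to evaluate in code. A minimal sketch for the two-state A/B/C source used in the examples of this section (list layout and names are our own):

    import math

    def state_entropy(row):
        """Hi = sum_j pij * log2(1/pij) for one state's transition probabilities."""
        return sum(p * math.log2(1.0 / p) for p in row if p > 0)

    # Two-state A/B/C source: each state leaves with probabilities (3/4, 1/4);
    # the stationary state probabilities are 1/2 each.
    P  = [[3/4, 1/4], [3/4, 1/4]]   # rows: transition probabilities out of states 1, 2
    pi = [1/2, 1/2]

    H_states = [state_entropy(row) for row in P]
    H = sum(p * h for p, h in zip(pi, H_states))
    print(H_states)   # [0.811..., 0.811...]
    print(H)          # 0.811... bits/symbol, the source entropy H of eq. (3)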

The above concepts can be illustrated with an example.

Illustrative Example:
1. Consider an information source modeled by the discrete stationary Markoff random process shown in the figure. Find the source entropy H and the average information content per symbol in messages containing one, two and three symbols.

[State diagram: two states with p1 = p2 = 1/2; state 1 emits A with probability 3/4 (staying in state 1) or C with probability 1/4 (going to state 2); state 2 emits B with probability 3/4 or C with probability 1/4 (going to state 1).]

The source emits one of three symbols A, B and C. A tree diagram can be drawn as illustrated in the previous session to enumerate the various symbol sequences and their probabilities. As per the outcome of the previous session we have:

Messages of Length (1)    Messages of Length (2)    Messages of Length (3)
    A   3/8                   AA  9/32                  AAA 27/128
    B   3/8                   BB  9/32                  BBB 27/128
    C   1/4                   AC  3/32                  AAC  9/128
                              CB  3/32                  ACB  9/128
                              BC  3/32                  BBC  9/128
                              CA  3/32                  BCA  9/128
                              CC  2/32                  CAA  9/128
                                                        CBB  9/128
                                                        ACC  3/128
                                                        BCC  3/128
                                                        CCA  3/128
                                                        CCB  3/128
                                                        CBC  3/128
                                                        CAC  3/128
                                                        CCC  2/128

By definition Hi is given by
    Hi = Σ_{j=1}^{n} pij log2(1/pij)
Put i = 1:
    H1 = p11 log2(1/p11) + p12 log2(1/p12)
Substituting the values we get
    H1 = 3/4 log2(4/3) + 1/4 log2 4 = 0.811 bits/symbol
Similarly,
    H2 = 1/4 log2 4 + 3/4 log2(4/3) = 0.811 bits/symbol
By definition, the source entropy is given by
    H = Σ_{i=1}^{2} pi Hi = 1/2 (0.811) + 1/2 (0.811) = 0.811 bits/symbol

To calculate the average information content per symbol in messages containing two symbols:
How many messages of length (2) are present, and what is their information content? There are seven such messages, and their information content is:
    I(AA) = I(BB) = log2(1/P(AA)) = log2(32/9) = 1.83 bits
Similarly, calculate for the other messages and verify that
    I(AC) = I(CB) = I(BC) = I(CA) = log2(32/3) = 3.415 bits

    I(CC) = log2(1/P(CC)) = log2 16 = 4 bits

Computation of the average information content of these messages. We have
    H(two) = Σ_{i=1}^{7} Pi log2(1/Pi) = Σ_{i=1}^{7} Pi · Ii
where the Ii are the information contents calculated above for the different messages of length two. Substituting the values we get
    H(two) = 2 × (9/32)(1.83) + 4 × (3/32)(3.415) + (2/32)(4)
    H(two) = 2.56 bits

Computation of the average information content per symbol in messages containing two symbols, using the relation
    G_N = (average information content of the messages of length N) / (number of symbols in the message)
Here N = 2:
    G2 = H(two)/2 = 2.56/2 = 1.28 bits/symbol
Similarly compute the other G's of interest for the problem under discussion, viz. G1 and G3. You get them as
    G1 = 1.56 bits/symbol and G3 ≈ 1.14 bits/symbol
From the values of the G's calculated we note that

    G1 > G2 > G3 > H

Statement
It can be stated that the average information per symbol in the messages reduces as the length of the message increases.

The generalized form of the above statement: if P(mi) is the probability of a sequence mi of N symbols from the source, and the average information content per symbol in the messages of N symbols is defined by
    G_N = -(1/N) Σ_i P(mi) log2 P(mi)
where the sum is over all sequences mi containing N symbols, then G_N is a monotonically decreasing function of N, and in the limiting case
    lim (N → ∞) G_N = H bits/symbol
(recall that H is the entropy of the source).

The above example illustrates the basic concept that the average information content per symbol from a source emitting dependent sequences decreases as the message length increases. Alternatively, it tells us that the average number of bits per symbol needed to represent a message decreases as the message length increases. (See the numerical check below.)
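The chain G1 > G2 > G3 > H can be checked by brute-force enumeration of all state paths. A sketch, again for the two-state example (helper names are our own):

    import math

    # Two-state source of the running example: (symbol, next state, probability)
    trans = {1: [('A', 1, 3/4), ('C', 2, 1/4)],
             2: [('B', 2, 3/4), ('C', 1, 1/4)]}
    p0 = {1: 1/2, 2: 1/2}

    def message_probs(N):
        """Probability of every N-symbol message, by enumerating all state paths."""
        probs = {}
        def walk(state, p, seq):
            if len(seq) == N:
                probs[seq] = probs.get(seq, 0.0) + p
                return
            for sym, s2, q in trans[state]:
                walk(s2, p * q, seq + sym)
        for s, p in p0.items():
            walk(s, p, '')
        return probs

    for N in (1, 2, 3):
        pr = message_probs(N)
        G = sum(p * math.log2(1.0 / p) for p in pr.values()) / N
        print(N, round(G, 4))   # G1 ≈ 1.5613, G2 ≈ 1.2800, G3 ≈ 1.1394, all > H = 0.811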

213 ITC 0EC55 For the Mark off source shown, calculate the information rate. By definition, the average information rate for the source is given by R = r s. H bits/sec () Where, r s is the symbol rate of the source And H is the entropy of the source. To compute H ½ L p = ¼ ½ S L ¼ Calculate the entropy of each state using, S P = ½ ½ ½ S R ¼ P 3 = ¼ 3 R / Solution: n Hi = pij log bits / sym () p j= For this example, ij 3 Hi = pij log ; i =,, 3 p j= ij (3) Put i = 3 H i = p j log p j j = = - p log p p log p p 3 log p 3 Substituting the values, we get H = - x log - log - 0 = + log () + log () H = bit / symbol Put i =, in eqn. () we get, 3 H = - p j = j log p j i.e., H = - [ p log p + p log p + p log ] 3 p3 Substituting the values given we get, SJBIT/ECE 5

    H2 = -(1/4 log2 1/4 + 1/2 log2 1/2 + 1/4 log2 1/4)
       = 1/4 log2 4 + 1/2 log2 2 + 1/4 log2 4
       = 1/2 + 1/2 + 1/2
    H2 = 1.5 bits/symbol
Similarly calculate H3, and it will be
    H3 = 1 bit/symbol
With the Hi computed, you can now compute H, the source entropy, using
    H = Σ_{i=1}^{3} pi Hi = p1 H1 + p2 H2 + p3 H3
Substituting the values we get
    H = 1/4 × 1 + 1/2 × 1.5 + 1/4 × 1 = 0.25 + 0.75 + 0.25
    H = 1.25 bits/symbol
Now, using equation (1), the source information rate is
    R = r_s × 1.25
Taking r_s as one symbol per second, we get
    R = 1 × 1.25 = 1.25 bits/sec

Review questions:
(1) Explain the terms (i) self information, (ii) average information, (iii) mutual information.
(2) Discuss the reasons for using a logarithmic measure for the amount of information.
(3) Explain the concept of the amount of information associated with a message. Also explain what infinite information and zero information are.
(4) A binary source emits an independent sequence of 0's and 1's with probabilities p and (1 - p) respectively. Plot the entropy of the source as a function of p.
(5) Explain the concepts of information, average information, information rate and redundancy as referred to information transmission.
(6) Let X represent the outcome of a single roll of a fair die. What is the entropy of X?
(7) A code is composed of dots and dashes. Assume that a dash is 3 times as long as a dot and has one-third the probability of occurrence. (i) Calculate the information in a dot and that in a dash; (ii) calculate the average information in the dot-dash code; and (iii) assuming that a dot lasts for 10 ms and that this same time interval is allowed between symbols, calculate the average rate of information transmission.
(8) What do you understand by the term "extension of a discrete memoryless source"? Show that the entropy of the n-th extension of a DMS is n times the entropy of the original source.
(9) A card is drawn from a deck of playing cards. a) You are informed that the card you drew is a spade. How much information did you receive, in bits? b) How much information did you receive if you are told that the card you drew is an ace? c) How much information did you receive if you are told that the card you drew is the ace of spades? Is the information content of the message "ace of spades" the sum of the information contents of the messages "spade" and "ace"?
(10) A black and white TV picture consists of 525 lines of picture information. Assume that each line consists of 525 picture elements and that each element can have 256 brightness levels. Pictures are repeated at the rate of 30 per second. Calculate the average rate of information conveyed by a TV set to a viewer.
(11) A zero-memory source has a source alphabet S = {S1, S2, S3} with P = {1/2, 1/4, 1/4}. Find the entropy of the source. Also determine the entropy of its second extension and verify that H(S^2) = 2 H(S).
(12) Show that the entropy is maximum when the source transmits symbols with equal probability. Plot the entropy of a binary source versus p (0 < p < 1).
(13) The output of an information source consists of 128 symbols, 16 of which occur with probability 1/32 and the remaining 112 with probability 1/224. The source emits 1000 symbols/sec. Assuming that the symbols are chosen independently, find the rate of information of the source.


Unit 2: SOURCE CODING

Syllabus: Encoding of the source output, Shannon's encoding algorithm. Communication channels, discrete communication channels, continuous channels.

Text Book: Digital and Analog Communication Systems, K. Sam Shanmugam, John Wiley, 1996.
Reference Book: Digital Communications, Glover and Grant; Pearson Ed., 2nd Ed., 2008.

2.1 Encoding of the Source Output

Need for encoding
Suppose that there are M = 2^N messages, all equally likely to occur. Then the average information per message interval is H = log2 M = N bits. Say further that each message is coded into N bits; then the average information carried by an individual bit is H/N = 1 bit.
If the messages are not equally likely, then H will be less than N and each bit will carry less than one bit of information.
Is it possible to improve the situation? Yes, by using a code in which not all messages are encoded into the same number of bits. The more likely a message is, the fewer the number of bits that should be used in its code word.

Source encoding
The process by which the output of an information source is converted into a binary sequence:

    Symbol sequence emitted by the information source → [Source Encoder] → Output: a binary sequence

If the encoder operates on blocks of N symbols, it produces an average bit rate of G_N bits/symbol, where
    G_N = (1/N) Σ_i p(mi) log2(1/p(mi))
with p(mi) = probability of the sequence mi of N symbols from the source, the sum being over all sequences mi containing N symbols. G_N is a monotonically decreasing function of N, and
    lim (N → ∞) G_N = H bits/symbol

Performance measure for the encoder
Coding efficiency:
    ηc = H(S) / Ĥ_N = (source information rate) / (average output bit rate of the encoder)

2.2 Shannon's Encoding Algorithm

Formulation of the design of the source encoder
The design can be formulated as follows: the encoder takes in one of q possible messages (each a block of N symbols) and replaces the input message mi by a unique binary code word ci of length ni bits.

    q messages:                m1, m2, ..., mi, ..., mq
    probabilities of messages: p1, p2, ..., pi, ..., pq
    ni: an integer

The objective of the designer
To find ni and ci for i = 1, 2, ..., q such that the average number of bits per symbol used in the coding scheme,
    Ĥ_N = (1/N) Σ_{i=1}^{q} ni pi,
is as close to G_N as possible, where
    G_N = -(1/N) Σ_{i=1}^{q} pi log2 pi
i.e. the objective is to have Ĥ_N approach G_N as closely as possible.

The algorithm proposed by Shannon and Fano
Step 1: Arrange the messages for a given block size (N), m1, m2, ..., mq, in decreasing order of probability.
Step 2: The number of bits ni (an integer) assigned to message mi is bounded by
    log2(1/pi) ≤ ni < 1 + log2(1/pi)
Step 3: The code word is generated from the binary fraction expansion of Fi, defined as

    Fi = Σ_{k=1}^{i-1} pk, with F1 taken to be zero.
Step 4: Choose the first ni bits in the binary fraction expansion of Fi from step 3. For example, if for some message ni as per step 2 is 3 and Fi as per step 3 has the binary expansion 0.0011011..., then step 4 says that the code word for that message is 001, with similar comments for the other messages of the source. In general, the codeword for the message mi is the binary fraction expansion of Fi up to ni bits:
    ci = (Fi) in binary, truncated to ni bits
Step 5: The design of the encoder is completed by repeating the above steps for all the messages of the block length chosen.
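Steps 1-5 translate almost line-for-line into code. A sketch (our own function name; the input probabilities are assumed already sorted in decreasing order, as step 1 requires):

    import math

    def shannon_encoder(probs):
        """Shannon's algorithm (steps 2-5); probs sorted in decreasing order."""
        codes, F = [], 0.0
        for p in probs:
            n = math.ceil(math.log2(1.0 / p))   # step 2: log2(1/p) <= n < 1 + log2(1/p)
            bits, frac = [], F                  # steps 3-4: expand F to n binary places
            for _ in range(n):
                frac *= 2
                bits.append('1' if frac >= 1 else '0')
                frac -= int(frac)
            codes.append(''.join(bits))
            F += p                              # F_{i+1} = F_i + p_i
        return codes

    # The fifteen sorted three-symbol messages of the example below:
    probs = [27/128]*2 + [9/128]*6 + [3/128]*6 + [2/128]
    print(shannon_encoder(probs))
    # ['000', '001', '0110', '0111', '1001', '1010', '1011', '1100',
    #  '110110', '110111', '111001', '111010', '111100', '111101', '111111']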

Illustrative Example
Design a source encoder for the information source given:

[State diagram: two states with p1 = p2 = 1/2; state 1 emits A with probability 3/4 (staying in state 1) or C with probability 1/4 (going to state 2); state 2 emits B with probability 3/4 (staying in state 2) or C with probability 1/4 (going to state 1).]

Compare the average output bit rate and efficiency of the coder for N = 1, 2 and 3.

Solution:
Case I: Say N = 3 (block size).
Step 1: From the tree diagram of the earlier session, the source emits fifteen (15) distinct three-symbol messages:

    Message:     AAA    AAC   ACC   ACB   BBB    BBC   BCC   BCA   CCA   CCB   CCC   CBC   CAC   CBB   CAA
    Probability: 27/128 9/128 3/128 9/128 27/128 9/128 3/128 9/128 3/128 3/128 2/128 3/128 3/128 9/128 9/128

Step 2: Arrange the messages mi in decreasing order of probability:

    Message:     AAA    BBB    CAA   CBB   BCA   BBC   AAC   ACB   CBC   CAC   CCB   CCA   BCC   ACC   CCC
    Probability: 27/128 27/128 9/128 9/128 9/128 9/128 9/128 9/128 3/128 3/128 3/128 3/128 3/128 3/128 2/128

Step 3: Compute the number of bits to be assigned to message mi using
    log2(1/pi) ≤ ni < 1 + log2(1/pi),  i = 1, 2, ..., 15
Say i = 2 (message BBB, p2 = 27/128); then the bound on n2 is
    log2(128/27) ≤ n2 < 1 + log2(128/27)

i.e. 2.245 ≤ n2 < 3.245. Recall that ni has to be an integer, so n2 can be taken as n2 = 3.
Step 4: Generate the codeword using the binary fraction expansion of Fi, defined as
    Fi = Σ_{k=1}^{i-1} pk, with F1 = 0
Say i = 2, i.e. the second message; we have already found n2 = 3 bits. Next, calculate
    F2 = Σ_{k=1}^{1} pk = p1 = 27/128
Step 5: Get the binary fraction expansion of 27/128. You should get it as 0.0011011. Since n2 = 3, truncate this expansion to 3 bits: the codeword is 001.
Step 6: Repeat the above steps to complete the design of the encoder for the other messages listed above. The following table may be constructed:

    Message mi   pi       Fi        ni   Binary expansion of Fi   Code word ci
    AAA          27/128   0         3    .0000000                 000
    BBB          27/128   27/128    3    .0011011                 001
    CAA          9/128    54/128    4    .0110110                 0110
    CBB          9/128    63/128    4    .0111111                 0111
    BCA          9/128    72/128    4    .1001000                 1001
    BBC          9/128    81/128    4    .1010001                 1010
    AAC          9/128    90/128    4    .1011010                 1011
    ACB          9/128    99/128    4    .1100011                 1100

    CBC          3/128    108/128   6    .1101100                 110110
    CAC          3/128    111/128   6    .1101111                 110111
    CCB          3/128    114/128   6    .1110010                 111001
    CCA          3/128    117/128   6    .1110101                 111010
    BCC          3/128    120/128   6    .1111000                 111100
    ACC          3/128    123/128   6    .1111011                 111101
    CCC          2/128    126/128   6    .1111110                 111111

The average number of bits per message used by the encoder is
    Σ ni pi = 2(3)(27/128) + 6(4)(9/128) + 6(6)(3/128) + 1(6)(2/128) = 3.89
The average number of bits per symbol is
    Ĥ_N = (1/N) Σ ni pi; here N = 3, so Ĥ_3 = 3.89/3 = 1.3 bits/symbol

The state entropy is given by
    Hi = Σ_{j=1}^{n} pij log2(1/pij) bits/symbol
Here the number of states the source can be in is two, i.e. n = 2:
    Hi = Σ_{j=1}^{2} pij log2(1/pij)
Say i = 1; then the entropy of state (1) is
    H1 = p11 log2(1/p11) + p12 log2(1/p12)

Substituting the values known, we get
    H1 = 3/4 log2(4/3) + 1/4 log2 4 = 0.811 bits/symbol
Similarly we can compute H2 as
    H2 = 1/4 log2 4 + 3/4 log2(4/3) = 0.811 bits/symbol
The entropy of the source is, by definition,
    H = Σ_{i=1}^{n} pi Hi = p1 H1 + p2 H2 = 1/2 (0.811) + 1/2 (0.811)
    H = 0.811 bits/symbol

What is the efficiency of the encoder?
By definition we have
    ηc = H / Ĥ_N × 100 = (0.811 / 1.3) × 100 = 62.4%
So ηc for N = 3 is 62.4%.

Case II: Say N = 2.
The number of messages of length two and their probabilities (obtained from the tree diagram) can be listed as shown in the table below:

    N = 2
    Message   pi     ni   ci
    AA        9/32   2    00
    BB        9/32   2    01
    AC        3/32   4    1001
    CB        3/32   4    1010
    BC        3/32   4    1100
    CA        3/32   4    1101
    CC        2/32   4    1111

Calculate Ĥ_2 and verify that it is 1.44 bits/symbol. The encoder efficiency for this case is
    ηc = H / Ĥ_2 × 100
Substituting the values we get ηc = 56.4%.

Case III: N = 1. Proceeding on the same lines, you would see that

    N = 1
    Message   pi    ni   ci
    A         3/8   2    00
    B         3/8   2    01
    C         1/4   2    11

    Ĥ_1 = 2 bits/symbol and ηc = 40.56%

Conclusion for the above example
We note that the average output bit rate Ĥ_N of the encoder decreases as N increases, and hence the efficiency of the encoder increases as N increases.

Operation of the Source Encoder Designed
I. Consider a symbol string ACBBCAAACBBB at the encoder input. If the encoder uses a block size of 3, find the output of the encoder.

    INFORMATION SOURCE → [SOURCE ENCODER] → OUTPUT

Recall from the earlier session that, for the source given, the possible three-symbol sequences and their corresponding code words are:

    Message mi:   AAA  BBB  CAA   CBB   BCA   BBC   AAC   ACB
    Codeword ci:  000  001  0110  0111  1001  1010  1011  1100

    Message mi:   CBC     CAC     CCB     CCA     BCC     ACC
    Codeword ci:  110110  110111  111001  111010  111100  111101

    (and CCC → 111111)

The output of the encoder is obtained by replacing successive groups of three input symbols by the code words shown in the table. The input symbol string is grouped as
    ACB | BCA | AAC | BBB
so the encoded version of the symbol string is
    1100 1001 1011 001

II. If the encoder operates on two symbols at a time, what is the output of the encoder for the same symbol string?
Again recall from the previous session that, for the source given, the different two-symbol sequences and their encoded bits are:

    N = 2
    Message mi:  AA  BB  AC    CB    BC    CA    CC
    Codeword ci: 00  01  1001  1010  1100  1101  1111

For this case the symbol string is grouped as AC | BB | CA | AA | CB | BB and encoded as
    1001 01 1101 00 1010 01

DECODING
How is decoding accomplished? By starting at the left-most bit and matching groups of bits against the codewords listed in the table.

Case I: N = 3
i) Take the first 3-bit group, viz. 110.
ii) Check for a matching word in the table.
iii) If no match is obtained, then try the first 4-bit group, 1100, and again check for a matching word.
iv) On matching, decode the group.
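Steps (i)-(iv) amount to greedy prefix matching, which a few lines of code make precise (a sketch; the truncated codebook below covers only the code words needed for this string):

    def decode(bits, codebook):
        """Greedy left-to-right decoder. Works because the Shannon code above is
        prefix-free: at each position at most one codeword can match."""
        out, i = [], 0
        while i < len(bits):
            for j in range(i + 1, len(bits) + 1):
                if bits[i:j] in codebook:
                    out.append(codebook[bits[i:j]])
                    i = j
                    break
            else:
                raise ValueError(f'no codeword matches at bit position {i}')
        return out

    codebook = {'000': 'AAA', '001': 'BBB', '0110': 'CAA', '0111': 'CBB',
                '1001': 'BCA', '1010': 'BBC', '1011': 'AAC', '1100': 'ACB'}
    print(decode('110010011011001', codebook))   # ['ACB', 'BCA', 'AAC', 'BBB']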

NOTE: For this example, step (ii) is not satisfied, and with step (iii) a match is found; the decoding yields ACB. Repeat this procedure beginning with the fifth bit to decode the remaining symbol groups. The symbol string obtained is
    ACB BCA AAC BBB

Conclusion from the above example with regard to decoding
It is clear that decoding can be done easily, knowing the codeword lengths a priori, provided no errors occur in the bit string during transmission.

The effect of bit errors in transmission
Bit errors lead to serious decoding problems.
Example: For the case of N = 3, suppose the bit string 110010011011001 is received at the decoder input with one bit error. The decoder loses codeword synchronization: matching from the left-most bit now picks out entirely different codewords, and (for the error pattern considered in the original notes) the decoded string begins
    CBC CAA ...                                                    (2)
For the errorless bit string you have already seen that the decoded symbol string is
    ACB BCA AAC BBB                                                (1)
(1) and (2) reveal the decoding problem caused by a single bit error.

Illustrative examples on source encoding
1. A source emits independent sequences of symbols from a source alphabet containing five symbols with probabilities 0.4, 0.2, 0.2, 0.1 and 0.1.
   i) Compute the entropy of the source.
   ii) Design a source encoder with a block size of two.
Solution: Source alphabet = (s1, s2, s3, s4, s5); probabilities = 0.4, 0.2, 0.2, 0.1, 0.1.
(i) Entropy of the source:
    H = Σ_{i=1}^{5} pi log2(1/pi) bits/symbol
Substituting, we get
    H = -[0.4 log2 0.4 + 0.2 log2 0.2 + 0.2 log2 0.2 + 0.1 log2 0.1 + 0.1 log2 0.1]

    H = 2.12 bits/symbol
(ii) Source encoder with N = 2. The different two-symbol sequences for the source are:

    (s1 s1) AA   (s2 s2) BB   (s3 s3) CC   (s4 s4) DD   (s5 s5) EE
    (s1 s2) AB   (s2 s3) BC   (s3 s4) CD   (s4 s5) DE   (s5 s4) ED
    (s1 s3) AC   (s2 s4) BD   (s3 s5) CE   (s4 s3) DC   (s5 s3) EC
    (s1 s4) AD   (s2 s5) BE   (s3 s2) CB   (s4 s2) DB   (s5 s2) EB
    (s1 s5) AE   (s2 s1) BA   (s3 s1) CA   (s4 s1) DA   (s5 s1) EA

a total of 25 messages. Arrange the messages in decreasing order of probability and determine the number of bits ni as explained; the table begins

    Message   pi     ni
    AA        0.16   3
    AB        0.08   4
    AC        0.08   4
    BA        0.08   4
    CA        0.08   4
    ...       ...    ...

Calculate Ĥ_2 = (1/2) Σ ni pi over all 25 messages. Substituting, Ĥ_2 = 2.3 bits/symbol.

2. A technique used in constructing a source encoder consists of arranging the messages in decreasing order of probability and dividing the messages into two almost equally probable

groups. The messages in the first group are given the bit 0 and the messages in the second group the bit 1. The procedure is then applied again to each group separately, and continued until no further division is possible. Using this algorithm, find the code words for six messages occurring with probabilities 1/12, 1/24, 1/24, 1/6, 1/3, 1/3.

Solution: (1) Arrange the messages in decreasing order of probability and split repeatedly:

    m5   1/3    0 0
    m6   1/3    0 1
    m4   1/6    1 0
    m1   1/12   1 1 0
    m2   1/24   1 1 1 0
    m3   1/24   1 1 1 1
        (the 1st, 2nd, 3rd and 4th divisions supply the successive bits)

The code words are:
    m1 = 110,  m2 = 1110,  m3 = 1111,  m4 = 10,  m5 = 00,  m6 = 01
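The divide-into-nearly-equal-halves procedure of the example above (the Shannon-Fano splitting technique) can be sketched recursively as follows; the function name is our own, and ties in choosing the split point are broken toward the later division so that the output matches the worked example:

    def fano(items):
        """items: list of (message, prob) sorted in decreasing order of prob.
        Returns {message: codeword} from recursive equal-probability splitting."""
        if len(items) == 1:
            return {items[0][0]: ''}
        total, run, best, split = sum(p for _, p in items), 0.0, float('inf'), 1
        for k in range(1, len(items)):
            run += items[k - 1][1]
            if abs(2 * run - total) <= best:   # '<=' breaks ties toward later k
                best, split = abs(2 * run - total), k
        codes = {m: '0' + c for m, c in fano(items[:split]).items()}
        codes.update({m: '1' + c for m, c in fano(items[split:]).items()})
        return codes

    msgs = [('m5', 1/3), ('m6', 1/3), ('m4', 1/6),
            ('m1', 1/12), ('m2', 1/24), ('m3', 1/24)]
    print(fano(msgs))
    # {'m5': '00', 'm6': '01', 'm4': '10', 'm1': '110', 'm2': '1110', 'm3': '1111'}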

Example (3)
a) For the source shown, design a source encoding scheme using a block size of two symbols and variable-length code words.
b) Calculate the Ĥ used by the encoder.
c) If the source is emitting symbols at a rate of 1000 symbols per second, compute the output bit rate of the encoder.

[State diagram: three states with p1 = 1/4, p2 = 1/2, p3 = 1/4; state 1 emits L (1/2, staying) or S (1/2, to state 2); state 2 emits L (1/4, to state 1), S (1/2, staying) or R (1/4, to state 3); state 3 emits R (1/2, staying) or S (1/2, to state 2).]

Solution (a): The tree diagram for the source gives the messages of length two.

[Tree diagram: starting from states 1, 2, 3 with probabilities 1/4, 1/2, 1/4, the branches yield the two-symbol messages LL, LS, SL, SS, SR, RS and RR with their probabilities.]

Note that there are seven messages of length (2): SS, LL, LS, SL, SR, RS and RR. Compute the message probabilities, arrange them in descending order, and compute ni, Fi, Fi (binary) and ci as explained earlier. The results, with the usual notation:

    Message mi   pi    ni   Fi    Fi (binary)   ci
    SS           1/4   2    0     .00           00
    LL           1/8   3    1/4   .010          010
    LS           1/8   3    3/8   .011          011
    SL           1/8   3    4/8   .100          100
    SR           1/8   3    5/8   .101          101
    RS           1/8   3    6/8   .110          110
    RR           1/8   3    7/8   .111          111

    G2 = (1/2) Σ_{i=1}^{7} pi log2(1/pi) = 1.375 bits/symbol
(b) Ĥ_2 = (1/2) Σ_{i=1}^{7} pi ni = 1.375 bits/symbol
    Recall Ĥ_N ≤ G_N + 1/N; here N = 2, and indeed Ĥ_2 ≤ G2 + 1/2.
(c) Output bit rate = r_s · Ĥ_2 = 1000 × 1.375 = 1375 bits/sec

2.3 SOURCE ENCODER DESIGN AND COMMUNICATION CHANNELS
The schematic of a practical communication system is shown below.

[Block diagram: Information source → (b) Source encoder → (c) Channel encoder → (d) Modulator → (e) physical channel/transmission medium, with additive noise → (f) Demodulator → (g) Channel decoder → (h) Source decoder → destination. The portion between points c and g constitutes the coding (discrete) channel; the portion between d and f is the modulation (analog) channel.]

Communication channel
"Communication channel" carries different meanings and characterizations depending on its terminal points and functionality.
(i) Portion between points c and g: referred to as the coding channel.
    o Accepts a sequence of symbols at its input and produces a sequence of symbols at its output.
    o Completely characterized by a set of transition probabilities pij. These probabilities depend on the parameters of (1) the modulator, (2) the transmission medium, (3) the noise, and (4) the demodulator.

    o It is a discrete channel.
(ii) Portion between points d and f:
    o Provides the electrical connection between the source and the destination.
    o The input to and the output of this channel are analog electrical waveforms.
    o Referred to as the continuous or modulation channel, or simply the analog channel.
    o Subject to several varieties of impairments: amplitude and frequency response variations of the channel within the passband, variation of the channel characteristics with time, and non-linearities in the channel. The channel can also corrupt the signal statistically through various types of additive and multiplicative noise.

2.4 Mathematical Model for the Discrete Communication Channel (the channel between points c and g of the figure)
The input to the channel: a symbol belonging to an alphabet of M symbols, in the general case.
The output of the channel: a symbol belonging to the same alphabet of M input symbols.
Is the output symbol in a symbol interval the same as the input symbol during that interval? Not necessarily; noise may cause errors. The discrete channel is completely modeled by a set of probabilities:
    pi^t = probability that the input to the channel is the i-th symbol of the alphabet (i = 1, 2, ..., M), and
    pij = probability that the i-th symbol is received as the j-th symbol of the alphabet at the output of the channel.

Discrete M-ary channel
If a channel is designed to transmit and receive one of M possible symbols, it is called a discrete M-ary channel. For M = 2 we have a discrete binary channel; the statistical model of a binary channel is

shown in Fig. (2).

[Fig. (2): binary channel; transmitted digit X, received digit Y; transition paths 0 → 0 (p00), 0 → 1 (p01), 1 → 0 (p10), 1 → 1 (p11).]

    pij = p(Y = j / X = i)
    p0^t = p(X = 0);  p1^t = p(X = 1)
    p0^r = p(Y = 0);  p1^r = p(Y = 1)
    p00 + p01 = 1;    p11 + p10 = 1

Its features:
    o X and Y are binary-valued random variables.
    o The input nodes are connected to the output nodes by four paths:
      (i) the path at the top of the graph represents an input 0 appearing correctly as 0 at the channel output;
      (ii) the path at the bottom of the graph represents an input 1 appearing correctly as 1 at the channel output;
      (iii) the diagonal path from 0 to 1 represents an input bit 0 appearing incorrectly as 1 at the channel output (due to noise);
      (iv) the diagonal path from 1 to 0: similar comments.
Errors occur in a random fashion, and the occurrence of errors can be statistically modeled by assigning probabilities to the paths shown in Fig. (2).

A memoryless channel: one in which the occurrence of an error during a bit interval does not affect the behaviour of the system during other bit intervals.
The probability of error can be evaluated as
    P(error) = Pe = P(X ≠ Y) = P(X = 0, Y = 1) + P(X = 1, Y = 0)
             = P(X = 0) P(Y = 1 / X = 0) + P(X = 1) P(Y = 0 / X = 1)
which can also be written as
    Pe = p0^t p01 + p1^t p10                                       (1)
We also have, from the model,

    p0^r = p0^t p00 + p1^t p10, and
    p1^r = p0^t p01 + p1^t p11                                     (2)

Binary symmetric channel (BSC)
If p00 = p11 = p (say), then the channel is called a BSC, and a single parameter p (besides the input probabilities) is needed to characterize it.

Model of an M-ary DMC

[Fig. (3): M input nodes X (probabilities pi^t = p(X = i)) connected to M output nodes Y (probabilities pj^r = p(Y = j)) through transition probabilities pij = p(Y = j / X = i).]

This can be analysed on the same lines presented above for the binary channel:
    pj^r = Σ_{i=1}^{M} pi^t pij                                    (3)
The probability of error for the M-ary channel, generalising equation (1) above, is
    P(error) = Pe = Σ_{i=1}^{M} Σ_{j=1, j≠i}^{M} pi^t pij          (4)

In a DMC, how many statistical processes are involved, and which are they? Two: (i) the input to the channel and (ii) the noise.

Definition of the different entropies for the DMC:
i) Entropy of the INPUT X:
    H(X) = Σ_{i=1}^{M} pi^t log2(1/pi^t) bits/symbol               (5)
ii) Entropy of the OUTPUT Y:
    H(Y) = Σ_{j=1}^{M} pj^r log2(1/pj^r) bits/symbol               (6)
iii) Conditional entropy H(X/Y):
    H(X/Y) = -Σ_{i=1}^{M} Σ_{j=1}^{M} P(X = i, Y = j) log2 P(X = i / Y = j) bits/symbol   (7)
iv) Joint entropy H(X,Y):
    H(X,Y) = -Σ_{i=1}^{M} Σ_{j=1}^{M} P(X = i, Y = j) log2 P(X = i, Y = j) bits/symbol    (8)
v) Conditional entropy H(Y/X):
    H(Y/X) = -Σ_{i=1}^{M} Σ_{j=1}^{M} P(X = i, Y = j) log2 P(Y = j / X = i) bits/symbol   (9)

Interpretation of the conditional entropies
H(X/Y) represents how uncertain we are of the channel input X, on the average, when we know the channel output Y. Similar comments apply to H(Y/X).
vi) Joint entropy:
    H(X,Y) = H(X) + H(Y/X) = H(Y) + H(X/Y)                         (10)
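Given the joint matrix P(X, Y), all five entropies follow mechanically from definitions (5)-(9), with identity (10) supplying the conditional entropies. A sketch using numpy (the function name is ours); the printed values anticipate the binary-channel example worked below:

    import numpy as np

    def channel_entropies(Pxy):
        """H(X), H(Y), H(XY), H(X/Y), H(Y/X) in bits from a joint matrix P(X=i, Y=j).
        Uses identity (10): H(X/Y) = H(XY) - H(Y), H(Y/X) = H(XY) - H(X)."""
        Pxy = np.asarray(Pxy, dtype=float)
        H = lambda p: float(-np.sum(p[p > 0] * np.log2(p[p > 0])))
        HX, HY, HXY = H(Pxy.sum(axis=1)), H(Pxy.sum(axis=0)), H(Pxy.ravel())
        return HX, HY, HXY, HXY - HY, HXY - HX

    # Binary-channel example below: P(X) = (3/4, 1/4), P(Y/X) = [[2/3,1/3],[1/3,2/3]]
    Pxy = [[1/2, 1/4], [1/12, 1/6]]
    for name, v in zip(['H(X)', 'H(Y)', 'H(XY)', 'H(X/Y)', 'H(Y/X)'],
                       channel_entropies(Pxy)):
        print(name, round(v, 4))   # 0.8113, 0.9799, 1.7296, 0.7497, 0.9183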

ENTROPIES PERTAINING TO THE DMC
To prove the relation for H(XY): by definition we have
    H(XY) = -Σ_i Σ_j p(i, j) log2 p(i, j)
where i is associated with the variable X and j with the variable Y. Using p(i, j) = p(i) p(j/i),
    H(XY) = -Σ_i Σ_j p(i) p(j/i) log2 [p(i) p(j/i)]
          = -Σ_i Σ_j p(i) p(j/i) log2 p(i) - Σ_i Σ_j p(i) p(j/i) log2 p(j/i)
Holding i constant in the inner summation of the first term on the RHS (Σ_j p(j/i) = 1), we can write H(XY) as
    H(XY) = -Σ_i p(i) log2 p(i) - Σ_i Σ_j p(i, j) log2 p(j/i)
    H(XY) = H(X) + H(Y/X)
Hence the proof.

1. For the discrete channel model shown, find the probability of error.

[Binary channel: X → Y, with probability p of correct transmission on each path; since the channel is symmetric, p(1/0) = p(0/1) = (1 - p).]

Probability of error means the situation when X ≠ Y:
    P(error) = Pe = P(X ≠ Y) = P(X = 0, Y = 1) + P(X = 1, Y = 0)
             = P(X = 0) P(Y = 1 / X = 0) + P(X = 1) P(Y = 0 / X = 1)
Assuming that 0 and 1 are equally likely to occur,
    P(error) = 1/2 (1 - p) + 1/2 (1 - p) = (1 - p)

2. A binary channel has the following noise characteristics:

    P(Y/X):   Y=0   Y=1
    X=0       2/3   1/3
    X=1       1/3   2/3

If the input symbols are transmitted with probabilities 3/4 and 1/4 respectively, find H(X), H(Y), H(XY), H(Y/X) and H(X/Y).

Solution: Given P(X = 0) = 3/4 and P(X = 1) = 1/4,
    H(X) = Σ pi log2(1/pi) = 3/4 log2(4/3) + 1/4 log2 4 = 0.811 bits/symbol
Compute the probabilities of the output symbols. For the channel model,
    p(Y = y1) = p(X = x1, Y = y1) + p(X = x2, Y = y1)              (1)
To evaluate this, construct the P(XY) matrix using P(XY) = p(X) p(Y/X):

    P(XY):    y1     y2
    x1        1/2    1/4
    x2        1/12   1/6                                           (2)

    P(Y = y1) = 1/2 + 1/12 = 7/12 (the sum of the first column of matrix (2))
Similarly, P(Y = y2) = 5/12 (the sum of the second column of P(XY)).
Construct the P(X/Y) matrix using

    P(XY) = p(Y) p(X/Y), i.e. p(X/Y) = p(XY)/p(Y):
    p(x1/y1) = (1/2)/(7/12) = 6/7,  p(x2/y1) = 1/7,
    p(x1/y2) = (1/4)/(5/12) = 3/5,  p(x2/y2) = 2/5                 (3)

    H(Y) = 7/12 log2(12/7) + 5/12 log2(12/5) = 0.98 bits/symbol

    H(XY) = 1/2 log2 2 + 1/4 log2 4 + 1/12 log2 12 + 1/6 log2 6
          = 1.73 bits/symbol

    H(X/Y) = Σ p(XY) log2(1/p(X/Y))
           = 1/2 log2(7/6) + 1/4 log2(5/3) + 1/12 log2 7 + 1/6 log2(5/2)
           = 0.75 bits/symbol

    H(Y/X) = Σ p(XY) log2(1/p(Y/X))
           = 1/2 log2(3/2) + 1/4 log2 3 + 1/12 log2 3 + 1/6 log2(3/2)
           = 0.92 bits/symbol

3. The joint probability matrix for a channel is given below. Compute H(X), H(Y), H(XY), H(X/Y) and H(Y/X).

    P(XY) = | 0.05  0     0.20  0.05 |
            | 0     0.10  0.10  0    |
            | 0     0     0.20  0.10 |
            | 0.05  0.05  0     0.10 |

Solution: The row sums of P(XY) give the row matrix P(X):
    P(X) = [0.3, 0.2, 0.3, 0.2]

The column sums of the P(XY) matrix give the row matrix P(Y):
    P(Y) = [0.10, 0.15, 0.50, 0.25]
The conditional probability matrix P(Y/X) is obtained by dividing each row of P(XY) by the corresponding entry of P(X), and P(X/Y) by dividing each column by the corresponding entry of P(Y).
Now compute the various entropies required, using their defining equations:
(i) H(X) = Σ p(X) log2(1/p(X)) = 2 × 0.3 log2(1/0.3) + 2 × 0.2 log2(1/0.2)
    H(X) = 1.97 bits/symbol
(ii) H(Y) = Σ p(Y) log2(1/p(Y)) = 0.1 log2 10 + 0.15 log2(1/0.15) + 0.5 log2 2 + 0.25 log2 4
    H(Y) = 1.74 bits/symbol
(iii) H(XY) = Σ p(XY) log2(1/p(XY))

    H(XY) = 3.12 bits/symbol
(iv) H(X/Y) = Σ p(XY) log2(1/p(X/Y)) = H(XY) - H(Y)
Substituting the values, we get
    H(X/Y) = 1.38 bits/symbol
(v) H(Y/X) = Σ p(XY) log2(1/p(Y/X)) = H(XY) - H(X)
Substituting the values, we get
    H(Y/X) = 1.15 bits/symbol

4. Consider the channel represented by the statistical model shown. Write the channel matrix and compute H(Y/X).

[Channel: two inputs x1, x2 and four outputs y1, y2, y3, y4, with transition probabilities 1/3, 1/3, 1/6, 1/6 from x1 and 1/6, 1/6, 1/3, 1/3 from x2.]

For the channel, write the conditional probability matrix P(Y/X):

    P(Y/X):   y1    y2    y3    y4
    x1        1/3   1/3   1/6   1/6
    x2        1/6   1/6   1/3   1/3

NOTE: The 2nd row of P(Y/X) is the 1st row written in reverse order. When this is the situation, the channel is called a symmetric one. Each row sums to one: 1/3 + 1/3 + 1/6 + 1/6 = 1 and 1/6 + 1/6 + 1/3 + 1/3 = 1.

Recall P(XY) = p(X) p(Y/X). Taking P(x1) = P(x2) = 1/2:
    P(x1, y1) = p(x1) p(y1/x1) = 1/2 × 1/3 = 1/6
    P(x1, y3) = 1/2 × 1/6 = 1/12 = P(x1, y4), and so on.
    H(Y/X) = Σ p(XY) log2(1/p(Y/X))
Substituting the various probabilities we get
    H(Y/X) = 4 × (1/6) log2 3 + 4 × (1/12) log2 6
           = (2/3) log2 3 + (1/3) log2 6 = 1.918 bits/symbol

5. Given the joint probability matrix for a channel (as shown in the accompanying figure), compute the various entropies for the input and output random variables of the channel.
Solution:
    P(X) = row matrix formed by the sum of each row of the P(XY) matrix,
    P(Y) = row matrix formed by the column sums of the P(XY) matrix.
1. H(XY) = Σ p(XY) log2(1/p(XY)) bits/symbol.

2. H(X) = Σ p(X) log2(1/p(X)) bits/symbol.
3. H(Y) = Σ p(Y) log2(1/p(Y)) bits/symbol.
Construct the P(X/Y) matrix using p(XY) = p(Y) p(X/Y), i.e. P(X/Y) = p(XY)/p(Y), and from it compute
4. H(X/Y) = Σ p(XY) log2(1/p(X/Y)) bits/symbol.
Problem: Construct the P(Y/X) matrix and hence compute H(Y/X).

Rate of Information Transmission over a Discrete Channel
For an M-ary DMC accepting symbols at the rate of r_s symbols per second, the average amount of information per symbol going into the channel is given by the entropy of the input random variable X:
    H(X) = Σ_{i=1}^{M} pi^t log2(1/pi^t)                           (1)
The assumption is that the symbols in the sequence at the input to the channel occur in a statistically independent fashion.
The average rate at which information is going into the channel is
    D_in = H(X) · r_s bits/sec                                     (2)
Is it possible to reconstruct the input symbol sequence with certainty by operating on the received sequence?

Consider two symbols 0 and 1 transmitted at a rate of 1000 symbols (or bits) per second, with p0^t = p1^t = 1/2. Then D_in at the input to the channel is 1000 bits/sec. Assume that the channel is symmetric, with some probability p of errorless transmission; whether the input sequence can be reconstructed with certainty depends on p, which motivates the following definition.

Rate of transmission of information
Recall that H(X/Y) is a measure of how uncertain we are of the input X given the output Y. In an ideal errorless channel, the output determines the input exactly, so H(X/Y) = 0; in general, H(X/Y) may be used to represent the amount of information lost in the channel.
Define the average rate of information transmitted over a channel, Dt, as the amount of information going into the channel minus the amount of information lost in the channel, per unit time:
    Dt = [H(X) - H(X/Y)] · r_s bits/sec
When the channel is very noisy, so that the output is statistically independent of the input, H(X/Y) = H(X); hence all the information going into the channel is lost and no information is transmitted over the channel.

DISCRETE CHANNELS:
1. A binary symmetric channel is shown in the figure. Find the rate of information transmission over this channel when p = 0.9, 0.8 and 0.6. Assume that the symbol (or bit) rate is 1000/second.

[BSC: p(X = 0) = p(X = 1) = 1/2; correct-transmission probability p on both direct paths, crossover probability (1 - p) on both diagonal paths.]

Solution: For this BSC,
    H(X) = 1/2 log2 2 + 1/2 log2 2 = 1 bit/symbol
    D_in = r_s H(X) = 1000 bits/sec
By definition we have
    Dt = [H(X) - H(X/Y)] · r_s bits/sec
where
    H(X/Y) = -Σ_i Σ_j p(XY) log2 p(X/Y)
and X and Y each take the values 0 and 1:
    H(X/Y) = - P(X = 0, Y = 0) log2 P(X = 0 / Y = 0)
             - P(X = 0, Y = 1) log2 P(X = 0 / Y = 1)
             - P(X = 1, Y = 0) log2 P(X = 1 / Y = 0)
             - P(X = 1, Y = 1) log2 P(X = 1 / Y = 1)
The conditional probability p(X/Y) is to be calculated for all the possible values that X and Y can take. Say X = 0, Y = 0; then
    P(X = 0 / Y = 0) = p(Y = 0 / X = 0) p(X = 0) / p(Y = 0)
where
    p(Y = 0) = p(Y = 0 / X = 0) p(X = 0) + p(Y = 0 / X = 1) p(X = 1)
             = p · 1/2 + (1 - p) · 1/2

    p(Y = 0) = 1/2
    p(X = 0 / Y = 0) = p
Similarly we can calculate
    p(X = 1 / Y = 0) = 1 - p
    p(X = 1 / Y = 1) = p
    p(X = 0 / Y = 1) = 1 - p
Therefore
    H(X/Y) = -[p log2 p + (1 - p) log2(1 - p)]
and the rate of information transmission over the channel is
    Dt = [H(X) - H(X/Y)] · r_s = [1 + p log2 p + (1 - p) log2(1 - p)] · r_s
With p = 0.9, Dt = 531 bits/sec.
With p = 0.8, Dt = 278 bits/sec.
With p = 0.6, Dt = 29 bits/sec.
What does the quantity (1 - p) represent? It is the crossover (bit error) probability of the channel. What do you understand from the above example? Even a modest error probability destroys a large fraction of the information rate.
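The three numerical rates follow directly from the binary entropy function; a quick check (a sketch, with our own helper name):

    import math

    def Hb(p):
        """Binary entropy function, in bits."""
        return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p)
                                             + (1 - p) * math.log2(1 - p))

    r_s = 1000   # symbols per second
    for p in (0.9, 0.8, 0.6):
        Dt = (1 - Hb(p)) * r_s   # H(X) = 1 for equiprobable inputs; H(X/Y) = Hb(p)
        print(p, round(Dt))      # 531, 278, 29 bits/sec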

2. A discrete channel has 4 inputs and 4 outputs. The input probabilities are P, Q, Q and P, and the conditional probabilities between the output and input are as shown. Write the channel model.
Solution: Given
    P(X = 0) = P, P(X = 1) = Q, P(X = 2) = Q, P(X = 3) = P
Of course it is true that P + Q + Q + P = 1, i.e. P + Q = 1/2.
The channel model is:

[Channel: input X ∈ {0, 1, 2, 3}, output Y ∈ {0, 1, 2, 3}; 0 → 0 and 3 → 3 with probability 1; 1 → 1 and 2 → 2 with probability p; 1 → 2 and 2 → 1 with probability (1 - p) = q.]

What is H(X) for this channel?
    H(X) = -2[P log2 P + Q log2 Q]
What is H(X/Y)?
    H(X/Y) = -2Q[p log2 p + q log2 q] = 2Q·α, where α = -[p log2 p + q log2 q]

3. A source delivers the binary digits 0 and 1 with equal probability into a noisy channel at a rate of 1000 digits/second. Owing to noise on the channel, the probability of receiving a transmitted 0 as a 1 is 1/16, while the probability of transmitting a 1 and receiving a 0 is 1/32. Determine the rate at which information is received.
Solution: The rate of reception of information is given by
    R = [H(X) - H(X/Y)] · r_s bits/sec                             (1)
where
    H(X) = Σ_i p(i) log2(1/p(i)) bits/symbol
    H(X/Y) = Σ_i Σ_j p(i, j) log2(1/p(i/j)) bits/symbol            (2)
Here
    H(X) = 1/2 log2 2 + 1/2 log2 2 = 1 bit/symbol

The channel model (flow graph) is:

[Flow graph: 0 → 0 with probability 15/16, 0 → 1 with 1/16; 1 → 0 with 1/32, 1 → 1 with 31/32. The index i refers to the input of the channel and the index j to the output (receiver).]

What do you mean by the probability p(i/j)? It is the probability that symbol i was transmitted, given that symbol j was received; for instance, p(0/0) corresponds to i = 0, j = 0.
How would you compute p(0/0)? Recall the probability of a joint event AB:
    P(AB) = p(A) p(B/A) = p(B) p(A/B)
i.e. p(i, j) = p(i) p(j/i) = p(j) p(i/j), from which we have
    p(i/j) = p(i) p(j/i) / p(j)                                    (3)
What are the different combinations of i and j in the present case? (i, j) = (0,0), (0,1), (1,0) and (1,1).
Say i = 0 and j = 0; then equation (3) reads
    p(i = 0 / j = 0) = p(i = 0) p(j = 0 / i = 0) / p(j = 0)
What do you mean by p(j = 0), and how do you compute this quantity?
    p(j = 0) = p(i = 0) p(j = 0 / i = 0) + p(i = 1) p(j = 0 / i = 1)
             = 1/2 × 15/16 + 1/2 × 1/32 = 31/64
Substituting, we find
    p(0/0) = (1/2 × 15/16) / (31/64) = 30/31
Similarly calculate and check the following:
    p(1/0) = 1/31, p(0/1) = 2/33, p(1/1) = 31/33

Calculate the entropy H(X/Y):
    H(X/Y) = p(0,0) log2(1/p(0/0)) + p(0,1) log2(1/p(0/1))
           + p(1,0) log2(1/p(1/0)) + p(1,1) log2(1/p(1/1))
Substituting for the various probabilities (joint probabilities 15/32, 1/32, 1/64 and 31/64) and simplifying, you get
    H(X/Y) = 0.27 bit/symbol
Therefore
    R = [H(X) - H(X/Y)] · r_s = (1 - 0.27) × 1000
    R = 730 bits/sec

4. A transmitter produces three symbols A, B and C with probabilities
    p(A) = 9/27, p(B) = 16/27, p(C) = 2/27
and conditional probabilities p(j/i) between successive symbols as given in the accompanying table. Calculate H(XY).
Solution: By definition we have
    H(XY) = H(X) + H(Y/X)                                          (1)
where
    H(X) = Σ_i p(i) log2(1/p(i)) bits/symbol                       (2)
and
    H(Y/X) = Σ_i Σ_j p(i, j) log2(1/p(j/i)) bits/symbol            (3)

From equation (2), calculate H(X):
    H(X) = 9/27 log2(27/9) + 16/27 log2(27/16) + 2/27 log2(27/2) = 1.25 bits/symbol
To compute H(Y/X), first construct the p(i, j) matrix using p(i, j) = p(i) p(j/i); then from equation (3) calculate H(Y/X), and finally use equation (1) to obtain
    H(XY) = H(X) + H(Y/X)

Capacity of a Discrete Memoryless Channel (DMC)
The capacity of a noisy DMC is defined as the maximum possible rate of information transmission over the channel, the maximum being taken over the set of input probability assignments P(x) for the discrete channel:
    C = max over P(x) of [Dt]                                      (1)
where Dt is the average rate of information transmission over the channel, defined as
    Dt = [H(X) - H(X/Y)] · r_s bits/sec                            (2)
so that equation (1) becomes
    C = max over P(x) of {[H(X) - H(X/Y)] · r_s}                   (3)

What type of channel is this? Write the channel matrix:

    P(Y/X):   Y=0   Y=1
    X=0       p     q
    X=1       q     p

Do you notice something special about this channel? It is a binary symmetric channel, with q = 1 - p.
What is H(X) for this channel? With P(X = 0) = P and P(X = 1) = Q = 1 - P,
    H(X) = -P log2 P - Q log2 Q = -P log2 P - (1 - P) log2(1 - P)
What is H(Y/X)?
    H(Y/X) = -[p log2 p + q log2 q]

DISCRETE CHANNELS WITH MEMORY
In the channels considered so far, the occurrence of an error during a particular symbol interval does not influence the occurrence of errors during succeeding symbol intervals: there is no inter-symbol influence. This will not be so in practical channels, where errors do not occur as independent events but tend to occur as bursts. Such channels are said to have memory.
Examples:
    o telephone channels that are affected by switching transients and dropouts;
    o microwave radio links that are subjected to fading.
In these channels, impulse noise occasionally dominates the Gaussian noise, and errors occur in infrequent long bursts. Because of the complex physical phenomena involved, a detailed characterization of channels with memory is very difficult. The GILBERT model has been moderately successful in characterizing error bursts in such channels. Here the channel is modeled as a discrete memoryless BSC whose probability of error is a time-varying parameter; the changes in the probability of error are modeled by a Markoff process, as shown in the figure below.

The error-generating mechanism in the channel occupies one of three states, and the transition from one state to another is modeled by a discrete, stationary Markoff process. For example, when the channel is in state 1 the bit error probability during a bit interval is small, and the channel stays in this state during the succeeding bit interval with a probability close to one (the exact values are marked in the figure). However, the channel may go to state 2, wherein the bit error probability is 0.5; since the system stays in this state with a probability of 0.99, errors tend to occur in bursts (or groups). State 3 represents a very low bit error rate, and errors in this state are produced by Gaussian noise; errors very rarely occur in bursts while the channel is in this state. Other details of the model are shown in the figure.
The maximum rate at which data can be sent over the channel can be computed for each state of the channel, using the BSC model of the channel corresponding to each of the three states. Other characteristic parameters of the channel, such as the mean time between error bursts and the mean duration of the error bursts, can be calculated from the model.

2. LOGARITHMIC INEQUALITIES
The figure shows the graphs of two functions, y1 = x - 1 and y2 = ln x. The first function is a linear measure and the second is a logarithmic measure. Observe that the log function always lies below the linear function, except at x = 1, where the straight line is tangent to the log function. This tangency holds only for natural logarithms: for example, y = log2 x is equal to y = x - 1 at two points, viz. at x = 1 and at x = 2, and in between these two values log2 x > x - 1. You should keep this point in mind when using the inequalities that are obtained. From the graphs shown it follows that y2 ≤ y1, with equality if and only if x = 1. In other words,
    ln x ≤ (x - 1), with equality iff x = 1                        (2.1)
Multiplying equation (2.1) throughout by -1 gives the companion inequality
    ln(1/x) ≥ (1 - x), with equality iff x = 1
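The inequalities can be spot-checked numerically; a small sketch:

    import math

    # ln x <= x - 1, with equality only at x = 1:
    for x in (0.25, 0.5, 1.0, 1.5, 2.0, 4.0):
        print(f'x={x}:  ln x = {math.log(x):+.4f}  <=  x - 1 = {x - 1:+.4f}')

    # For base-2 logs the line is no longer tangent: log2 x = x - 1 at
    # x = 1 and x = 2, and log2 x > x - 1 in between.
    for x in (1.0, 1.25, 1.5, 1.75, 2.0):
        print(f'x={x}:  log2 x = {math.log2(x):.4f}   x - 1 = {x - 1:.4f}')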


More information

Part-I. Experiment 6:-Angle Modulation

Part-I. Experiment 6:-Angle Modulation Part-I Experiment 6:-Angle Modulation 1. Introduction 1.1 Objective This experiment deals with the basic performance of Angle Modulation - Phase Modulation (PM) and Frequency Modulation (FM). The student

More information

Amplitude Modulation, II

Amplitude Modulation, II Amplitude Modulation, II Single sideband modulation (SSB) Vestigial sideband modulation (VSB) VSB spectrum Modulator and demodulator NTSC TV signsals Quadrature modulation Spectral efficiency Modulator

More information

Analog Communication.

Analog Communication. Analog Communication Vishnu N V Tele is Greek for at a distance, and Communicare is latin for to make common. Telecommunication is the process of long distance communications. Early telecommunications

More information

M(f) = 0. Linear modulation: linear relationship between the modulated signal and the message signal (ex: AM, DSB-SC, SSB, VSB).

M(f) = 0. Linear modulation: linear relationship between the modulated signal and the message signal (ex: AM, DSB-SC, SSB, VSB). 4 Analog modulation 4.1 Modulation formats The message waveform is represented by a low-pass real signal mt) such that Mf) = 0 f W where W is the message bandwidth. mt) is called the modulating signal.

More information

Amplitude Modulation Chapter 2. Modulation process

Amplitude Modulation Chapter 2. Modulation process Question 1 Modulation process Modulation is the process of translation the baseband message signal to bandpass (modulated carrier) signal at frequencies that are very high compared to the baseband frequencies.

More information

Wireless Communication Fading Modulation

Wireless Communication Fading Modulation EC744 Wireless Communication Fall 2008 Mohamed Essam Khedr Department of Electronics and Communications Wireless Communication Fading Modulation Syllabus Tentatively Week 1 Week 2 Week 3 Week 4 Week 5

More information

! Amplitude of carrier wave varies a mean value in step with the baseband signal m(t)

! Amplitude of carrier wave varies a mean value in step with the baseband signal m(t) page 7.1 CHAPTER 7 AMPLITUDE MODULATION Transmit information-bearing (message) or baseband signal (voice-music) through a Communications Channel Baseband = band of frequencies representing the original

More information

page 7.51 Chapter 7, sections , pp Angle Modulation No Modulation (t) =2f c t + c Instantaneous Frequency 2 dt dt No Modulation

page 7.51 Chapter 7, sections , pp Angle Modulation No Modulation (t) =2f c t + c Instantaneous Frequency 2 dt dt No Modulation page 7.51 Chapter 7, sections 7.1-7.14, pp. 322-368 Angle Modulation s(t) =A c cos[(t)] No Modulation (t) =2f c t + c s(t) =A c cos[2f c t + c ] Instantaneous Frequency f i (t) = 1 d(t) 2 dt or w i (t)

More information

EEE 309 Communication Theory

EEE 309 Communication Theory EEE 309 Communication Theory Semester: January 2016 Dr. Md. Farhad Hossain Associate Professor Department of EEE, BUET Email: mfarhadhossain@eee.buet.ac.bd Office: ECE 331, ECE Building Part 05 Pulse Code

More information

EEE 309 Communication Theory

EEE 309 Communication Theory EEE 309 Communication Theory Semester: January 2017 Dr. Md. Farhad Hossain Associate Professor Department of EEE, BUET Email: mfarhadhossain@eee.buet.ac.bd Office: ECE 331, ECE Building Types of Modulation

More information

Amplitude Modulation. Ahmad Bilal

Amplitude Modulation. Ahmad Bilal Amplitude Modulation Ahmad Bilal 5-2 ANALOG AND DIGITAL Analog-to-analog conversion is the representation of analog information by an analog signal. Topics discussed in this section: Amplitude Modulation

More information

Amplitude Modulated Systems

Amplitude Modulated Systems Amplitude Modulated Systems Communication is process of establishing connection between two points for information exchange. Channel refers to medium through which message travels e.g. wires, links, or

More information

EE390 Final Exam Fall Term 2002 Friday, December 13, 2002

EE390 Final Exam Fall Term 2002 Friday, December 13, 2002 Name Page 1 of 11 EE390 Final Exam Fall Term 2002 Friday, December 13, 2002 Notes 1. This is a 2 hour exam, starting at 9:00 am and ending at 11:00 am. The exam is worth a total of 50 marks, broken down

More information

Chapter 2: Signal Representation

Chapter 2: Signal Representation Chapter 2: Signal Representation Aveek Dutta Assistant Professor Department of Electrical and Computer Engineering University at Albany Spring 2018 Images and equations adopted from: Digital Communications

More information

Communication Systems

Communication Systems Electrical Engineering Communication Systems Comprehensive Theory with Solved Examples and Practice Questions Publications Publications MADE EASY Publications Corporate Office: 44-A/4, Kalu Sarai (Near

More information

4- Single Side Band (SSB)

4- Single Side Band (SSB) 4- Single Side Band (SSB) It can be shown that: s(t) S.S.B = m(t) cos ω c t ± m h (t) sin ω c t -: USB ; +: LSB m(t) X m(t) cos ω c t -π/ cos ω c t -π/ + s S.S.B m h (t) X m h (t) ± sin ω c t 1 Tone Modulation:

More information

Chapter 3. Amplitude Modulation Fundamentals

Chapter 3. Amplitude Modulation Fundamentals Chapter 3 Amplitude Modulation Fundamentals Topics Covered 3-1: AM Concepts 3-2: Modulation Index and Percentage of Modulation 3-3: Sidebands and the Frequency Domain 3-4: AM Power 3-5: Single-Sideband

More information

Amplitude Frequency Phase

Amplitude Frequency Phase Chapter 4 (part 2) Digital Modulation Techniques Chapter 4 (part 2) Overview Digital Modulation techniques (part 2) Bandpass data transmission Amplitude Shift Keying (ASK) Phase Shift Keying (PSK) Frequency

More information

CHAPTER 2! AMPLITUDE MODULATION (AM)

CHAPTER 2! AMPLITUDE MODULATION (AM) CHAPTER 2 AMPLITUDE MODULATION (AM) Topics 2-1 : AM Concepts 2-2 : Modulation Index and Percentage of Modulation 2-3 : Sidebands and the Frequency Domain 2-4 : Single-Sideband Modulation 2-5 : AM Power

More information

Fundamentals of Digital Communication

Fundamentals of Digital Communication Fundamentals of Digital Communication Network Infrastructures A.A. 2017/18 Digital communication system Analog Digital Input Signal Analog/ Digital Low Pass Filter Sampler Quantizer Source Encoder Channel

More information

FM THRESHOLD AND METHODS OF LIMITING ITS EFFECT ON PERFORMANCE

FM THRESHOLD AND METHODS OF LIMITING ITS EFFECT ON PERFORMANCE FM THESHOLD AND METHODS OF LIMITING ITS EFFET ON PEFOMANE AHANEKU, M. A. Lecturer in the Department of Electronic Engineering, UNN ABSTAT This paper presents the outcome of the investigative study carried

More information

II. Random Processes Review

II. Random Processes Review II. Random Processes Review - [p. 2] RP Definition - [p. 3] RP stationarity characteristics - [p. 7] Correlation & cross-correlation - [p. 9] Covariance and cross-covariance - [p. 10] WSS property - [p.

More information

Communication Systems

Communication Systems Electronics Engineering Communication Systems Comprehensive Theory with Solved Examples and Practice Questions Publications Publications MADE EASY Publications Corporate Office: 44-A/4, Kalu Sarai (Near

More information

Chapter 4. Part 2(a) Digital Modulation Techniques

Chapter 4. Part 2(a) Digital Modulation Techniques Chapter 4 Part 2(a) Digital Modulation Techniques Overview Digital Modulation techniques Bandpass data transmission Amplitude Shift Keying (ASK) Phase Shift Keying (PSK) Frequency Shift Keying (FSK) Quadrature

More information

2.1 BASIC CONCEPTS Basic Operations on Signals Time Shifting. Figure 2.2 Time shifting of a signal. Time Reversal.

2.1 BASIC CONCEPTS Basic Operations on Signals Time Shifting. Figure 2.2 Time shifting of a signal. Time Reversal. 1 2.1 BASIC CONCEPTS 2.1.1 Basic Operations on Signals Time Shifting. Figure 2.2 Time shifting of a signal. Time Reversal. 2 Time Scaling. Figure 2.4 Time scaling of a signal. 2.1.2 Classification of Signals

More information

ELEC3242 Communications Engineering Laboratory Frequency Shift Keying (FSK)

ELEC3242 Communications Engineering Laboratory Frequency Shift Keying (FSK) ELEC3242 Communications Engineering Laboratory 1 ---- Frequency Shift Keying (FSK) 1) Frequency Shift Keying Objectives To appreciate the principle of frequency shift keying and its relationship to analogue

More information

Communications I (ELCN 306)

Communications I (ELCN 306) Communications I (ELCN 306) c Samy S. Soliman Electronics and Electrical Communications Engineering Department Cairo University, Egypt Email: samy.soliman@cu.edu.eg Website: http://scholar.cu.edu.eg/samysoliman

More information

V. CHANDRA SEKAR Professor and Head Department of Electronics and Communication Engineering SASTRA University, Kumbakonam

V. CHANDRA SEKAR Professor and Head Department of Electronics and Communication Engineering SASTRA University, Kumbakonam V. CHANDRA SEKAR Professor and Head Department of Electronics and Communication Engineering SASTRA University, Kumbakonam 1 Contents Preface v 1. Introduction 1 1.1 What is Communication? 1 1.2 Modulation

More information

Communications IB Paper 6 Handout 2: Analogue Modulation

Communications IB Paper 6 Handout 2: Analogue Modulation Communications IB Paper 6 Handout 2: Analogue Modulation Jossy Sayir Signal Processing and Communications Lab Department of Engineering University of Cambridge jossy.sayir@eng.cam.ac.uk Lent Term c Jossy

More information

Satellite Communications: Part 4 Signal Distortions & Errors and their Relation to Communication Channel Specifications. Howard Hausman April 1, 2010

Satellite Communications: Part 4 Signal Distortions & Errors and their Relation to Communication Channel Specifications. Howard Hausman April 1, 2010 Satellite Communications: Part 4 Signal Distortions & Errors and their Relation to Communication Channel Specifications Howard Hausman April 1, 2010 Satellite Communications: Part 4 Signal Distortions

More information

Chapter 5. Amplitude Modulation

Chapter 5. Amplitude Modulation Chapter 5 Amplitude Modulation So far we have developed basic signal and system representation techniques which we will now apply to the analysis of various analog communication systems. In particular,

More information

II Year (04 Semester) EE6403 Discrete Time Systems and Signal Processing

II Year (04 Semester) EE6403 Discrete Time Systems and Signal Processing Class Subject Code Subject II Year (04 Semester) EE6403 Discrete Time Systems and Signal Processing 1.CONTENT LIST: Introduction to Unit I - Signals and Systems 2. SKILLS ADDRESSED: Listening 3. OBJECTIVE

More information

ANALOG COMMUNICATIONS. BY P.Swetha, Assistant Professor (Units 1, 2 & 5) K.D.K.Ajay, Assistant Professor (Units 3 & 4)

ANALOG COMMUNICATIONS. BY P.Swetha, Assistant Professor (Units 1, 2 & 5) K.D.K.Ajay, Assistant Professor (Units 3 & 4) ANALOG COMMUNICATIONS BY P.Swetha, Assistant Professor (Units 1, 2 & 5) K.D.K.Ajay, Assistant Professor (Units 3 & 4) (R15A0409) ANALOG COMMUNICATIONS Course Objectives: Objective of the course is to:

More information

UNIT 1 QUESTIONS WITH ANSWERS

UNIT 1 QUESTIONS WITH ANSWERS UNIT 1 QUESTIONS WITH ANSWERS 1. Define modulation? Modulation is a process by which some characteristics of high frequency carrier signal is varied in accordance with the instantaneous value of the modulating

More information

two computers. 2- Providing a channel between them for transmitting and receiving the signals through it.

two computers. 2- Providing a channel between them for transmitting and receiving the signals through it. 1. Introduction: Communication is the process of transmitting the messages that carrying information, where the two computers can be communicated with each other if the two conditions are available: 1-

More information

Communication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi

Communication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi Communication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi Lecture - 10 Single Sideband Modulation We will discuss, now we will continue

More information

Angle Modulation KEEE343 Communication Theory Lecture #12, April 14, Prof. Young-Chai Ko

Angle Modulation KEEE343 Communication Theory Lecture #12, April 14, Prof. Young-Chai Ko Angle Modulation KEEE343 Communication Theory Lecture #12, April 14, 2011 Prof. Young-Chai Ko koyc@korea.ac.kr Summary Frequency Division Multiplexing (FDM) Angle Modulation Frequency-Division Multiplexing

More information

Outline. Communications Engineering 1

Outline. Communications Engineering 1 Outline Introduction Signal, random variable, random process and spectra Analog modulation Analog to digital conversion Digital transmission through baseband channels Signal space representation Optimal

More information

TSEK02: Radio Electronics Lecture 2: Modulation (I) Ted Johansson, EKS, ISY

TSEK02: Radio Electronics Lecture 2: Modulation (I) Ted Johansson, EKS, ISY TSEK02: Radio Electronics Lecture 2: Modulation (I) Ted Johansson, EKS, ISY 2 Basic Definitions Time and Frequency db conversion Power and dbm Filter Basics 3 Filter Filter is a component with frequency

More information

Communication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi

Communication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi Communication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi Lecture - 16 Angle Modulation (Contd.) We will continue our discussion on Angle

More information

ECEn 665: Antennas and Propagation for Wireless Communications 131. s(t) = A c [1 + αm(t)] cos (ω c t) (9.27)

ECEn 665: Antennas and Propagation for Wireless Communications 131. s(t) = A c [1 + αm(t)] cos (ω c t) (9.27) ECEn 665: Antennas and Propagation for Wireless Communications 131 9. Modulation Modulation is a way to vary the amplitude and phase of a sinusoidal carrier waveform in order to transmit information. When

More information

DT Filters 2/19. Atousa Hajshirmohammadi, SFU

DT Filters 2/19. Atousa Hajshirmohammadi, SFU 1/19 ENSC380 Lecture 23 Objectives: Signals and Systems Fourier Analysis: Discrete Time Filters Analog Communication Systems Double Sideband, Sub-pressed Carrier Modulation (DSBSC) Amplitude Modulation

More information

EE 400L Communications. Laboratory Exercise #7 Digital Modulation

EE 400L Communications. Laboratory Exercise #7 Digital Modulation EE 400L Communications Laboratory Exercise #7 Digital Modulation Department of Electrical and Computer Engineering University of Nevada, at Las Vegas PREPARATION 1- ASK Amplitude shift keying - ASK - in

More information

EE 451: Digital Signal Processing

EE 451: Digital Signal Processing EE 451: Digital Signal Processing Stochastic Processes and Spectral Estimation Aly El-Osery Electrical Engineering Department, New Mexico Tech Socorro, New Mexico, USA November 29, 2011 Aly El-Osery (NMT)

More information

Wireless Communication: Concepts, Techniques, and Models. Hongwei Zhang

Wireless Communication: Concepts, Techniques, and Models. Hongwei Zhang Wireless Communication: Concepts, Techniques, and Models Hongwei Zhang http://www.cs.wayne.edu/~hzhang Outline Digital communication over radio channels Channel capacity MIMO: diversity and parallel channels

More information

TSEK02: Radio Electronics Lecture 2: Modulation (I) Ted Johansson, EKS, ISY

TSEK02: Radio Electronics Lecture 2: Modulation (I) Ted Johansson, EKS, ISY TSEK02: Radio Electronics Lecture 2: Modulation (I) Ted Johansson, EKS, ISY An Overview of Modulation Techniques: chapter 3.1 3.3.1 2 Introduction (3.1) Analog Modulation Amplitude Modulation Phase and

More information

Lecture 3 Concepts for the Data Communications and Computer Interconnection

Lecture 3 Concepts for the Data Communications and Computer Interconnection Lecture 3 Concepts for the Data Communications and Computer Interconnection Aim: overview of existing methods and techniques Terms used: -Data entities conveying meaning (of information) -Signals data

More information

PRINCIPLES OF COMMUNICATIONS

PRINCIPLES OF COMMUNICATIONS PRINCIPLES OF COMMUNICATIONS Systems, Modulation, and Noise SIXTH EDITION INTERNATIONAL STUDENT VERSION RODGER E. ZIEMER University of Colorado at Colorado Springs WILLIAM H. TRANTER Virginia Polytechnic

More information

COMM 601: Modulation I

COMM 601: Modulation I Prof. Ahmed El-Mahdy, Communications Department The German University in Cairo Text Books [1] Couch, Digital and Analog Communication Systems, 7 th edition, Prentice Hall, 2007. [2] Simon Haykin, Communication

More information

Lecture 12 - Analog Communication (II)

Lecture 12 - Analog Communication (II) Lecture 12 - Analog Communication (II) James Barnes (James.Barnes@colostate.edu) Spring 2014 Colorado State University Dept of Electrical and Computer Engineering ECE423 1 / 12 Outline QAM: quadrature

More information

Amplitude Modulation Early Radio EE 442 Spring Semester Lecture 6

Amplitude Modulation Early Radio EE 442 Spring Semester Lecture 6 Amplitude Modulation Early Radio EE 442 Spring Semester Lecture 6 f f f LO audio baseband m http://www.technologyuk.net/telecommunications/telecom_principles/amplitude_modulation.shtml AM Modulation --

More information

EXAMINATION FOR THE DEGREE OF B.E. Semester 1 June COMMUNICATIONS IV (ELEC ENG 4035)

EXAMINATION FOR THE DEGREE OF B.E. Semester 1 June COMMUNICATIONS IV (ELEC ENG 4035) EXAMINATION FOR THE DEGREE OF B.E. Semester 1 June 2007 101902 COMMUNICATIONS IV (ELEC ENG 4035) Official Reading Time: Writing Time: Total Duration: 10 mins 120 mins 130 mins Instructions: This is a closed

More information

Radio Receiver Architectures and Analysis

Radio Receiver Architectures and Analysis Radio Receiver Architectures and Analysis Robert Wilson December 6, 01 Abstract This article discusses some common receiver architectures and analyzes some of the impairments that apply to each. 1 Contents

More information

Angle Modulation, II. Lecture topics. FM bandwidth and Carson s rule. Spectral analysis of FM. Narrowband FM Modulation. Wideband FM Modulation

Angle Modulation, II. Lecture topics. FM bandwidth and Carson s rule. Spectral analysis of FM. Narrowband FM Modulation. Wideband FM Modulation Angle Modulation, II Lecture topics FM bandwidth and Carson s rule Spectral analysis of FM Narrowband FM Modulation Wideband FM Modulation Bandwidth of Angle-Modulated Waves Angle modulation is nonlinear

More information

EE 451: Digital Signal Processing

EE 451: Digital Signal Processing EE 451: Digital Signal Processing Power Spectral Density Estimation Aly El-Osery Electrical Engineering Department, New Mexico Tech Socorro, New Mexico, USA December 4, 2017 Aly El-Osery (NMT) EE 451:

More information

Solutions to some sampled questions of previous finals

Solutions to some sampled questions of previous finals Solutions to some sampled questions of previous finals First exam: Problem : he modulating signal m(a m coπf m is used to generate the VSB signal β cos[ π ( f c + f m ) t] + (1 β ) cos[ π ( f c f m ) t]

More information