
1 Introduction

The reliable transmission of information over noisy channels is one of the basic requirements of digital information and communication systems. Here, transmission is understood both as transmission in space, e.g. over mobile radio channels, and as transmission in time by storing information in appropriate storage media. Because of this requirement, modern communication systems rely heavily on powerful channel coding methodologies. For practical applications these coding schemes not only need to have good coding characteristics with respect to the capability of detecting or correcting errors introduced on the channel; they also have to be efficiently implementable, e.g. in digital hardware within integrated circuits. Practical applications of channel codes include space and satellite communications, data transmission, digital audio and video broadcasting and mobile communications, as well as storage systems such as computer memories or the compact disc (Costello et al., 1998).

In this introductory chapter we give a brief introduction to the field of channel coding. To this end, we describe the information-theoretic fundamentals of channel coding. Simple channel models are presented that will be used throughout the text. Furthermore, we present the binary triple repetition code as an illustrative example of a simple channel code.

1.1 Communication Systems

In Figure 1.1 the basic structure of a digital communication system is shown, which represents the architecture of the communication systems in use today. Within the transmitter of such a communication system the following tasks are carried out: source encoding, channel encoding, modulation.

[Figure 1.1: Basic structure of digital communication systems. Chain: source encoder → channel encoder → modulator → channel → demodulator → channel decoder → source decoder. The sequence of information symbols u is encoded into the sequence of code symbols b, which are transmitted across the channel after modulation; the sequence of received symbols r is decoded into the sequence of information symbols û, which are estimates of the originally transmitted information symbols.]

In the receiver the corresponding inverse operations are implemented: demodulation, channel decoding, source decoding. According to Figure 1.1, the modulator generates the signal that is used to transmit the sequence of symbols b across the channel (Benedetto and Biglieri, 1999; Neubauer, 2007; Proakis, 2001). Due to the noisy nature of the channel, the transmitted signal is disturbed. The noisy received signal is demodulated by the demodulator in the receiver, leading to the sequence of received symbols r. Since the received symbol sequence r usually differs from the transmitted symbol sequence b, a channel code is used such that the receiver is able to detect or even correct errors (Bossert, 1999; Lin and Costello, 2004; Neubauer, 2006b). To this end, the channel encoder introduces redundancy into the information sequence u. This redundancy can be exploited by the channel decoder for error detection or error correction by estimating the transmitted symbol sequence û.

In his fundamental work, Shannon showed that it is theoretically possible to realise an information transmission system with as small an error probability as required (Shannon, 1948). The prerequisite for this is that the information rate of the information source be smaller than the so-called channel capacity. In order to reduce the information rate, source coding schemes are used, which are implemented by the source encoder in the transmitter and the source decoder in the receiver (McEliece, 2002; Neubauer, 2006a).
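To make this block structure concrete, the following minimal Python sketch implements the modulator, channel and demodulator blocks of Figure 1.1 for a toy setting. It is our own illustration, not code from the text: the function names, the BPSK symbol mapping (anticipating Section 1.2.4) and the noise level are illustrative choices.

```python
import random

def modulate(b):
    """BPSK modulator: map code bit 0 -> +1.0 and 1 -> -1.0 (an illustrative convention)."""
    return [1.0 - 2.0 * bit for bit in b]

def noisy_channel(signal, sigma, rng):
    """Disturb the transmitted signal with additive Gaussian noise of std deviation sigma."""
    return [s + rng.gauss(0.0, sigma) for s in signal]

def demodulate(received):
    """Hard-decision demodulator: compare each received sample with the threshold 0."""
    return [0 if sample > 0.0 else 1 for sample in received]

rng = random.Random(1)
b = [0, 0, 1, 0, 1, 1, 1, 0]                              # code symbol sequence b
r = demodulate(noisy_channel(modulate(b), 0.8, rng))      # received sequence r
print(b)
print(r)   # r may differ from b; detecting/correcting this is the channel code's job
```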

Further information about source coding can be found elsewhere (Gibson et al., 1998; Sayood, 2000, 2003). In order better to understand the theoretical basics of information transmission as well as channel coding, we now give a brief overview of information theory as introduced by Shannon in his seminal paper (Shannon, 1948). In this context we will also introduce the simple channel models that will be used throughout the text.

1.2 Information Theory

An important result of information theory is the finding that error-free transmission across a noisy channel is theoretically possible as long as the information rate does not exceed the so-called channel capacity. In order to quantify this result, we need to measure information. Within Shannon's information theory this is done by considering the statistics of symbols emitted by information sources.

1.2.1 Entropy

Let us consider the discrete memoryless information source shown in Figure 1.2. At a given time instant, this discrete information source emits the random discrete symbol X = x_i, which assumes one out of M possible symbol values x_1, x_2, ..., x_M. The rates at which these symbol values appear are given by the probabilities P_X(x_1), P_X(x_2), ..., P_X(x_M) with P_X(x_i) = Pr{X = x_i}.

[Figure 1.2: Discrete information source emitting discrete symbols X. The symbol values x_1, x_2, ..., x_M appear with probabilities P_X(x_1), P_X(x_2), ..., P_X(x_M). Entropy:

$$I(X) = -\sum_{i=1}^{M} P_X(x_i)\,\log_2\big(P_X(x_i)\big) \qquad (1.1)$$]

The average information associated with the random discrete symbol X is given by the so-called entropy, measured in the unit bit:

$$I(X) = -\sum_{i=1}^{M} P_X(x_i)\,\log_2\big(P_X(x_i)\big).$$

For a binary information source that emits the binary symbols X = 0 and X = 1 with probabilities Pr{X = 0} = p_0 and Pr{X = 1} = 1 - Pr{X = 0} = 1 - p_0, the entropy is given by the so-called Shannon function or binary entropy function

$$I(X) = -p_0 \log_2(p_0) - (1 - p_0)\log_2(1 - p_0).$$

1.2.2 Channel Capacity

With the help of the entropy concept we can model a channel according to Berger's channel diagram shown in Figure 1.3 (Neubauer, 2006a). Here, X refers to the input symbol and R denotes the output symbol or received symbol. We now assume that M input symbol values x_1, x_2, ..., x_M and N output symbol values r_1, r_2, ..., r_N are possible. With the help of the conditional probabilities

$$P_{X|R}(x_i|r_j) = \Pr\{X = x_i \mid R = r_j\} \quad \text{and} \quad P_{R|X}(r_j|x_i) = \Pr\{R = r_j \mid X = x_i\}$$

the conditional entropies are given by

$$I(X|R) = -\sum_{i=1}^{M}\sum_{j=1}^{N} P_{X,R}(x_i, r_j)\,\log_2\big(P_{X|R}(x_i|r_j)\big)$$

and

$$I(R|X) = -\sum_{i=1}^{M}\sum_{j=1}^{N} P_{X,R}(x_i, r_j)\,\log_2\big(P_{R|X}(r_j|x_i)\big).$$

With these conditional entropies the mutual information

$$I(X; R) = I(X) - I(X|R) = I(R) - I(R|X)$$

can be derived, which measures the amount of information that is transmitted across the channel from the input to the output for a given information source. The so-called channel capacity C is obtained by maximising the mutual information I(X; R) with respect to the statistical properties of the input X, i.e. by appropriately choosing the probabilities {P_X(x_i)}, 1 ≤ i ≤ M. This leads to

$$C = \max_{\{P_X(x_i)\}_{1 \le i \le M}} I(X; R).$$

If the input entropy I(X) is smaller than the channel capacity C, i.e. I(X) < C, then information can be transmitted across the noisy channel with arbitrarily small error probability. Thus, the channel capacity C in fact quantifies the information transmission capacity of the channel.
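These definitions are straightforward to evaluate numerically. The following Python sketch (our own illustration; all names are ours) computes the entropy of a discrete source and the binary entropy function:

```python
from math import log2

def entropy(probs):
    """Entropy I(X) = -sum_i P_X(x_i) * log2(P_X(x_i)) in bits; terms with
    probability 0 contribute nothing (0 * log2(0) is taken as 0)."""
    return -sum(p * log2(p) for p in probs if p > 0)

def binary_entropy(p0):
    """Shannon function / binary entropy function for Pr{X = 0} = p0."""
    return entropy([p0, 1.0 - p0])

print(entropy([0.25, 0.25, 0.25, 0.25]))  # uniform 4-ary source: 2.0 bits
print(binary_entropy(0.5))                # fair binary source: 1.0 bit
print(binary_entropy(0.1))                # biased binary source: ~0.469 bits
```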

[Figure 1.3: Berger's channel diagram. The input entropy I(X) splits into the conditional entropy I(X|R) and the mutual information I(X; R); likewise, the output entropy I(R) splits into I(R|X) and I(X; R). Mutual information:

$$I(X; R) = I(X) - I(X|R) = I(R) - I(R|X) \qquad (1.2)$$

Channel capacity:

$$C = \max_{\{P_X(x_i)\}_{1 \le i \le M}} I(X; R) \qquad (1.3)$$]

1.2.3 Binary Symmetric Channel

As an important example of a memoryless channel we turn to the binary symmetric channel or BSC. Figure 1.4 shows the channel diagram of the binary symmetric channel with bit error probability ε. This channel transmits the binary symbol X = 0 or X = 1 correctly with probability 1 - ε, whereas the incorrect binary symbol R = 1 or R = 0 is emitted with probability ε. By maximising the mutual information I(X; R), the channel capacity of a binary symmetric channel is obtained according to

$$C = 1 + \varepsilon \log_2(\varepsilon) + (1 - \varepsilon)\log_2(1 - \varepsilon).$$

This channel capacity is equal to 1 if ε = 0 or ε = 1; for ε = 1/2 the channel capacity is 0.

[Figure 1.4: Binary symmetric channel with bit error probability ε. The input X = 0 is received as R = 0 with probability 1 - ε and as R = 1 with probability ε, and correspondingly for X = 1. Channel capacity:

$$C = 1 + \varepsilon \log_2(\varepsilon) + (1 - \varepsilon)\log_2(1 - \varepsilon) \qquad (1.4)$$]
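Since this capacity expression is simply 1 minus the binary entropy of ε, it is easy to check numerically; a minimal sketch of our own:

```python
from math import log2

def bsc_capacity(eps):
    """Capacity of the binary symmetric channel,
    C = 1 + eps*log2(eps) + (1-eps)*log2(1-eps),
    i.e. 1 minus the binary entropy of the bit error probability."""
    if eps in (0.0, 1.0):       # the log terms vanish at the endpoints
        return 1.0
    return 1.0 + eps * log2(eps) + (1.0 - eps) * log2(1.0 - eps)

print(bsc_capacity(0.0))    # 1.0  (error-free channel)
print(bsc_capacity(0.5))    # 0.0  (output independent of input)
print(bsc_capacity(0.25))   # ~0.189 bits per channel use
```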

In contrast to the binary symmetric channel, which has discrete input and output symbols taken from binary alphabets, the so-called AWGN channel is defined on the basis of continuous real-valued random variables.¹

¹ In Chapter 5 we will also consider complex-valued random variables.

1.2.4 AWGN Channel

Up to now we have exclusively considered discrete-valued symbols. The concept of entropy can be transferred to continuous real-valued random variables by introducing the so-called differential entropy. It turns out that a channel with real-valued input and output symbols can again be characterised with the help of the mutual information I(X; R) and its maximum, the channel capacity C. In Figure 1.5 the so-called AWGN channel is illustrated, which is described by the additive white Gaussian noise term Z. With the help of the signal power S = E{X²} and the noise power N = E{Z²} the channel capacity of the AWGN channel is given by

$$C = \frac{1}{2}\log_2\left(1 + \frac{S}{N}\right).$$

The channel capacity exclusively depends on the signal-to-noise ratio S/N.

[Figure 1.5: AWGN channel with signal-to-noise ratio S/N. The output R = X + Z is the sum of the input X and the Gaussian noise term Z. Channel capacity:

$$C = \frac{1}{2}\log_2\left(1 + \frac{S}{N}\right) \qquad (1.5)$$]
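For instance (a small numerical illustration of our own):

```python
from math import log2

# AWGN capacity C = 0.5 * log2(1 + S/N) at a few linear signal-to-noise ratios
for snr in (0.5, 1.0, 3.0, 10.0):
    print(f"S/N = {snr:5.2f} -> C = {0.5 * log2(1.0 + snr):.3f} bit per channel use")
```

As expected, S/N = 1 gives C = 0.5 and S/N = 3 gives exactly C = 1 bit per channel use.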

In order to compare the channel capacities of the binary symmetric channel and the AWGN channel, we assume a digital transmission scheme using binary phase shift keying (BPSK) and optimal reception with the help of a matched filter (Benedetto and Biglieri, 1999; Neubauer, 2007; Proakis, 2001). The signal-to-noise ratio of the real-valued output R of the matched filter is then given by

$$\frac{S}{N} = \frac{E_b}{N_0/2}$$

with bit energy E_b and noise power spectral density N_0. If the output R of the matched filter is compared with the threshold 0, we obtain the binary symmetric channel with bit error probability

$$\varepsilon = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_b}{N_0}}\right).$$

Here, erfc(·) denotes the complementary error function. In Figure 1.6 the channel capacities of the binary symmetric channel and the AWGN channel are compared as a function of E_b/N_0.

[Figure 1.6: Channel capacity of the binary symmetric channel vs the channel capacity of the AWGN channel, plotted over 10 log10(E_b/N_0) from -5 dB to 5 dB; the AWGN curve lies above the BSC curve throughout. Signal-to-noise ratio of the AWGN channel:

$$\frac{S}{N} = \frac{E_b}{N_0/2} \qquad (1.6)$$

Bit error probability of the binary symmetric channel:

$$\varepsilon = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_b}{N_0}}\right) \qquad (1.7)$$]

The signal-to-noise ratio S/N or the ratio E_b/N_0 must be higher for the binary symmetric channel than for the AWGN channel in order to achieve the same channel capacity. This gain also translates to the coding gain achievable by soft-decision decoding as opposed to hard-decision decoding of channel codes, as we will see later (e.g. in Section 2.2.8).
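Equations (1.6) and (1.7) make the curves of Figure 1.6 easy to reproduce numerically. The following sketch (our own illustration under the BPSK/matched-filter assumptions above) tabulates both capacities over the same E_b/N_0 range:

```python
from math import erfc, log2, sqrt

def awgn_capacity(snr):
    """AWGN capacity C = 0.5 * log2(1 + S/N), with snr = S/N as a linear ratio."""
    return 0.5 * log2(1.0 + snr)

def bsc_capacity(eps):
    """BSC capacity C = 1 + eps*log2(eps) + (1-eps)*log2(1-eps)."""
    if eps in (0.0, 1.0):
        return 1.0
    return 1.0 + eps * log2(eps) + (1.0 - eps) * log2(1.0 - eps)

print("Eb/N0 [dB] | C_AWGN | C_BSC")
for ebn0_db in range(-5, 6):
    ebn0 = 10.0 ** (ebn0_db / 10.0)       # Eb/N0 as a linear ratio
    c_awgn = awgn_capacity(2.0 * ebn0)    # S/N = Eb/(N0/2), equation (1.6)
    eps = 0.5 * erfc(sqrt(ebn0))          # hard-decision BPSK, equation (1.7)
    print(f"{ebn0_db:10d} | {c_awgn:6.3f} | {bsc_capacity(eps):6.3f}")
```

At every E_b/N_0 in the table the BSC capacity stays below the AWGN capacity, which is exactly the gap attributed above to hard decisions.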

Although information theory tells us that it is theoretically possible to find a channel code that for a given channel leads to as small an error probability as required, the design of good channel codes is generally difficult. Therefore, in the next chapters several classes of channel codes will be described. Here, we start with a simple example.

1.3 A Simple Channel Code

As an introductory example of a simple channel code we consider the transmission of the binary information sequence 00101110 over a binary symmetric channel with bit error probability ε = 0.25 (Neubauer, 2006b). On average, every fourth binary symbol will be received incorrectly. In this example we assume that the binary sequence 00000110 is received at the output of the binary symmetric channel (see Figure 1.7).

[Figure 1.7: Channel transmission without channel code. The information sequence 00101110 is transmitted over a binary symmetric channel with bit error probability ε = 0.25 and received as 00000110.]

In order to implement a simple error correction scheme we make use of the so-called binary triple repetition code. This simple channel code is used for the encoding of binary data. If the binary symbol 0 is to be transmitted, the encoder emits the code word 000. Alternatively, the code word 111 is issued by the encoder when the binary symbol 1 is to be transmitted. The encoder of a triple repetition code is illustrated in Figure 1.8.

[Figure 1.8: Encoder of a triple repetition code. The binary information symbols 0 and 1 are mapped to the binary code words 000 and 111 of the binary triple repetition code {000, 111}; the information sequence 00101110 is encoded into 000000111000111111111000.]

For the binary information sequence given above we obtain the binary code sequence 000 000 111 000 111 111 111 000 at the output of the encoder. If we again assume that on average every fourth binary symbol is incorrectly transmitted by the binary symmetric channel, we may obtain the received sequence 010 000 011 010 111 010 111 010. This is illustrated in Figure 1.9.
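A minimal sketch of this encoder in Python (the helper names and the explicit error pattern are our own reconstruction; a real BSC would draw the bit flips at random with probability ε = 0.25):

```python
def encode_repetition(u):
    """Triple repetition encoder: information symbol 0 -> 000, 1 -> 111."""
    return [bit for bit in u for _ in range(3)]

u = [0, 0, 1, 0, 1, 1, 1, 0]           # information sequence 00101110
b = encode_repetition(u)
print("".join(map(str, b)))            # 000000111000111111111000

# One concrete channel realisation: this error pattern flips 6 of the 24
# code bits (a rate of 0.25, matching eps) and reproduces the received
# sequence of Figure 1.9.
e = [int(c) for c in "010000100010000101000010"]
r = [bi ^ ei for bi, ei in zip(b, e)]
print("".join(map(str, r)))            # 010000011010111010111010
```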

[Figure 1.9: Channel transmission of a binary triple repetition code. The code sequence 000000111000111111111000 is transmitted over the binary symmetric channel with bit error probability ε = 0.25 and received as 010000011010111010111010.]

[Figure 1.10: Decoder of a triple repetition code. Decoding by majority decision maps the received words 000, 001, 010, 100 to the information symbol 0 and the received words 011, 101, 110, 111 to the information symbol 1; the received sequence 010000011010111010111010 is decoded into 00101010.]

The decoder in Figure 1.10 tries to estimate the original information sequence with the help of a majority decision. If the number of 0s within a received 3-bit word is larger than the number of 1s, the decoder emits the binary symbol 0; otherwise a 1 is decoded. With this decoding algorithm we obtain the decoded information sequence 00101010.
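The majority decision is equally compact in code; a sketch of our own continuing the example above:

```python
def decode_repetition(r):
    """Majority decoder: emit 0 if a received 3-bit word contains more 0s
    than 1s, and 1 otherwise."""
    return [int(sum(r[i:i + 3]) >= 2) for i in range(0, len(r), 3)]

r = [int(c) for c in "010000011010111010111010"]   # received sequence
print("".join(map(str, decode_repetition(r))))     # 00101010
```

Comparing the output 00101010 with the transmitted sequence 00101110 shows a single residual error, at the sixth information symbol.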

As can be seen from this example, the binary triple repetition code is able to correct a single error within a code word. More errors cannot be corrected. With the help of this simple channel code we are able to reduce the number of errors. Compared with the unprotected transmission without a channel code, the number of errors has been reduced from two to one. However, this is achieved at the cost of a significantly reduced effective transmission rate: for a given symbol rate on the channel, it takes three times longer to transmit an information symbol with the help of the triple repetition code. It is one of the main topics of the following chapters to present more efficient coding schemes.
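This behaviour can be quantified with the binomial distribution: majority decoding fails exactly when the channel flips two or three bits of a 3-bit code word, which happens with probability 3ε²(1 - ε) + ε³. The following closed-form check is our addition, not from the text:

```python
def residual_error_probability(eps):
    """Probability that majority decoding of one 3-bit repetition word
    fails, i.e. that the BSC flips two or three of its bits:
    3*eps^2*(1-eps) + eps^3."""
    return 3 * eps**2 * (1 - eps) + eps**3

for eps in (0.25, 0.1, 0.01):
    print(f"eps = {eps}: uncoded {eps} -> coded {residual_error_probability(eps):.6f}")
```

For ε = 0.25 the residual error probability per information symbol drops from 0.25 to about 0.156, consistent with the example above; the price is that each information symbol now occupies three channel uses.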