
People's Democratic Republic of Algeria
Ministry of Higher Education and Scientific Research
University M'Hamed BOUGARA, Boumerdes
Institute of Electrical and Electronic Engineering
Department of Electronics

Final Year Project Report Presented in Partial Fulfilment of the Requirements for the Degree of MASTER in Telecommunications
Option: Telecommunications

Title: Performance Analysis of Turbo Codes

Presented by:
- Djanet LAKHAL
- Nada KHERIBET

Supervisor: Dr. Abdelhakim DAHIMENE

Registration Number: .../2017

Abstract: Telecommunication is one of the most important technologies of the present day. One problem with communication is that wrong interpretations may occur when data are received over a noisy channel. Telecommunication engineers have continually developed new channel coding techniques that ensure the safe transfer of information in the presence of such undesirable effects and produce results very close to the theoretical limit set by Shannon, such as turbo codes. Turbo codes have been successfully implemented in satellite and video-conferencing systems, and provision has been made for them in 3rd and 4th generation mobile systems. This project presents an evaluation of soft-output decoding for turbo codes. The coding theory related to this work is studied, including convolutional encoding and Viterbi decoding. Recursive systematic convolutional (RSC) codes and the non-uniform interleavers commonly used in turbo encoder design are analysed. The scope of this work is the analysis of turbo code performance. Using MATLAB we have implemented a digital communication system, i.e. a transmitter, a channel and a receiver, where the transmitter and receiver contain a turbo encoder and decoder respectively. After simulating the system, the results are presented as curves of bit error rate versus signal-to-noise ratio. This includes a thorough investigation of the central components that influence turbo code performance, such as the constraint length of the encoder, the type of modulation, the type of interleaver, and the iterative decoding procedure. The whole investigation is carried out over an additive white Gaussian noise channel and the results are discussed in detail in the coming chapters.

Dedication: We dedicate our senior design project to our families and many friends. A special feeling of gratitude goes to our loving parents, whose words of encouragement and push for tenacity ring in our ears. We also dedicate this dissertation to all the loving partners who have supported us throughout the process; we will always appreciate their time and effort, especially our classmates, who helped us develop our technology skills. This work is also dedicated to you: whether you are a student, a teacher or anybody else, you were interested enough to pick it up, open it and flip to the dedication. It is a way for our minds to meet and have a conversation about science, and even about the happiness that could be reached. Just be curious, live your life with passion and never settle.

Acknowledgments: There have been many people who have walked alongside us during the last four years. They have guided us, placed opportunities in front of us and showed us the doors that might be useful to open. We would like to thank each and every one of them. Our deep gratitude goes especially to Dr. A. Dahimene, who expertly guided us through our graduate education; his unwavering enthusiasm for telecommunications and mathematics kept us constantly engaged with our research. Our appreciation extends to the PhD students, whose effort and time have been especially valuable. Finally, special thanks also go to all our teachers for their continued support.

Table of contents

Dedication
Acknowledgments
Abstract
List of figures
List of tables
List of abbreviations
Table of contents
General introduction

Chapter 1: Coding theory
1.1 Introduction
1.2 History
1.3 Source coding
1.4 Channel coding
1.5 Linear codes
    Block codes
    Convolutional codes
    Convolutional code representations
    Tail-biting convolutional code
    Convolutional decoding: the Viterbi algorithm
1.6 Application
1.7 A need for better code
1.8 Conclusion

Chapter 2: Turbo coding
2.1 Introduction
2.2 Principles of turbo codes
2.3 The turbo encoder
    Recursive systematic convolutional encoders
    Recursive convolutional encoders vs. non-recursive encoders
    Tail bits
    The interleaver
    Why interleaving
    The puncturing mechanism
    Worked example
2.4 The turbo decoder
    Turbo decoding principles
    Log-likelihood ratio
    Structure of iterative decoding
    Decoding algorithms
    Turbo code illustrated example
    Turbo code applications
    Conclusion

Chapter 3: Simulation and results analysis
3.1 Introduction
3.2 Simulation setup
3.3 Parameters of simulation
    Bit error rate (BER)
    Energy per bit to noise power spectral density ratio (Eb/N0)
3.4 Turbo code error performance analysis using BPSK modulation
    BER performance for frame length L = 500
    BER performance for frame length L = 1500
    BER as a function of frame size
    BER as a function of constraint length
    BER as a function of generator polynomial
3.5 Turbo code error performance analysis using 16-QAM modulation
    Effect of the number of decoding iterations
    Frame size effect
    BPSK and 16-QAM comparison
    The UMTS/LTE turbo code
    Conclusion

General conclusion
References

List of figures:

Fig 1.1 Block code scheme
Fig 1.2 Block coding mapping
Fig 1.3 Terminology of convolutional codes
Fig 1.4 A rate-1/2 non-systematic non-recursive binary convolutional encoder
Fig 1.5 State diagram and state transition table for the (2,1,2) convolutional encoder
Fig 1.6 Trellis diagram, one stage
Fig 1.7 Trellis diagram with trellis termination for input data length L = 3
Fig 1.8 Block diagram of a Viterbi decoder
Fig 1.9 Trellis diagram for hard-decision decoding of the convolutional code (2,1,2) with trellis termination for input data length L = 3
Fig 1.10 Trellis diagram for soft-decision decoding of the convolutional code (2,1,2)
Fig 2.1 Principle illustration of coding gain in relation to BER at a given SNR per bit
Fig 2.2 The turbo coding/decoding principle
Fig 2.3 Block diagram of a turbo encoder
Fig 2.4 A rate-1/2, [7,5]8 recursive systematic convolutional encoder
Fig 2.5 (a) State transition diagram and (b) a segment of the trellis diagram for the rate-1/2 systematic recursive binary convolutional encoder of Fig 2.4
Fig 2.6 (a) Non-recursive R = 1/2, K = 2 convolutional encoder; (b) recursive R = 1/2, K = 2 convolutional encoder
Fig 2.7 A rate-1/2 systematic recursive binary convolutional encoder with a circuit to generate tail bits
Fig 2.8 Example of techniques for adding tail bits
Fig 2.9 Example of a random interleaver
Fig 2.10 Permutation of data with a semi-random interleaver of length L = 9 and S = 3
Fig 2.11 Block diagram of a turbo encoder with puncturing
Fig 2.12 The rate-1/2 turbo encoder presented in the original paper of Berrou, Glavieux and Thitimajshima
Fig 2.13 A mechanical turbo engine
Fig 2.14 Block diagram of a turbo decoder
Fig 2.15 Turbo coding illustrated example
Fig 3.1 BER as a function of Eb/N0 and multiple iterations for frame length 500 using BPSK modulation
Fig 3.2 BER as a function of Eb/N0 and multiple iterations for frame length 1500 using BPSK modulation
Fig 3.3 Scattering plot at different values of Eb/N0 for BPSK modulation
Fig 3.4 BER as a function of Eb/N0 and different frame lengths
Fig 3.5 Turbo code performance for constraint length K
Fig 3.6 Generator polynomial effect
Fig 3.7 BER as a function of Eb/N0 and multiple iterations for frame length 1500 using 16-QAM modulation
Fig 3.8 Scattering plot at different values of Eb/N0 for 16-QAM modulation
Fig 3.9 BER as a function of Eb/N0 and different frame lengths using 16-QAM modulation
Fig 3.10 Comparison of BPSK and 16-QAM for bit error rate versus Eb/N0
Fig 3.11 BER as a function of Eb/N0 and multiple iterations for the UMTS and LTE turbo code
Fig 3.12 Comparison for the UMTS/LTE turbo code over AWGN

List of tables

Table 2.1 The state transitions for the convolutional encoder
Table 2.2 Input and output sequences for convolutional encoders
Table 2.3 Dataword permutations versus codeword weights
Table 2.4 Writing data row-wise in memory
Table 2.5 Reading data column-wise from memory
Table 2.6 Reading data diagonal-wise from memory
Table 2.7 Information bits and multiplexed coded bits for an odd-even interleaver
Table 2.8 Applications of turbo codes
Table 3.1 BER for frame size K = 500, 1500 and ...
Table 3.2 BER for frame size K = 500, 1500 and ...

List of abbreviations

3G: third generation
3GPP: 3rd Generation Partnership Project
4G: fourth generation
ARQ: automatic repeat request
APP: a posteriori probability
AWGN: additive white Gaussian noise
BER: bit error rate
BCJR: Bahl, Cocke, Jelinek and Raviv
BEC: backward error correction
BSC: binary symmetric channel
CDMA: code division multiple access
FEC: forward error correction
FDMA: frequency division multiple access
GSM: Global System for Mobile communications
IIR: infinite impulse response
LLR: log-likelihood ratio
LTE: Long Term Evolution
MIMO: multiple-input multiple-output
MAP: maximum a posteriori
OFDM: orthogonal frequency division multiplexing
RSC: recursive systematic convolutional
SNR: signal-to-noise ratio
SISO: soft-input, soft-output
SOVA: soft-output Viterbi algorithm
UMTS: Universal Mobile Telecommunications System

General introduction:

Nowadays digital communication has undergone a real revolution, especially in the field of satellites and cellular networks, which deal with the efficient sending and receiving of data wirelessly without corrupting the original information. At the transmitter the information is digitized into a sequence of binary bits. These bits are not sent haphazardly: it is possible to compress them in order to reduce the amount of transmitted data. This step, called source coding, is very important. The data are then mapped to analogue signal waveforms and transmitted over a communication channel. Whatever its type, the channel causes the signal to be corrupted by a certain amount of noise and interference. At the receiver, the process is reversed: the noisy signal is mapped back to binary bits, which should be identical to the original sequence of data, but unfortunately this is not the case. The received binary information is just an estimate of the transmitted binary data, affected by some bit errors. These unavoidable bit errors, i.e. flipped, added or deleted bits, are correlated with the amount of noise introduced by the channel. The data must therefore be protected against perturbations, which could lead to misinterpretation of the transmitted message at the receiving end. To fight this phenomenon and provide more reliable information transmission, a basic concept has been introduced: channel coding. This technique makes the receiver capable of detecting and correcting errors, either by sending a retransmission request or by adding redundancy to the compressed message; the latter approach is called forward error correction and is the most effective in terms of speed and efficiency. There are two fundamental types of channel codes used to deal with the noise problem in the practical implementation of an error control code, namely block codes and convolutional codes. The two differ in their theory of operation: block codes are based rigorously on finite field arithmetic and abstract algebra, and can be used to either detect or correct errors. Convolutional codes are among the most widely used channel codes. These codes are developed with a separate, strong mathematical structure and are primarily used for real-time error correction. The encoded bits depend not only on the current k input bits but also on past input bits, with decoding based on the widely used Viterbi algorithm. As a result of the wide acceptance of convolutional codes, there have been many advances to extend and improve this basic coding scheme. This advancement resulted in a new coding scheme, namely turbo codes. The turbo code, a near-channel-capacity error-correcting code, was proposed by Berrou et al. [35]. It represents a recent breakthrough in coding theory. These codes are parallel-concatenated convolutional codes. The decoding strategies for turbo codes are based on a maximum a posteriori (MAP) algorithm and a soft-output Viterbi algorithm (SOVA). Regardless of which algorithm is implemented, the turbo decoder requires two component decoders (running the same algorithm) that operate in an iterative manner. The main objective of this project is thus to evaluate the performance of turbo codes by changing different parameters and applying them to different standards. This dissertation first explains some basics of coding theory, then narrows the discussion to turbo coding and its main applications.

The structure of this dissertation is as follows. Chapter 1 gives a brief introduction to digital communication systems in general and to the research field of this dissertation, i.e. channel coding. It highlights a few fundamental concepts of coding theory and the goals that the coding community is striving towards. Chapter 2 provides an introduction to turbo coding principles; this includes a description of the different components of a turbo code, that is, encoding, interleaving and decoding, as well as how this mechanism conforms to the UMTS and LTE standards. Chapter 3 presents the results and a discussion of the simulations, in which a performance analysis of the turbo code is carried out with respect to the design of the central components, such as the interleaver, the trellis termination, the number of iterations and others.

Chapter 1: Coding theory

1.1 Introduction

The growing need for information storage and transmission in industry has led mathematicians and computer scientists to develop efficient and reliable data transmission methods, which are often elegant applications of very basic concepts of abstract algebra involving finite fields, group theory and polynomials. Coding theory deals with the design of codes used for data compression, cryptography, and error detection and correction, and with their respective fitness for specific applications, to ensure the full, fast and correct recovery of data [1][2]. The process begins with an information source, such as a data terminal or the human voice. The source encoder transforms the source output into a sequence of symbols which we call a message m; if the information source is continuous, source encoding involves analog-to-digital conversion. If security is desired, the message would next be encrypted using a cipher, the subject of the field of cryptography. The next step is error-control coding, also called channel coding, which involves introducing controlled redundancy into the message m. The output is a string of discrete symbols, which we call a codeword c. Next, the modulator transforms each discrete symbol of the codeword into a waveform to be transmitted across the channel. The transmission is subject to noise of various types, and then the processes are reversed [3].

1.2 History

1948: the birth of a new subject called information theory, established by Claude Shannon's paper; he showed that good codes exist, but it remained a big challenge to construct and implement them.
1950: Richard Hamming became the first person to invent error-detecting and error-correcting codes. He said: "Damn it, if the machine can detect an error, why can't it locate the position and correct it?"
1965: the spacecraft Mariner 4 was the first to photograph another planet (Mars), sending digitized images at a rate of 8.33 bits per second using 64-FSK modulation, so that it took about 8 hours to transmit a single picture.

In later Mars missions the transmission rate increased considerably thanks to the use of a powerful error-correcting code known as the Reed-Muller code.
1976: easy transmission of colour pictures in the form of binary data, with high quality (Viking 1 mission to Mars).
1979: high-resolution colour pictures of Jupiter and its moons (Voyager 1 mission) [2][4].
Recent years: mobile standards such as the Universal Mobile Telecommunications System (UMTS) and Long Term Evolution (LTE) have adopted turbo codes as a channel coding scheme.

1.3 Source coding

Source coding is a vital part of any communication system, as it helps to use disk space and transmission bandwidth efficiently. It is the process by which the source data are mapped to alphabet symbols. The mapping is generally performed on sequences or groups of information and alphabet symbols. The coding is performed in such a manner that it guarantees the exact recovery of the information symbols from the alphabet symbols; otherwise it would destroy the basic purpose of source coding. Source coding, also known as compression or bit-rate reduction, is the process of removing redundancy from the source symbols, which essentially reduces the data size. The process is called lossless compression if the information symbols are exactly recovered from the alphabet symbols, and lossy compression otherwise. The source encoder converts the input sequence of messages into a binary sequence of 0s and 1s and tries to minimize the average length of the messages according to a particular assumed probability model (the higher the probability, the shorter the codeword, e.g. Huffman coding); this is called entropy encoding [5].
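To make the idea of entropy encoding concrete, the short sketch below builds a Huffman code in Python (this example is not part of the original report; the input string, and the use of Python rather than the report's MATLAB, are illustrative assumptions). More frequent symbols receive shorter codewords, so the encoded bit stream is shorter than a plain 8-bit-per-character representation.

import heapq
from collections import Counter

def huffman_code(symbol_counts):
    """Build a Huffman code (symbol -> bit string) from symbol frequencies."""
    # Each heap entry: (total weight, tie-breaker, {symbol: partial codeword}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(symbol_counts.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate case: a single symbol
        return {s: "0" for s in symbol_counts}
    counter = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)      # merge the two least probable subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return heap[0][2]

message = "this is an example of source coding"
codebook = huffman_code(Counter(message))
encoded = "".join(codebook[s] for s in message)
print(f"{8 * len(message)} bits uncompressed -> {len(encoded)} bits encoded")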

1.4 Channel coding

The unavoidable presence of noise in a channel causes discrepancies (errors) in the data sequences transmitted through a digital communication system. For deep-space communication, signals are limited by the noise at the receiver, which is more continuous than bursty in nature. Likewise, narrowband modems are limited by the noise present in the telephone network, which is also better modelled as a continuous disturbance. In addition, the use of high frequencies leads to rapid fading of the signal even if the receiver is moved a few inches. Hence many classes of channel codes have been designed to combat different types of noise. For many applications the raw probability of error may be greater than 10^-1, which means that fewer than 9 out of 10 transmitted bits are received correctly; this is unacceptable. An acceptable performance level usually requires communicating with a probability of error equal to or less than 10^-6, and this requirement leads us to channel coding. The goal of channel coding is to find codes which transmit quickly, with error correction or at least error detection. Channel coding is a mapping of the incoming data sequence into a channel input sequence, and an inverse mapping of the channel output sequence into an output data sequence, in such a way that the overall effect of channel noise on the system is minimized [2][6].

Shannon's theorem, or the noisy-channel coding theorem, establishes that for any given degree of noise contamination of a communication channel, it is possible to communicate data nearly error-free up to a computable maximum rate through the channel. This result was presented by Claude Shannon in 1948. The Shannon capacity, or limit, of a channel is the theoretical maximum information transfer rate of the channel for a particular noise level; it is possible to transmit information nearly without error at any rate below this limiting rate C [7][6].
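As a numerical illustration of the Shannon limit just described (this sketch is not from the report; the bandwidth and SNR values are arbitrary), the following Python lines evaluate the AWGN channel capacity C = W·log2(1 + S/N), the rate below which nearly error-free transmission is possible:

import math

def awgn_capacity(bandwidth_hz, snr_linear):
    """Shannon capacity of an AWGN channel, in bits per second."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Illustrative numbers only: a 1 MHz channel at a few SNR values.
for snr_db in (0, 5, 10, 20):
    snr = 10 ** (snr_db / 10)           # dB -> linear power ratio
    c = awgn_capacity(1e6, snr)
    print(f"SNR = {snr_db:2d} dB -> capacity about {c / 1e6:.2f} Mbit/s")
# Reliable transmission is possible in principle at any rate R < C,
# and impossible above C, no matter which channel code is used.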

There are two basic coding methods for controlling errors in data transmission over unreliable or noisy communication channels:
- BEC (Backward Error Correction) coding: the receiver can only detect errors; if an error is detected, the sender is requested to retransmit the message.
- FEC (Forward Error Correction) coding: the receiver can correct a certain number of errors. The method used with this coding scheme is redundancy, and the class of channel codes designed to add redundancy to the transmitted message is the linear codes.

1.5 Linear codes

Linear codes are widely used in practice for a number of reasons. One reason is that they are easy to construct. Another is that encoding linear codes is very quick and easy, and decoding is also often facilitated by the linearity of a code. The theory of general linear codes requires the use of abstract algebra and linear algebra. A code is linear if the modulo-2 sum of two codewords is also a codeword. Each codeword is built as a linear combination of the information messages. Generally, the complete encoded message is needed before decoding can start. Linear codes are easy to implement but difficult to decode [8]. Algebraic coding theory is divided into two major types of codes:
1. linear block codes;
2. convolutional codes [2].

Block codes

A linear block code is one of the various error-correction codes that encode data into blocks; block codes are conceptually useful because they allow coding theorists, mathematicians and computer scientists to study the limitations of all block codes in a unified way. Such limitations often take the form of bounds that relate different parameters of the block code to each other, such as its rate and its ability to detect and correct errors. Examples of block codes are Reed-Solomon codes, Hamming codes (the most common), Hadamard codes, expander codes and Golay codes, among others. These codes are known as cyclic block codes because they can be generated using Boolean polynomials. Block codes are typically hard-decoded using algebraic decoders. Any error-correcting code that acts on a block of k bits of input data to produce n bits of output data may be referred to as an (n, k) block code; consequently, the block coder is a memoryless device.

Under this definition, codes such as turbo codes, terminated convolutional codes and other iteratively decodable codes (turbo-like codes) would also be considered block codes. A non-terminated convolutional encoder would be an example of a non-block (unframed) code, which has memory and is instead classified as a tree code.

Fig. 1.1 Block code scheme.

If a sender wants to transmit a possibly very long data stream using a block code, the sender breaks the stream up into pieces of some fixed size. Each such piece is called a message, and the procedure given by the block code encodes each message individually into a codeword, also called a block in the context of block codes. The sender then transmits all blocks to the receiver, who can in turn use some decoding mechanism to (hopefully) recover the original messages from the possibly corrupted received blocks. The performance and success of the overall transmission depend on the parameters of the channel and of the block code.

Fig. 1.2 Block coding mapping.

In linear block codes, the n codeword symbols can take 2^n possible values; from these, 2^k codewords are selected to form the code. A block code is useful when there is a one-to-one mapping between a message m and its codeword c, as shown above; no compression of the data is performed as in source coding, but rather the data are extended by adding redundancy, which is represented as parity bits.

Generator matrix [9]:

All codewords can be obtained as linear combinations of basis vectors, designated g_1, g_2, ..., g_k. For a linear code there exists a k-by-n generator matrix G such that

c = m·G   (1.1)

where c = (c_1, c_2, ..., c_n) is the codeword, m = (m_1, m_2, ..., m_k) is the message, and the rows of G are the basis vectors g_1, ..., g_k.

Block codes in systematic form [9]:

In this form, the codeword consists of the k message bits together with (n - k) parity-check bits; this means that the input data are present unchanged in the output codeword. The rate, or efficiency, of the code is R = k/n. The generator matrix is G = [I_k : P], so that c = m·G = [m : mP], where m is the message part and mP the parity part. As an example, consider a (7,4) linear code where k = 4 and n = 7, with an input message m = (1110).

The codeword is obtained from the generator matrix as c = m·G. Equivalently, let m = (m_1, m_2, m_3, m_4) and c = (c_1, c_2, ..., c_7); by matrix multiplication, each codeword bit c_j is the modulo-2 sum of the message bits selected by the j-th column of G: the first four bits reproduce the message (c_1 = m_1, ..., c_4 = m_4), and the last three bits are parity-check combinations of the message bits. Evaluating these sums gives the codeword corresponding to the message (1110).

Parity-check matrix H [9]:

When G is systematic, it is easy to determine the parity-check matrix as

H = [P^T : I_(n-k)]   (1.2)

The parity-check matrix H of a generator matrix G is an (n - k)-by-n matrix satisfying

G·H^T = 0   (1.3)

The codewords must then satisfy the (n - k) parity-check equations

c·H^T = m·G·H^T = 0   (1.4)
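The encoding and parity-check relations above are easy to verify numerically. The Python sketch below is only an illustration: the parity submatrix P is a common choice for a (7,4) Hamming code and is not necessarily the generator matrix used in the report's example.

import numpy as np

# Systematic generator matrix G = [I_k : P] for a (7,4) code.
# This particular P is an assumption (a standard Hamming(7,4) choice).
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
k, r = P.shape                                   # k message bits, r = n - k parity bits
G = np.hstack([np.eye(k, dtype=int), P])         # k x n generator matrix
H = np.hstack([P.T, np.eye(r, dtype=int)])       # (n - k) x n parity-check matrix

m = np.array([1, 1, 1, 0])                       # the message used in the example
c = m @ G % 2                                    # codeword c = m.G (mod 2)
print("codeword:", c)                            # first k bits reproduce m (systematic)
print("syndrome:", c @ H.T % 2)                  # all zero: c satisfies c.H^T = 0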

Minimum distance of a block code:

Hamming weight w(c): the number of non-zero components of c. For example, a codeword containing four ones has Hamming weight 4.

Hamming distance d(c, x): the number of places in which two codewords differ. The Hamming distance between two n-tuples c and x is equal to the Hamming weight of their modulo-2 sum: d(c, x) = w(c ⊕ x).

Minimum Hamming distance d_min: the smallest distance between any pair of distinct code vectors in the code. For a given block code C it is defined as

d_min = min{ d(c, x) : c, x ∈ C, c ≠ x }   (1.5)

Since the Hamming distance between two code vectors of C is equal to the Hamming weight of a third code vector of C,

d_min = min{ w(c ⊕ x) : c, x ∈ C, c ≠ x } = min{ w(y) : y ∈ C, y ≠ 0 } = w_min   (1.6)
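These definitions translate directly into a few lines of Python (the code vectors below are arbitrary examples chosen for illustration; they are not the vectors of the original text):

from itertools import combinations

def hamming_weight(c):
    """Number of non-zero components of a codeword."""
    return sum(1 for bit in c if bit != 0)

def hamming_distance(c, x):
    """Number of positions in which two codewords differ; equals w(c XOR x)."""
    return sum(1 for a, b in zip(c, x) if a != b)

# A small linear code: the all-zero word plus three weight-4 codewords.
code = [(0, 0, 0, 0, 0, 0, 0),
        (1, 1, 0, 1, 0, 0, 1),
        (0, 1, 1, 1, 1, 0, 0),
        (1, 0, 1, 0, 1, 0, 1)]

d_min = min(hamming_distance(c, x) for c, x in combinations(code, 2))
w_min = min(hamming_weight(c) for c in code if any(c))
print("d_min =", d_min, " w_min =", w_min)       # equal, as equation (1.6) states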

Convolutional codes

Convolutional codes, invented in 1954 by Peter Elias [10], are forward error correcting codes whose decoding simplicity and good performance make them widely used in 2G cellular systems, data modems, satellite communications and many other applications. Convolutional codes are used to transmit digital data reliably over unreliable communication channels, in particular the Gaussian channel. Unlike a block code, which acts on the message in finite-length blocks, a convolutional code acts like a finite-state machine, taking in a continuous stream of message bits and producing a continuous stream of output bits. The convolutional encoder has a finite memory of the past inputs, which is held in the encoder state. The output depends on the value of this state, as well as on the present message bits at the input, but is completely unaffected by any subsequent message bits. Thus, the encoder can begin encoding and transmission before it has the entire message; this differs from block codes, where the encoder must wait for the entire message before encoding. When discussing convolutional codes it is convenient to use time to mark the progression of input bits through the encoder. For example, we say that the input bit at time t-1 influences the output bit at time t, but the output bit at time t does not depend on the input bits after time t [11]. A convolutional code is specified by the three parameters (n, k, m), illustrated by Fig. 1.3.

Fig. 1.3 Terminology of convolutional codes [10].

A convolutional encoder with rate R = k/n is constructed from k input bits, n output bits and m memory units. The memory outputs and input data are combined in the required way by exclusive-OR (XOR) operators, which generate the output bits. Fig. 1.4 shows the structure of the convolutional encoder with (n = 2, k = 1, m = 2). In a convolutional encoder, one bit entering the encoder affects the output for m + 1 time slots, which is the constraint length of the code.

Convolutional code representations:

a) Polynomial representation

Since XOR is a linear operation, the convolutional encoder is a linear feed-forward circuit. Based on this property, the encoder outputs can be obtained by convolution of the input bits with n impulse responses, generally called generator sequences, whose lengths equal the constraint length of the code. Fig. 1.4 shows a block diagram of a binary convolutional encoder; the blocks represent shift-register elements and the circles represent modulo-2 adders. At time t the input to the encoder is one message bit u_t, and the output is the two bits v_t(1) and v_t(2); the convolutional codeword is thus v = (v_1(1), v_1(2), v_2(1), v_2(2), ...) and the message is u = (u_1, u_2, ...). The state of this encoder is S_t = (s_t,1, s_t,2), where s_t,1 is the content of the left-hand register element and s_t,2 is the content of the right-hand register element. We can see from Fig. 1.4 that [11]:

v_t(1) = u_t ⊕ s_t,1 ⊕ s_t,2
v_t(2) = u_t ⊕ s_t,2   (1.8)

Fig. 1.4 A rate-1/2 non-systematic non-recursive binary convolutional encoder; u_t represents an input bit and v_t(1) and v_t(2) represent the output bits [11].

For the encoder illustrated in Fig. 1.4 the generator matrix is of the form [10]:

G(D) = [ g^(1)(D)   g^(2)(D) ]

The generator sequences determine the connections between the encoder memories, its input and its outputs. Examining the diagram of Fig. 1.4, it is easy to verify that [1]:

g_0^(1) = g_1^(1) = g_2^(1) = 1 ;   g_0^(2) = 1, g_1^(2) = 0, g_2^(2) = 1

and thus, using D-transform notation, the generator polynomials have the respective expressions [10]:

g^(1)(D) = 1 + D + D^2 ;   g^(2)(D) = 1 + D^2   (1.9)

A binary (and octal) representation can be associated with the generator polynomials [1]:

g^(1) = (1,1,1) = 7 ;   g^(2) = (1,0,1) = 5

The encoder output can then be expressed as [10]:

v^(1)(D) = u(D)·g^(1)(D) ;   v^(2)(D) = u(D)·g^(2)(D)   (1.10)

where u(D) is the D-transform of the input sequence.
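A minimal software model of the (2,1,2) encoder of Fig. 1.4 follows. This is a Python sketch written for this discussion, not code from the report; the two appended zeros implement the trellis termination described later in this chapter. The masks 0b111 and 0b101 are the octal generators 7 and 5 of equation (1.9).

def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 feed-forward convolutional encoder with constraint length 3
    (generators 7 and 5 octal), terminated with m = 2 tail zeros."""
    state = 0                              # two memory bits (u_{t-1}, u_{t-2})
    out = []
    for u in bits + [0, 0]:                # append the tail bits
        reg = (u << 2) | state             # (u_t, u_{t-1}, u_{t-2})
        v1 = bin(reg & g1).count("1") % 2  # XOR of the taps selected by g1
        v2 = bin(reg & g2).count("1") % 2  # XOR of the taps selected by g2
        out += [v1, v2]
        state = reg >> 1                   # shift the register
    return out

print(conv_encode([1, 0, 1]))              # -> [1, 1, 1, 0, 0, 0, 1, 0, 1, 1]

Feeding it the three-bit message m = (101), which is also used in the decoding examples below, produces ten coded bits, two per trellis step.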

b) State diagram representation

The convolutional encoder can also be represented using a finite-state machine. A state diagram depicts the entire behaviour of a convolutional encoder. The number of states in the state diagram depends on the number of memory elements in the encoder: if the number of memory elements is m, then the number of states is 2^m. For the (2,1,2) convolutional encoder the number of states is therefore 4. In the state diagram, two branches leave each state and enter a new state, representing the state transitions of the memories and the encoder outputs for each combination of input data [12]. Fig. 1.5 shows the state diagram of the (2,1,2) convolutional encoder illustrated in Fig. 1.4.

Fig. 1.5 State diagram and state transition table for the (2,1,2) convolutional encoder [13].

c) Trellis representation

The trellis diagram is a structure derived from the state diagram that allows us to develop an efficient way to decode convolutional codes. The state-machine view shows what happens at each instant when the sender has a message bit to process, but does not show how the system evolves in time; the trellis is a structure that makes the time evolution explicit [14]. An example is shown in Fig. 1.6. Each column of the trellis contains the set of states, and each state in a column has two possible paths to the next stage: one path corresponds to an input bit of 0 and the other to an input bit of 1. The path for input bit 0 is marked with a red line and the path for input bit 1 with a blue line.

Fig. 1.6 Trellis diagram, one stage [13].

Fig. 1.7 Trellis diagram with trellis termination for the input data length L = 3 [13].

Tail-biting convolutional code

Trellis termination is the process of appending to the transmitted data a number of zeros equal to m, the number of memory elements, in order to force the encoder back to the zero state; the decoder then knows in which state the encoder ends. (In the strict sense, tail-biting instead initializes the memory elements with the last m input bits of the input sequence, so that the encoder starts and ends in the same state without transmitting extra bits.) The tail bits must obviously be discarded once the decoding process has been completed, as they are not part of the original data. Considering an (n, k, m) encoder, m zeros are padded onto each of the k input sequences of length L, producing n(L + m) output bits. The effective rate, i.e. the average number of input bits carried by an output bit, is then given by [15]

R_eff = kL / [n(L + m)] = R · L/(L + m)   (1.11)

where m/(L + m) is the fractional rate loss. The disadvantage is that extra bits need to be transmitted at the cost of extra transmission time; as a consequence, a larger Eb/N0 is required for a given probability of error, and the receiver complexity is slightly increased. The advantage of termination is that it is easy to implement and does not affect the error-correction capability of the convolutional code [15].

Convolutional decoding: the Viterbi algorithm

Several algorithms have been developed for decoding convolutional codes; the one most commonly used is the Viterbi algorithm, introduced in 1967 by Andrew Viterbi. This algorithm essentially performs maximum-likelihood decoding, estimating and searching for the most likely survivor path in the trellis according to the received sequence, so that errors introduced during transmission can be corrected [16]. Fig. 1.8 shows the basic units of a Viterbi decoder: it consists of the branch metric unit (BMU), the add-compare-select unit (ACS) and the survivor memory management unit (SMU).

Fig. 1.8 Block diagram of a Viterbi decoder [17].

The Viterbi algorithm can be broken down into the following three steps [18]:

1. In the BMU, the received data symbols are compared to the ideal outputs of the encoder at the transmitter, and the branch metric is calculated using the Hamming distance or the Euclidean distance.

2. Recursively compute the shortest paths up to time n in terms of the shortest paths up to time n-1. In this step, decisions are used to recursively update the survivor path of the signal; this is known as the add-compare-select (ACS) recursion.

3. Recursively find the shortest path leading to each trellis state using the decisions from step 2. The shortest path is called the survivor path for that state, and the process is referred to as survivor path decoding. Finally, if all survivor paths are traced back in time, they merge into a unique path, which is the most likely signal path.

In Viterbi decoding there are two ways to calculate the distance used to choose the most likely path:

a) Hard-decision Viterbi decoding

In hard-decision decoding, the path through the trellis is determined using the Hamming distance measure: the optimal path is the one with the minimum Hamming distance. Furthermore, hard-decision decoding applies one-bit quantization to the received bits [18]. Suppose that the codeword U = (11 10 00 10 11) of the encoded message m = (101) is sent over a noisy channel and that a sequence Z containing three errors is received. The hard-decision Viterbi algorithm is illustrated in Fig. 1.9. The very last step is the backward reconstruction of the path, drawn in green; this path corresponds to the codeword Û = (11 10 11 00 00), and the maximum-likelihood decoded data are m̂ = (100), which contains one bit error, occurring at the third time unit.
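The three steps above can be condensed into a compact hard-decision Viterbi decoder. The sketch below is illustrative Python, not the report's MATLAB implementation; the single injected error is an arbitrary choice and does not reproduce the exact error pattern of Fig. 1.9. It decodes the (2,1,2) code with generators 7 and 5, assuming the trellis was terminated with two tail zeros as in the encoder sketch given earlier.

def viterbi_decode(received, g1=0b111, g2=0b101, m=2):
    """Hard-decision Viterbi decoder for the terminated rate-1/2 (7,5) code.
    `received` is a flat list of hard bits, two per trellis step."""
    n_steps = len(received) // 2
    INF = float("inf")
    path_metric = [0] + [INF] * (2 ** m - 1)    # start in the all-zero state
    history = []                                # per step: {state: (previous state, input bit)}
    for t in range(n_steps):
        r = received[2 * t: 2 * t + 2]
        new_metric = [INF] * (2 ** m)
        step = {}
        for state in range(2 ** m):
            if path_metric[state] == INF:
                continue
            for u in (0, 1):                    # branch metric + add-compare-select
                reg = (u << m) | state
                v = [bin(reg & g1).count("1") % 2, bin(reg & g2).count("1") % 2]
                nxt = reg >> 1
                metric = path_metric[state] + sum(a != b for a, b in zip(v, r))
                if metric < new_metric[nxt]:
                    new_metric[nxt] = metric
                    step[nxt] = (state, u)
        path_metric, history = new_metric, history + [step]
    state, bits = 0, []                         # trace back from the all-zero state
    for step in reversed(history):
        state, u = step[state]
        bits.append(u)
    return list(reversed(bits))[: n_steps - m]  # drop the decoded tail bits

codeword = [1, 1, 1, 0, 0, 0, 1, 0, 1, 1]       # conv_encode([1, 0, 1]) from the earlier sketch
codeword[1] ^= 1                                # flip one bit to simulate a channel error
print(viterbi_decode(codeword))                 # -> [1, 0, 1]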

Fig. 1.9 Trellis diagram for the hard-decision decoding of the convolutional code (2,1,2) with trellis termination for the input data length L = 3 [13].

b) Soft-decision Viterbi decoding

In contrast to hard-decision decoding, soft-decision decoding uses multi-bit quantization of the received bits and the Euclidean distance as the distance measure instead of the Hamming distance. The advantage of soft-decision decoding is that it provides the decoder with more information, which the decoder then uses to recover the message sequence; it therefore gives better error performance than hard-decision Viterbi decoding [18]. For a code of rate 1/n, the branch metric values of the trellis diagram at time t can be computed as follows:

M_t(j, i) = Σ_{l=1..n} z_t,l · v_t,l(j, i)   (1.12)

where z_t,l and v_t,l(j, i) are, respectively, the l-th received value and the l-th output bit of the transition from state j to state i. As an example, for the previously presented code (2,1,2) with m = (101) and the codeword U given above, the soft information obtained from the channel is the real-valued sequence Z shown in Fig. 1.10.

The soft-decision Viterbi algorithm is illustrated in Fig. 1.10.

Fig. 1.10 Trellis diagram for the soft-decision decoding of the convolutional code (2,1,2) [13].

Decoding has been accomplished and the decoded data are highlighted in green; the input bitstream of the encoder, m̂ = (101), has been correctly recovered. In comparison with the hard-decision approach, this approach gives better error-correction performance, and hence in most applications the Viterbi decoder is implemented as a soft-decision algorithm.

1.6 Application

Applications that cannot tolerate latency, such as telephone conversations, do not use automatic repeat request (ARQ), since retransmitted data would arrive too late; they are therefore pushed towards forward error correction (FEC). Another case is where the transmitter immediately forgets the information as soon as it is sent, as with television or satellite cameras: the original data are no longer available. Applications that use ARQ must have a return channel, for example digital money transfers [2].

1.7 A need for better code

Developing a channel code which can guarantee that the transmitted bits are received without errors, after detection and correction, has always been a goal of study; working on energy efficiency and bandwidth efficiency together, without sacrificing one to achieve the other, has also been part of that goal. Codes with lower rate (i.e. more redundancy) can usually correct more errors. Since more errors can be corrected, the communication system can operate with a lower transmit power, transmit over longer distances, tolerate more interference, use smaller antennas and transmit at a higher data rate. These properties make the code energy efficient. On the other hand, low-rate codes have a large overhead and hence consume more bandwidth. Also, decoding complexity grows exponentially with code length, and long (low-rate) codes place high computational requirements on conventional decoders. According to Viterbi, this is the central problem of channel coding: encoding is easy but decoding is not [19].

For every combination of bandwidth (W), channel type, signal power (S) and received noise power (N), there is a channel capacity, the theoretical upper limit on the data transmission rate R for which error-free data transmission is possible. For additive white Gaussian noise channels, the formula is:

R < W log2(1 + S/N)  [bits/s]   (1.13)

One way of making the task of the decoder easier is to use a code with mostly high-weight codewords. High-weight codewords, i.e. codewords containing more ones and fewer zeros, can be distinguished more easily and allow for easier decoding [19]. Another strategy involves combining simple codes in a parallel fashion, so that each part of the code can be decoded separately with less complex decoders, and each decoder can gain from the information exchange with the others; this is called the divide-and-conquer strategy. These ideas lead us to a new concept, the turbo code, which marked a new dawn for channel coding techniques by achieving the following [7]:
- performance close to the Shannon limit;
- a mix between convolutional and block codes;
- the best performance among FEC codes;

with four key elements [6]:
- concatenated encoders;
- recursive convolutional encoders;
- pseudo-random interleaving;
- iterative decoding.

1.8 Conclusion

In this chapter, we have discussed coding theory at the level of the source and particularly of the channel. The goal of finding explicit codes that reach the limits predicted by Shannon's original work has been achieved. The constructions require techniques from a surprisingly wide range of pure mathematics: linear algebra, the theory of fields and algebraic geometry all play a vital role. Not only has coding theory helped to solve problems of vital importance in the world outside mathematics, it has also enriched other branches of mathematics with new problems as well as new solutions. Channel coding has always been a trade-off for designers: using a code with good error-control performance requires the scheme to be selected based on the characteristics of the communication channel. Engineers have been trying to find the appropriate error correction code that could help mitigate the effects of channel noise, such as additive Gaussian noise and Rayleigh fading, and increase the system's performance against the signal distortion that occurs when data are received. Beyond error correction, turbo codes, or the so-called turbo principle, are doing this task, contributing to the revolution in coding theory and sparking many other ideas.


Chapter 2: Turbo Coding

2.1. Introduction

In 1948, Claude Shannon showed that with the right error-correction codes, data could be transmitted at speeds up to the channel capacity, hypothetically free of errors, and with surprisingly low transmit power. But which code could do it? The intervening years saw many channel codes inch towards the Shannon limit, but all required large block lengths, and their consequent complexity, cost and signal latency made them impractical within about 3 to 5 dB of the limit. In communication protocols, FEC is used in the form of block codes, convolutional codes or turbo codes. Such coding approaches the Shannon limit, as illustrated by Fig. 2.1: applying redundancy to the bit stream lowers the SNR per bit required to reach a given BER.

Fig. 2.1 Principle illustration of coding gain in relation to BER at a given SNR per bit.

In 1993, a group of French researchers, Berrou, Glavieux and Thitimajshima, introduced one of the most powerful error-control codes, called turbo codes, also termed parallel concatenated codes, which provide small decoding complexity for very long codewords and perform within an astonishing 0.5 dB of the Shannon limit for a bit-error rate of one in 10^5. Instead of a single encoder at the transmitter and a single decoder at the receiver, turbo codes use two encoders at one end and two decoders at the other end to bypass the complexity problem.

2.2. Principles of Turbo Codes

Turbo codes are basically constructed from two parallel recursive systematic convolutional (RSC), i.e. infinite impulse response (IIR), encoders separated by an interleaver; the recursive encoders make convolutional codes with short constraint length appear as block codes with a large block length. Due to the feedback connection from the output to the input of the RSC encoder, it is possible to find input bit streams that automatically return the RSC encoder to the zero state; such streams generate codewords with low weight for the turbo code.

An effective solution to reduce this drawback is the application of good interleavers: the interleaver is designed in such a way as to prevent the generation of bad bit streams at the second RSC code. Generally, turbo-encoded data are decoded by iterative decoding techniques; the concept behind turbo decoding is to pass soft decisions from the output of one decoder to the input of the other decoder, and to iterate this process several times so as to produce more reliable decisions. The basic principle of the turbo coding concept is illustrated by Fig. 2.2.

Fig. 2.2 The turbo coding/decoding principle [21].

2.3 The Turbo Encoder

The general structure used in turbo encoders is shown in Fig. 2.2. Two component codes are used to code the same input bits, but an interleaver is placed between the encoders; generally, RSC encoders are used as the component codes. The turbo encoding process starts with three copies of the data block to be transmitted. The first copy goes into one of the encoders, where a convolutional code takes the data bits and computes parity bits from them. The second copy goes to the second encoder, which contains an identical convolutional code. This second encoder receives not the original string of bits but a string with the bits in another order, scrambled by a system called an interleaver; it reads these scrambled data bits and computes parity bits from them. Finally, the transmitter takes the third copy of the original data and sends it, along with the two independent strings of parity bits, over the channel. The parity bits then serve to strengthen every bit of the codeword against bit errors. The concept is illustrated by Fig. 2.3.

Fig. 2.3 Block diagram of a turbo encoder [11],

where the three output sequences are multiplexed together to form the output codeword, and u = (u_1, u_2, ..., u_K) is a block of K input symbols, resulting in an overall rate R = 1/3 linear, systematic block code.

Recursive systematic convolutional encoders:

In Chapter 1 we discussed the non-systematic, non-recursive rate-1/2 convolutional encoder with generator matrix

G(D) = [ 1 + D + D^2    1 + D^2 ]

If we multiply G by 1/(1 + D + D^2) we obtain the generator matrix of a systematic recursive convolutional encoder, i.e. a systematic convolutional encoder with feedback:

G'(D) = [ 1    (1 + D^2)/(1 + D + D^2) ]

The rate-1/2 systematic recursive convolutional encoder is shown in Fig. 2.4.

Fig. 2.4 A rate-1/2, [7,5]8 recursive systematic convolutional encoder [11].

For systematic codes, the information sequence is part of the codeword, which corresponds to the direct connection from the input to one of the outputs.

For each input bit u_t the encoder generates two codeword bits: the systematic bit v_t(1) = u_t and the parity bit v_t(2). Representations of the recursive systematic convolutional encoder are shown in Table 2.1, Fig. 2.5 and Fig. 2.6.

Table 2.1 The state transitions for the convolutional encoder of Fig. 2.4 [11].

Fig. 2.5 (a) The state transition diagram and (b) a segment of the trellis diagram for the rate-1/2 systematic recursive binary convolutional encoder of Fig. 2.4 [11].

Recursive convolutional encoders vs. non-recursive encoders:

Fig. 2.6 (a) Non-recursive R = 1/2, K = 2 convolutional encoder; (b) recursive R = 1/2, K = 2 convolutional encoder.
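A small Python model of the recursive systematic encoder helps to see why the recursion matters. This sketch assumes the usual [7,5] convention for the encoder of Fig. 2.4 (feedback polynomial 1 + D + D^2, parity polynomial 1 + D^2); it is an illustration, not code from the report.

def rsc_encode(bits):
    """Rate-1/2 recursive systematic convolutional (RSC) encoder, generators
    [7,5] octal: feedback 1 + D + D^2, parity 1 + D^2 (assumed convention for
    Fig. 2.4). Returns (systematic, parity); the trellis is left unterminated."""
    s1 = s2 = 0                        # register contents a_{t-1}, a_{t-2}
    systematic, parity = [], []
    for u in bits:
        a = u ^ s1 ^ s2                # feedback bit: a_t = u_t + a_{t-1} + a_{t-2}
        p = a ^ s2                     # parity bit:   p_t = a_t + a_{t-2}
        systematic.append(u)           # the systematic output is the input itself
        parity.append(p)
        s1, s2 = a, s1                 # shift the register
    return systematic, parity

# A single 1 at the input excites the feedback loop: the parity stream keeps
# producing 1s long after the input has returned to zero, so the RSC tends to
# give higher-weight codewords than its non-recursive counterpart.
print(rsc_encode([1, 0, 0, 0, 0, 0, 0, 0]))
# -> ([1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 0, 1, 1, 0, 1])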

Table 2.2 shows the output sequences corresponding to the same input sequence applied to the two encoders [22].

Table 2.2 Input and output sequences for convolutional encoders (columns: encoder type, input, output 1, output 2, codeword weight; one row for the non-RSC encoder and one for the RSC encoder).

Thus, a recursive convolutional encoder tends to produce higher-weight codewords than a non-recursive encoder, resulting in better error performance. Hence, the main purpose of implementing RSC encoders as the component encoders of turbo codes is to exploit the recursive nature of the encoders, and not the fact that the encoders are systematic [22].

Tail Bits:

As discussed in Chapter 1, to simplify and improve the decoding process, the input dataword to the turbo encoder is tailored so that the codeword from the first component encoder finishes at the all-zero state. The bits appended to the dataword are referred to as tail bits.

Fig. 2.7 A rate-1/2 systematic recursive binary convolutional encoder with a circuit to generate tail bits [11].

Fig. 2.7 shows the encoder circuit of Fig. 2.4 modified to add tail bits. The input to the encoder is switched between point A, where the message is encoded, and point B, where the padding bits are added. By switching the input to point B, the feedback bits are cancelled and zeros are fed into the shift registers [11]. Now assume a five-bit dataword, 01101, arrives at the first component encoder. The trellis path described by the codeword created from this input is highlighted in red in Fig. 2.8, finishing at state 10; in order to return to the all-zero state, the addition of the two bits shown by the bold green lines is required.

Fig. 2.8 Example of techniques for adding tail bits [23].

The Interleaver:

The interleaving performed on the information sequence before it is fed to the second constituent encoder is a re-ordering, or permutation, of the information symbols, making the two decoders' decisions uncorrelated. "The role of the permutation is to introduce some random behavior in the code," says Berrou. In other words, the interleaver provides randomness to the transmitted information, which proves effective in increasing the free distance of the concatenated code, in reducing the effect of impulse noise, and in dealing with bursty channels, i.e. channels where errors occur in many consecutive bits rather than independently of each other. Mathematically, the relation between the input sequence x[n] and the interleaved sequence y[n] is defined by a permutation law π(n) as follows [12]:

y[n] = x[π(n)]   (2.1)

Why interleaving:

Interleaver design represents an important issue in a turbo-coded system. The interleaver must perform in such a way that if the output of one encoder produces a low-weight codeword, which can result in poor error performance, the second encoder's output produces a codeword of good weight. High-weight codewords are desirable because they are more distinct, and thus the decoder will have an easier time distinguishing among them [24]. If an input sequence x produces a low-weight output from coder 1, then the interleaved version of x needs to produce a codeword of good weight from coder 2.

This is crucial because it lowers the number of codewords with small Hamming weight and allows the bit-error curve to drop at a faster rate as the SNR increases [25]. The interleaver design is a key factor which determines the good performance of a turbo code. Table 2.3 gives an example of three different permutations of a five-bit, weight-2 dataword and the resulting difference in codeword weights for the recursive systematic convolutional encoder [7,5]8 (in octal).

Table 2.3 Dataword permutations versus codeword weights (columns: dataword, codeword, codeword weight).

Some interleaver types used in turbo codes are discussed below in detail.

a) Row-column interleaver

The simplest interleaver is a memory in which data are written row-wise and read column-wise. This is called a row-column interleaver and belongs to the class of block interleavers [26]. While very simple, it also provides some randomness. For example, data are written as shown in Table 2.4.

Table 2.4 Writing data row-wise in memory [27].

The interleaving process then consists in reading the data as shown in Table 2.5.

Table 2.5 Reading data column-wise from memory [27].
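A row-column interleaver takes only a couple of lines of code. The sketch below (illustrative Python; the 3 x 4 block size and the integer data are arbitrary) writes row-wise and reads column-wise, so that a burst of consecutive channel errors is spread apart again after de-interleaving:

def row_column_interleave(bits, n_rows, n_cols):
    """Write the sequence row-wise into an n_rows x n_cols array,
    then read it out column-wise (cf. Tables 2.4 and 2.5)."""
    assert len(bits) == n_rows * n_cols
    return [bits[r * n_cols + c] for c in range(n_cols) for r in range(n_rows)]

def row_column_deinterleave(bits, n_rows, n_cols):
    """Inverse operation: write column-wise, read row-wise."""
    assert len(bits) == n_rows * n_cols
    return [bits[c * n_rows + r] for r in range(n_rows) for c in range(n_cols)]

data = list(range(12))                            # 12 symbols in a 3 x 4 block
shuffled = row_column_interleave(data, 3, 4)
print(shuffled)                                   # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
print(row_column_deinterleave(shuffled, 3, 4) == data)   # True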

b) Helical interleaver

A helical interleaver writes data row-wise, as in Table 2.4, but reads the data diagonally [2], as shown in Table 2.6.

Table 2.6 Reading data diagonal-wise from memory [27].

c) Odd-even interleaver

First, the bits are left uninterleaved and encoded, but only the odd-positioned coded bits are stored. Then the bits are scrambled and encoded, but now only the even-positioned coded bits are stored. Odd-even encoders can be used when the second encoder produces one output bit per input bit [22]. The process is illustrated by Table 2.7.

Table 2.7 Information bits and multiplexed coded bits for an odd-even interleaver [27].

d) Random (pseudo-random) interleaver

The random interleaver rearranges the elements of its input vector using a random permutation. The incoming data are rearranged using a series of generated permuter indices; a permuter is essentially a device that generates a pseudo-random permutation of given memory addresses. The data are arranged according to the pseudo-random order of memory addresses, as shown in Fig. 2.9 [28]. At the de-interleaver, the reverse permutation pattern is used to de-interleave the data. It has been shown that this type of interleaver has better performance for high-weight input bit streams, typically greater than a weight-2 distribution.

Fig. 2.9 Example of a random interleaver [29].

e) Semi-random interleaver

The semi-random interleaver is another type of scrambler, introduced to overcome the drawback of the pseudo-random interleaver. The main idea is that any two input bit positions within distance S of each other cannot be permuted to two bit positions whose distance is less than S. Fig. 2.10 shows the process for a semi-random interleaver with length L = 9 and threshold value S = 3. The best turbo code performance is achieved when the threshold S is chosen close to sqrt(L/2) [12].

Fig. 2.10 Permutation of data with a semi-random interleaver of length L = 9 and S = 3 [12].

f) Polynomial interleaver (LTE interleaver)

The polynomial interleaver uses a polynomial to rearrange the input sequence bit order, as can be seen from the equation below [30]:

π(i) = (f1·i + f2·i^2) mod K   (2.2)

where:
- i is the index of the input sequence;
- π(i) is the index of the interleaved sequence;
- K is the length of the input sequence;
- f1 and f2 are values associated with the length of the sequence.

Values for f1 and f2 are defined differently for each frame length. In LTE, 188 frame lengths, with contents from 40 to 6144 bits, have been defined.

The Puncturing Mechanism:

The puncturing mechanism is a technique applied to the outputs of the two RSC encoders to obtain a higher coding rate, as shown in Fig. 2.11. Without puncturing, the length of the turbo codeword, using rate-1/2 component codes, would be three times that of the dataword (one systematic and two parity bits per data bit); by applying the puncturing scheme, however, the code rate can be increased to 1/2 without increasing the complexity of the decoder. In the puncturing algorithm of a normal turbo code, the deleted bits are usually located periodically.

Fig. 2.11 Block diagram of a turbo encoder with puncturing [21].

Worked example:

In this example we consider the component encoders used to define the rate-1/2 turbo encoder presented in the original paper of Berrou, Glavieux and Thitimajshima. The encoders both use the rate-1/2 systematic recursive convolutional code with generator matrix

G(D) = [ 1    (1 + D^4)/(1 + D + D^2 + D^3 + D^4) ]

i.e. the generators (37, 21) in octal.

Fig. 2.12 The rate-1/2 turbo encoder presented in the original paper of Berrou, Glavieux and Thitimajshima.

The component codes are concatenated in parallel, as in Fig. 2.3, producing a rate-1/3 turbo code. However, puncturing is used for both encoders, with the puncturing matrix [11]

P = [1 0; 0 1]

The first column indicates which bits are output at the even output instants and the second column indicates which bits are output at the odd output instants; this puncturing matrix therefore alternately selects the outputs of the two encoding filters. The trellises are not terminated, so there are no padding bits. For this example we use a length-10 interleaver defined by the permutation [8,3,7,6,9,0,2,5,1,4]. A 10-bit message d is passed into the first encoder, which returns the parity bits p(1); an interleaved version of the message is passed into the second encoder, which returns the parity bits p(2). After puncturing, the transmitted bits are the systematic bits d together with the alternately selected parity bits of the two encoders.
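The puncturing step just described can be sketched in a few lines of Python. The bit values below are arbitrary placeholders, not the sequences of the worked example, and taking encoder 1's parity at even instants and encoder 2's at odd instants is one possible reading of the puncturing matrix:

def puncture_rate_half(systematic, parity1, parity2):
    """Multiplex a rate-1/3 turbo code down to rate 1/2: keep every systematic
    bit and alternate the parity bits of the two component encoders
    (puncturing pattern [1 0; 0 1], one possible convention)."""
    out = []
    for t, s in enumerate(systematic):
        out.append(s)
        out.append(parity1[t] if t % 2 == 0 else parity2[t])
    return out

# Arbitrary illustrative bit streams, not the values of the worked example.
d  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
p1 = [1, 1, 0, 1, 1, 0, 0, 1, 1, 0]
p2 = [0, 1, 1, 0, 0, 1, 1, 1, 0, 0]
tx = puncture_rate_half(d, p1, p2)
print(len(d), "message bits ->", len(tx), "transmitted bits (rate 1/2)")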

Since for every 10 message bits there are 20 codeword bits (10 message bits plus five retained parity bits for each encoder), the rate of this turbo code is 1/2.

2.4. The Turbo Decoder

One of the novel attributes of turbo codes is their ability to compose "large codes" that can be decoded with reasonably low complexity. As mentioned earlier, this is achieved by their iterative decoding process. The iterative decoding structure consists of two soft-input, soft-output (SISO) decoding modules separated by interleaving/de-interleaving blocks, in a structure similar to that of the encoder. These decoders exchange probabilistic information in an iterative way [31]. The term turbo code was coined because this iterative decoding process resembles the cyclic feedback mechanism of the turbocharger, which uses its exhaust to force air into the engine and boost combustion, as shown in Fig. 2.13.

Fig. 2.13 A mechanical turbo engine [31].

Turbo decoding principles:

Log-likelihood ratio:

At the receiver, the signal is demodulated with its associated noise and a soft output is provided to the decoder. The soft output might take the form of a quantized value of the decoded bit with its associated noise, or it may be a bit with an associated probability (e.g. 1 with p(1) = 0.8). Most often it is the log-likelihood ratio (LLR), also termed the log a posteriori ratio, which is defined as [32]:

L(u_k) = log[ P(u_k = +1 | R) / P(u_k = -1 | R) ]   (2.3)

where u_k is the k-th data (uncoded) bit and R is the received sequence (codeword). Based on the LLR, the decoder takes the decision:

û_k = sign[ L(u_k) ]   (2.4)

Thus the sign of L(u_k) is the hard decision on u_k and the magnitude |L(u_k)| is the reliability of this decision. The LLR in equation (2.3) can be expanded and expressed as follows [33]:

L(u_k) = 4·a·(E_c/N_0)·y_k + log[ P(u_k = +1) / P(u_k = -1) ]   (2.5)

where the quantity E_c/N_0 is the signal-to-noise ratio and a denotes the fading amplitude for a fading channel, whereas for a Gaussian channel we set a = 1. The first term in equation (2.5) represents the soft-input information provided by the demodulator to the decoder, from now on called the channel values. The second term represents the a priori information about the data bits [32]:

L_a(u_k) = log[ P(u_k = +1) / P(u_k = -1) ]   (2.6)

Structure of iterative decoding:

The block diagram of an iterative turbo decoder that matches the encoder represented in Fig. 2.2 is shown in Fig. 2.14.

Fig. 2.14 Block diagram of a turbo decoder [11].

As seen in the figure, the decoder takes as inputs the log-likelihood ratios from the channel for the systematic bits u and for the parity bits of the two component encoders. In the first iteration, by considering equiprobable data bits, P(u_k = +1) = P(u_k = -1), the a priori information is zero at the beginning of the iterative process, so the decoder input consists of the channel values only. Note that if puncturing has occurred, the received LLRs of the punctured bits will be zero.
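The channel values of equation (2.5) are simple to compute. The Python sketch below is an illustration under assumed conditions (BPSK mapping, unit fading amplitude a = 1 and an arbitrary E_s/N_0 of 1 dB); it shows how the sign of each LLR gives the hard decision and its magnitude the reliability:

import math, random

def channel_llrs(received, es_n0_db):
    """Channel values L_c * y = 4 * (E_s/N_0) * y for BPSK over AWGN with a = 1."""
    es_n0 = 10 ** (es_n0_db / 10)
    return [4 * es_n0 * y for y in received]

random.seed(1)
es_n0_db = 1.0
bits = [1, 0, 1, 1, 0]
symbols = [1.0 if b else -1.0 for b in bits]              # BPSK mapping 0 -> -1, 1 -> +1
sigma = math.sqrt(1 / (2 * 10 ** (es_n0_db / 10)))        # noise std for unit-energy symbols
received = [s + random.gauss(0.0, sigma) for s in symbols]
llrs = channel_llrs(received, es_n0_db)
hard = [1 if L > 0 else 0 for L in llrs]                  # sign of the LLR = hard decision
print("LLRs:", [round(L, 2) for L in llrs])               # magnitude = reliability
print("sent:", bits, "decided:", hard)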

also as log-likelihood ratios. The new, extrinsic, information that decoder 1 has created about the message bits in the first iteration is thus

$$L_e^{(1)} = L^{(1)} - L_c\,y^{s} \qquad (2.7)$$

where $L^{(1)}$ denotes the a posteriori LLRs produced by decoder 1 and $L_c\,y^{s}$ the channel LLRs of the systematic bits. The inputs to decoder 2 from the channel are $\Pi(L_c\,y^{s})$ and $L_c\,y^{p2}$, with the systematic values interleaved so as to be in the same order as the message bits when they were input into encoder 2. For this iteration, decoder 2 will use the extrinsic information from the first decoder as extra a priori information about the message bits. The vector of LLRs from decoder 1 is interleaved so as to be in the same order as the Code 2 message bits:

$$L_a^{(2)} = \Pi\!\left(L_e^{(1)}\right) \qquad (2.8)$$

Decoder 2 will output the a posteriori probabilities it calculates for the message bits, also as log-likelihood ratios. The new, extrinsic, information created by decoder 2 about the message bits at this iteration is thus

$$L_e^{(2)} = L^{(2)} - \Pi(L_c\,y^{s}) - L_a^{(2)} \qquad (2.9)$$

where $\Pi(L_c\,y^{s}) + L_a^{(2)}$ is the information that was already known about the message bits prior to this iteration of Code 2 decoding. In the second, and subsequent, iterations decoder 1 repeats the process but now has extra a priori information in the form of the extrinsic information created by decoder 2 in the previous iteration. Thus, in each iteration the extrinsic information from decoder 2 is de-interleaved so as to be in the same order as the Code 1 message bits:

$$L_a^{(1)} = \Pi^{-1}\!\left(L_e^{(2)}\right) \qquad (2.10)$$

The new, extrinsic, information about the message bits from decoder 1 is thus

$$L_e^{(1)} = L^{(1)} - L_c\,y^{s} - L_a^{(1)} \qquad (2.11)$$

where $L_c\,y^{s} + L_a^{(1)}$ is the information that was already known about the message bits prior to this iteration of Code 1 decoding. Note that the log-likelihood ratios from the channel remain unchanged throughout the turbo decoding; only the extrinsic information changes at each iteration. The decoder can be halted after a fixed number of iterations, and the final decision can be made on the basis of the output of either component decoder or an average of both outputs [11].
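The exchange described by equations (2.7)-(2.11) can be summarised in a short, hedged MATLAB sketch; siso1 and siso2 stand for hypothetical SISO (e.g. MAP) decoder functions and are not toolbox calls, N and numIterations are assumed parameters, and the variable names follow the notation used above.

% Hedged sketch of the iterative extrinsic-information exchange (eqs. 2.7-2.11).
% Lc_ys, Lc_yp1, Lc_yp2 : channel LLRs (systematic, parity 1, parity 2)
% pi_idx                : interleaver indices; siso1/siso2 : assumed SISO decoders
La1 = zeros(N, 1);                         % a priori LLRs, zero in the first iteration
for it = 1:numIterations
    L1  = siso1(Lc_ys, Lc_yp1, La1);       % a posteriori LLRs from decoder 1
    Le1 = L1 - Lc_ys - La1;                % extrinsic info from decoder 1 (2.7)/(2.11)
    La2 = Le1(pi_idx);                     % interleave -> a priori for decoder 2 (2.8)
    L2  = siso2(Lc_ys(pi_idx), Lc_yp2, La2);
    Le2 = L2 - Lc_ys(pi_idx) - La2;        % extrinsic info from decoder 2 (2.9)
    La1(pi_idx) = Le2;                     % de-interleave -> a priori for decoder 1 (2.10)
end
u_hat = (L1 > 0);                          % hard decision from the final a posteriori LLRs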

Decoding Algorithms

Having outlined the iterative decoding process, we now describe the decoding algorithms. SISO algorithms are necessary for turbo decoding because the decoders are required to share their extrinsic information with each other. Although SISO decoding algorithms are more computationally complex, they allow iterative sharing of results between decoders, which permits the use of powerful concatenated coding structures. The two types of algorithm most commonly used are the Maximum A Posteriori (MAP) algorithm, also commonly known as the BCJR algorithm, and the Soft-Output Viterbi Algorithm (SOVA). MAP looks for the most likely symbol received, whereas SOVA looks for the most likely sequence. Both MAP and SOVA perform similarly at high $E_b/N_0$. At low $E_b/N_0$, MAP has a distinct advantage, gained at the cost of added complexity [25].

Turbo Code Illustrated Example

Fig.2.15 Turbo coding illustrated example [27].

Turbo codes Applications

Turbo codes are used in 3G/4G mobile communications (e.g., in UMTS and LTE). They are used for picture, video and mail transmissions. Beyond error correction, turbo codes are also helping engineers solve a number of communications problems. One example is the mitigation of multipath propagation, that is, the signal distortion that occurs when multiple replicas of a signal, bounced off different surfaces, are received. Turbo codes may eventually help portable devices overcome this major limitation of mobile telephony. However, the several iterations required by turbo decoding make the delay unacceptable for real-time voice communications and other applications that require instant data processing, such as optical transmission. For systems that can tolerate decoding delays, like deep-space communications, turbo codes have become an attractive option. Table 2.8 summarizes the standardized or proprietary applications of turbo codes known to date.

Table 2.8 Applications of turbo codes [34].

Application                                 Turbo code         Termination   Polynomials      Rates
CCSDS (deep space)                          Binary, 16-state   Tail bits     23, 33, 25, 37   1/6, 1/4, 1/3, 1/2
3GPP (UMTS)                                 Binary, 8-state    Tail bits     13, 15, 17       1/4, 1/3, 1/2
3GPP2 (CDMA2000)                            Binary, 8-state    Tail bits     13, 15, 17       1/4, 1/3, 1/2
3GPP LTE (Long Term Evolution)              Binary, 8-state    Tail bits     13, 15, 17       1/4, 1/3, 1/2
DVB-SSP (Satellite Services to Portables)   Binary, 8-state    Tail bits     13, 15

2.5. Conclusion

In this chapter the fundamental principles behind turbo coding have been introduced, including the encoder structure and the principles of iterative decoding. The central components of a turbo code encoder are the Recursive Systematic Convolutional (RSC) encoders and the interleaver that links them in parallel by re-ordering the bits of the information sequence before they enter the second constituent encoder. The concept of iterative decoding depends on the use of soft-input/soft-output decoders which calculate a posteriori probabilities (APPs). An optimal algorithm for computing the APPs is the BCJR algorithm.

Chapter 3 Simulation and results analysis

3.1 Introduction

This chapter presents simulation results for the implementation of classical Turbo Codes on the MATLAB simulation platform. We have used different configurations, comparing coded and uncoded data, varying the number of decoding iterations and the length of the interleaver, and observing the effect of the generator polynomials and the constraint length, using two modulation schemes (BPSK and 16-QAM) over an Additive White Gaussian Noise (AWGN) channel, in order to examine the effect of modulation on Turbo Code performance. At the end, we study the UMTS and LTE Turbo Codes. Simulations have been conducted with the code rate fixed at 1/3, the Log-MAP decoding algorithm and a pseudo-random interleaver.

3.2 Simulation Setup

This simulation is based on the classical parallel concatenated convolutional code system, in which two constituent encoders are used and the overall code rate is 1/3. First, we generate random bits using the randi function. The interleaving pattern is generated using the randperm function, and the information bits are encoded using the comm.TurboEncoder object. The output is then modulated using comm.BPSKModulator or comm.RectangularQAMModulator, passed through the channel using comm.AWGNChannel, and the noisy signal is demodulated using comm.BPSKDemodulator or comm.RectangularQAMDemodulator. The demodulated soft values are decoded using comm.TurboDecoder. To analyse the bit error probability we call the comm.ErrorRate object. A minimal sketch of this processing chain is given after Fig.3.1 below.

3.3 Parameters of simulation:

Bit Error Rate (BER)

The Bit Error Rate is the number of received bits of a data stream that have been altered by the channel noise, divided by the total number of transferred bits during the studied time interval; it is a unitless performance measure:

$$\text{BER} = \frac{\text{Number of errors}}{\text{Total number of bits sent}}$$

The theoretical BER of uncoded BPSK modulation is:

$$P_b = \frac{1}{2}\,\operatorname{erfc}\!\left(\sqrt{\frac{E_b}{N_0}}\right)$$

The BER of 16-QAM modulation is given approximately (for Gray mapping) by:

$$P_b \approx \frac{3}{8}\,\operatorname{erfc}\!\left(\sqrt{\frac{2}{5}\,\frac{E_b}{N_0}}\right)$$

Energy per bit to noise power spectral density ratio (Eb/N0)

Eb/N0 is a normalized signal-to-noise ratio (SNR) measure, also known as the "SNR per bit". The energy per bit Eb equals the signal power divided by the user bit rate, and Eb/N0 is this energy divided by the noise power spectral density. Eb/N0 is dimensionless and is frequently expressed in decibels.

3.4 Turbo Code error performance analysis using BPSK modulation:

For the purposes of this simulation, BPSK modulation over Additive White Gaussian Noise has been applied to analyse the BER and study the performance of the Turbo Code as a function of frame length, number of decoding iterations and generator polynomial. As Turbo Code decoding is computationally very intensive, most of the simulated performance results are for small frame lengths.

BER performance for frame length L = 500

The Turbo Code program was simulated for a frame length L = 500 over the AWGN channel. To keep the simulation fast, the number of frames for each SNR value was taken as 500; thus, for a frame length of 500, a total of 250 000 bits were sent at each SNR value to estimate the BER. The SNR range used was from -8 to 6 dB and the number of decoder iterations was chosen to be 7. The BER for the successive iterations is shown in Fig.3.1.

Fig.3.1 BER as a function of Eb/N0 and multiple iterations for frame length 500 using BPSK modulation
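To make the processing chain of Section 3.2 concrete, here is a hedged MATLAB sketch of the BPSK/AWGN simulation loop. The parameter values (frame length, iteration count, LLR scaling and the sign convention between the demodulator and the turbo decoder) are assumptions for illustration and should be checked against the toolbox version used; this is a sketch of the approach, not the exact program used for the reported curves.

% Hedged sketch: rate-1/3 turbo code, BPSK over AWGN, iterative decoding.
frameLen  = 500;                                  % information bits per frame (assumed)
numFrames = 500;                                  % frames per Eb/N0 point (assumed)
trellis   = poly2trellis(4, [13 15], 13);         % 8-state RSC constituent code
intrlvr   = randperm(frameLen).';                 % pseudo-random interleaver indices

turboEnc = comm.TurboEncoder('TrellisStructure', trellis, 'InterleaverIndices', intrlvr);
turboDec = comm.TurboDecoder('TrellisStructure', trellis, 'InterleaverIndices', intrlvr, ...
                             'NumIterations', 7);
bpskMod  = comm.BPSKModulator;

EbNo = -8:2:6;                                    % dB range (assumed step)
ber  = zeros(size(EbNo));
for k = 1:numel(EbNo)
    % Noise variance per coded BPSK symbol, accounting for the 1/3 code rate (approximate).
    noiseVar = 1 / (10^(EbNo(k)/10) * 1/3);
    chan     = comm.AWGNChannel('NoiseMethod', 'Variance', 'Variance', noiseVar);
    bpskDem  = comm.BPSKDemodulator('DecisionMethod', 'Log-likelihood ratio', ...
                                    'Variance', noiseVar);
    errRate  = comm.ErrorRate;
    for frame = 1:numFrames
        data     = randi([0 1], frameLen, 1);
        coded    = turboEnc(data);
        rxSig    = chan(bpskMod(coded));
        llr      = bpskDem(rxSig);
        decoded  = turboDec(-llr);                % sign flip: assumed opposite LLR conventions
        errStats = errRate(data, decoded);
    end
    ber(k) = errStats(1);
end
semilogy(EbNo, ber, '-o'); grid on; xlabel('E_b/N_0 (dB)'); ylabel('BER');

For the 16-QAM runs, the modulator and demodulator objects are replaced as sketched later in Section 3.5; the rest of the loop stays the same.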

If we compare the uncoded and coded BER performance for different numbers of iterations, we see from the graph that with the coding scheme a desired BER is achieved at a smaller value of Eb/N0 than with uncoded data. It is quite clear that the coded data provide better performance, with a gain of about 10 dB relative to an uncoded channel at a BER of 2.9. It can be noticed from Fig.3.1 that the Turbo Code was able to achieve a BER of 7.5 after the 1st decoder iteration at Eb/N0 = 4 dB, and the BER improved to 2.9 after the 7th iteration. It can be seen that as the number of iterations increases, the BER performance improves; however, the rate of improvement decreases. This is depicted by the overlapping curves after the 4th iteration. The BER after 4 iterations is 3.9, and it does not show significant improvement beyond the 4th iteration. Thus, the number of iterations should be limited so as to avoid unnecessary computations.

BER performance for frame length L = 1500

The simulation of the Turbo Code was run with frame size L = 1500, keeping the number of frames at 500 and the number of iterations at 7. So, 750 000 bits were sent at each SNR value, and the BER curves are shown in Fig.3.2.

Fig.3.2 BER as a function of Eb/N0 and multiple iterations for frame length 1500 using BPSK modulation

Fig.3.2 shows an improvement in the BER performance compared to the frame size K = 500. The BER decreases as the number of iterations increases, a behaviour similar to the case with frame size 500. However, for the 1st iteration, L = 1500 is able to

achieve a BER of 7 at almost Eb/N0 = 1 dB, compared to K = 500, which achieves this BER at Eb/N0 = 4 dB. It can also be seen that, for the same Eb/N0, K = 1500 achieves a BER of 2 after 5 iterations while K = 500 only achieves 6. Hence, the code can achieve a lower BER as the frame size increases; this is because the longer interleaver permutes the data more thoroughly and the decoder is better able to decode the data.

In Fig.3.3, shown below, a scatter plot of the received signal is presented to visualize how much the transmitted bits are affected by the noisy channel.

Fig.3.3 Scatter plot at different values of Eb/N0 for BPSK modulation.

In the simulation we modulated random data symbols using BPSK, a modulation of order 2 (the order specifies the number of points in the signal constellation). It can be seen from Fig.3.3 that the noise at 6 dB causes the symbols to scatter around the two ideal constellation points, forming two distinguishable clouds, whereas by decreasing Eb/N0 to -1 dB and then to -4 dB the transmitted symbols become increasingly scattered into a single cloud. Hence, the results obtained in our simulations show that the BER decreases with the increase in the number of iterations even at very low values of Eb/N0, between -4 and -1 dB.

BER as a function of frame size

The performance comparison of the Turbo Code is carried out in Fig.3.4 by plotting the BER for different frame sizes at the 2nd iteration. The figure shows that by increasing the frame size, the BER performance of the code improves and a lower BER can be achieved while keeping the SNR constant.

Fig.3.4 BER as a function of Eb/N0 and different frame lengths

The BER values obtained from the simulation for Eb/N0 between -5 dB and -3 dB and for the frame sizes 500, 1500 and the largest one are given in Table 3.1.

Table 3.1 BER for frame sizes K = 500, 1500 and the largest interleaver length

Frame length    Eb/N0 (dB)    Total number of bits    Number of errors    BER

It can be seen in the figure that the largest frame size achieves a BER of 5.7 at this Eb/N0, whereas the frame sizes 1500 and 500 achieve only BERs of about 2.01 and higher, respectively, for the same Eb/N0. Hence, the interleaver with the largest length provides better performance compared with the other ones.

From the comparison carried out and the results obtained, we conclude that as the length of the interleaver increases, the errors are more thoroughly randomized and the error performance improves.

BER as a function of the generator polynomial

Both the constraint length and the generator polynomials used in the component codes of Turbo Codes are important parameters. Often the generator polynomials which lead to the largest minimum free distance are used. In this simulation, a frame length L = 1500 with 500 transmitted frames has been used. Fig.3.5 shows the difference in performance that can result from different generator polynomials for a constraint length 3 Turbo Code.

Fig.3.5 Turbo code performance for constraint length K = 3

It can be noticed from Fig.3.5 that coded data with generator polynomial G(D) = [7,5] provide better performance, with a gain of 0.5 dB relative to G(D) = [5,7] and 4 dB relative to G(D) = [6,5] at the desired BER. Hence, a Turbo Code with constraint length K = 3 and G(D) = [7,5] allows easier decoding compared with G(D) = [5,7], because one way of making the task of the decoder easier is to use a code with mostly high-weight codewords, which can be distinguished more easily.

BER as a function of the constraint length

The effect of increasing the constraint length of the component codes used in Turbo Codes is shown in Fig.3.6.
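For reproducibility, here is a hedged sketch of how the constituent trellises for these comparisons (the K = 3 polynomial variants of Fig.3.5 and the longer constraint lengths discussed next) can be constructed; the feedback connection is assumed to be the first (recursive) generator of each pair, and only the trellis passed to the encoder and decoder objects changes between runs.

% Hedged sketch: constituent trellises for the polynomial / constraint-length comparisons.
trellis75 = poly2trellis(3, [7 5], 7);   % K = 3, G(D) = [7,5]
trellis57 = poly2trellis(3, [5 7], 5);   % K = 3, G(D) = [5,7]  (the [6,5] case is set up analogously)
% Larger constraint lengths with their assumed optimum polynomials:
% poly2trellis(4, [15 17], 15) for K = 4, poly2trellis(5, [37 21], 37) for K = 5.

frameLen = 1500;
intrlvr  = randperm(frameLen).';
enc75 = comm.TurboEncoder('TrellisStructure', trellis75, 'InterleaverIndices', intrlvr);
dec75 = comm.TurboDecoder('TrellisStructure', trellis75, 'InterleaverIndices', intrlvr, ...
                          'NumIterations', 7);
% Re-running the simulation loop with this encoder/decoder pair swapped for the
% other trellises reproduces the comparisons of Fig.3.5 and Fig.3.6.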

In this simulation, we have used constraint lengths K = 3, 4 and 5 with their corresponding optimum minimum free distance generator polynomials [5,7]_8, [15,17]_8 and [37,21]_8, respectively.

Fig.3.6 Effect of the constraint length (with the corresponding optimum generator polynomials)

It can be seen from the graph that increasing the constraint length of the Turbo Code does improve its performance, with the K = 4 code performing about 0.4 dB better than the K = 3 code at a BER of 10, and the K = 5 code giving a further improvement of about 0.1 dB.

3.5 Turbo Code error performance analysis using 16-QAM modulation

Modulation is the process by which characteristics of the carrier are varied in accordance with the incoming sequence. In modern communications, the focus is on higher-order modulation schemes that offer much faster data rates and higher spectral efficiency. In this section a 16-QAM modulation scheme has been used to study the performance of the Turbo Code in Additive White Gaussian Noise and to compare with the study performed in Section 3.4.

Effect of the number of decoding iterations:

The Turbo Code program was simulated for a frame length L = 1500. The SNR range used was from -6 to 6 dB and the number of decoder iterations was chosen to be 7. The BER for the iterations is shown in Fig.3.7.

Fig.3.7 BER as a function of Eb/N0 and multiple iterations for frame length 1500 using 16-QAM modulation

It can be noticed from Fig.3.7 that the coded data provide a gain of 9 dB relative to the uncoded data at a BER of 3. It is also shown that the Turbo Code was able to achieve a BER of 7 after the 1st decoder iteration and 4 after the 5th iteration; the BER after 4 iterations is 6. A scatter plot of the received signal has been produced at Eb/N0 values of 8, 1 and -2 dB to illustrate the noise effect, as shown in Fig.3.8.

Fig.3.8 Scatter plot at different values of Eb/N0 for 16-QAM modulation

As we see in Fig.3.8, the noise at 8 dB causes the symbols to gather around the constellation points, forming 16 distinguishable clouds, while by decreasing Eb/N0 to 1 dB and then to -2 dB the errors along the transmitted symbols increase, causing them to scatter into a single cloud in which the transmitted bits can no longer be distinguished. We conclude from the obtained results that the BPSK modulation scheme offers more reliable data transmission than 16-QAM. This can be explained by the fact that, for the same energy per bit, the spacing between points in the 16-QAM constellation is smaller than in BPSK; hence it is logical to find a higher BER for 16-QAM at the same Eb/N0.

Frame size effect

The study of the frame size effect on the performance of the Turbo Code using 16-QAM modulation has been performed, and the performance comparison is shown in Fig.3.9 by plotting the BER for different frame sizes at the 2nd iteration.

Fig.3.9 BER as a function of Eb/N0 and different frame lengths using 16-QAM modulation

The BER values obtained from the simulation for Eb/N0 between -5 dB and -2 dB and for the frame sizes 500, 1500 and the largest one are given in Table 3.2.

Table 3.2 BER for frame sizes K = 500, 1500 and the largest interleaver length

Frame length    Eb/N0 (dB)    Total number of bits    Number of errors    BER

We see from Fig.3.9 that for the largest frame size a BER of 1.8 is achieved, whereas the frame sizes 1500 and 500 achieve only BERs of about 7.9 and higher, respectively, at the same Eb/N0.

BPSK and 16-QAM comparison:

This section illustrates the difference in performance between BPSK and 16-QAM modulation for a frame size of 1500 at the 6th iteration (a hedged sketch of the 16-QAM configuration used in this comparison is given below).
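For reference, here is a hedged sketch of how the 16-QAM branch of the simulation chain can be configured; the normalization and LLR settings are assumptions for illustration, and the variables coded, chan, turboDec and noiseVar come from the BPSK sketch in Section 3.4, the rest of the loop being unchanged.

% Hedged sketch: 16-QAM modulator/demodulator pair for the turbo-coded chain.
% The coded bit stream length must be a multiple of 4 (4 bits per 16-QAM symbol).
qamMod = comm.RectangularQAMModulator('ModulationOrder', 16, 'BitInput', true, ...
                                      'NormalizationMethod', 'Average power');
qamDem = comm.RectangularQAMDemodulator('ModulationOrder', 16, 'BitOutput', true, ...
                                        'NormalizationMethod', 'Average power', ...
                                        'DecisionMethod', 'Approximate log-likelihood ratio', ...
                                        'Variance', noiseVar);

txSig   = qamMod(coded);          % 4 coded bits mapped to each 16-QAM symbol
rxSig   = chan(txSig);
llr     = qamDem(rxSig);          % soft bit LLRs for the turbo decoder
decoded = turboDec(-llr);         % sign flip: assumed opposite LLR conventions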

Fig.3.10 Comparison of BPSK and 16-QAM for bit error rate versus Eb/N0.

As we notice from the graph, the BPSK curve falls faster than the 16-QAM one, which means that, compared with BPSK, 16-QAM requires a higher Eb/N0 at the receiver for a given bit error probability in order to attain the same quality of transmission (power efficiency). Besides that, we see that with BPSK modulation, at a BER of 3, the coded data provide better performance with a gain of 5.5 dB relative to an uncoded channel, whereas 16-QAM provides a gain of only 3 dB, i.e. less coding gain.

3.6 The UMTS/LTE Turbo Code:

This section gives the performance of the UMTS and LTE Turbo Codes. Simulations were run to determine the performance of these Turbo Codes using BPSK modulation over the AWGN channel, for a frame length of 560 bits and 500 transmitted frames at each Eb/N0. The UMTS and LTE Turbo Coder scheme is a Parallel Concatenated Convolutional Code. It comprises two constraint length K = 4 (8-state) RSC encoders, a generator polynomial matrix of [13 15] and a feedback connection polynomial of 13. Therefore, in order to set the trellis structure, we use the poly2trellis(4, [13 15], 13) function, as sketched below. The difference between the UMTS and LTE schemes lies in the interleaver type: the LTE interleaver has been described in Chapter 2, Section 2.3.2; for the UMTS interleaver refer to [22].
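A hedged sketch of how this trellis can be plugged into the encoder and decoder objects is given below; umtsInterleaver and lteInterleaver stand for the standard-specific permutation vectors (generated as described above for LTE and in [22] for UMTS) and are mere placeholders here.

% Hedged sketch: UMTS/LTE constituent trellis and turbo encoder/decoder objects.
% The two schemes share the trellis; only the interleaver indices differ.
frameLen = 560;                               % information bits per frame
trellis  = poly2trellis(4, [13 15], 13);      % K = 4, G = [13 15] octal, feedback 13

% Placeholder permutations: replace with the standard UMTS / LTE interleavers.
umtsInterleaver = randperm(frameLen).';
lteInterleaver  = randperm(frameLen).';

encUMTS = comm.TurboEncoder('TrellisStructure', trellis, 'InterleaverIndices', umtsInterleaver);
decUMTS = comm.TurboDecoder('TrellisStructure', trellis, 'InterleaverIndices', umtsInterleaver, ...
                            'NumIterations', 7);
encLTE  = comm.TurboEncoder('TrellisStructure', trellis, 'InterleaverIndices', lteInterleaver);
decLTE  = comm.TurboDecoder('TrellisStructure', trellis, 'InterleaverIndices', lteInterleaver, ...
                            'NumIterations', 7);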

Fig.3.11 BER as a function of Eb/N0 and multiple iterations for the UMTS and LTE Turbo Codes

As expected, by increasing the number of iterations we obtain progressively better performance. We observe that the BER performance at the 1st, 2nd and 3rd iterations is almost the same; however, the BER reaches a value of 2.9 for the 4th iteration at Eb/N0 = 4 for the UMTS code, while the LTE code attains a BER of 1.5 after 7 iterations for the same value of Eb/N0. Fig.3.12 shows a comparison between the UMTS and LTE Turbo Codes for the 1st iteration.

Fig.3.12 Comparison of the UMTS and LTE Turbo Codes over AWGN

The BER results in Fig.3.12 indicate that we get similar BER performance over the Eb/N0 range from -5 to -2 dB, whereas for Eb/N0 values between -2 and -1 dB the LTE curve takes a small gain advantage relative to the UMTS curve; for example, at a BER of 7 the LTE curve provides a gain of 0.4 dB compared to the UMTS curve. The performance of the UMTS Turbo Code is therefore almost identical to that of the LTE Turbo Code. The LTE interleaver is however less complex from the implementation point of view, and thus it may be implemented with simpler hardware.

3.7 Conclusion

In this chapter we have studied the performance of turbo coding, whose basic configuration relies on the parallel concatenation of two component (RSC) codes. We have concluded that the performance of Turbo Codes depends on several parameters, including the number of iterations, the frame size, the generator polynomial and the constraint length. The results obtained from our simulations are listed below:

1- Turbo Codes provide better error performance and a significant coding gain compared with uncoded data.
2- The BPSK modulation scheme offers more reliable data transmission than 16-QAM modulation.
