
Contents

Chapter 1: Introduction
  1.1 Objectives
  1.2 Introduction
Chapter 2: Principles of turbo coding
  2.1 The turbo encoder
    2.1.1 Recursive Systematic Convolutional Codes
    2.1.2 The interleaver
    2.1.3 Trellis termination
  2.2 Turbo Decoding
  2.3 Merits of turbo codes
Chapter 3: Design and Implementation
  3.1 The encoder
    3.1.1 Recursive systematic convolutional code
    3.1.2 The interleaver
  3.2 The decoder
  3.3 Testing
Chapter 4: Results
Chapter 5: Conclusion and Recommendations
  5.1 Conclusion
  5.2 Recommendations
Bibliography

Chapter 1: Introduction

1.1 Objectives

The aim of this project is to give a detailed description of the principles of turbo coding, to investigate the merits of turbo codes, and to design a working encoder and decoder and examine the performance of the designed coder.

1.2 Introduction

In the 1940s, Claude Shannon showed that for a given bandwidth over a channel with a given signal-to-noise ratio (SNR), there exists a maximum possible rate of information transfer, i.e. the channel capacity. From this it was possible to show that the minimum SNR needed to reliably transmit one bit of information given infinite bandwidth is -1.6 dB over a Gaussian noise channel. This minimum SNR is called the Shannon limit. However, practical systems cannot achieve infinite bandwidth, so a pragmatic Shannon limit of 0.2 dB is used when measuring code performance. Shannon also showed that for a given transmission rate (R) and channel capacity (C), if R < C, then a coding system exists that allows the bit error rate to be made arbitrarily small. This gave rise to forward-error-correcting (FEC) codes, whereby redundancy in the form of parity bits is added. The redundancy is then used by the decoder at the receiver to correct channel errors. Because more channel errors can be tolerated with an FEC code than without, systems with FEC codes can transmit at lower power, transmit over longer distances or withstand more interference, leading to an increase in the energy efficiency of the system. Many FEC codes were subsequently developed, ranging from Hamming codes, BCH codes and Reed-Solomon codes to codes formed from the serial concatenation of two component codes. However, it proved difficult for these codes to reach the Shannon limit, as longer codes result in higher decoding complexity. Prior to the 1990s, the best of these codes were limited to about 3 dB above the Shannon limit.
Then in 1993, Claude Berrou, Alain Glavieux and Punya Thitimajshima published a paper that introduced a new class of codes they called turbo codes. They showed that these codes could achieve a bit error rate of 10⁻⁵ (since, practically, error-free transmission is not possible,

a bit error rate of 10⁻⁵ to 10⁻⁶ is usually used as a benchmark for error-free transmission) at an SNR of 0.7 dB, very close to the Shannon limit. The main ideas introduced by the authors were the concepts of parallel concatenation of codes separated by an interleaver for the encoding process, recursive systematic convolutional (RSC) codes, and iterative decoding. The decoder output is fed back for a set number of iterations, with the results improving with each iteration. This feedback process is what gives rise to the name turbo codes, since it is similar to the feedback process of a turbocharged engine.

Chapter 2: Principles of turbo coding

2.1 The turbo encoder

Turbo codes are formed from the parallel concatenation of two component codes separated by an interleaver. Parallel concatenation means that the two component encoders share the same set of input bits, rather than one encoder encoding the output bits of the other. The interleaver rearranges the input bits in a prescribed but irregular manner before sending them to the input of the second encoder. The encoders thus work on the same set of input bits, albeit arranged in a different order. The generic form of a turbo encoder is shown below.

Figure 2.1: A generic turbo encoder

2.1.1 Recursive Systematic Convolutional Codes

Although any type of component code can be used in the turbo encoder, convolutional codes known as recursive systematic convolutional (RSC) codes are almost always used. Consider a rate ½ (for each input bit, there are two output bits) convolutional code with constraint length K and memory K−1. The input to the encoder at time k is a bit d_k and the corresponding codeword is a bit pair (u_k, v_k), where

u_k = Σ_{i=0}^{K−1} g_{1i} d_{k−i}   modulo 2,   g_{1i} ∈ {0, 1}

v_k = Σ_{i=0}^{K−1} g_{2i} d_{k−i}   modulo 2,   g_{2i} ∈ {0, 1}

and G_1 = {g_{1i}} and G_2 = {g_{2i}}. If we set K = 3, G_1 = {111} and G_2 = {101}, we get the following encoder.

Figure 2.2: A nonsystematic convolutional encoder. K = 3, rate = 1/2

This is a nonsystematic convolutional (NSC) code: the input bits do not appear at the output. Data bits enter from the left and are stored in the linear shift register. Each time a new bit arrives, the data is shifted to the right. The output bits are generated by XOR-ing a particular subset of the bits stored in the shift register. The turbo encoder, however, requires a recursive systematic convolutional (RSC) code. RSC codes contain a feedback loop whereby previously encoded information bits are continuously fed back into the encoder input. The input bits are also included in the output of the encoder. To convert the rate ½ NSC code into an RSC code, a feedback loop is added and one of the two outputs u_k or v_k is set equal to d_k. The feedback variable a_k is recursively calculated as

a_k = d_k + Σ_{i=1}^{K−1} g_i a_{k−i}   modulo 2

where g_i = g_{1i} if u_k = d_k, or g_i = g_{2i} if v_k = d_k. The resulting encoder is as shown.
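To make the encoder equations concrete, here is a small illustrative Python sketch (hypothetical; the project itself was built in Simulink) of the rate-½ NSC encoder with K = 3, G_1 = {111}, G_2 = {101}:

```python
def nsc_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 feedforward encoder: u_k, v_k are mod-2 sums over d_k..d_{k-K+1}."""
    K = len(g1)                       # constraint length
    state = [0] * (K - 1)             # shift register holding d_{k-1}, ..., d_{k-K+1}
    out = []
    for d in bits:
        window = [d] + state          # d_k, d_{k-1}, ..., d_{k-K+1}
        u = sum(g * b for g, b in zip(g1, window)) % 2
        v = sum(g * b for g, b in zip(g2, window)) % 2
        out.append((u, v))
        state = window[:-1]           # shift: drop the oldest bit
    return out

# Impulse response: the generator patterns 111 and 101 appear directly.
print(nsc_encode([1, 0, 0, 0]))       # -> [(1, 1), (1, 0), (1, 1), (0, 0)]
```

A single input 1 produces only a short, low-weight burst at the output, which is exactly the weakness of NSC codes noted below.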

Figure 2.3: A recursive systematic convolutional encoder. K = 3, rate = 1/2

The statistical properties of a_k are the same as those of d_k, and the free distances of the two codes are also similar. Their trellises are identical with respect to state transitions and their corresponding output bits. However, u_k and v_k do not correspond to the same input sequence d_k in the RSC code as they do in the NSC code. Therefore the only change when converting an NSC code to an RSC code is the mapping between input and output sequences. RSC codes are preferred over NSC codes for turbo codes because they are not as vulnerable to producing low-weight codewords. For example, the unit-weight input sequence (000…0100…000) results in a low-weight output from an NSC code, but because of the recursive nature of the RSC code, the same sequence results in an infinite-weight output, e.g. (111…1111…111). The two RSC codes are then concatenated in parallel with an interleaver between them. The systematic bits from the two encoders are, however, discarded, since only one set of systematic bits is needed. The resultant encoder is a rate-1/3 encoder. However, the code rate can be changed through a process called puncturing, which involves removing some of the parity bits generated by the two component encoders. The resultant turbo encoder is shown below.

Figure 2.4: A parallel concatenation of two RSC codes

2.1.2 The interleaver

The interleaver rearranges the input bits in a prescribed manner before sending them to the second encoder. It helps to minimise the occurrence of low-weight codewords, which hurt the performance of the code. Since the input to the second encoder is scrambled, it produces a different output from the first encoder. Thus, although it is possible that one of the two encoders will occasionally produce a low-weight codeword, the probability that both encoders will produce a low-weight codeword is extremely small. Another advantage is that since the outputs of the two encoders are uncorrelated, they each carry unique information. By sharing this information during the decoding process, an even better estimate of the original message can be made, improving the overall performance of the code. Several types of interleaver are used in conjunction with turbo codes. They include:

2.1.2.1 Matrix interleavers

In this type of interleaver, the input data is arranged in matrix form and read out in a specific pattern. In the row-column interleaver the data is written row-wise and read column-wise, while in the helical interleaver the data is written row-wise and read diagonally. Matrix interleavers are also called block interleavers. An example of a matrix interleaver is shown below.

Figure 2.5: Matrix interleaver

2.1.2.2 Odd-even interleavers

First, the bits are left un-interleaved and encoded, but only the odd-positioned coded bits are stored. Then the bits are scrambled and encoded, but now only the even-positioned coded bits are stored. Odd-even interleavers can be used when the second encoder produces one output bit per input bit.

Figure 2.6: Odd-Even interleaver
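A row-column (block) interleaver of the kind shown in Figure 2.5 can be sketched in a few lines of Python (illustrative only, not the Simulink block used in the project):

```python
def row_column_interleave(bits, n_rows, n_cols):
    """Write row-wise into an n_rows x n_cols matrix, then read it out column-wise."""
    assert len(bits) == n_rows * n_cols
    return [bits[r * n_cols + c] for c in range(n_cols) for r in range(n_rows)]

seq = list(range(6))                                       # index values make the permutation visible
print(row_column_interleave(seq, 2, 3))                    # -> [0, 3, 1, 4, 2, 5]
print(row_column_interleave([0, 3, 1, 4, 2, 5], 3, 2))     # -> [0, 1, 2, 3, 4, 5]
```

Note that de-interleaving is the same operation with the matrix dimensions swapped, which is the structure that makes block interleavers easy to invert at the receiver.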

2.1.2.3 Pseudo-random interleavers

In a pseudo-random interleaver, data is written into memory in sequential order and read out in a pseudo-random manner. An improvement on this interleaver is the S-random interleaver, in which, in addition to data being read out in a pseudo-random manner, a restriction is imposed: two input positions within distance S of each other cannot be permuted to two output positions within distance S of each other. For short block sizes, the matrix and odd-even interleavers outperform the pseudo-random interleaver, but for longer block sizes the pseudo-random interleaver outperforms them. For long block sizes, the interleaving should be as random as possible while maintaining enough structure to enable decoding.

2.1.3 Trellis termination

When the k bits of a block or frame have been coded, the register of the encoder may be in any of 2ᵛ possible states (where v is the memory of the encoder). The aim of termination is to lead the encoder back to the zero state. The decoder then has knowledge of the final state, following one of the paths of the trellis to the zero state. This results in a small gain in the performance of the code. There are two methods of terminating the encoder:

2.1.3.1 Classical termination

Immediately after the k-bit block has been coded, the v feedback bits that follow are used as the input to the encoder. As these bits are shifted through the register, they are XOR-ed with each other, resulting in all zeros in the register. However, this method results in a reduction in the code rate due to the generation of the extra tail bits.

Figure 2.7: Classical trellis termination

2.1.3.2 Tail-biting

This technique involves making the decoding trellis circular, i.e. ensuring that the initial and final states of the encoder are identical. The advantage of this method is that the code rate is preserved, but at the cost of an increase in decoder complexity. RSC codes that employ this method are called circular recursive systematic convolutional (CRSC) codes.

Figure 2.8: A circular trellis

2.2 Turbo Decoding

Since a turbo code trellis would have a very large number of states due to the interleaver, a conventional Viterbi algorithm decoder would be impractically large. Instead, decoding is divided between two decoders and performed iteratively. This means that the decoders cannot exchange hard decisions, i.e. 0 or 1, but must exchange soft decisions, i.e. values that correspond to the likelihood or log-likelihood of each bit. The decoder's input is in the following form:

R(U_i) = log [ P(Y_i | U_i = 1) / P(Y_i | U_i = 0) ]

The expression above is a log-likelihood ratio (LLR). Calculation of the LLR requires not only the received signal sample, but also knowledge of the statistics of the channel. For example, if BPSK modulation is used over an AWGN channel with noise variance σ², the corresponding decoder input in LLR form is R(U_i) = 2Y_i/σ². The received bits are put into LLR form before decoding. For each data bit X_i the turbo decoder computes the LLR

Λ(X_i) = log [ P(X_i = 1 | Y_1 … Y_n) / P(X_i = 0 | Y_1 … Y_n) ]

where P(X_i = j | Y_1 … Y_n) is the probability that X_i = j given the entire received codeword (Y_1 … Y_n). Once Λ(X_i) is computed, a hard decision is made: if Λ(X_i) > 0 then X_i = 1, and if Λ(X_i) < 0 then X_i = 0. This decision rule is known as maximum a posteriori (MAP), since the P(X_i = j | Y_1 … Y_n) are a posteriori probabilities. The structure of the decoder is as shown below.
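As a quick illustrative check in Python (the project itself used Simulink blocks), the channel LLR for BPSK over AWGN reduces to 2Y_i/σ², as stated above:

```python
def llr_bpsk_awgn(y, sigma2):
    """Channel LLR log[P(y|u=1)/P(y|u=0)] for BPSK (0 -> -1, 1 -> +1) in AWGN."""
    log_p1 = -(y - 1.0) ** 2 / (2 * sigma2)   # Gaussian log-likelihoods; the
    log_p0 = -(y + 1.0) ** 2 / (2 * sigma2)   # common constants cancel in the ratio
    return log_p1 - log_p0

# The difference of the two quadratics collapses to 2y/sigma^2 (both ~ 3.2 here):
y, sigma2 = 0.8, 0.5
print(llr_bpsk_awgn(y, sigma2), 2 * y / sigma2)
```

Expanding the squares shows why: −(y−1)² + (y+1)² = 4y, and dividing by 2σ² gives 2y/σ².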

Figure 2.9: Generic turbo decoder

The turbo decoder uses the received codeword along with knowledge of the code structure to compute Λ(X_i). The decoder first attempts to compute Λ(X_i) using only the structure of the upper encoder, and then computes it using just the structure of the lower encoder. Each of these two LLR estimates is computed using a soft-input soft-output (SISO) decoder. Because the two SISO decoders produce LLR estimates of the same set of data bits, albeit in a different order because of the interleaver, decoder performance can be improved by sharing these estimates between the two decoders. Thus, the first decoder passes the information it has learned, i.e. the extrinsic information, to the second decoder through an interleaver. The second decoder then uses this information to generate its own estimates. This information is then de-interleaved and sent back to the first decoder. This iterative process is repeated until a satisfactory result is achieved, at which point a hard decision is made on the final result. The SISO processors use a trellis diagram to represent all possible sequences of the encoder states. The decoders generate Λ(X_i) by sweeping through the trellis in a prescribed manner. This sweep can be implemented using one of two algorithms: the soft-output Viterbi algorithm (SOVA) or the maximum a posteriori (MAP) algorithm. The SOVA algorithm is less complex than the MAP algorithm but does not perform as well. The complexity of the MAP algorithm can be reduced by implementing it in the log domain. This version is called the log-MAP, and complexity is reduced because multiplications become additions. However, this introduces an operation of the form log(e^x + e^y), which is non-trivial to

compute. Fortunately, it can be approximated by the maximum operator plus a correction, in an operation called max-star, defined as follows:

max*(x, y) = log(e^x + e^y) = max(x, y) + f_c(|y − x|)

That is, the log-add operation can be implemented by simply taking the maximum of the two arguments and adding a correction function that depends on the magnitude of the difference between them. By setting the correction function to zero, we obtain the max-log-MAP algorithm. This algorithm is faster than the log-MAP algorithm but results in reduced bit-error rate (BER) performance.
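As an illustrative Python sketch (not part of the original Simulink design), the max-star operation and its max-log approximation can be written as:

```python
import math

def max_star(x, y):
    """Exact Jacobian logarithm: log(e^x + e^y) = max(x, y) + f_c(|y - x|)."""
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))

def max_log(x, y):
    """Max-log-MAP approximation: the correction term f_c is set to zero."""
    return max(x, y)

# The correction term is largest when the arguments are equal (log 2 ~ 0.693)
# and vanishes as they move apart, which is why the approximation works well.
print(max_star(2.0, 2.0) - max_log(2.0, 2.0))   # -> 0.6931471805599453
```

In hardware implementations, f_c is typically stored as a small lookup table indexed by |y − x|, which is what keeps the log-MAP only slightly more expensive than the max-log-MAP.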

2.3 Merits of turbo codes

Turbo codes were the first codes able to get close to the Shannon limit. They provide very good performance at low SNRs with moderate decoding complexity. This gain in performance can translate to reduced power consumption, since transmitting systems can afford to transmit at reduced power levels without a drop in performance. Due to the presence of the interleaver, turbo codes also have the ability to deal with burst errors, i.e. errors that occur in quick succession. When a signal affected by burst errors arrives at the decoder, the de-interleaving process spreads the long succession of errors into smaller chunks of errors that are much easier to correct.

Figure 2.9: Performance of the original turbo code compared to a (2, 1, 14) convolutional code. (From: Christian Schlegel, Lance Perez, Trellis and Turbo Coding)

Another contribution of turbo codes has been the idea of the turbo principle. To solve certain complex signal processing problems, the problem is divided into a cascade of elementary processing operations that are simpler to implement. However, this division

leads to a loss of optimality. To overcome this, an exchange of probabilistic information is instituted between the operations. This is best illustrated by the fact that even though turbo codes use simple convolutional codes with small free distances, their performance exceeds that of complex convolutional codes with long constraint lengths and large free distances. The turbo principle has been applied to serial concatenated codes (SCCs). A turbo decoder is used to decode the SCCs, which are usually composed of two convolutional codes as shown below.

Figure 2.10: A serial concatenated code with a turbo decoder

The turbo principle has also been applied to equalization. The inner code of a serial concatenation could be an inter-symbol interference (ISI) channel, which can be interpreted as a rate-1 code defined over the field of real numbers.

Figure 2.11: Turbo equalization

Chapter 3: Design and Implementation

The implementation and simulation of the designed turbo code was carried out using MATLAB Simulink.

3.1 The encoder

3.1.1 Recursive systematic convolutional code

The problem of developing a theoretical approach for constructing good convolutional codes has not yet been solved. Computer searches are therefore used to find convolutional codes with the largest free distance, which determines how many errors a code can correct. The computer runs through the connector taps (for a given constraint length), either exhaustively or according to heuristic rules, and tries to find the minimal basic encoder with the largest free distance. Reference tables are then produced for various code rates and constraint lengths.

Figure 3.1: Table showing the best rate-½ convolutional codes for various constraint lengths. (From: Christian Schlegel, Lance Perez, Trellis and Turbo Coding)

The convolutional code chosen to implement the turbo code was a rate-½, constraint length 4 code with generators g_1 = {1101}, g_2 = {1111} and a free distance of 6. The code circuit diagram is shown below.

Figure 3.2: Circuit diagram for the chosen convolutional code

This was implemented in MATLAB using the function

poly2trellis(4, [15 17], 15)

The function takes the constraint length, the code generators (in octal form) and the feedback connection (also in octal) and generates a trellis that is then used by the encoders and decoders. Two RSC encoders were then concatenated with an interleaver between them, as shown below.

Figure 3.3: The designed turbo encoder
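As a hypothetical cross-check of the chosen generators (plain Python, not the Simulink poly2trellis implementation), an RSC encoder with feedback 15 octal (1101) and parity generator 17 octal (1111) can be sketched as:

```python
def rsc_encode(bits, fb=(1, 1, 0, 1), g=(1, 1, 1, 1)):
    """Systematic rate-1/2 RSC encoder, constraint length 4 (feedback 15, parity 17 octal)."""
    state = [0, 0, 0]                     # a_{k-1}, a_{k-2}, a_{k-3}
    out = []
    for d in bits:
        # feedback sum: a_k = d_k + fb_1 a_{k-1} + fb_2 a_{k-2} + fb_3 a_{k-3} (mod 2)
        a = (d + sum(f * s for f, s in zip(fb[1:], state))) % 2
        reg = [a] + state                 # a_k, a_{k-1}, a_{k-2}, a_{k-3}
        v = sum(gi * b for gi, b in zip(g, reg)) % 2
        out.append((d, v))                # systematic bit, parity bit
        state = reg[:-1]
    return out

# A single 1 excites the feedback loop, so the parity response does not die out,
# illustrating the infinite impulse response discussed in Section 2.1.1.
print(rsc_encode([1, 0, 0, 0, 0]))        # -> [(1, 1), (0, 0), (0, 1), (0, 1), (0, 1)]
```

The systematic bit is passed through unchanged, while the parity stream depends on the recursively computed register contents rather than on the raw input bits.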

The Puncture blocks after the convolutional encoders remove the systematic bits from the two component encoders, because a set of systematic bits is already present. The parity bits and the systematic bits are combined into serial form using the Matrix Concatenate block.

3.1.2 The interleaver

The role of the interleaver is to increase the randomness of the overall turbo code. According to Shannon's theory, the more random a code is, the better it is at correcting errors. The interleaver chosen should therefore permute the input bits in as irregular a manner as possible while still retaining enough structure that the bits can be de-interleaved. The interleaver chosen for the turbo encoder was a random interleaver, because it offers the best performance when the block length is large. Larger block lengths are preferred as they provide superior performance compared to shorter block lengths. The Random Interleaver block takes an initial seed and uses it to generate a random permutation using a random number generator. The permutation is predictable for a given seed, but different seeds result in different permutations.

3.2 The decoder

The generic form of the turbo decoder is as shown.

Figure 3.4: Generic turbo decoder

When the encoded bits arrive at the receiver, they are de-multiplexed into systematic bits and parity bits. The systematic bits and the parity bits from the upper encoder are fed into the upper decoder. This decoder then generates an estimate of the original message using the structure of the upper encoder, which is described by a trellis. This estimate is passed to the lower decoder through an interleaver. The lower decoder then improves on this estimate

using information from an interleaved set of systematic bits and the parity bits from the lower encoder. This improved estimate is then de-interleaved and fed back to the upper decoder, and the cycle continues for a certain number of iterations until a satisfactory estimate is obtained. The practical implementation of the decoder is shown below.

Figure 3.5: Practical implementation of the turbo decoder

The APP (a posteriori probability) Decoder block performs the soft-input soft-output (SISO) processing of the received bits. The block is shown below.

Figure 3.6: Simulink APP Decoder block

The block uses the MAP (maximum a posteriori) algorithm, or its variant the max-log-MAP, to perform decoding of convolutional codes. The input L(c) represents the sequence of log-likelihoods of the code bits (the encoder output), while L(u) represents the sequence of log-likelihoods of the input bits. The corresponding outputs represent the updated versions of these sequences based on information about the encoder. The first block the received bits go through when they arrive is the zero-order hold (ZOH) block. This block holds the information at its input for a specified amount of time. It is used in the implementation to hold the input frame until the required number of iterations is completed. From the ZOH block, the bits go through a multiport selector, which separates the received bits into systematic bits and parity bits that are then sent to their respective decoders. The turbo decoder also contains a feedback loop, as shown in Figure 3.5. The delay in the feedback loop is set equal to the frame size, which allows the iteration to happen on a frame-by-frame basis. The pulse generator and the product blocks in the feedback loop act as a sort of AND gate. The pulse width (its duty cycle) is set according to (number of iterations − 1). As long as the pulse is high, the current data in the loop is unaffected; once the pulse is low, the result of the multiplication is zero. This controls the number of times data is fed back to the first decoder. For example, if the number of iterations is set to 3, a frame will go through the decoder blocks for its first run and will then be fed back twice, i.e. (number of iterations − 1) times, meaning the same frame passes through the decoder blocks three times. At that point, the pulse switches to low and a zero frame is produced to indicate the end of the iterations.

Once the iterations have finished, the hard decision block generates either a 1 or a 0: if the log-likelihood is greater than 0, the bit is read as a 1; if it is less than 0, it is read as a 0.

3.3 Testing

The encoder-decoder pair was tested over an additive white Gaussian noise (AWGN) channel. Additive means that the noise is added to the signal in question, white means that the noise has a uniform power spectral density over a large bandwidth, and Gaussian means that the noise has a Gaussian (normal) distribution. An AWGN channel contains only this noise. For BPSK modulation, the channel can be modeled as

y = ax + n

where y is the received signal, x is the modulated signal, n is the AWGN random variable with zero mean and variance σ², and a is a channel amplitude scaling factor. For AWGN, the noise variance in terms of the noise power spectral density N₀ is given by

σ² = N₀/2

For M-ary modulation schemes, the energy per symbol is given by

E_s = R_m R_c E_b

where R_m = log₂ M (for BPSK, M = 2 and R_m = 1), R_c is the code rate (1/3 for the designed turbo code) and E_b is the energy per bit. Assuming E_s = 1 for BPSK (symbol energy normalized to 1),

E_b/N₀ = E_s/(R_m R_c N₀) = E_s/(R_m R_c · 2σ²) = 1/(R_m R_c · 2σ²)

From the above equation, the variance for a given E_b/N₀ can be calculated as

σ² = 1/(2 R_m R_c (E_b/N₀))

This equation was used to generate the variance needed by the AWGN Channel block in MATLAB. In this way the E_b/N₀ of the channel can be varied.
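The variance formula above can be sketched in Python (illustrative only; the project computed this value for the Simulink AWGN Channel block):

```python
def noise_variance(ebno_db, rate_mod=1.0, rate_code=1/3):
    """sigma^2 = 1 / (2 * R_m * R_c * Eb/N0), with symbol energy E_s normalized to 1."""
    ebno_linear = 10.0 ** (ebno_db / 10.0)    # convert dB to a linear ratio
    return 1.0 / (2.0 * rate_mod * rate_code * ebno_linear)

# BPSK (R_m = 1) with the rate-1/3 turbo code: higher Eb/N0 gives smaller variance.
for ebno_db in (0.0, 0.7, 2.0):
    print(ebno_db, noise_variance(ebno_db))
```

As a sanity check, for an uncoded BPSK system (R_m = R_c = 1) at E_b/N₀ = 0 dB the formula gives σ² = 0.5, matching σ² = N₀/2 with E_b/N₀ = 1.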

The testing of the encoder-decoder pair was implemented as shown.

Figure 3.7: The turbo encoder and decoder with an AWGN channel

The Bernoulli Binary Generator block was used to generate random sequences of bits. The bits are encoded, converted from unipolar to bipolar form and sent through the AWGN channel. They are then scaled by a factor of 2/σ²; this scaling provides the decoders with the channel-statistics information they need. The bits are then fed to the decoder. The output bits are sent to an Error Rate Calculation block, which calculates the bit error rate. This information is then displayed or stored in an array variable. The figure below shows the state of the bits at various stages of the above model.

Figure 3.8: State of the bits at various stages of the encoder-channel-decoder model

Chapter 4: Results

The following results were obtained by comparing the decoded bits to the input bits. The bit-error rate (BER) was then calculated and the results plotted. The resultant graph is shown below.

Figure 4.1: Graph showing bit-error rate vs Eb/N0 (word length = 1000)

It can be seen that the BER falls rapidly, especially as the number of iterations increases. The turbo coder achieves a BER of 10⁻⁵ at around 0.7 dB with 10 iterations. This is in line with the expected performance (the original turbo code achieved this BER after 18 iterations with a word length of 65,536 bits). However, the BER curves appear to converge between about 10⁻⁶ and 10⁻⁸. The decline in BER is steep down to about 0.5 × 10⁻⁶, from where it begins to level off. The turbo code therefore appears to have an error floor beyond which further gains in BER performance cannot be extracted by increasing the number of iterations.
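The BER figure reported by the error-rate calculation block is simply the fraction of decoded bits that differ from the transmitted bits. A minimal sketch (illustrative Python; the function name is the author's own):

```python
def bit_error_rate(tx_bits, rx_bits):
    """Fraction of decoded bits that differ from the transmitted bits,
    i.e. the quantity computed by the error-rate calculation block."""
    errors = sum(t != r for t, r in zip(tx_bits, rx_bits))
    return errors / len(tx_bits)

# One bit error in a 1000-bit word gives a BER of 0.001.
ber = bit_error_rate([0] * 1000, [1] + [0] * 999)
```

Note that reliably measuring a BER of 10⁻⁵ requires transmitting on the order of millions of bits, which is why many 1000-bit words must be simulated per Eb/N0 point.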

The word length was also varied and a graph of BER vs word length was plotted. The graph is shown below.

Figure 4.2: Graph of BER vs word length

This shows that as the word length increases, the coder becomes better at correcting errors. However, a large increase in computation time was observed as the word length was increased; a tradeoff must therefore be made between word length and decoding time. The figure below shows a comparison of the input bits and the decoded bits. It can be seen that the waveforms are identical except for a few errors. The comparison was carried out at an Eb/N0 of 0.7 dB after 10 iterations.

Figure 4.3: Comparison between input bits and decoded bits

Chapter 5: Conclusion and Recommendations

5.1 Conclusion

Turbo codes are a class of error-correcting codes that perform very close to the Shannon limit. They employ two recursive systematic convolutional encoders in parallel concatenation, separated by an interleaver, and use an iterative decoding process. In this paper, a detailed description of turbo codes was given. A turbo encoder and decoder were designed and built in MATLAB Simulink and tested over an AWGN channel. The performance of the designed turbo code was then demonstrated by calculating the bit-error rate for varying levels of signal-to-noise ratio over multiple iterations.

5.2 Recommendations

The performance of turbo codes over different modulation schemes could be investigated. Another area of further research would be the performance of turbo codes with different interleavers. Interleavers that have been proposed for use with turbo codes, and that show improvements over the random interleaver, include the S-random interleaver and the quadratic permutation polynomial (QPP) interleaver. Hybrid turbo codes, which combine features of parallel and serially concatenated codes and aim to eliminate the error floor experienced by parallel concatenated codes, provide another area of further research.
