18.413: Error-Correcting Codes Lab
April 6, 2004
Lecturer: Daniel A. Spielman
Lecture 15

15.1 Related Reading

Fan, pp. 108-110.

15.2 Remarks on Convolutional Codes

Most of this lecture will be devoted to Turbo codes and their variants. However, since these are built from convolutional codes, I'll begin with a few more remarks on them.

To begin, I'll point out that the algorithm that we discussed last class, alternately called the BCJR algorithm, MAP decoding, the forward-backward algorithm, or the sum-product algorithm, solves the probability estimation problem exactly. In this way, it is analogous to the computations that we could do for the repetition code, the parity code, and LDPC codes on trees. For each message bit, we exactly compute the probability that it was 1 given the probability that every other message and check bit was 1.

The next point that I would like to make is that the algorithm described last lecture can run into numerical trouble. In particular, it keeps computing values that are proportional to the probabilities in question. However, these values will get lower and lower as the algorithm progresses through the trellis. It is quite possible for them to become so low that they cannot be accurately represented in floating point. To compensate for this problem, one may re-scale the values. For example, say that we are interested in the probabilities that some state s_i is equal to one of {00, 01, 10, 11}. Our algorithm computes values that are proportional to these probabilities. It is natural to re-scale these so that they actually become probabilities. To do that, one just multiplies them by a constant so that their sum becomes 1. If you keep doing this as the algorithm moves down the trellis, you will avoid some of the potential numerical problems.
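
Here is a small Python sketch of this re-scaling, applied to the forward (alpha) recursion; the function name and the layout of the transition weights are my own, for illustration:

    import numpy as np

    def forward_pass_normalized(gammas):
        # Forward (alpha) recursion with per-step re-scaling.  Here
        # gammas[t][s, s'] holds the unnormalized weight of the transition
        # from state s to state s' at trellis step t, for a 4-state
        # trellis with states {00, 01, 10, 11}.
        num_states = gammas[0].shape[0]
        alpha = np.zeros(num_states)
        alpha[0] = 1.0                    # the trellis starts in the zero state
        history = [alpha.copy()]
        for gamma in gammas:
            alpha = alpha @ gamma         # one step down the trellis
            alpha /= alpha.sum()          # re-scale so the values sum to 1
            history.append(alpha.copy())
        return history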

15.3 Turbo Codes

Turbo codes make use of a recursive convolutional code and a random permutation, and are encoded by a very simple algorithm:

1. Form message bits u in {0, 1}^n (the algorithm is so simple, I include this as a step).
2. Output u.
3. Encode u using the recursive convolutional code. Output the check bits generated.
4. Permute the bits of u according to the random permutation, and call the result u'.
5. Encode u' using the recursive convolutional code. Output the check bits generated.

If the recursive convolutional code is the one described in the last class, which has one stream of check bits in its output, then this is a rate 1/3 code.

Let me make one thing clear: the code is specified by the convolutional coder and the permutation. The permutation must be known to both the encoder and the decoder. The permutation does not really have to be random, but random ones do seem to work very well.
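
As a concrete illustration, here is a sketch of this encoder in Python. The constituent code used here is the accumulator v_t = v_{t-1} XOR u_t, the simplest recursive convolutional code; it is a stand-in, not necessarily the code from class:

    import random

    def rsc_check_bits(bits):
        # Check bits of the simplest recursive convolutional code, the
        # accumulator v_t = v_{t-1} XOR u_t.  This is a stand-in, not
        # necessarily the code described in class.
        v, out = 0, []
        for u in bits:
            v ^= u
            out.append(v)
        return out

    def turbo_encode(u, perm):
        # Steps 2-5 above: output u, the check bits v1 for u, and the
        # check bits v2 for the permuted message.
        v1 = rsc_check_bits(u)
        u_perm = [u[perm[i]] for i in range(len(u))]
        v2 = rsc_check_bits(u_perm)
        return u, v1, v2

    n = 8
    perm = random.sample(range(n), n)              # the random permutation
    u = [random.randint(0, 1) for _ in range(n)]   # step 1: form message bits
    print(turbo_encode(u, perm))

Running this on a random 8-bit message prints the triple of output streams (u, v1, v2).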

15.4 Decoding

Turbo codes are exciting because they have a fast decoding algorithm that performs exceptionally well. There are many ways of describing the decoding algorithm, but Fan's is the easiest. So, I will follow his description. In particular, I will reproduce his figures.

Figure 15.1: Turbo Encoder. The repetition node can be viewed as a repetition code that sends its input 3 ways. The other node represents the permutation Π. Note that the block length must be fixed so that Π can be fixed. The object on the right just interleaves the three streams. This is Figure 3.8 in Fan.

In Fan's depiction, the input string u goes through a repetition code, denoted by the repetition node in the figure. One of the outputs of this node goes straight into the output, making the code systematic. The next output of the node goes into a convolutional encoder, and the check bits v1 output by this encoder go into the output. Finally, the third output of the node goes through a permutation, denoted Π, and then into a convolutional encoder, and the check bits v2 output by this encoder go into the output.

To understand the decoding, we first observe that the output of this encoder contains the information necessary to apply the forward-backward decoder to either convolutional code, as we obtain channel outputs for both the inputs and outputs of each code. However, here the convolutional codes share their inputs. This sharing allows for a more powerful decoding algorithm, in which we take the extrinsic outputs produced by each decoder and feed them into the other.

In the first stage of the decoder, depicted in Figure 15.2, the channel outputs for u and v1 are fed into the decoder for the upper convolutional code. In the figure, we depict this by passing the channel output for u through the repetition node, and bringing a null message up from the other decoder. As when we initialized the LDPC decoders, combining with a null message at a node has no effect on the other message being sent, so the channel output for u goes straight through.

Figure 15.2: Decoding step 0.

On receiving the channel outputs for u and v1, the top convolutional decoder will output the extrinsic probabilities that each bit of u was 1. It is very important that these be the extrinsic, not the posterior, probabilities. That is, the probability that u_i = 1 will not take into account the channel output for u_i. The channel output will be factored in later. This is just like what happened at the parity nodes in LDPC codes.

In the next stage of the decoding, depicted in Figure 15.3, the extrinsic outputs from the top decoder are passed through the repetition node, where they are combined with the channel outputs for u, and are treated as improved estimates for the probability that each bit of u is 1. These probabilities are then passed, through the permutation, into the lower decoder, which also takes as input the channel outputs for v2. This decoder will also produce extrinsic outputs for the probability that each input bit is 1.

Figure 15.3: Decoding step 1.
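
One way to make the extrinsic-versus-posterior distinction concrete is in log-likelihood-ratio (LLR) terms. The lecture speaks in probabilities, but in LLR form the posterior for each bit splits into a sum of what the decoder was told and what it learned; the recasting and the names below are mine:

    import numpy as np

    def extrinsic_llrs(posterior_llrs, channel_llrs, prior_llrs):
        # For each message bit, posterior = channel + prior + extrinsic
        # in LLR terms, where "prior" is the incoming message from the
        # other decoder.  The extrinsic part is therefore what remains
        # after subtracting everything the decoder was told about the bit.
        return (np.asarray(posterior_llrs)
                - np.asarray(channel_llrs)
                - np.asarray(prior_llrs))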

So far, this is all pretty conventional. Where things get interesting is in the next step. In the next step, depicted in Figure 15.4, the extrinsic outputs from the bottom decoder are passed up through the repetition node, where they are again combined with the channel outputs for u, and are fed as inputs into the top decoder, where they are again combined with the channel outputs for v1.

Figure 15.4: Decoding step 2.

This process is then repeated for a few iterations: bouncing from one decoder to the next. When we finally want to get an output, we combine the channel outputs and the last outputs from each decoder at the repetition node, and pass the outputs out on the left. This is depicted in Figure 15.5.

Figure 15.5: Obtaining the outputs.

Empirically, this algorithm does incredibly well, providing good performance even at 0.6 dB!
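
Putting the stages together, here is a sketch of the whole iteration, again phrased in LLRs (my recasting). The function bcjr_extrinsic is passed in as a stand-in for the forward-backward decoder from last lecture: given prior LLRs for the message bits and channel LLRs for one stream of check bits, it returns extrinsic LLRs for the message bits.

    import numpy as np

    def turbo_decode(llr_u, llr_v1, llr_v2, perm, bcjr_extrinsic, iters=10):
        llr_u, llr_v1, llr_v2 = map(np.asarray, (llr_u, llr_v1, llr_v2))
        perm = np.asarray(perm)
        inv = np.argsort(perm)                 # inverse of the permutation
        ext_bottom = np.zeros(len(llr_u))      # step 0: null message from below
        for _ in range(iters):
            # top decoder: channel outputs for u and v1, plus the message
            # coming up from the bottom decoder
            ext_top = bcjr_extrinsic(llr_u + ext_bottom, llr_v1)
            # combine with the channel outputs for u at the repetition
            # node, permute, and run the bottom decoder on v2
            ext = bcjr_extrinsic((llr_u + ext_top)[perm], llr_v2)
            ext_bottom = ext[inv]              # pass back up through the permutation
        # obtaining the outputs: combine everything at the repetition node
        return llr_u + ext_top + ext_bottom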

15.5 EXIT Charts

For the first couple of years after Turbo codes were invented and experimentally observed to work well, no one had a good explanation for why they worked so well. The first explanation that I thought helped, although it was still not completely rigorous, was provided by the EXIT charts of Stephan ten Brink. These EXIT charts are essentially a heuristic version of the analysis that we did of LDPC codes on the erasure channel.

Here's the idea: we are going to form a chart that we hope will explain the performance of each convolutional code individually. By then making the heuristic assumptions that each is behaving independently, and that all messages being sent in the system are Gaussian distributed of a given capacity (even though they are not), we will predict the behavior of the Turbo decoder. It turns out that even though these assumptions are false, they give a very good approximation of what actually happens.

Let me explain more concretely what we will measure. Consider just one convolutional decoder, as in Figure 15.6. This part of the decoder can be viewed as having three inputs: channel outputs for u, channel outputs for v1, and the messages being passed up from the other decoder, x.

Figure 15.6: A part of the decoder.

Begin by fixing some random codeword, say (u, v1). Assume that we fix the channel noise level, and thereby fix the distributions of the channel outputs for each of the symbols in u and v1. At each iteration of the decoding algorithm, what is changing is the messages x coming from the lower decoder. We are going to view these as being like passing the input u through another channel, and by ignoring the correlations among the bits, view it as a memoryless channel. In particular, we will take x to be the output of passing u through some Gaussian channel. To try to understand the effects of successive iterations, we will see what happens when we vary the noise of this Gaussian channel.

For each noise level of the x channel, we will perform a simulation. We will then measure, empirically, the capacity of the meta-channel that we see if we measure the extrinsic outputs of the top decoder. That is, form x by passing u through a Gaussian channel, apply the decoder, look at the outputs, treat the whole thing as a meta-channel, and estimate the capacity empirically as we did in Project 1.
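
A sketch of measuring one such point, assuming LLR-valued messages and using the standard time-average estimate of mutual information (which may differ in detail from the Project 1 procedure); decoder_extrinsic again stands in for the forward-backward decoder:

    import numpy as np

    def empirical_capacity(llrs, bits):
        # Time-average estimate of the mutual information between the
        # sent bits and the observed LLRs: I = 1 - E[log2(1 + e^(-s*L))],
        # where s = +1 for bit 0 and s = -1 for bit 1.
        s = 1.0 - 2.0 * np.asarray(bits, dtype=float)
        return 1.0 - np.mean(np.logaddexp(0.0, -s * np.asarray(llrs)) / np.log(2.0))

    def exit_point(decoder_extrinsic, u, llr_v1, sigma):
        # One point on the EXIT chart: form x by passing u through a
        # Gaussian channel of noise level sigma, run one constituent
        # decoder, and measure the capacity of the extrinsic meta-channel.
        s = 1.0 - 2.0 * np.asarray(u, dtype=float)
        llr_x = (2.0 / sigma**2) * (s + sigma * np.random.randn(len(u)))
        in_cap = empirical_capacity(llr_x, u)      # capacity of the x channel
        out_cap = empirical_capacity(decoder_extrinsic(llr_x, llr_v1), u)
        return in_cap, out_cap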

By plotting the capacity of the channel through which we passed x against the observed capacity of the meta-channel, we obtain a point on our EXIT chart. For an example, see Figure 15.7.

Figure 15.7: An EXIT chart at 0.8 dB for the convolutional code from small project 3. (Both axes run from 0 to 1.)

From the EXIT charts, we would predict that if the curve stays above the x = y line, then the decoder should converge. The reasoning is as follows: at the first stage of the decoding, a null message is transmitted. This corresponds to obtaining x by sending u through a channel that just erases. So, the capacity of the output of the first decoder will correspond to where the curve intersects the y-axis in the figure. The messages output by the first decoder are then passed to the second decoder, which is identical. Now, the messages output by the first decoder are not necessarily Gaussian distributed. But, we can empirically measure their capacity, and pretend that they came from a Gaussian distribution of the same capacity. We then look again at the EXIT chart to figure out what the capacity should be of the messages output by the bottom decoder, etc.

So, this heuristic analysis works by measuring the capacities of the outputs of decoders, pretending that they were Gaussian of that capacity, and then looking on the chart for what the next capacity should be. If all of these pretend assumptions were right, we could just follow the charts to find out how the decoding should behave. Empirically, we find that this works very well. If we vary the original channel through which u and v1 are passed, we get a different chart:

Figure 15.8: An EXIT chart at 0.6 dB for the convolutional code from small project 3.
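
Following the chart, as described above, amounts to iterating the measured curve starting from capacity 0; a minimal sketch with a toy curve that stays above the diagonal:

    def predict_trajectory(exit_curve, iters=20):
        # Follow the chart: start from the null message (input capacity 0)
        # and repeatedly feed each decoder's output capacity into the
        # other, identical decoder.  exit_curve maps the capacity of the
        # x channel to the measured capacity of the extrinsic meta-channel.
        caps = [exit_curve(0.0)]        # where the curve meets the y-axis
        for _ in range(iters):
            caps.append(exit_curve(caps[-1]))
        return caps

    # A toy curve that stays above the x = y line: the capacities climb
    # to 1, which is the prediction that the decoder converges.
    print(predict_trajectory(lambda c: min(1.0, 0.4 + 0.65 * c)))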

But, you might wonder, why are we going to the trouble of making all these heuristic assumptions? Why not just use the distributions that actually occur? The reason is that we do not have any nice characterization of what these distributions will be, and cannot find any way of determining them other than by simulating the Turbo code decoding process. However, simulating the full Turbo code decoding is computationally expensive, while simulating the decoding of the convolutional codes is cheap. So, we prefer the cheaper simulation. More importantly, this simulation seems to predict the behavior of the full system from just the behavior of its components. If this simulation always yields a good prediction, then it should be possible to design better codes by computing EXIT charts of various components, and seeing how they fit together. In particular, it allows us to predict how the system will behave if we use two different convolutional codes.