AN INTRODUCTION TO ERROR CORRECTING CODES Part 2


AN INTRODUCTION TO ERROR CORRECTING CODES Part 2 Jack Keil Wolf ECE 154C Spring

BINARY CONVOLUTIONAL CODES A binary convolutional code is a set of infinite-length binary sequences which satisfy a certain set of conditions. In particular, the sum of two code words is a code word. It is easiest to describe the set of sequences in terms of a convolutional encoder that produces these sequences. However, for a given code, the encoder is not unique. We start with a simple example.

A RATE ½, 4-STATE, BINARY CONVOLUTIONAL ENCODER Consider the 4-state, binary convolutional encoder shown below, with an input binary sequence and an output binary sequence. There are two output binary digits for each (one) input binary digit. This gives a rate ½ code. Each adder adds its two inputs modulo 2.

A RATE ½ ENCODER Consider the following encoder, described by a table of inputs, outputs, current states, and next states. Given the current state and the input bit, the table determines the next state and the two output bits.
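The state-transition behavior above can be sketched in code. This is a sketch only: the slide's figure, and thus its exact tap connections, is not reproduced in this transcription, so the classic (5, 7)-in-octal generators for a 4-state rate ½ encoder are assumed, and the function name is illustrative.

```python
def conv_encode(bits, g1=0b101, g2=0b111):
    """Rate-1/2, 4-state convolutional encoder (sketch).

    Assumes the (5, 7) octal generators. Each input bit produces two
    output bits, each a modulo-2 sum of the current input bit and the
    two stored past input bits (the encoder state)."""
    parity = lambda x: bin(x).count("1") % 2
    state = 0                            # two delay elements -> 4 states
    out = []
    for b in bits:
        reg = (b << 2) | state           # [current bit, s1, s0]
        out.append(parity(reg & g1))     # top adder output
        out.append(parity(reg & g2))     # bottom adder output
        state = (reg >> 1) & 0b11        # shift register: next state
    return out
```

Note that the code is linear: the modulo-2 sum of the encodings of two input sequences equals the encoding of their modulo-2 sum, so the sum of two code words is again a code word, as stated above.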

A RATE ½ ENCODER The output code words are the labels on the branches of all of the paths through the binary tree. The nodes in the tree are the states of the encoder. Assume we start in the all-zero state.

A RATE ½ ENCODER Consider an input sequence. Following the corresponding path through the tree, the output sequence is read off from the branch labels.

NOTION OF STATES At any time the encoder can be in one of 4 states, labeled a, b, c, and d.

NOTION OF STATES These states can be used as labels on the nodes of the tree.

NOTION OF A TRELLIS There are only 4 states. Thus, the states at the same depth in the tree can be merged, and the tree can be redrawn as a trellis.
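The merged trellis can be captured as a small transition table, one entry per (state, input) pair. Again the (5, 7) generators are an assumption, since the slide's own branch labels are not recoverable from this transcription, and `trellis_table` is an illustrative name.

```python
def trellis_table(g1=0b101, g2=0b111, mem=2):
    """Tabulate the trellis: (state, input bit) -> (next state, output pair).

    Sketch assuming the (5, 7) generators. States are the contents of
    the `mem` delay elements; merging equal states at equal depth is
    exactly what collapses the tree into this finite table."""
    parity = lambda x: bin(x).count("1") % 2
    table = {}
    for state in range(1 << mem):
        for b in (0, 1):
            reg = (b << mem) | state
            outputs = (parity(reg & g1), parity(reg & g2))
            table[(state, b)] = (reg >> 1, outputs)
    return table
```

States 0 through 3 here play the role of the slide's labels a, b, c, d: each state has exactly two outgoing branches (one per input bit) and exactly two incoming branches (the merges that turn the tree into a trellis).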

OTHER EXAMPLES An 8-state, rate ½ code. The tap connections of the top and bottom adders can each be represented as a binary vector, and these are usually written compactly in octal form. The trellis now has 8 states.

OTHER EXAMPLES An 8-state, rate 2/3 encoder with 2 inputs and 3 outputs. The trellis has 8 states but 4 branches coming out of each state, one for each pair of input bits. There are three output binary digits on each branch. TRY DRAWING IT!!!

ENCODING ON THE RATE ½, 4-STATE TRELLIS The path corresponding to the input sequence is shown as the dark line; the corresponding output sequence is read from the labels along that path.

HARD DECISION DECODING ON THE RATE ½, 4-STATE TRELLIS Assume that we receive a binary sequence. On each branch of the trellis we put the number of differences (errors) between the received pair and that branch's label.

VITERBI DECODING We count the total number of errors on each possible path and choose the path with the fewest errors. We do this one step at a time.

VITERBI DECODING Viterbi decoding tells us to choose the smaller accumulated error count at each state and eliminate the other path.

VITERBI DECODING Continuing in this fashion, we compare the two accumulated error counts entering each state. In case of a tie, take either path.

VITERBI DECODING This yields a single surviving path into each state, each with its accumulated error count.

VITERBI DECODING If a decision is to be made at this point, we choose the surviving path with the smallest sum. The information sequence (i.e., the input to the encoder) corresponds to that path.
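The step-by-step procedure above (add the branch errors, compare at each state, keep the survivor) can be sketched as a short hard-decision Viterbi decoder. As before, the (5, 7) generators are assumed since the slides' encoder taps are not reproduced in this transcription, and `viterbi_hard` is an illustrative name.

```python
def viterbi_hard(received, g1=0b101, g2=0b111):
    """Hard-decision Viterbi decoding on the 4-state trellis (sketch).

    `received` is a flat bit list, two bits per trellis step. Returns
    the most likely input sequence and its Hamming path metric."""
    parity = lambda x: bin(x).count("1") % 2
    n_states = 4
    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)      # start in the all-zero state
    paths = [[] for _ in range(n_states)]       # survivor input bits
    for i in range(0, len(received), 2):
        r1, r2 = received[i], received[i + 1]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):                    # each branch out of state s
                reg = (b << 2) | s
                o1, o2 = parity(reg & g1), parity(reg & g2)
                ns = (reg >> 1) & 0b11
                m = metric[s] + (o1 != r1) + (o2 != r2)   # add
                if m < new_metric[ns]:                     # compare, select
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best], metric[best]
```

For instance, the input 1, 0, 1, 0 encodes (under these assumed generators) to 1, 1, 0, 1, 0, 0, 0, 1; flipping any single received bit still yields the transmitted information sequence, with a final path metric of 1 (one corrected error).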

VITERBI DECODING In a packet communication system, one could append 0s at the end of the message (i.e., the input to the encoder) to force the encoder back to the all-zero state. Then we would have only one path remaining, and that would be the winning code word.

VITERBI DECODING The basic decoding module is referred to as an ACS module: Add, Compare, Select. At the decoder, one can make early decisions on the information bits by tracing back from the state with the lowest accumulated sum (the winning node) and then decoding the branch some D steps back from the start of the trace-back in the trellis.

VITERBI DECODING A great deal is known about how to choose D without losing much in performance. For a rate ½ code, a good value of D is about 4 to 5 times the number of delay elements in the encoder.

VITERBI DECODING The trace-back is complicated and may be the speed bottleneck in the decoder. One may not want to trace back at each step in the trellis but rather wait for L steps and then decode L branches at a time. Other tricks are used in designing the decoder. One is to use a circular buffer.

DECODING ON THE TRELLIS: SOFT INPUT DECODING For a soft input decoder, the inputs to the decoder are probabilities of the code symbols. In actuality we use the logarithms of the probabilities, so that for independent noise the logarithm of the probability of a sequence is the sum of the logarithms of the probabilities of the individual symbols. For an AWGN channel, this corresponds to taking the squared difference between the noiseless transmitted outputs and the received values.

SOFT INPUT DECODING Assume we transmit a 0 code symbol as a −1 and a 1 code symbol as a +1. Then we would label each branch in the trellis with its pair of transmitted values (±1, ±1).

SOFT INPUT DECODING Now assume we receive the vector 0.9, 1.1, 0.8, 0.9, 0, 0.4, −1.2, −1.3. We put the corresponding squared errors on the branches of the trellis. For example, the first branch labeled (−1, −1) gets (−1 − 0.9)² + (−1 − 1.1)² = 8.02, while the first branch labeled (+1, +1) gets (+1 − 0.9)² + (+1 − 1.1)² = 0.02.
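The squared-error labeling can be captured in a one-line branch metric. The mapping 0 → −1, 1 → +1 follows the earlier slide; the sample received values in the usage note are the ones reconstructed from this transcription, and `branch_metric` is an illustrative name.

```python
def branch_metric(received_pair, branch_bits):
    """Soft branch metric: squared Euclidean distance between the
    received pair and the branch's transmitted values (0 -> -1, 1 -> +1).

    For an AWGN channel, minimizing the sum of these metrics along a
    path is equivalent to maximizing the path's log-likelihood."""
    return sum((r - (2 * b - 1)) ** 2
               for r, b in zip(received_pair, branch_bits))
```

With the first received pair (0.9, 1.1), the branch labeled (−1, −1) scores 8.02 and the branch labeled (+1, +1) scores 0.02, matching the numbers placed on the trellis above.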

VITERBI ALGORITHM WITH SOFT INPUT DECODING Now use the Viterbi algorithm exactly as before, but with accumulated squared error as the path metric. At each state we again add, compare, and select; for example, one merge is resolved as Min[(14.87 + 2.96), (3.67 + 1.36)] = 5.03, and so on through the trellis.

DECODING ON THE TRELLIS: SOFT OUTPUT DECODING The Viterbi algorithm finds the path on the trellis that was most likely to have been transmitted. It makes hard decisions. There is another (more complicated) algorithm called the BCJR algorithm that produces soft outputs: that is, it produces the probability that each symbol is a 0 or a 1. The output of the BCJR algorithm can then be used as the soft input to another decoder. This is used in iterative decoding of Turbo Codes.

FREE HAMMING DISTANCE OF CONVOLUTIONAL CODES The error correction capability of a convolutional code is governed by the so-called free Hamming distance, d free, of the code. d free is the smallest Hamming distance between two paths in the trellis that start in a common state and end in a common state. Two such candidate paths diverge at a common state and remerge at a common state some steps later.

FREE HAMMING DISTANCE OF CONVOLUTIONAL CODES Tables giving the free distance of a large number of convolutional codes exist in many textbooks. They usually list the codes in octal form. One very popular rate ½ code with 64 states is usually listed as: 133, 171. It has d free = 10. In binary, after dropping the leading 0s, this gives the tap connections: 1011011 and 1111001.
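Because the code is linear, d free is also the minimum output weight over all trellis paths that leave the all-zero state and later return to it, which suggests a direct way to compute it: a shortest-path search with branch output weight as the edge cost. A minimal sketch using Dijkstra's algorithm (the name `free_distance` is illustrative):

```python
import heapq

def free_distance(g1, g2, mem):
    """d_free = minimum output weight of a trellis path that diverges
    from the all-zero state and first remerges with it (sketch).

    Paths that remerge more than once can never be lighter, so a
    single diverge-remerge search suffices."""
    parity = lambda x: bin(x).count("1") % 2
    dist = {}
    reg = 1 << mem                       # the path must diverge: input 1
    first_w = parity(reg & g1) + parity(reg & g2)
    pq = [(first_w, reg >> 1)]
    best = float("inf")
    while pq:
        w, s = heapq.heappop(pq)         # lightest unsettled state
        if w >= dist.get(s, float("inf")):
            continue
        dist[s] = w
        for b in (0, 1):
            reg = (b << mem) | s
            nw = w + parity(reg & g1) + parity(reg & g2)
            ns = reg >> 1
            if ns == 0:
                best = min(best, nw)     # remerged with the all-zero path
            elif nw < dist.get(ns, float("inf")):
                heapq.heappush(pq, (nw, ns))
    return best
```

This reproduces the table values: the 4-state (5, 7) code gives 5, and the 64-state (133, 171) code gives the d free = 10 quoted above.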

PUNCTURING CONVOLUTIONAL CODES Starting with a rate ½ encoder, one can construct higher-rate codes by a technique called puncturing. For example, consider the following pattern: input the first binary digit and transmit both output bits; input the next binary digit and transmit only one of the two output bits; input the next binary digit and transmit both output bits; input the next binary digit and transmit only one of the two output bits. Continue in this way, where the odd input binary digits produce two output bits but the even inputs produce only one output bit. The result is a rate 2/3 code, since we have 3 output binary digits for every two input binary digits.

PUNCTURING CONVOLUTIONAL CODES For example, in the 4-state trellis, if we omit the bottom output every other time, some branches carry only a single output bit. The X's denote the missing bits, which are ignored in decoding.

PUNCTURING CONVOLUTIONAL CODES Note that the same trellis is used in decoding the original code and the punctured code. Puncturing lowers the d free of the code. The best puncturing patterns were found by simulation. For example, for the rate ½, 64-state code we have (the pattern shown applies to the second output bit of each pair; the first output bit is always transmitted, and X marks a punctured bit):

Rate   Puncturing pattern        d free
1/2    1, 1, 1, 1, 1, 1, 1       10
2/3    1, X, 1, X, 1, X, 1       6
3/4    1, X, X, 1, X, X, 1       5
5/6    1, X, X, X, X, 1, X       4
7/8    1, X, X, X, X, X, X       3
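Puncturing and its inverse (re-inserting erasures before decoding on the unchanged rate ½ trellis) can be sketched as follows. The period-4 pattern in the usage note expresses the rate 2/3 example from the text over the serialized output stream (transmit both bits of one pair, then only the first bit of the next pair); the function names are illustrative.

```python
from itertools import cycle

def puncture(coded_bits, pattern):
    """Delete the output bits marked 0 in the repeating pattern."""
    return [bit for bit, keep in zip(coded_bits, cycle(pattern)) if keep]

def depuncture(punctured_bits, pattern, total_len):
    """Re-insert erasures (None) at the punctured positions, so the
    decoder can run on the original rate-1/2 trellis and simply skip
    the erased positions when computing branch metrics."""
    it = iter(punctured_bits)
    out = []
    for keep in cycle(pattern):
        if len(out) == total_len:
            break
        out.append(next(it) if keep else None)
    return out
```

With this pattern, 4 input bits yield 8 coded bits of which 6 are transmitted, i.e., rate 4/6 = 2/3. At the decoder, the None erasures simply contribute nothing to any branch metric, which is exactly the "ignored in decoding" rule for the X positions.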

TURBO CODES AND ITERATIVE DECODING

SYSTEMATIC CONVOLUTIONAL ENCODER A systematic convolutional encoder has one of its outputs equal to its input. Example: a 4-state, rate ½ encoder with one input and two outputs, the first output equal to the input.
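A recursive systematic encoder of this kind can be sketched as follows. The slide's exact taps are not recoverable from this transcription, so the common 4-state (1, 5/7)-in-octal constituent often used in turbo-code examples is assumed, and `rsc_encode` is an illustrative name.

```python
def rsc_encode(bits):
    """4-state recursive systematic rate-1/2 encoder (sketch, assuming
    the (1, 5/7) constituent: feedback polynomial 7, feedforward 5).

    The first output of each pair is the input bit itself (systematic);
    the second is a parity bit from a feedback shift register."""
    s1 = s0 = 0
    out = []
    for b in bits:
        a = b ^ s1 ^ s0        # feedback sum (denominator 7 = 111)
        p = a ^ s0             # parity tap   (numerator   5 = 101)
        out.extend([b, p])     # systematic bit, then parity bit
        s1, s0 = a, s1         # shift the register
    return out
```

The even-indexed output positions always echo the input, which is the defining systematic property; the feedback makes the encoder recursive, the form used in the turbo encoder below.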

PARALLEL TURBO ENCODER Example of a rate 1/3 Turbo encoder: the input feeds one systematic convolutional encoder directly and a second one through a random interleaver, giving 3 outputs (the input plus two parity streams).

PARALLEL TURBO DECODER Two soft output decoders for the systematic code exchange information through the random interleaver and deinterleaver. Many details are omitted.

TURBO CODES Codes with very few states give excellent performance if the interleaver size is large enough. Serial Turbo codes also exist and give comparable performance. Higher rate codes can be obtained by puncturing.