
1 Information Transmission Chapter 5, Block codes FREDRIK TUFVESSON ELECTRICAL AND INFORMATION TECHNOLOGY

2 Methods of channel coding For channel coding (error correction) we have two main classes of codes, namely block codes, which we first encountered when we discussed Shannon's channel coding theorem, and convolutional codes. We shall briefly discuss both classes.

3 A schematic communication system (figure)

4 The binary field For the following calculations we use the binary field, for which the rules of addition and multiplication are those of modulo-two arithmetic: 0+0=0, 0+1=1+0=1, 1+1=0, and 0·0=0·1=1·0=0, 1·1=1. Notice that since 1+1=0, subtraction is the same as addition, which is very convenient.
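To make the arithmetic concrete, here is a minimal Python sketch of the two operations (the function names are our own, not from the slides); addition amounts to exclusive-or and multiplication to logical AND:

```python
def gf2_add(a, b):
    # Addition in the binary field: 1 + 1 = 0, so this is exclusive-or
    return (a + b) % 2

def gf2_mul(a, b):
    # Multiplication is ordinary multiplication restricted to {0, 1}
    return (a * b) % 2

# Print the full addition and multiplication tables
for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", gf2_add(a, b), "   ", a, "*", b, "=", gf2_mul(a, b))
```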

5 The error pattern Suppose that the codeword v is transmitted over the binary symmetric channel and that r is the possibly erroneously received version of it. Then the error pattern e is defined to be the N-tuple that satisfies r = v + e. If we have one error, that is, e consists of one 1 and N-1 0's, then one component in v is altered. Two errors cause two altered components in v.
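As a small illustration, using the codeword and received word that appear later in the example on slide 8, the error pattern is obtained by adding r and v component-wise modulo two:

```python
import numpy as np

v = np.array([0, 1, 1, 0, 0, 1, 1])   # transmitted codeword (slide 8)
r = np.array([0, 1, 1, 0, 0, 0, 1])   # received word with one bit altered

# Since subtraction equals addition in the binary field, e = r + v (mod 2)
e = (r + v) % 2
print(e)   # [0 0 0 0 0 1 0] -> a single 1, in the sixth position
```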

6 Minimum distance The minimum distance, d_min, of a block code B is the minimum of all the Hamming distances between two non-identical codewords of the code. If the sum of any two codewords is again a codeword, then the code is said to be linear. For a linear block code the minimum distance is simply equal to the least number of 1's in a nonzero codeword. In general, a block code with minimum distance d_min will correct up to (d_min - 1)/2 errors (rounded down). Alternatively, it can be used to detect up to d_min - 1 errors.
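As a quick illustration of these bounds, here is a small Python sketch using the (3,1) repetition code {000, 111} as a stand-in example (it is not one of the codes in these slides): its minimum distance is 3, so it corrects one error and detects up to two.

```python
from itertools import combinations
import numpy as np

def hamming_distance(a, b):
    # Number of positions in which two words differ
    return int(np.sum((a + b) % 2))

def min_distance(codewords):
    # Minimum Hamming distance over all pairs of distinct codewords
    return min(hamming_distance(a, b) for a, b in combinations(codewords, 2))

def min_weight(codewords):
    # Least number of 1's in a nonzero codeword; equals d_min for linear codes
    return min(int(c.sum()) for c in codewords if c.any())

rep = [np.array([0, 0, 0]), np.array([1, 1, 1])]
d_min = min_distance(rep)
print(d_min, min_weight(rep))          # 3 3
print((d_min - 1) // 2, d_min - 1)     # corrects 1 error, detects up to 2
```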

7 The (7,4) Hamming code Hamming constructed a class of single-error-correcting linear block codes with minimum distance d_min = 3. In the table we specify an encoder mapping for the (7,4) Hamming code with M = 2^4 = 16 codewords.
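The codewords of the table can be regenerated with a short script, assuming the construction used in the decoding example below: a 7-tuple is a codeword exactly when the binary representations of its 1-positions sum to 000.

```python
import numpy as np
from itertools import product

# Parity-check matrix whose j-th column is the 3-bit binary representation
# of the position j = 1, ..., 7 (an assumption consistent with slides 9-12).
H = np.array([[(j >> b) & 1 for j in range(1, 8)] for b in (2, 1, 0)])

# The 7-tuples with zero syndrome are the codewords; there are 2^4 = 16.
codewords = [v for v in product((0, 1), repeat=7)
             if not (H @ np.array(v) % 2).any()]
print(len(codewords))                        # 16
print((0, 1, 1, 0, 0, 1, 1) in codewords)    # True: the codeword of slide 8
```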

8 Example Assume that we would like to transmit the information 4-tuple u=(1011) over a binary symmetric channel. Then we encode it using the mapping in the table and obtain the codeword v=(0110011). Let, for example, the sixth position be altered by the channel. Thus, we receive r=(0110001).

9 Example, cont. To correct the error we add position-wise modulo-two rows 2, 3, and 7 (the positions corresponding to the 1's in r) and obtain 110, that is, the binary representation of 6; we flip the sixth position in r=(0110001) and obtain the estimate of the codeword, (0110011), which corresponds to the information 4-tuple (1011).
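The decoding step can be reproduced in a few lines of Python; the table rows are taken to be the binary representations of the positions 1 through 7, an assumption consistent with this example:

```python
import numpy as np

r = np.array([0, 1, 1, 0, 0, 0, 1])          # received word from slide 8

# Add (modulo two) the binary representations of the positions of the 1's
syndrome = np.zeros(3, dtype=int)
for pos in np.flatnonzero(r) + 1:            # positions 2, 3 and 7
    syndrome = (syndrome + [(pos >> b) & 1 for b in (2, 1, 0)]) % 2

print(syndrome)                              # [1 1 0], binary for 6
error_pos = int("".join(map(str, syndrome)), 2)
if error_pos:                                # nonzero syndrome: flip that bit
    r[error_pos - 1] ^= 1
print(r)                                     # [0 1 1 0 0 1 1] = the codeword v
```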

10 How does it work? (I) Why does our scheme work? We can write the received 7-tuple as the sum of the codeword and the error pattern, that is, r = v + e. Remember that 1+1=0! Due to this simple equality we can obtain the sum of the rows corresponding to the 1's in r by adding component-wise the sums of the rows corresponding to the 1's in v and in e.

11 How does it work? (II) Now we exploit the fact that the mapping in the table is constructed such that the sum of the rows corresponding to the 1's in any codeword is 000. Hence, we conclude that the sum of the rows corresponding to the 1's in r (this is the sum that the decoder computes) is equal to the sum of the rows corresponding to the 1's in e.

12 How does it work? (III) Now assume at most one error during the transmission. In case of no errors the sum contains no rows, which we interpret as 000, and we accept r as our estimate. In case of one error the sum contains exactly one row, namely the row which is the binary representation of the position of the 1 in e. Hence, we flip that position in r and obtain our estimated codeword.
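This linearity argument can be checked numerically: the syndrome computed from r equals the syndrome of e alone, because the codeword contributes 000 (again assuming the parity-check matrix whose columns are the binary representations of the positions):

```python
import numpy as np

H = np.array([[(j >> b) & 1 for j in range(1, 8)] for b in (2, 1, 0)])

v = np.array([0, 1, 1, 0, 0, 1, 1])   # codeword (slide 8)
e = np.array([0, 0, 0, 0, 0, 1, 0])   # single error in position 6
r = (v + e) % 2                       # received word

print(H @ v % 2)   # [0 0 0] -- a codeword sums to the all-zero tuple
print(H @ r % 2)   # [1 1 0] -- the same as the syndrome of e ...
print(H @ e % 2)   # [1 1 0] -- ... which is the binary representation of 6
```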

13 The generator matrix How do we obtain the remarkable encoder mapping? Since the Hamming code is linear, the codewords corresponding to the information 4-tuples 1000, 0100, 0010, 0001 are of particular interest; these codewords form the rows of a so-called generator matrix G for the (7,4) Hamming code (written out in the sketch below).

14 Codeword generation All codewords can be obtained as the product of the corresponding information 4-tuple and the generator matrix, v = uG. For example, the codeword corresponding to u=(1011) is obtained as the position-wise modulo-two sum of the first, third, and fourth rows of G, that is, v=(0110011), in agreement with the mapping.
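As a sketch, a generator matrix consistent with this mapping can be written down explicitly, under the assumption (matching the decoding example) that the information bits occupy positions 3, 5, 6 and 7 of the codeword; the product uG then reproduces the codeword from slide 8:

```python
import numpy as np

# Rows are the codewords of the information 4-tuples 1000, 0100, 0010, 0001,
# reconstructed under the assumption stated above.
G = np.array([[1, 1, 1, 0, 0, 0, 0],
              [1, 0, 0, 1, 1, 0, 0],
              [0, 1, 0, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

u = np.array([1, 0, 1, 1])
v = u @ G % 2                 # v = uG over the binary field
print(v)                      # [0 1 1 0 0 1 1], i.e. v = (0110011)
```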

15 Generation of the parity-check matrix Assume that we have a K x N generator matrix G. Then, by the theory of matrices, there exists an (N-K) x N matrix H such that G H^T = 0. It follows immediately that v H^T = u G H^T = 0, that is, we have the fundamental result v H^T = 0 for every codeword v, where H is the so-called parity-check matrix.

16 The parity-check matrix In words: let v be a codeword; if we add (position-wise modulo-two) the rows of H^T corresponding to the 1's in v, we obtain the all-zero (N-K)-tuple. This computation is a parity-checking procedure, and thus we call the matrix H a parity-check matrix of our code.

17 The generator matrix and the parity-check matrix It is easily verified that G H^T = 0 holds for the (7,4) Hamming code. Using linear algebra we can obtain the generator matrix G from a given parity-check matrix H, and vice versa.
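The relation can be checked numerically with the reconstructed G from the sketch above and the H whose columns are the binary representations of the positions (both are assumptions of these sketches, not matrices quoted from the slides):

```python
import numpy as np

G = np.array([[1, 1, 1, 0, 0, 0, 0],
              [1, 0, 0, 1, 1, 0, 0],
              [0, 1, 0, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])
H = np.array([[(j >> b) & 1 for j in range(1, 8)] for b in (2, 1, 0)])

print(G @ H.T % 2)    # the 4 x 3 all-zero matrix, i.e. G H^T = 0
```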
