Hamming Codes and Decoding Methods


Animesh Ramesh¹, Raghunath Tewari²
¹ Fourth-year student of Computer Science, Indian Institute of Technology Kanpur
² Faculty of Computer Science, advisor to the UGP, Indian Institute of Technology Kanpur

April 26, 2018

Table of Contents

- Abstract
- Background
- Basic concepts
- Hamming code with standard decoding
- Error-reduction limits of standard decoding
- A lower bound for the [7,4,3]-Hamming code with standard decoding
- Extension to general Hamming codes
- Table
- Other decoding methods
- Future work
- References

Abstract

Hamming codes are the first nontrivial family of error-correcting codes. In this survey we look into the notion of error reduction and present several decoding methods with the goal of improving the error-reducing capabilities of Hamming codes. First, the error-reducing properties of Hamming codes with standard decoding are explored. We show a lower bound on the average number of errors present in a decoded message when two errors are introduced by the channel, for general Hamming codes. Finally, other decoding algorithms are investigated experimentally, and these algorithms are found to improve the error-reducing capabilities of Hamming codes beyond the aforementioned lower bound of standard decoding.

Background

The messages sent from the sender to the receiver are encoded in blocks of bits, called codewords. A Hamming code has a generator matrix that encodes the message (a binary vector) at the transmitting side of a communication channel: the message vector is multiplied by the generator matrix to form a codeword. A Hamming code also has a parity-check matrix, which is a generator matrix of the null space of the code and helps decode the message at the receiver: multiplying the parity-check matrix by a received word gives 0 if the word is a codeword; a non-zero result indicates an error.
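As a concrete illustration, here is a minimal sketch (mine, not taken from the slides) of encoding and parity checking for the [7,4,3] code. The systematic matrices G = [I_4 | P] and H = [P^T | I_3] below are one common choice; the slides do not fix a particular generator matrix.

```python
import numpy as np

# One choice of systematic generator and parity-check matrices for the [7,4,3] code.
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])    # 4 x 7 generator matrix [I_4 | P]
H = np.hstack([P.T, np.eye(3, dtype=int)])  # 3 x 7 parity-check matrix [P^T | I_3]

x = np.array([1, 0, 1, 1])                  # message vector in F_2^4
c = (G.T @ x) % 2                           # codeword of length 7 (the slides' G^T x convention)

print((H @ c) % 2)                          # all zeros: c passes the parity check

e = np.zeros(7, dtype=int); e[5] = 1        # a single-bit error in position 5
print((H @ ((c + e) % 2)) % 2)              # non-zero: equals column 5 of H, flagging an error
```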

Background

A code has a limit on the number of errors that it is capable of correcting, given by ⌊(d − 1)/2⌋, where d is the minimum pairwise Hamming distance between words of the code.

Main motivation: when the aforementioned limit is exceeded, attempting to apply error correction to the erroneous vector produces essentially random output. If we can somehow reduce the number of errors before correction is applied, error correction becomes feasible even when a large number of errors is present. This motivates the exploration and construction of new models that attempt to reduce the number of errors in the received vector upon decoding.

Basic concepts

Let x ∈ F^n, a vector over a field F.

Hamming weight: The Hamming weight of x, w(x), is defined as the number of non-zero entries of x. For the case of binary vectors (F = F_2), this is equivalent to the number of 1s in the vector.

Hamming distance: The Hamming distance between two words x, y ∈ F_2^n, d(x, y), is the number of coordinates in which the two words differ.
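A small illustration (an assumed helper, not from the slides) of the two definitions for binary vectors given as Python lists of 0s and 1s:

```python
def hamming_weight(x):
    """Number of non-zero entries of x."""
    return sum(1 for xi in x if xi != 0)

def hamming_distance(x, y):
    """Number of coordinates in which x and y differ (x and y of equal length)."""
    return sum(1 for xi, yi in zip(x, y) if xi != yi)

print(hamming_weight([1, 0, 1, 1, 0, 0, 1]))          # 4
print(hamming_distance([1, 0, 1, 1], [1, 1, 1, 0]))   # 2
```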

Basic concepts

Let C denote the set of codewords obtained from encoding the set of 2^k binary message vectors of length k (i.e., F_2^k).

Block code: A code is referred to as a block code if the messages are encoded in blocks of a given length (i.e., C ⊆ F_2^n for some n).

Linear block code: A linear block code is a block code with the property that any F_2-linear combination of codewords in C is also a codeword.

Basic concepts

Hamming code: Let M = F_2^k be the set of binary message vectors of length k = 2^m − m − 1, where m ≥ 3 is an integer. An [n, k, d]-Hamming code is a linear block code that maps each message in M to a unique codeword of length n = 2^m − 1. Furthermore, any two codewords are at Hamming distance at least d = 3.

The [7,4,3]-Hamming code is the first Hamming code, obtained for m = 3.
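For reference, a one-loop sketch of the parameters this definition yields for the first few values of m:

```python
# n = 2^m - 1 and k = 2^m - m - 1 for m = 3, 4, 5, 6.
for m in range(3, 7):
    n, k = 2**m - 1, 2**m - m - 1
    print(f"m = {m}: [{n},{k},3]-Hamming code")   # [7,4,3], [15,11,3], [31,26,3], [63,57,3]
```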

Hamming code with standard decoding

A Hamming code can correct one error by adding m parity bits (m a positive integer) to a binary message vector of length 2^m − m − 1, producing a codeword of length 2^m − 1. When multiple errors are introduced into a codeword, there is no guarantee of correct recovery of the message.

Standard decoding: A message x is encoded as the codeword y = G^T x. Now suppose that an error represented by a vector e is added to the codeword y. The standard decoding process computes the syndrome H(y + e). This column vector matches one column of the parity-check matrix H, and the corresponding bit of the received word is flipped. For a single error, the resulting word matches the codeword corresponding to the transmitted message, so the error has been corrected.
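The sketch below implements standard (syndrome) decoding for the [7,4,3] code. The systematic matrices and helper names are illustrative assumptions of mine, not taken from the slides.

```python
import numpy as np

# Illustrative systematic [7,4,3] matrices (same assumed choice as in the earlier sketch).
P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

def standard_decode(y):
    """Flip the single bit whose column of H matches the syndrome, then return the
    message part of the corrected word (first 4 bits, since the encoding is systematic)."""
    s = (H @ y) % 2
    if s.any():
        for j in range(H.shape[1]):
            if np.array_equal(H[:, j], s):
                y = y.copy()
                y[j] ^= 1
                break
    return y[:4]

x = np.array([1, 0, 1, 1])
y = (G.T @ x) % 2
y[2] ^= 1                        # the channel introduces a single error
print(standard_decode(y))        # recovers [1 0 1 1]
```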

Error-reduction limits of standard decoding

Lemma 1: Suppose one or more errors are introduced into a codeword of a Hamming code of any order with standard decoding. Let q be the column of the parity-check matrix that is determined to be erroneous (i.e., q is the product of the parity-check matrix and the erroneous codeword). Then q is independent of the initial message to be sent.

Proposition 2: The number of errors in the decoded message (under standard decoding) is independent of the transmitted message.
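The computation behind Lemma 1 is a one-line syndrome calculation; in the slides' notation, with y = G^T x the transmitted codeword and e the error vector,

```latex
H(y + e) \;=\; Hy + He \;=\; H G^{T} x + He \;=\; 0 + He \;=\; He \pmod{2},
```

since every codeword lies in the null space of H. The labeled column therefore depends only on e and H, never on the message x.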

Error-reduction limits of standard decoding

Thus the column labeled as erroneous depends only on the parity-check matrix and the error vector; consequently, the design of the generator matrix is what ultimately influences the reduction in errors.

A lower bound for the [7,4,3]-Hamming code with standard decoding

In order to prove that 12/7 ≈ 1.7143 is a lower bound for the average number of errors found in the set of decoded messages, we make use of the following lemmas.

Lemma 3: Consider an [n = 2^m − 1, k = 2^m − m − 1, 3]-Hamming code with standard decoding. If the received vector y has two errors present, then the index of the column labeled as erroneous by multiplying the parity-check matrix with y always corresponds to a 0 in the error vector.

A lower bound for the [7,4,3]-Hamming code with standard decoding

Lemma 4: Let E be the set of all binary vectors with exactly two ones. Suppose that a single 0 in every member of E is replaced with a 1 to obtain a set E′, and that E′ has minimum size. Then |E′| = |E|/3.

Lemma 5: If a message with Hamming weight 2 is to be mapped to a codeword with Hamming weight 3, then the generator matrix used for the encoding must contain at least one row r with w(r) ≥ 4.

A lower bound for the [7,4,3]-Hamming code with standard decoding

Lemma 6: Consider an [n, k, 3]-Hamming code. Let t be the number of rows in the generator matrix with Hamming weight 3. If all other rows have Hamming weight 4, then the maximum number of messages with Hamming weight 2 that can be mapped to codewords of Hamming weight 3 is (k − t)·t.

Theorem 7: Consider a [7,4,3]-Hamming code C and let E be the set of all distinct error vectors of length 7 and weight 2. Let t be the average number of errors found after standard decoding in the decoded message at the receiver, taken over all possible modulo-2 sums of each member of E with each member of C. If the Hamming code is designed to minimize t under the standard decoder, then t = 12/7.
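The Theorem 7 value can be checked by brute force. The sketch below (an illustrative experiment of mine, not the authors' code) averages the number of post-decoding message errors over all 16 messages and all 21 weight-2 error patterns; with the systematic generator matrix used in the earlier sketches the average comes out to 12/7 ≈ 1.7143, while other generator matrices may give a larger value.

```python
import itertools
import numpy as np

P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

def standard_decode(y):
    """Syndrome decoding; returns the message part (systematic encoding)."""
    s = (H @ y) % 2
    if s.any():
        j = next(j for j in range(7) if np.array_equal(H[:, j], s))
        y = y.copy(); y[j] ^= 1
    return y[:4]

total, count = 0, 0
for msg in itertools.product([0, 1], repeat=4):          # all 16 messages
    x = np.array(msg)
    c = (G.T @ x) % 2
    for i, j in itertools.combinations(range(7), 2):     # all 21 double-error patterns
        y = c.copy(); y[i] ^= 1; y[j] ^= 1
        total += int(np.sum(standard_decode(y) != x))
        count += 1

print(total / count)   # 1.7142857... = 12/7 for this particular G
```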

Extension to general Hamming codes

Lemma 8: For every m ≥ 4, there is an l ∈ Z, 0 ≤ l ≤ k/2, such that k − l + l(k − l) ≥ (1/3)·C(n, 2), where C(n, 2) = n(n − 1)/2 is the binomial coefficient. Recall that k = 2^m − m − 1 and n = 2^m − 1.

Theorem 9: Consider an [n = 2^m − 1, k = 2^m − m − 1, d = 3]-Hamming code C for m ≥ 4, and let E = {e ∈ F_2^n : w(e) = 2}. Find the minimum l ∈ Z, 0 ≤ l ≤ k/2, such that k − l + l(k − l) ≥ (1/3)·C(n, 2). Then a lower bound for the average (over all codewords in C and all errors in E) number of errors in a message after standard decoding is 2 − (k − l)/((1/3)·C(n, 2)).
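A short sketch of the Theorem 9 computation as reconstructed above: find the minimum l satisfying the inequality and evaluate the bound for the first few orders.

```python
from math import comb

for m in range(4, 8):
    n, k = 2**m - 1, 2**m - m - 1
    third = comb(n, 2) / 3                     # (1/3) * C(n, 2)
    l = next(l for l in range(k // 2 + 1)      # minimum l with k - l + l(k - l) >= (1/3) C(n, 2)
             if k - l + l * (k - l) >= third)
    print(f"m = {m}: [{n},{k},3], l = {l}, lower bound = {2 - (k - l) / third:.4f}")
```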

Table

Figure: Results for the [7,4,3]-Hamming code with different decoding methods, taken from [RA, 2016].

Other decoding methods

For all of these algorithms, the encoding procedure is unchanged. In all of the decoders below, the first step is to determine all codewords that are within a Hamming distance of the received vector equal to the number of errors introduced. The messages corresponding to these codewords are collected into a list L.

Minimum of sums decoding: For every message x, the sum of the Hamming distances between x and all y ∈ L is computed. The decoded message is the message x that minimizes this sum. As the results show, this decoding method provides a slight improvement over standard decoding, albeit at an increased computational cost. It should be noted that this was the only tested method whose results were found to be independent of the transmitted codeword in this specific experiment for the [7,4,3] code.
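An illustrative sketch of the minimum-of-sums rule for the [7,4,3] code. It assumes the decoder is told the number of introduced errors t and reuses the assumed systematic generator matrix from the earlier sketches; neither assumption comes from the slides.

```python
import itertools
import numpy as np

P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])

def hamming_distance(a, b):
    return int(np.sum(np.asarray(a) != np.asarray(b)))

def candidate_messages(y, t):
    """Messages whose codewords lie within Hamming distance t of the received vector y."""
    return [np.array(m) for m in itertools.product([0, 1], repeat=4)
            if hamming_distance((G.T @ np.array(m)) % 2, y) <= t]

def min_of_sums_decode(y, t):
    """Return the message minimizing the sum of Hamming distances to all members of L."""
    L = candidate_messages(y, t)
    msgs = [np.array(m) for m in itertools.product([0, 1], repeat=4)]
    return min(msgs, key=lambda x: sum(hamming_distance(x, m) for m in L))

# Example: encode a message, flip two bits, then decode with t = 2.
x = np.array([1, 0, 1, 1])
y = (G.T @ x) % 2
y[0] ^= 1; y[4] ^= 1
print(min_of_sums_decode(y, 2))
```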

Other decoding methods

Minimum of maximums decoding: The minimum of maximums decoding algorithm finds all Hamming distances between each message and every member of L. Then, for every message x, the maximum distance between x and any member of L is recorded in a list. The message corresponding to the minimum entry of this list is chosen as the decoded message. Although this algorithm improved on previous results in the cases where three or four errors were introduced, the number of errors increased when two errors were present.
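A compact, self-contained sketch of the minimum-of-maximums rule under the same illustrative assumptions (known error count t, assumed systematic [7,4,3] generator matrix):

```python
import itertools
import numpy as np

P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])

def min_of_max_decode(y, t):
    msgs = [np.array(m) for m in itertools.product([0, 1], repeat=4)]
    # L: messages whose codewords lie within distance t of the received vector y.
    L = [x for x in msgs if np.sum(((G.T @ x) % 2) != y) <= t]
    # Score each message by its largest distance to any member of L; pick the smallest score.
    return min(msgs, key=lambda x: max(int(np.sum(x != m)) for m in L))
```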

Other decoding methods

Majority bit decoding: The majority bit decoding algorithm examines each bit position across all messages in L. Let L = {y_1, y_2, ..., y_l} with l = |L|, and let y_i[j] denote coordinate j of the message y_i (the messages in L have length k). For each j ∈ {1, ..., k}, if Σ_{i=1}^{l} y_i[j] > l/2, then entry j of the decoded message is 1; otherwise it is 0. This algorithm gave the best reduction for two errors, but the reduction is not uniformly distributed across messages. The results of all of the above algorithms for the [7,4,3]-Hamming code are shown in the Table.
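A self-contained sketch of the majority-bit rule, under the same illustrative assumptions (assumed systematic [7,4,3] matrices, known error count t):

```python
import itertools
import numpy as np

P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])

def majority_bit_decode(y, t):
    # L: messages whose codewords lie within distance t of the received vector y.
    L = [np.array(m) for m in itertools.product([0, 1], repeat=4)
         if np.sum(((G.T @ np.array(m)) % 2) != y) <= t]
    ones_per_coordinate = np.sum(L, axis=0)        # how many members of L have a 1 in each slot
    return (ones_per_coordinate > len(L) / 2).astype(int)
```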

Future work

- Extend the bound presented in Theorem 9 to an arbitrary number of errors.
- Explore other decoding methods that provide a greater level of error reduction with low complexity.
- Determine the best possible reduction that can be achieved, since no lower bound is known in general.

References

- William Rurik and Arya Mazumdar (2016). Hamming Codes as Error-Reducing Codes.
- Swastik Kopparty and Shubhangi Saraf (2012). Local Testing and Decoding of High-Rate Error-Correcting Codes.
- R. Roth (2006). Introduction to Coding Theory. Cambridge University Press.
- https://whatis.techtarget.com/definition/hamming-code
- Wikipedia

The End