Exercises to Chapter 2 solutions

E2.1 The Manchester code was first used in the Manchester Mark 1 computer at the University of Manchester in 1949 and is still used in low-speed data transfer (e.g. a TV remote sending signals via infrared). This binary code consists of two codewords: 10 and 01. The codeword 10 is interpreted by the recipient as the message 0, and 01 is understood to mean 1, whereas a received word 00 or 11 indicates a detected error. The following error-free fragment of a bit stream encoded by the Manchester code (that is, the stream is a sequence of codewords) was intercepted:

    ... 010101x01011010 ...

What was the bit x?

Answer to E2.1 In ... 010101x01011010 ..., notice that 11 cannot be a codeword. Therefore, the bit stream can be split into codewords in only one way:

    ... 0  10  10  1x  01  01  10  10 ...

The codeword 1x must be 10, so x = 0.
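As an aside, here is a small Python sketch (not part of the original exercise; the function name manchester_decode is invented for illustration) of the decoding rule used above: split the stream into pairs, map 10 to 0 and 01 to 1, and flag 00 or 11 as a detected error.

    # Decode a Manchester-coded bit stream: 10 -> 0, 01 -> 1, 00/11 -> error.
    def manchester_decode(stream: str) -> list:
        symbols = []
        for i in range(0, len(stream) - 1, 2):
            pair = stream[i:i + 2]
            if pair == "10":
                symbols.append(0)
            elif pair == "01":
                symbols.append(1)
            else:  # 00 or 11 cannot be codewords, so an error is detected
                symbols.append("error")
        return symbols

    # The intercepted fragment with x = 0; the leading '0' is the tail of a
    # previous codeword, so decoding starts from the second bit.
    fragment = "010101" + "0" + "01011010"
    print(manchester_decode(fragment[1:]))  # [0, 0, 0, 1, 1, 0, 0] -- no errors

Trying x = 1 instead makes the third pair 11, a detected error, which confirms x = 0.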

E2.2 Consider the alphabet Z_10 = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}. The Luhn checksum of a word x_1 x_2 ... x_16 in (Z_10)^16 is

    π(x_1) + x_2 + π(x_3) + x_4 + π(x_5) + ... + π(x_15) + x_16  mod 10,

viewed as an element of Z_10. Here π : Z_10 → Z_10 is defined by the rule: π(a) is the sum of the digits of 2a. The Luhn code consists of all words in (Z_10)^16 whose Luhn checksum is 0.

(i) Write down all values of π and check that π is a permutation of the alphabet Z_10.
(ii) Find the total number of codewords of the Luhn code.
(iii) Prove that a single digit error is detected by the Luhn code.
(iv) Look at your 16-digit debit/credit card numbers. Are they codewords of the Luhn code? If you have a card with a number which is not a codeword of the Luhn code, can you bring it to the tutorial? Thanks!

Answer to E2.2 (i) π is the following permutation of Z_10:

    ( 0 1 2 3 4 5 6 7 8 9 )
    ( 0 2 4 6 8 1 3 5 7 9 )

(ii) Every sequence of 15 digits is the beginning of exactly one Luhn codeword. Indeed, let x_1, ..., x_15 in Z_10 be arbitrary and calculate

    z = π(x_1) + x_2 + π(x_3) + x_4 + ... + π(x_15).

Then the one and only Luhn codeword of the form x_1 x_2 ... x_15 x_16 is determined by z + x_16 ≡ 0 mod 10, which is the same as x_16 ≡ -z mod 10. Therefore, the number of Luhn codewords is equal to the number of sequences of 15 digits, that is, 10^15.

(iii) If x_i is replaced by y_i, then the Luhn checksum changes by π(y_i) - π(x_i) mod 10 (if i is odd) or by y_i - x_i mod 10 (if i is even). In either case, if y_i ≠ x_i, the change is non-zero mod 10 (for the odd positions this uses that π is a permutation), hence altering a single digit changes the Luhn checksum. A codeword has Luhn checksum 0, hence changing a single digit in a codeword gives a word with non-zero Luhn checksum, i.e. not a codeword.
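The checksum is easy to experiment with; the following Python sketch (illustrative only, with made-up names such as luhn_checksum) tabulates π and completes an arbitrary 15-digit prefix to a Luhn codeword as in part (ii).

    # pi(a) is the sum of the digits of 2a; for a in 0..9 this permutes Z_10.
    def pi(a: int) -> int:
        return sum(int(d) for d in str(2 * a))

    # Luhn checksum of a 16-digit word x_1 ... x_16 (positions are 1-based).
    def luhn_checksum(word) -> int:
        return sum(pi(x) if i % 2 == 1 else x
                   for i, x in enumerate(word, start=1)) % 10

    print([pi(a) for a in range(10)])  # [0, 2, 4, 6, 8, 1, 3, 5, 7, 9], as in (i)

    # Part (ii): any 15-digit prefix extends to exactly one codeword.
    prefix = [4, 5, 3, 9, 1, 8, 8, 3, 0, 7, 6, 2, 5, 1, 9]  # arbitrary example
    z = sum(pi(x) if i % 2 == 1 else x for i, x in enumerate(prefix, start=1))
    x16 = (-z) % 10
    print(luhn_checksum(prefix + [x16]))  # 0, so the completed word is a codeword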

E2.3 (loosely based on question A4 from the January 2013 exam). Alice transmitted the same binary word of length 6 to Bob three times, but Bob received three different words: 101010, 011100, 110001. Engineer Clara told Bob that at most two bit errors could have occurred in each word during transmission. Help Bob to recover the word transmitted by Alice.

Answer to E2.3 Let z be the word sent by Alice. We are given that each of the words v_1 = 101010, v_2 = 011100, v_3 = 110001 received by Bob contains at most two errors. To work out z, let us see where the errors in v_1, v_2 and v_3 could have occurred. Write v_1, v_2 and v_3 as the rows of a matrix:

    1 0 1 0 1 0
    0 1 1 1 0 0
    1 1 0 0 0 1

Note that each column of the matrix contains both 0s and 1s, which cannot both be correct. Hence each column contains at least one error, so there are at least 6 errors in the whole matrix. On the other hand, we are given that there were at most 2 + 2 + 2 = 6 errors. Thus there are exactly 6 errors in the matrix, exactly one per column, and so in every column the majority bit must be correct. Taking the majority bit of each column gives

    z = 1 1 1 0 0 0.
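The column-wise majority vote is easy to mechanise; here is a short Python sketch (illustrative only, the function name majority_decode is made up).

    # Recover z by taking the majority bit in each column of the received words.
    def majority_decode(words):
        length = len(words[0])
        return "".join(
            "1" if [w[j] for w in words].count("1") > len(words) // 2 else "0"
            for j in range(length)
        )

    received = ["101010", "011100", "110001"]
    print(majority_decode(received))  # '111000', the word sent by Alice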

E2.4 Recall that the binary alphabet is F_2 = {0, 1}. Let the code Σ in F_2^7 consist of the following eight words:

    0000000, 1001110, 0100111, 1010011, 1101001, 1110100, 0111010, 0011101.

Find the Hamming distances between all pairs of codewords. Suppose that a codeword of Σ is transmitted and one, two or three bit errors occur; show that the received word is not a codeword (i.e. the error is detected). What is DECODE(0001110)? Give an example of a word which has more than one nearest neighbour in Σ. Try to see if there are words with three, four etc. nearest neighbours. Try to write down a word with the maximum possible number of nearest neighbours in Σ.

Answer to E2.4 The Hamming distances between all pairs of distinct codewords can be found directly by looking at the 28 possible pairs. (The number of pairs of distinct codewords is M(M - 1)/2.) This can be optimised if one notices that the non-zero codewords are exactly the cyclic shifts of 1001110. Because applying a cyclic shift to both v and w does not change the Hamming distance between v and w, it is enough to find the distance d(0000000, 1001110) and the distances from 1001110 to all the other codewords. All distances turn out to be 4. Hence d(Σ) = 4.

A 3-dimensional Euclidean analogue of this code is a set of four points with all pairwise distances equal to 4; such points are the vertices of a regular tetrahedron. In n-dimensional Euclidean space one can construct n + 1 points (and no more) with equal pairwise distances; they are the vertices of what is called an n-dimensional simplex. For this reason the code Σ, which has the rare property that all pairwise distances between codewords are the same, is called a simplex code.

Since d(Σ) = 4, the code detects up to 4 - 1 = 3 errors and corrects up to [(4 - 1)/2] = 1 error.

The nearest neighbour of 0001110 in Σ is 1001110 (by inspection), at distance 1. Hence DECODE(0001110) = 1001110.

Here is an example of a word with seven nearest neighbours in Σ: 1111111. All the non-zero codewords of Σ are at distance 3 from this word. It looks as if there is no word equidistant from all eight codewords; check this!

Remark More efficient ways of calculating the parameters of Σ (and other simplex codes) will arise when we learn how to deal with linear codes, Hamming codes and cyclic codes. To give you a hint: in Σ, the difference of two codewords (viewed as vectors in F_2^7) is again a codeword, so d(Σ) equals the minimum weight of a non-zero codeword, which is 4.
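A brute-force check of the distances (and of the nearest-neighbour claims above) takes only a few lines of Python; this is an illustration only, and the names sigma and hamming are invented.

    from itertools import combinations

    sigma = ["0000000", "1001110", "0100111", "1010011",
             "1101001", "1110100", "0111010", "0011101"]

    # Hamming distance: the number of positions in which v and w differ.
    def hamming(v, w):
        return sum(a != b for a, b in zip(v, w))

    print({hamming(v, w) for v, w in combinations(sigma, 2)})  # {4}, so d(Sigma) = 4

    # Nearest-neighbour decoding of 0001110, and distances from 1111111:
    print(min(sigma, key=lambda c: hamming("0001110", c)))   # '1001110'
    print(sorted(hamming("1111111", c) for c in sigma))       # [3, 3, 3, 3, 3, 3, 3, 7]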

E2.5 Look again at the Manchester code, the Luhn code and the code Σ introduced in the previous exercises. For each of these codes, determine the parameters [n, k, d]_q of the code; state how many errors the code can detect and how many errors it can correct.

Answer to E2.5 The Manchester code: n = 2, q = 2, M = 2 so k = 1; d = 2 by inspection. A [2, 1, 2]_2-code; can detect up to 1 bit error, does not correct errors.

The Luhn code: n = 16, q = 10, M = 10^15 so k = 15; d = 2. A [16, 15, 2]_10-code; can detect up to 1 symbol error, does not correct errors.

The code Σ: n = 7, q = 2, M = 8 so k = 3; d = 4 by inspection. A [7, 3, 4]_2-code; can detect up to 3 bit errors and can correct a single bit error.

E2.6 (a) Let C be a q-ary code of length n. Denote M = |C|. Show that n ≥ log_q M.

(b) Assume that the cost of transmitting one symbol via a q-ary channel is cq. (Imagine a q-ary channel as a cable with q wires; the costs of building and maintaining it would be roughly proportional to q.) Suppose that you need to design a code with M ≫ 0 codewords (that is, M is a very large number), and you have control over the length n and the size q of the alphabet. Which q will ensure the lowest transmission cost per codeword? In particular, are binary channels (the type most widely used in today's computer networks) the most economical?

Answer to E2.6 (a) A code of length n in an alphabet of size q has at most q^n codewords (an elementary counting argument), so q^n ≥ M, hence the result.

(b) The cost of transmitting one codeword is cqn. By part (a), n ≥ log_q M, so this cost is bounded from below by

    cq log_q M = K (q / ln q),   where the constant K is c ln M.

The function f(x) = x / ln x decreases on (0, e) and increases on (e, ∞) (check this by differentiation or otherwise), so f(q) > f(3) for all integers q > 3. Hence the only candidates for the minimum are q = 2 and q = 3. Calculating f(2) ≈ 2.89 and f(3) ≈ 2.73, we conclude that, if we accept the (somewhat arbitrary) assumptions in the problem, ternary codes are the most economical.
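A quick numerical check of part (b), as a Python sketch (illustrative only): tabulate f(q) = q / ln q for small integer alphabet sizes.

    from math import log

    # Cost per codeword is proportional to f(q) = q / ln(q).
    def f(q):
        return q / log(q)

    for q in range(2, 6):
        print(q, round(f(q), 2))
    # 2 2.89
    # 3 2.73   <- the minimum over integers q >= 2, so ternary is cheapest
    # 4 2.89
    # 5 3.11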