
Alphabets (EE 387, Notes 2, Handout #3)

Definition: An alphabet is a discrete (usually finite) set of symbols. Examples:

  B = {0, 1} is the binary alphabet
  T = {-1, 0, +1} is the ternary alphabet
  X = {00, 01, ..., FF} is the alphabet of 8-bit symbols (used in codes for compact discs, DVDs, and most hard disk drives)

A channel alphabet symbol may be an indivisible transmission unit, e.g., one point from a signal constellation, or a sequence of modulation symbols encoded into a coding alphabet symbol.

The alphabets encountered in EE 387 usually have 2^m symbols. The ternary alphabet is used by alternate mark inversion modulation: successive ones in the data are represented by alternating ±1.

EE 387, September 23, 2015 Notes 2, Page 1
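To make the AMI example concrete, here is a minimal Python sketch (not part of the notes) of an AMI encoder over T = {-1, 0, +1}; the function name and the choice of +1 for the first mark are illustrative conventions.

```python
def ami_encode(bits):
    """Alternate mark inversion: 0 -> 0, successive 1s alternate +1/-1."""
    level = +1          # polarity of the next mark; the starting sign is a convention
    out = []
    for b in bits:
        if b == 0:
            out.append(0)
        else:
            out.append(level)
            level = -level  # flip polarity for the next 1
    return out
```

Because the marks alternate, any even run of ones sums to zero, which is what makes AMI signals DC balanced.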

Block codes: definition

The channel alphabet is the set of output symbols of the channel encoder, which is the same as the set of input symbols to the channel (modulator). The senseword alphabet is the set of output symbols from the channel demodulator, i.e., the input to the channel decoder. The senseword alphabet may be larger than the channel alphabet, e.g., when the received symbols represent soft information. Examples:

  Binary erasure channel: input alphabet {0, 1}, output alphabet {0, ?, 1}.
  Some disk drive read channel circuits quantize the input signal to 6 bits; the senseword alphabet has 64 symbols.

Definition: A block code of blocklength n over an alphabet X is a nonempty set of n-tuples of symbols from X:

  C = {(c_11, ..., c_1n), ..., (c_M1, ..., c_Mn)}

The n-tuples of the code are called codewords. We will think of codewords as vectors whose components are symbols in X.

Block codes: rate

Suppose that the channel alphabet has Q symbols. The rate of a block code of blocklength n with M codewords is defined to be

  R = (1/n) log_Q M.

Codewords of length n are usually generated by encoding k information (data, message) symbols using an invertible encoding function. In this case, the number of codewords is M = Q^k, so the rate of the code is

  R = (1/n) log_Q Q^k = k/n.

Such a code with blocklength n and rate k/n is called an (n,k) code. The rate is a dimensionless fraction (symbols per symbol). It is the fraction of transmitted symbols that carry information.
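The rate formula R = (1/n) log_Q M translates directly into a few lines of Python; this helper (an illustrative sketch, not from the notes) is useful for checking the examples on the next slide.

```python
from math import log

def code_rate(n, M, Q=2):
    """Rate R = (1/n) * log_Q(M) of a blocklength-n code with M codewords."""
    return log(M, Q) / n
```

For an invertible encoder with M = Q^k, this reduces to k/n, as in the derivation above.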

Block codes: very simple examples

C = {00010110} = {SYN}
Blocklength n = 8, M = 1, rate R = (1/8) log_2 1 = 0. Codes with rate 0 are called useless. This code could be used for error rate analysis or byte synchronization.

C = {00, 01, 10, 11}
Blocklength n = 2, M = 4, rate R = (1/2) log_2 4 = 1. This code has no redundancy, so it can neither correct nor detect errors.

C = {001, 010, 100}
Blocklength n = 3, M = 3, rate R = (1/3) log_2 3 ≈ 0.528. This code might be used over a channel that drops bits (1→0 may occur but not 0→1), since any dropped 1 can be detected. C = {011, 101, 110} is a better code for this channel. Why?
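The claim that any dropped 1 is detectable can be checked exhaustively: a single 1→0 substitution must never turn one codeword into another. The Python sketch below (illustrative; the function name is made up here) verifies this for both codes on this slide.

```python
def detects_all_single_drops(code):
    """True if no single 1 -> 0 substitution maps a codeword to a codeword,
    so every single dropped 1 is detectable."""
    codewords = set(code)
    for c in code:
        for i, bit in enumerate(c):
            if bit == '1':
                dropped = c[:i] + '0' + c[i + 1:]
                if dropped in codewords:
                    return False
    return True
```

Both {001, 010, 100} and {011, 101, 110} pass this check; the slide's "Why?" about which is better is left to the reader.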

Block codes: more interesting examples

Parity SIMMs have rate 8/9 and blocklength 9 or 36. They can detect one error per 8-bit byte.

ECC DIMMs have blocklength 72 and rate 8/9. They can correct one error and detect two errors in 72 bits.

Ethernet packet sizes range from 64 to 1518 bytes (12144 bits). The checksum is only 32 bits: a very high rate code for large packets.

The number of binary 5-tuples of weight 2 or 3 (nearly DC balanced) is

  C(5,2) + C(5,3) = 10 + 10 = 20 > 16 = 2^4.

The 4B5B TAXI code for FDDI uses 16 of these 5-tuples to convey 4 bits of data:

  {1E 09 14 15 0A 0B 0E 0F 12 13 16 17 1A 1B 1C 1D}

(A few other 5-tuples are used for control purposes.)

Hamming distance

The Hamming distance d_H between n-tuples is the number of components in which the n-tuples differ:

  d_H(x, y) = sum_{i=1}^{n} d_H(x_i, y_i), where d_H(x_i, y_i) = 1 if x_i ≠ y_i and 0 if x_i = y_i.

Hamming distance satisfies the axioms for a metric or distance measure:

  d(x,y) ≥ 0 with equality if and only if x = y (nonnegativity)
  d(x,y) = d(y,x) (symmetry)
  d(x,y) ≤ d(x,z) + d(z,y) (triangle inequality)

Hamming distance is a coarse or pessimistic measure of difference. Other useful distances that occur in error control coding:

  Lee distance (distance on a circle) is applicable to phase shift coding.
  Euclidean distance is used with sensewords in R^n.
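The componentwise definition above can be sketched directly in Python (an illustrative helper, not from the notes); it works on any equal-length sequences, e.g., bit strings or lists of symbols.

```python
def hamming_distance(x, y):
    """Number of components in which the n-tuples x and y differ."""
    if len(x) != len(y):
        raise ValueError("n-tuples must have equal length")
    return sum(1 for xi, yi in zip(x, y) if xi != yi)
```

The metric axioms can be spot-checked numerically, e.g., the triangle inequality for any three tuples of the same length.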

Minimum distance

The minimum (Hamming) distance d of a block code is the distance between any two closest codewords:

  d = min{ d_H(c_1, c_2) : c_1, c_2 are codewords and c_1 ≠ c_2 }

Obvious properties of the minimum distance of a code of blocklength n:

  d ≥ 1, since the Hamming distance between distinct codewords is a positive integer.
  d ≤ n if the code has two or more codewords.
  d = n+1 or d = ∞ for the useless code with only one codeword. (This is a convention, not a theorem.)
  If C_1 ⊆ C_2 then d(C_1) ≥ d(C_2): smaller codes have larger (or equal) minimum distance.

The minimum distance of a code determines both its error-detecting ability and its error-correcting ability.
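For small codes, the minimum distance can be computed by brute force over all pairs of distinct codewords, exactly as in the definition. This Python sketch is illustrative (not from the notes); it returns None for a single-codeword code, deferring to the convention stated above.

```python
from itertools import combinations

def minimum_distance(code):
    """Minimum Hamming distance over all pairs of distinct codewords.
    Returns None for codes with fewer than two codewords (convention applies)."""
    def dh(x, y):
        return sum(a != b for a, b in zip(x, y))
    if len(code) < 2:
        return None
    return min(dh(c1, c2) for c1, c2 in combinations(code, 2))
```

This pairwise search is O(M^2 n), which is fine for classroom examples but not for large codes.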

Error-detecting ability

Suppose that a block code is used for error detection only. Let c be the transmitted codeword and let r, the senseword, be the received n-tuple.

  If r is not a codeword, a detectable error has occurred.
  If r is a codeword but not the transmitted codeword, an error has occurred that cannot be detected.

If d(c,r) < d, then the senseword cannot be an incorrect codeword; otherwise c and r would be two codewords whose distance is less than the minimum distance. Conversely, let c_1, c_2 be two closest codewords. If c_1 is transmitted but c_2 is received, then an error of weight d has occurred that cannot be detected.

Theorem: The guaranteed error-detecting ability is e = d - 1.

The error-detecting ability is a worst-case measure of the code. Codes designed for error detection can detect the vast majority of errors even when d or more symbols are incorrect.
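The theorem e = d - 1 says that an error is guaranteed detectable exactly when its weight is below the minimum distance; equivalently, every pair of distinct codewords must be more than e apart. The sketch below (illustrative, not from the notes) checks that condition for the length-3 repetition code, where d = 3 and e = 2.

```python
from itertools import combinations

def dh(x, y):
    return sum(a != b for a, b in zip(x, y))

def detects_up_to(code, e):
    """True if no error of weight 1..e can turn one codeword into another,
    i.e. every pair of distinct codewords is more than e apart."""
    return all(dh(c1, c2) > e for c1, c2 in combinations(code, 2))
```

For {000, 111} the guarantee holds for e = 2 but fails for e = 3: the weight-3 error turning 000 into 111 is undetectable.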

Error-correcting ability

A block code is a set of M vectors in an n-dimensional space. The geometry of the space is defined by Hamming distance, which is quite different from Euclidean geometry. Nonetheless, geometric intuition can be useful.

[Figure: codewords with decoding spheres and decoding regions, separated by minimum distance d]

The optimal decoding procedure is usually nearest-neighbor decoding: the senseword r is decoded to the nearest codeword ĉ:

  ĉ = argmin{ d_H(c, r) : c is a codeword }

In R^2, decoding regions are Voronoi regions defined by the perpendicular bisectors of lines connecting codewords. Hamming space is much more complicated; combinatorial methods are needed.

Error-correcting ability (cont.)

Theorem: Using nearest-neighbor decoding, errors of weight t can be corrected if and only if 2t < d.

Proof: The spheres of radius t surrounding the codewords do not overlap; otherwise, there would be two codewords at distance at most 2t. Therefore when at most t errors occur, the decoder can tell which codeword was sent.

[Figure: decoding spheres of radius t inside the larger maximal decoding regions]

The maximal decoding region is usually larger than the decoding sphere. Most decoders correct only when the senseword r belongs to a decoding sphere of radius t. They are called bounded-distance decoders.
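The two decoding strategies just described can be sketched side by side in Python (illustrative, not from the notes): a complete nearest-neighbor decoder, and a bounded-distance decoder that refuses to decode outside every sphere of radius t, returning None to signal decoder failure.

```python
def dh(x, y):
    return sum(a != b for a, b in zip(x, y))

def nearest_neighbor_decode(code, r):
    """Complete decoder: always return the codeword closest to senseword r."""
    return min(code, key=lambda c: dh(c, r))

def bounded_distance_decode(code, r, t):
    """Decode only if r lies in a decoding sphere of radius t around the
    nearest codeword; None signals decoder failure."""
    best = nearest_neighbor_decode(code, r)
    return best if dh(best, r) <= t else None
```

For the length-5 repetition code (d = 5), t = 2 satisfies 2t < d, so any senseword within distance 2 of a codeword decodes correctly, while a bounded-distance decoder with smaller t fails more often.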

Decoding outcomes

A block code can have more than one decoder, depending on

  the type of errors expected or observed
  the error rate
  the computational power available at the decoder

Suppose that codeword c is transmitted, senseword r is received, and ĉ is the decoder's output. The table below classifies the outcomes.

  ĉ = c   decoder success   Successful correction (including no errors)
  ĉ = ?   decoder failure   Uncorrectable error detected, no decision (not too bad)
  ĉ ≠ c   decoder error     Miscorrection (very bad)

Important: the decoder cannot distinguish the outcome ĉ = c from ĉ ≠ c. However, it can assign probabilities to the possibilities; more bit errors corrected suggests a higher probability that the estimate is wrong.

Decoding outcomes (cont.)

The codeword transmitted and the noise encountered are random variables, so the decoder outcomes are probabilistic events.

  r = c            No error
  r ≠ c   P_e      Error occurred (error in codeword)
  ĉ = ?   P_ued    Error detected but not corrected (decoder failure)
  ĉ ≠ c   P_mc     Miscorrection (decoder error)
  ĉ ≠ c   P_ue     Undetectable error (error detection only)

Definition: A complete decoder is a decoder that decodes every senseword to some codeword; i.e., the decoder never fails to make a hard decision. For a complete decoder, P_ued = 0.

Definition: A bounded-distance decoder corrects all errors of weight ≤ t but no errors of weight > t. More than t errors results in decoder failure or decoder error.

For a fixed code, if we reduce t then decoder failure becomes more common while decoder error becomes less likely.

"ued" can be read as "uncorrectable error detected", whereas "ue" is "undetected error".
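The stated trade-off (reducing t makes decoder failure more common and decoder error less likely) can be verified exhaustively for a tiny code. This Python sketch (illustrative, not from the notes) classifies every possible binary senseword for a transmitted codeword under a bounded-distance decoder of radius t.

```python
from itertools import product

def dh(x, y):
    return sum(a != b for a, b in zip(x, y))

def outcome_counts(code, c, t):
    """Classify every possible binary senseword for transmitted codeword c
    under a bounded-distance decoder of radius t."""
    counts = {'success': 0, 'failure': 0, 'error': 0}
    for bits in product('01', repeat=len(c)):
        r = ''.join(bits)
        best = min(code, key=lambda cw: dh(cw, r))
        c_hat = best if dh(best, r) <= t else None  # None = decoder failure
        if c_hat is None:
            counts['failure'] += 1
        elif c_hat == c:
            counts['success'] += 1
        else:
            counts['error'] += 1
    return counts
```

For the length-3 repetition code with transmitted codeword 000, shrinking t from 1 to 0 converts three miscorrections into failures at the cost of three lost corrections, illustrating the trade-off.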