ECE 476/ECE 501C/CS 513 - Wireless Communication Systems, Winter 2005 - Lecture 9: Error Control Coding


ECE 476/ECE 501C/CS 513 - Wireless Communication Systems, Winter 2005
Lecture 9: Error Control Coding
Chapter 8, "Coding and Error Control," from Wireless Communications and Networks by William Stallings, Prentice Hall, 2002.

Diversity makes use of redundancy by using multiple signals to improve signal quality. Error control coding, as discussed in this lecture and the next, uses redundancy by sending extra data bits to detect and correct bit errors.

Two types of coding are used in wireless systems (see the beginning of Lecture 7):
- Source coding: compressing a data source using an encoding of the information.
- Channel coding: encoding information so the receiver can overcome bit errors.
Here we focus on channel coding, also called error control coding.

I. Error Detection

Three approaches can be used to cope with data transmission errors:
1. Using codes to detect errors.
2. Using codes to correct errors, called Forward Error Correction (FEC).
3. Mechanisms to automatically retransmit corrupted packets.

Bit errors and packet errors. Given the following definitions:
  Pb = probability of a single bit error
  P1 = probability that a frame of F bits arrives with no errors:
       P1 = (1 - Pb)^F
Lecture 9, Page 1 of 32

P2 = probability that a frame arrives with one or more errors:
       P2 = 1 - (1 - Pb)^F

Example: F = 500 bytes (4000 bits), Pb = 10^-6 (a very good performing wireless link)
  P1 = (1 - 10^-6)^4000 = 0.9960
  P2 = 1 - (1 - 10^-6)^4000 = 0.0040
  So 4 out of every 1000 packets have errors. At 100 kbps, this means 6 packets per minute are in error.
  Given 100 kilobyte data files (200 packets per file), the probability that a file is corrupted is 1 - P1^200 = 0.551.
If we do nothing, a significant amount of our data will be corrupted: it is more likely that a complete file is transmitted with corruption than without corruption.

Error Detection

Basic idea: add extra bits to a block of data bits.
  Data block: k bits.
  Error detection: n-k more bits, based on some algorithm for creating the extra bits.
  The result is a frame (data link layer packets are called "frames") of n bits.
The receiver separates the data bits from the error detection bits, then performs the same algorithm used at the source to see whether the received bits are what they should have been. Hopefully, if errors have occurred, the packet can be retransmitted or corrected. "Hopefully" because there are always some error patterns that can go undetected.
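These frame and file error probabilities can be reproduced with a short calculation (Python used here purely for illustration):

```python
# Frame and file error probabilities from a raw bit error rate,
# reproducing the numbers worked in the example above.

def frame_success_prob(p_bit: float, frame_bits: int) -> float:
    """P1: probability a frame of `frame_bits` bits arrives error-free."""
    return (1 - p_bit) ** frame_bits

p_b = 1e-6          # bit error probability
F = 4000            # 500-byte frame

P1 = frame_success_prob(p_b, F)
P2 = 1 - P1         # frame has one or more errors

# A 100-kilobyte file spans 200 such frames; it is corrupted if any frame is.
P_file_bad = 1 - P1 ** 200

print(f"P1 = {P1:.4f}")                          # 0.9960
print(f"P2 = {P2:.4f}")                          # 0.0040
print(f"P(file corrupted) = {P_file_bad:.3f}")   # 0.551
```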

Parity Check

Simplest scheme: add one bit to the frame. The value of the bit is chosen to make the number of 1s even (or odd, depending on the type of parity).
Example, 7-bit character: 1110001
  Even parity makes the 8-bit character: 11100010
  Odd parity makes the 8-bit character: 11100011
This can be used to detect errors. For example, a received 10100010 (when using even parity) is invalid, so an error is detected.
But what types of errors would NOT be detected? Even numbers of bits in error. Noise sometimes comes in impulses or during a deep fade, so one cannot always assume isolated single-bit errors will occur. Parity checks, therefore, have limited usefulness.
For n bits in error out of N total bits in a character, the probability follows a binomial distribution:
  Pr{n bits in error} = C(N,n) * Pb^n * (1 - Pb)^(N-n)
where C(N,n) is the binomial coefficient "N choose n".

Example: Given a bit error probability of 10^-2, what is the probability that even parity will fail for a 7-bit character plus a parity bit?
  Failure: 2, 4, 6, or 8 bits in error.
  Pr{2 bits in error} follows the binomial distribution: C(8,2)(10^-2)^2(1 - 10^-2)^6 = 2.6e-3.
  All other error counts are much less likely (6.7e-7 for 4 bits in error, 2.7e-11 for 6, 1e-16 for 8).

Cyclic Redundancy Check (CRC)

Uses more than a single parity bit: adds an (n-k)-bit frame check sequence.
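A sketch of the parity encoding and the parity-failure probability; the values match the examples above:

```python
from math import comb

# Parity encoding for the 7-bit character in the example, and the
# binomial probability that even parity fails (an even number of errors).

def add_parity(bits: str, even: bool = True) -> str:
    """Append a parity bit making the count of 1s even (or odd)."""
    ones = bits.count("1")
    parity = ones % 2 if even else 1 - ones % 2
    return bits + str(parity)

def p_n_errors(n: int, N: int, p_b: float) -> float:
    """Pr{exactly n of N bits in error} = C(N,n) p^n (1-p)^(N-n)."""
    return comb(N, n) * p_b**n * (1 - p_b) ** (N - n)

print(add_parity("1110001", even=True))   # '11100010'
print(add_parity("1110001", even=False))  # '11100011'

# Even parity fails when 2, 4, 6, or 8 of the 8 bits are flipped.
p_b = 1e-2
fail = sum(p_n_errors(n, 8, p_b) for n in (2, 4, 6, 8))
print(f"Pr{{2 errors}} = {p_n_errors(2, 8, p_b):.2e}")   # ~2.64e-03
print(f"Pr{{parity fails}} = {fail:.2e}")
```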

The CRC takes the source data and creates a sequence of bits that is only valid if it is divisible by a predetermined number, using modulo-2 arithmetic:
- Binary addition with no carries, the same as the exclusive-or (XOR) operation.
- Can also be implemented with polynomial operations.
Definitions:
  T = n-bit frame to be transmitted
  D = k-bit block of data (the message), the first k bits of T
  F = (n-k)-bit frame check sequence, the last (n-k) bits of T
  P = pattern of n-k+1 bits; this is the predetermined divisor
The goal is for T/P to have no remainder. So
  T = 2^(n-k) D + F
where 2^(n-k) D means D is shifted left by (n-k) bits so F can be added to it.
Example: D = 11111, F = 101
  T = 11111000 + 101 = 11111101

Suppose we divide 2^(n-k) D by P:
  2^(n-k) D / P = Q + R/P
where Q is the quotient and R is the remainder. We make our frame check sequence F = R, so T = 2^(n-k) D + R. Then
  T/P = (2^(n-k) D + R)/P = Q + R/P + R/P = Q + 0
In modulo-2 arithmetic, any number added to itself is zero, so R/P + R/P = 0. The result is a division with no remainder.
Example: See the handout from the Stallings textbook on page 208 to see how T is formed and how the division creates no remainder.
Using polynomial operations: in the example, P = 110101, which can also be represented as the polynomial
  P(X) = X^5 + X^4 + X^2 + 1
from the bit positions of P, orders 0 through 5. Polynomial operations indirectly perform the same modulo-2 arithmetic; see Example 8.2.
CRC codes fail when a sequence of bit errors turns the frame into a different bit sequence that is also divisible by P. It can be shown that the following errors are always detected with suitably chosen values of P and the related P(X):
- All single-bit errors.
- All double-bit errors.
- Any odd number of errors.
- Any burst error whose length is less than or equal to (n-k), i.e., no longer than the frame check sequence. (A burst error is a contiguous set of bits that are all in error.)
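As an illustration, the CRC computation with the divisor P = 110101 from the notes can be sketched as follows (the 10-bit message D below is an arbitrary example, not necessarily the one worked in the Stallings handout):

```python
# Modulo-2 (XOR) polynomial division used by CRC: compute the frame check
# sequence F as the remainder of 2^(n-k) * D divided by P, then verify that
# the transmitted frame T = 2^(n-k) * D + F leaves no remainder.

def mod2_remainder(dividend: int, divisor: int) -> int:
    """Remainder of modulo-2 (carry-free, XOR-based) division."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

D = 0b1010001101     # k = 10 data bits (arbitrary example message)
P = 0b110101         # divisor from the notes: n-k+1 = 6 bits -> 5 check bits
F = mod2_remainder(D << 5, P)   # F = remainder of 2^(n-k) D / P
T = (D << 5) | F                # transmitted frame

print(f"F = {F:05b}")           # '01110'
print(mod2_remainder(T, P))     # 0 -> divides evenly, no error detected
```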

Checksums

Four versions of P(X) are widely used, with different bit lengths. For example, CRC-16:
  P(X) = X^16 + X^15 + X^2 + 1
The Internet Protocol uses a checksum approach, applied only to the packet header. All of the 16-bit words in the header are added together, and a checksum is then inserted into the header. At the receiver, all of the 16-bit words are added again; this time (because the checksum was inserted) they should sum to zero.

II. Block Error Correction Codes

Error detection requires blocks to be retransmitted when an error is found. This is inadequate for wireless communication for two reasons:
1. Error rates on a wireless link can be quite high, so a large number of retransmissions might be required, even retransmissions of retransmissions.
2. Propagation delay, especially on satellite links, can be very large, making waiting for retransmissions very time consuming.

It is desirable to correct errors without requiring retransmission, using only the bits that were transmitted. On transmission, the k-bit block of data is mapped into an n-bit block called a codeword, using an encoder.
- A codeword may or may not be similar to those from the CRC approach above.
- It may come from taking the original data and appending extra bits (as with CRC).
- Or it may be created using a completely new set of bits.
The codewords are longer (maybe much longer) than the original data. The block is then transmitted.

At the receiver, comparing the received codeword with the set of valid codewords can result in one of five possible outcomes (Stallings only lists four):
1. There are no bit errors. The received codeword is the same as the transmitted codeword, and the corresponding source data for that codeword is output from the decoder.
2. An error is detected and can be corrected. For certain bit error patterns, it is clear that the received codeword is close to exactly one valid codeword. It is assumed that this close-by codeword was sent and that its source data should be used.
3. An error is detected but cannot be corrected. The received codeword is close to two or more valid codewords, so one cannot assume which codeword was the original. It is decided only that an error has been detected, and the frame should be retransmitted.
4. An error is detected and is erroneously corrected. An error pattern creates a received word that is close to a valid codeword, but not the one that was sent. The decoder therefore outputs the source data for a wrong codeword. This is a failed correction.
5. An error is not detected. An error pattern transforms the transmitted codeword into another valid codeword. The receiver assumes no error has occurred, and the output from the decoder is the source data for a wrong codeword. This is a failed detection. Hopefully other application processes will also check the validity of the data.

Block Code Principles

Hamming Distance. Given two example sequences
  v1 = 011011, v2 = 110001
the Hamming distance is defined as the number of bit positions in which they disagree: d(v1, v2) = 3.

Example: Given k = 2, n = 5:
  Data block   Codeword
  00           00000
  01           00111
  10           11001
  11           11110
Suppose the following is received: 00100.
- This is not a valid codeword, so an error is detected.
Can the error be corrected?
- We cannot be sure: 1, 2, 3, 4, or even 5 bits may have been corrupted by noise.
- However, only one bit differs between this and 00000: d(00100, 00000) = 1.
- Two bit changes would be required to reach 00111: d(00100, 00111) = 2.
- Three bits for 11110: d(00100, 11110) = 3.
- And four bits for 11001: d(00100, 11001) = 4.
Thus the most likely transmitted codeword is 00000, and the output from the decoder is the data block 00. But this could be a failed correction, and some other data block should have been decoded.

Decoding rule: use the closest codeword (in terms of Hamming distance). Why is it okay to do this? Because two errors are much less likely than one: assuming BER = 10^-3, a given two-bit error pattern is roughly a thousand times less likely than a one-bit pattern, and only certain patterns of bit errors will make the channel coding correct improperly.
Now consider all cases. There are five bits in the codeword, so there are 2^5 = 32 possible received words; four are valid, and the other 28 can only arise from bit errors. See page 216 of the Stallings handout. In many cases, a possible received word is a Hamming distance of 1 from a single valid codeword. But in eight cases, a received word is a distance of 2 away from two valid codewords:
- The receiver does not know how to choose; the correction decision is undecided.
- An error is detected but is not correctable.
So we can conclude that for this code an error of 1 bit is always correctable, but errors of two bits are not.

Block code design. With an (n,k) code, there are 2^k valid codewords out of a possible 2^n words.
- The ratio of redundant bits to data bits, (n-k)/k, is called the redundancy of the code.
- The ratio k/n is called the code rate.
- For example, a 1/2 rate code requires double the bandwidth of an uncoded system for the same net data rate: half of the transmitted bits are for error control purposes.

- Example: A 2/5 rate code over a 30 kbps channel.
  Net data rate: (2/5)*30 = 12 kbps.
  Bit rate consumed by error control coding: (3/5)*30 = 18 kbps.
For a code consisting of codewords w_i, the minimum Hamming distance is defined as
  d_min = min over i != j of d(w_i, w_j)
For the example above, d_min = 3. The maximum number of guaranteed correctable errors is
  t_corr = floor( (d_min - 1) / 2 )
where floor(x) means to round down to the next lowest integer. From the example, t_corr = floor((3-1)/2) = 1 bit error can be corrected; a two-bit error will cause either an undecided correction or a failed correction. The number of errors that can be detected is
  t_det = d_min - 1
From the example, t_det = 3 - 1 = 2: all two-bit errors will be detected, but as few as three bit errors might cause a failed detection, since a change in 3 bits can create another valid codeword.
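A minimum-distance decoder for the (5,2) code tabulated above can be sketched as:

```python
from itertools import combinations

# Minimum-distance decoding for the (5,2) block code from the example,
# plus d_min and the guaranteed correction/detection bounds.

codebook = {"00": "00000", "01": "00111", "10": "11001", "11": "11110"}

def hamming(a: str, b: str) -> int:
    """Number of bit positions in which a and b disagree."""
    return sum(x != y for x, y in zip(a, b))

def decode(received: str) -> str:
    """Return the data block whose codeword is closest to `received`."""
    return min(codebook, key=lambda data: hamming(codebook[data], received))

d_min = min(hamming(a, b) for a, b in combinations(codebook.values(), 2))
t_corr = (d_min - 1) // 2
t_det = d_min - 1

print(decode("00100"))        # '00' -- closest codeword is 00000
print(d_min, t_corr, t_det)   # 3 1 2
```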

Given these definitions for Hamming distance, why must the codewords be longer than the original data? Because if the codewords were the same length as the data, d_min = 1, and the maximum number of guaranteed detectable errors would be zero.
The following design considerations are involved in devising codewords:
- For given values of n and k, we would like the largest possible value of d_min.
- The code should be relatively easy to encode and decode, with minimal memory and processing time.
- We would like the number of extra bits (n-k) to be small, to preserve bandwidth.
- We would like the number of extra bits (n-k) to be large, to reduce the error rate.
The last two objectives are in conflict.

Coding Gain

Coding can allow us to use lower power (smaller Eb/N0) to achieve the same error rate we would have had without using correction bits, since errors can be corrected. In the figure, the right-hand curve is for an uncoded modulation system.
- Above an Eb/N0 of 5.4 dB, a smaller bit error rate can be achieved using a 1/2 rate code at the same Eb/N0.
- The coding gain of a code is defined as the reduction, in dB, of the Eb/N0 required to obtain the same error rate.
- For example, for a BER of 10^-6, 11 dB is needed with the 1/2 rate code, compared to 13.77 dB without coding: a coding gain of 2.77 dB.
- What is the coding gain at a BER of 10^-3? 11.3 dB - 9.4 dB = 1.9 dB.

III. Block Codes

So far, we have studied the following:
- The impact of bit errors on packet errors.
- Error detection using parity, CRC, and checksums.
- Retransmission versus FEC approaches.
- FEC codewords derived from data blocks (not necessarily a [data+code] approach).
- The Hamming distance definition.
- Hamming distance requirements for proper correction and detection of errors.
- Coding rates.
- Coding gain: the amount of power savings that results from coding; a lower S/N ratio (Eb/N0) is needed for the same bit error rate.
Now we focus on a few specific types of block codes, convolutional codes, and turbo codes.

Block code: takes a block of data and converts it into codewords. Drawback: the complete block must be present, at the transmitter to be encoded and at the receiver to be decoded. This can add extra time delays to the wireless communication process.

Cyclic Codes

Most error-correcting block codes are of this type. We saw cyclic codes for error detection: given any size block of data, they create a fixed-length code, to be put in a packet header, for example. Cyclic error-correcting codes, given a fixed-size block of data, create a fixed-length code; the length of the code depends on the length of the data block, and a different code is used for a different data block size. They are called cyclic because rotating the bits of a codeword (in a cyclic fashion) yields another valid codeword.

Once again a generator polynomial is used as a divisor for a received codeword; if there is no remainder, no error is detected. Let P(X) be the generator polynomial and T(X) the polynomial representation of the transmitted codeword:
  T(X) = X^(n-k) D(X) + C(X)
Here X^(n-k) D(X) shifts the data sequence D(X) into the most significant bits, and C(X) is added into the least significant (n-k) bits. C(X) is the remainder of the following (modulo-2) division:
  X^(n-k) D(X) / P(X) = Q(X) + C(X)/P(X)
If bit errors occur, a different block Z(X) is received, which can be represented as
  Z(X) = T(X) + E(X)
where E(X) is an error pattern added to the original codeword. When the division is performed on Z(X), the following result is obtained:
  Z(X)/P(X) = B(X) + S(X)/P(X)
B(X) is the quotient, and the remainder S(X) is called the syndrome.

Example: T = 0011010, so T(X) = X^4 + X^3 + X; received Z = 0111010, so Z(X) = X^5 + X^4 + X^3 + X.
By expanding the equations above, we can produce the following result:
  Z(X)/P(X) = [T(X) + E(X)]/P(X) = Q(X) + E(X)/P(X) = B(X) + S(X)/P(X)
so
  E(X)/P(X) = [B(X) + Q(X)] + S(X)/P(X)
- Thus performing the operation E(X)/P(X) produces the same remainder as Z(X)/P(X): regardless of the transmitted T(X), the syndrome depends only on the error that occurred.
- If we can recover E(X) from S(X), then we can correct Z(X) back to T(X), and then use the codeword T(X) to retrieve the original data block.
Simple approach:
- Create a lookup table mapping each possible S(X) to an E(X).
- Then recreate T(X), and find D(X) from T(X).
Example:
- Consider a (7,4) code with generator polynomial P(X) = X^3 + X^2 + 1. (7,4) means a block length of 7 with 4 bits of data; this code can correct all single-bit errors.
- Assume we are sending the data block 0011; T(X) can be found from the table on the next page, giving T = 0011010.
- Now consider a received Z = 0111010. To find S(X), divide Z(X) = X^5 + X^4 + X^3 + X by P(X) = X^3 + X^2 + 1. The modulo-2 long division gives quotient X^2 + 1 and remainder
    S(X) = X + 1, i.e., S = 011.
- Looking up syndrome 011 in the table: the error pattern is E = 0100000, i.e., E(X) = X^5.
- So Z(X) has an error and can be corrected to Z + E = 0011010. Looking up codeword 0011010 in the table, the data is 0011 (as expected).
- To check the derivation, divide E(X)/P(X) = X^5 / (X^3 + X^2 + 1): the quotient is X^2 + X + 1 and the remainder is again X + 1 = 011, the same syndrome.
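The syndrome-table decoding just described can be sketched in Python; this reproduces the worked (7,4) example with P(X) = X^3 + X^2 + 1:

```python
# Syndrome decoding for the (7,4) cyclic code with generator
# P(X) = X^3 + X^2 + 1: the received word 0111010 is corrected
# back to the codeword 0011010.

P = 0b1101  # X^3 + X^2 + 1

def mod2_rem(value: int, divisor: int) -> int:
    """Remainder of modulo-2 (XOR, carry-free) division."""
    dlen = divisor.bit_length()
    while value.bit_length() >= dlen:
        value ^= divisor << (value.bit_length() - dlen)
    return value

def encode(data: int) -> int:
    """T = X^(n-k) D + C, with C the remainder of X^(n-k) D / P."""
    return (data << 3) | mod2_rem(data << 3, P)

# Syndrome -> single-bit error pattern lookup (syndrome of each X^i).
syndromes = {mod2_rem(1 << i, P): 1 << i for i in range(7)}

Z = 0b0111010
S = mod2_rem(Z, P)
corrected = Z ^ syndromes[S]

print(f"S = {S:03b}")              # '011'
print(f"T = {corrected:07b}")      # '0011010'
print(f"D = {corrected >> 3:04b}") # data '0011'
```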

BCH Codes

BCH is an abbreviation of the names of the code's creators: Bose, Chaudhuri, and Hocquenghem. BCH codes are among the most powerful codes widely used in wireless applications: they achieve significant coding gains, can be implemented even at high speeds, and offer flexibility in the choice of block length and the corresponding coding rate (see Table 8.4). BCH decoding does not require a table lookup; it uses algorithms that need less memory.

Reed-Solomon (RS) Codes

A widely used subclass of BCH codes. Parameter selection guidelines are given in the handout. RS codes are well suited for burst error correction, when a fade or noise impulse creates errors in a consecutive string of bits, and efficient coding techniques are available. The (63,47) RS code is used for US Cellular Digital Packet Data (CDPD), the data service created for AMPS and 2nd generation digital cellular.

IV. Block Interleaving

Small-scale fading causes deep fades, during which bits are not dropped randomly but rather in large bursts of errors. (Technically only about half of the bits are lost, since the other half happen to be correct by coincidence.) But channel coding is only useful for detecting and correcting isolated errors, and only a few errors at that; once a large number of errors occur in a packet, the packet cannot be recovered.
Interleaving can be used in combination with channel coding. The interleaver spreads the bits of each packet out in time, so that a long sequence of bits from one packet is not corrupted all at once.

This is accomplished by interleaving (interweaving) multiple packets, like packets P1 through P5 below:
  P1 P2 P3 P4 P5  P1 P2 P3 P4 P5  P1 P2 P3 P4 P5  P1 ...
A burst of errors hits consecutive transmitted bits, but those bits belong to different packets, so no one packet is corrupted too much.
Figure 8.8, Block Interleaver:
- m rows and n columns; degree = m packets at a time.
- mn bits are interleaved at a time.
- Bits are sequentially read into the rows and sequentially read out of the columns.
- Original source bits are separated by m bits in the transmitted stream.
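A minimal sketch of this row/column interleaver (packet contents are stand-in integers, not values from the notes):

```python
# An m x n block interleaver: write bits in by rows (one packet per row),
# read out by columns. A burst of up to m consecutive channel errors then
# lands at most one error in each packet, which a t >= 1 code can correct.

def interleave(bits: list, m: int, n: int) -> list:
    assert len(bits) == m * n
    rows = [bits[i * n:(i + 1) * n] for i in range(m)]
    return [rows[r][c] for c in range(n) for r in range(m)]

def deinterleave(bits: list, m: int, n: int) -> list:
    cols = [bits[c * m:(c + 1) * m] for c in range(n)]
    return [cols[c][r] for r in range(m) for c in range(n)]

# Three packets of four bits each (values 0-3 are P1, 4-7 are P2, 8-11 are P3).
m, n = 3, 4
data = list(range(12))
tx = interleave(data, m, n)

print(tx)   # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
# A 3-bit burst hitting tx[3:6] = [1, 5, 9] damages one bit of each packet.
print(deinterleave(tx, m, n) == data)   # True
```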

Interleaver delay: nm bits must arrive at the receiver before the process can be inverted (de-interleaved), so the delay is nm*Tb, where Tb is the bit duration. The perceived quality of real-time data (voice, video, etc.) can be affected by this delay.
Suppose we have an (n, k) code that can correct t or fewer errors, and we use an interleaver of degree m. The result is an (mn, mk) code that can correct bursts of up to mt errors, since each block individually corrects up to t errors.

V. Convolutional Codes

Successful coding needs a somewhat large block of source data: a few added coding bits are less significant overhead when data blocks are large, which preserves bandwidth efficiency. But large blocks of data create delay, to read in the block, receive the block, and process the block on both ends. This is a limitation of the block coding we have seen so far.
Convolutional coding approaches the coding process a little differently. A convolutional code is specified as an (n, k, K) code. n and k are as before, but with convolutional codes the values are usually much smaller. The K parameter is where the difference lies.

K is called the. K defines the number of blocks of k bits over which a code is computed. So a convolutional code is computed for the current block and uses K-1 blocks in the past. This allows codes to still be computed over a larger block of data. But there is no need to wait to transmit that whole block of data before coding can be added. Encoding The output of a convolutional encoder can be represented using a finitestate machine as seen above. Lecture 9, Page 22 of 32

The diagram above shows 4 possible states, one for each possible pair of values of the last two input bits. The next input bit causes a state transition and produces an output of two more bits.
Example:
- If the previous two input bits were 1 and 0 (0 being the more recent), the system is in state 01.
- If a 1 then arrives, the system moves to state 10 and the coder outputs 00.
How the coding is performed is determined by the current state and the input bit; the input history is reflected in the state the machine has reached.

Decoding

An expanded state diagram can be used to show the time sequence of the encoder:

Commonly called a. - Shows possible state transitions going left to right corresponding to time and data input. Any valid output is defined by a path through the trellis. - For example, a-b-c-b-c-a-a is valid. - But a-b-a is not, so this would indicate an error has occurred. Error correction attempts to find the most likely error that occurred. - Viterbi produced several important algorithms for this. - The Viterbi code finds the path through the trellis where the valid sequence differs from the received sequence by the least amount. - And the choice of how the codes are formed facilitates this. The further details given in the Stallings handout about the error correction using convolutional codes will not be covered in homework or exams. Convolutional codes provide good performance where a high proportion of bits are in error. They are finding increasing use in wireless applications. VI. Turbo codes Referenced from the attached article: E. Guizzo, Closing in on the Perfect Code, IEEE Spectrum, March 2004. A color version is also on the course web site. Two French Engineers had a surprising idea in 1993. Which at first was met with great skepticism. They formulated a new error control coding approach to double data throughput at a given transmitting power. Using a coding scheme called Turbo Codes. They were able to get extremely close to the theoretical channel capacity. Within 0.5 db, where previously researchers had only been able to come within 3 to 5 db. But what does this mean? The theoretical limit was formulated by Claude Shannon in the 1940 s. η C S = = log 2 + B N B 1 max Lecture 9, Page 24 of 32

Example: Assume a 30 kHz channel and an S/N of 0 dB.
  0 dB -> 10^0 = 1
  C = 30e3 * log2(1 + 1) = 30 kbps
If the actual S/N a code requires is 4 dB more than the ideal, what data rate is achieved? The code achieves at 0 dB only what could theoretically be obtained at -4 dB:
  -4 dB -> 10^-0.4 = 0.3981
  C = 30e3 * log2(1 + 0.3981) = 14.50 kbps
Now, if the required S/N is only 0.5 dB above the limit, what is C? The code achieves at 0 dB what could theoretically be obtained at -0.5 dB:
  -0.5 dB -> 10^-0.05 = 0.891

  C = 30e3 * log2(1 + 0.891) = 27.58 kbps (almost twice the 14.50 kbps rate)
People had spent decades trying to find practical algorithms but were getting nowhere close to Shannon's capacity limit:
- Shannon's theory came from theoretically constructed codes whose complexity makes them astounding to build in practice.
- For example, to approach capacity you would need large codewords, say of 1000 bits. For that, there would be 2^1000 possible codewords (about 10^301; for comparison, there are roughly 10^80 atoms in the universe).
- So simplified, algorithmic approaches were devised, but they were not very efficient.
- The complexity problem emerges when you weigh the gain of a code against the amount of computation required to decode your data.
- Some had even concluded in the late 1970s that the search was hopeless.
Turbo codes opened a whole new area of research into what are called capacity-approaching codes.
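The three capacity calculations worked above can be reproduced with a short script:

```python
from math import log2

# Shannon capacity C = B * log2(1 + S/N), reproducing the 30 kHz examples.

def capacity(bw_hz: float, snr_db: float) -> float:
    """Capacity in bit/s for bandwidth bw_hz and an S/N given in dB."""
    return bw_hz * log2(1 + 10 ** (snr_db / 10))

B = 30e3
for snr_db in (0, -4, -0.5):
    print(f"S/N {snr_db:+.1f} dB -> C = {capacity(B, snr_db) / 1e3:.2f} kbps")
# S/N +0.0 dB -> C = 30.00 kbps
# S/N -4.0 dB -> C = 14.50 kbps
# S/N -0.5 dB -> C = 27.58 kbps
```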

How Turbo Codes Work

(A better-resolution figure is on the course web site.)
Turbo codes use two encoders, working in parallel. One uses the raw data bits; the other uses a scrambled version of the input, created by an interleaver. The original data bits are transmitted, plus the outputs of the two encoders:
- This creates a 1/3 rate code.
- The Stallings handout also describes a 1/2 rate code created by puncturing: puncturing keeps only half of the check bits, alternating the outputs of the two encoders.

Turbo codes use two decoders. Using two decoders solves much of the complexity problem that other coding approaches could not overcome.
- Each decoder gets its own version of the data and the code bits, then formulates its opinion about how to decode.
- Then the two decoders iterate back and forth to compare results.
- After 4 to 10 iterations, they come to an agreement.
- There is a very low probability that they both agree on a wrong result.
- By using two decoders, we once again take advantage of what powerful principle? Diversity!
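The iterate-until-agreement loop described above can be sketched as below. This is a structural sketch only: `siso1`/`siso2` are stand-ins for the two soft-in/soft-out decoders (real turbo decoders use algorithms such as BCJR, which are far more involved), and the sign convention (positive means "0") is an assumption:

```python
def turbo_decode(channel_llrs, siso1, siso2, max_iters=10):
    """Iterate the two decoders until their hard decisions agree
    (per the notes, typically 4 to 10 iterations)."""
    n = len(channel_llrs)
    ext1, ext2 = [0.0] * n, [0.0] * n  # extrinsic info, initially neutral
    for iteration in range(1, max_iters + 1):
        ext1 = siso1(channel_llrs, ext2)  # decoder 1 refines using decoder 2's info
        ext2 = siso2(channel_llrs, ext1)  # decoder 2 refines using decoder 1's info
        # hard decisions: channel confidence + the other decoder's confidence
        dec1 = [c + e < 0 for c, e in zip(channel_llrs, ext2)]
        dec2 = [c + e < 0 for c, e in zip(channel_llrs, ext1)]
        if dec1 == dec2:  # the decoders agree -> stop iterating
            break
    return [int(b) for b in dec1], iteration

# toy SISO stand-ins that simply reinforce the channel's opinion
boost = lambda ch, ext: [0.5 * c + e for c, e in zip(ch, ext)]
bits, iters = turbo_decode([-1.0, 2.0, 0.3], boost, boost)
```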

- Also, what important concept from control theory is used? Feedback
- While used for many other applications, feedback had not previously been used in error control decoding.
Fuzzy logic
- Fuzzy logic is where decisions are made not just with 0s and 1s, but with different levels of confidence.
- The article does not use that terminology, but fuzzy logic is being used here.
- The demodulator does not output just 0s and 1s, but different levels of confidence in each 1 and 0.
- The decoders then upgrade or degrade their confidence for each bit.
- They combine confidences to formulate a result based on (1) the confidence of the demodulator, (2) the confidence of the first decoder, and (3) the confidence of the second decoder.
- This approach is similar to the soft decision decoding used with convolutional coders.
Usefulness of Turbo Codes
The benefits are clear: within 0.5 dB of the Shannon limit, obtained by using diversity, feedback, and fuzzy logic.
They are not used yet, but are planned for implementation in many 3G systems.
But the decoding delay is substantial: 4 to 10 iterations. Many consider this unacceptable for real-time voice, or for applications that require instant data processing, like hard disk storage or optical transmission.
LDPC Codes
Turbo codes also have competitors. Low Density Parity Check (LDPC) codes, devised in the 1960s, also approach Shannon's limit.
- But until recently they were considered too complex.
- They outperform turbo codes in some respects.
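The three-way confidence combination described above is usually done with log-likelihood ratios, which combine by simple addition. A minimal sketch, assuming the common convention that a positive value favors a 0 bit (the function name is hypothetical):

```python
def combine_confidences(demod_llr, extrinsic_dec1, extrinsic_dec2):
    """Soft decision: add the demodulator's confidence to the
    confidence contributed by each of the two decoders.
    Positive total -> decide 0, negative -> decide 1."""
    total = demod_llr + extrinsic_dec1 + extrinsic_dec2
    bit = 0 if total >= 0 else 1
    return bit, abs(total)  # the decision and its overall confidence

# the demodulator mildly favors a 1, but both decoders favor a 0
bit, conf = combine_confidences(-0.4, 1.2, 0.9)
# bit == 0: the decoders' combined confidence outweighs the demodulator
```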

New research perspective
Researchers now simply assume that codes can approach the Shannon limit. This ended a 40-year search for codes that approach the Shannon limit. The speed and complexity of codes is now the area of research.
Turbo codes illustrate the following: it is not always necessary to know about theoretical limits to reach them. The simpleton didn't know the task was impossible, so he did it.
VII. Coding in Standards
The following coding approaches are used in today's predominant standards.
U.S. Digital Cellular
Voice-coded bits are divided into high priority (class-1) and low priority (class-2) bits.
- Class-1 bits use a 1/2 rate convolutional code with constraint length K=6.
- In addition, the most significant of the class-1 bits are block coded using a 7-bit CRC error detection code.
- Class-2 bits get no error protection.
GSM Voice Channels
- GSM also groups bits after the speech coder into groups of different priorities.
- The most important 50 bits in a frame have 3 CRC error detection bits added.
- Then 189 bits in a frame are encoded using a convolutional code with K=5.
- The least important 78 bits have no protection.
- Coding increases the data rate from 260 bits per frame to 456 bits per 20 ms frame (from 13.0 kbps to 22.8 kbps).
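The GSM voice-channel bit budget above can be checked with the numbers quoted in the notes: 189 bits through a rate-1/2 coder, plus 78 uncoded bits, per 20 ms frame.

```python
# GSM full-rate voice channel bit budget, using the figures above
coded_input = 189               # bits fed to the K=5, rate-1/2 convolutional coder
conv_output = coded_input * 2   # rate 1/2 doubles the bit count -> 378
unprotected = 78                # least important bits, sent uncoded
frame_bits = conv_output + unprotected  # 456 bits per 20 ms frame

frame_s = 0.020
print(260 / frame_s)         # 13000.0 bps before channel coding
print(frame_bits / frame_s)  # 22800.0 bps after channel coding
```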

Data Channels
- Half-rate punctured convolutional coder with K=5.
- Puncturing leaves out 32 bits from every 488, resulting in 456.
Control Channels
- The highest priority information to be protected.
- A cyclic block coder creates a 40-bit check sequence for 184 bits of data.
- A 1/2 rate convolutional coder with K=5 is then also applied.
GPRS and EDGE (higher data rate extensions to GSM)
No error protection. Applications are required to provide their own error correction schemes as part of the data payload.
IS-95
Forward channels use 1/2 rate convolutional encoding with K=9.
- Convolutional coding occurs first, followed by interleaving, spreading by a Walsh code, and spreading by a long PN sequence.
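The data-channel puncturing figures above imply a slightly higher effective code rate than 1/2; a quick check (the 244-bit input count is inferred from the rate-1/2 coder producing 488 output bits):

```python
# GSM data-channel puncturing, from the figures above
conv_output = 488   # bits out of the rate-1/2 convolutional coder
punctured = 32      # bits left out by puncturing
sent = conv_output - punctured   # 456 bits actually transmitted

input_bits = conv_output // 2    # 244 bits in, since the coder is rate 1/2
rate = input_bits / sent
print(rate)  # ≈ 0.535, slightly above 1/2 thanks to puncturing
```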

Reverse channels use 1/3 rate convolutional encoding with K=9.
Why would one use this different coding rate for the reverse channel? 1/3 rate coding means more error control bits, which can overcome more errors. The signal from a mobile is weaker and more affected by obstructions.
IEEE 802.11 Wireless LAN
Header fields are protected with the CCITT CRC-16 for error detection, and retransmission is assumed.
At 11 Mbps, 1/2 rate convolutional coding can optionally be applied to the payload of the packet.
- This is called PBCC (Packet Binary Convolutional Code).
For 802.11g, at 22 Mbps and 33 Mbps, a 256-state, 2/3 rate convolutional code is used.
Next and final lecture: 802.11, OFDM, and Ultra Wideband