Chapter 2 Soft and Hard Decision Decoding Performance


2.1 Introduction

This chapter is concerned with the performance of binary codes under maximum likelihood soft decision decoding and maximum likelihood hard decision decoding. Maximum likelihood decoding gives the best performance possible for a code and is therefore used to assess the quality of the code. In practice, maximum likelihood decoding of codes is computationally difficult, and theoretical bounds on the performance of codes are used instead. These take the form of lower and upper bounds, and the expected performance of the code lies within the region bounded by the two. For hard decision decoding, lower and upper bounds on maximum likelihood decoding are computed using information on the coset leader weight distribution. For maximum likelihood soft decision decoding, the bounds are computed using the weight distribution of the codes. The union bound is a simple and well-known bound for the performance of codes under maximum likelihood soft decision decoding, and it can be expressed as both an upper and a lower bound. Using these bounds, we see that as the SNR per bit becomes large, the performance of a code can be completely determined by the lower bound. However, this is not the case with the bounds on maximum likelihood hard decision decoding of codes. In general, soft decision decoding has better performance than hard decision decoding, and being able to estimate the performance of codes under soft decision decoding is attractive. Computation of the union bound requires knowledge of the weight distribution of the code. In Sect. 2.3.1, we use a binomial approximation for the weight distribution of codes for which the actual computation of the weight distribution is prohibitive. As a result, it is possible to calculate, within an acceptable degree of error, the region in which the performance of codes can be completely predicted.

© The Author(s) 2017. M. Tomlinson et al., Error-Correction Coding and Decoding, Signals and Communication Technology, DOI 10.1007/978-3-319-51103-0_2

2.2 Hard Decision Performance

2.2.1 Complete and Bounded Distance Decoding

Hard decision decoding is concerned with decoding of the received sequence in Hamming space. Typically, the real-valued received sequence is quantised to a binary sequence using a threshold. A bounded distance decoder is guaranteed to correct all patterns of t errors or fewer, where t is called the packing radius and is given by

$$ t = \left\lfloor \frac{d-1}{2} \right\rfloor $$

and d is the minimum Hamming distance of the code. Within a sphere of radius t centred on a codeword in Hamming space there is no other codeword, and a received sequence in this sphere is closest to that codeword. Beyond the packing radius, some error patterns may still be corrected. A complete decoder exhaustively matches all codewords to the received sequence and selects the codeword with minimum Hamming distance; a complete decoder is also called a minimum distance decoder or maximum likelihood decoder. Thus, a complete decoder corrects some patterns of errors beyond the packing radius. The complexity of implementing a complete decoder is known to be NP-complete [3]. Complete decoding can be accomplished using a standard array. In order to discuss standard array decoding, we first need to define cosets and coset leaders.

Definition 2.1 A coset of a code C is a set containing all the codewords of C corrupted by a single sequence $a \in \mathbb{F}_q^n \setminus C \cup \{0\}$.

A coset of a binary code contains 2^k sequences, and there are 2^{n-k} possible cosets. Any sequence of minimum Hamming weight in a coset can be chosen as a coset leader. In order to use a standard array, the coset leaders of all the cosets of a code must be known. We illustrate complete decoding with an example, using a (7, 3) dual Hamming code with the generator matrix

$$ G = \begin{bmatrix} 1\,0\,0\,0\,1\,1\,1 \\ 0\,1\,0\,1\,0\,1\,1 \\ 0\,0\,1\,1\,1\,0\,1 \end{bmatrix} $$

This code has codewords

C = { 0000000, 1000111, 0101011, 0011101, 1101100, 0110110, 1011010, 1110001 }
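The eight codewords above can be reproduced by enumerating all 2^k binary combinations of the rows of G; a minimal Python sketch (bit strings are used purely for readability):

```python
# Enumerate the codewords of the example (7, 3) code: every codeword is a
# modulo-2 (XOR) combination of the rows of the generator matrix G.
import itertools

G = ["1000111", "0101011", "0011101"]  # rows of G as bit strings

def xor(a, b):
    return "".join("1" if x != y else "0" for x, y in zip(a, b))

codewords = []
for msg in itertools.product([0, 1], repeat=len(G)):  # all 2^k messages
    c = "0" * 7
    for bit, row in zip(msg, G):
        if bit:
            c = xor(c, row)
    codewords.append(c)

print(sorted(codewords))
```

The minimum non-zero codeword weight in this list is 4, confirming that this is a (7, 3, 4) code with packing radius t = ⌊(4 − 1)/2⌋ = 1.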

Coset Leaders

0000000 1000111 0101011 0011101 1101100 0110110 1011010 1110001
0000001 1000110 0101010 0011100 1101101 0110111 1011011 1110000
0000010 1000101 0101001 0011111 1101110 0110100 1011000 1110011
0000100 1000011 0101111 0011001 1101000 0110010 1011110 1110101
0001000 1001111 0100011 0010101 1100100 0111110 1010010 1111001
0010000 1010111 0111011 0001101 1111100 0100110 1001010 1100001
0100000 1100111 0001011 0111101 1001100 0010110 1111010 1010001
1000000 0000111 1101011 1011101 0101100 1110110 0011010 0110001
0000011 1000100 0101000 0011110 1101111 0110101 1011001 1110010
0000110 1000001 0101101 0011011 1101010 0110000 1011100 1110111
0001100 1001011 0100111 0010001 1100000 0111010 1010110 1111101
0011000 1011111 0110011 0000101 1110100 0101110 1000010 1101001
0001010 1001101 0100001 0010111 1100110 0111100 1010000 1111011
0010100 1010011 0111111 0001001 1111000 0100010 1001110 1100101
0010010 1010101 0111001 0001111 1111110 0100100 1001000 1100011
0001110 1001001 0100101 0010011 1100010 0111000 1010100 1111111

Fig. 2.1 Standard array for the (7, 3, 4) binary code

Complete decoding can be accomplished using standard array decoding; the example code is decoded as follows. The top row of the array in Fig. 2.1 contains the codewords of the (7, 3, 4) code.¹ Subsequent rows contain all the other cosets of the code, with the array arranged so that the coset leaders are in the first column. The decoder finds the received sequence on a row in the array and then subtracts the coset leader corresponding to that row from it to obtain a decoded sequence. The standard array is partitioned based on the weight of the coset leaders. Received sequences on rows with coset leaders of weight less than or equal to t = ⌊(4 − 1)/2⌋ = 1 are all corrected. Some received sequences on rows with coset leaders of weight greater than t are also corrected.
Examining the standard array, it can be seen that the code can correct all single error sequences, some two-error sequences and one three-error sequence. The coset leader weight distribution C_i is

C_0 = 1,  C_1 = 7,  C_2 = 7,  C_3 = 1.

The covering radius of the code is the weight of the largest coset leader (in this example it is 3).

¹ It is worth noting that a code itself can be considered as a coset, with the sequence a equal to the all-zero sequence.
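These coset leader weights are easy to verify exhaustively; a short sketch that partitions F_2^7 into the 16 cosets of the example code and records the weight of each coset leader:

```python
# Partition F_2^7 into the 2^(n-k) = 16 cosets of the (7, 3, 4) code and
# record each coset leader's weight (the minimum weight in the coset).
from collections import Counter

codewords = {0b0000000, 0b1000111, 0b0101011, 0b0011101,
             0b1101100, 0b0110110, 0b1011010, 0b1110001}

leader_weights = []
seen = set()
for v in range(2 ** 7):                     # every sequence in F_2^7
    if v in seen:
        continue
    coset = {v ^ c for c in codewords}      # the coset v + C
    seen |= coset
    leader_weights.append(min(bin(x).count("1") for x in coset))

C = Counter(leader_weights)
print(dict(C))               # coset leader weight distribution C_i
print(max(leader_weights))   # covering radius
```

Running this reproduces C_0 = 1, C_1 = 7, C_2 = 7, C_3 = 1 and a covering radius of 3.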

2.2.2 The Performance of Codes on the Binary Symmetric Channel

Consider a real-valued sequence received from a transmission through an AWGN channel. If the demodulator makes hard decisions at the receiver, the channel may be modelled as a binary symmetric channel (BSC). Assuming the probability of bit error for the BSC is p, the probability of decoding error with a bounded distance decoder is given by

$$ P_{BDD}(e) = 1 - \sum_{i=0}^{t} C_i\, p^i (1-p)^{n-i} \qquad (2.1) $$

where C_i is the number of coset leaders of weight i. C_i is known for 0 ≤ i ≤ t and is given by

$$ C_i = \binom{n}{i}, \qquad 0 \le i \le t, $$

but C_i for i > t must be computed for each individual code. The probability of error after full decoding is

$$ P_{Full}(e) = 1 - \sum_{i=0}^{n} C_i\, p^i (1-p)^{n-i}. \qquad (2.2) $$

Figure 2.2 shows the performance of the bounded distance decoder and the full decoder for different codes; the curves are computed using (2.1) and (2.2). As expected, there is significant coding gain between unencoded and coded transmission (bounded distance and full decoding) in all cases. There is a small coding gain between bounded distance and full decoders, which depends on the coset leader weight distribution C_i, i > t, of the individual code. The balance between complexity and performance for full and bounded distance decoders² ensures that the latter are preferred in practice. Observe in Fig. 2.2 that the complete decoder consistently outperforms the bounded distance decoder as the probability of error decreases and E_b/N_0 increases. We will see in Sect. 2.3 that a similar setup using soft decision decoding in Euclidean space produces different results.

2.2.2.1 Bounds on Decoding on the BSC

Let s be the largest value for which C_s is non-zero; then s is the covering radius of the code.
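Equations (2.1) and (2.2) are straightforward to evaluate numerically. A sketch for the (7, 3, 4) example code, whose coset leader weight distribution C_i = {1, 7, 7, 1} was derived earlier (the crossover probability p = 0.01 is an arbitrary illustrative choice):

```python
# Evaluate (2.1) and (2.2) for the (7, 3, 4) code on a BSC with p = 0.01.
from math import comb

n, t = 7, 1
C = [1, 7, 7, 1]          # C_0 .. C_3 (covering radius 3)
p = 0.01                  # BSC crossover probability

P_bdd = 1 - sum(C[i] * p**i * (1 - p)**(n - i) for i in range(t + 1))    # (2.1)
P_full = 1 - sum(C[i] * p**i * (1 - p)**(n - i) for i in range(len(C)))  # (2.2)

assert C[1] == comb(n, 1)  # C_i = (n choose i) for i <= t
print(P_bdd, P_full)
```

As expected, the full decoder's error probability is smaller than the bounded distance decoder's, the gap coming entirely from the C_i, i > t, terms.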
If the covering radius s of a code is known but C_i, i > t, are not, then the probability of error after decoding can be bounded by

² Bounded distance decoders usually have polynomial complexity, e.g. the Berlekamp-Massey decoder for BCH codes has complexity O(t²) [1].

Fig. 2.2 BCH code BDD and full decoder performance, frame error rate (FER) against E_b/N_0: (a) BCH code (63, 36); (b) BCH code (63, 39); (c) Goppa code (128, 100); (d) BCH code (127, 92)

$$ P_e \ge 1 - \left[ \sum_{i=0}^{t} \binom{n}{i} p^i (1-p)^{n-i} + p^s (1-p)^{n-s} \right] \qquad (2.3) $$

$$ P_e \le 1 - \left[ \sum_{i=0}^{t} \binom{n}{i} p^i (1-p)^{n-i} + W_s\, p^s (1-p)^{n-s} \right] \qquad (2.4) $$

assuming the code can correct t errors, where

$$ W_s = 2^{n-k} - \sum_{i=0}^{t} \binom{n}{i}. $$
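The bounds (2.3) and (2.4) require only n, k, t and the covering radius s. A sketch evaluating both right-hand side expressions for the (7, 3, 4) example code (t = 1, s = 3); for a code this short the two expressions are nearly equal, since W_s is small and p^s(1−p)^{n−s} is tiny:

```python
# Evaluate the right-hand sides of (2.3) and (2.4) for the (7, 3, 4) code,
# assuming only t = 1 and the covering radius s = 3 are known.
from math import comb

n, k, t, s = 7, 3, 1, 3
p = 0.01

base = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1))
W_s = 2 ** (n - k) - sum(comb(n, i) for i in range(t + 1))  # cosets beyond t

P_23 = 1 - (base + p**s * (1 - p)**(n - s))        # right-hand side of (2.3)
P_24 = 1 - (base + W_s * p**s * (1 - p)**(n - s))  # right-hand side of (2.4)
print(W_s, P_23, P_24)
```

For this code W_s = 8, and both expressions stay close to the bounded distance result, reflecting how little probability mass lies on error patterns of weight s = 3.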

The lower bound assumes that there is a single coset leader of weight s, hence the term p^s(1−p)^{n−s}, while the upper bound assumes that all the coset leaders of weight greater than t have weight equal to the covering radius s. For the lower bound to hold, W_s ≥ 1. The lower bound can be further tightened by assuming that the remaining W_s − 1 cosets have leaders of weight t + 1, t + 2, ... until they are all accounted for.³

2.3 Soft Decision Performance

The union bound on the probability of sequence error for maximum likelihood soft decision decoding of binary codes with BPSK modulation on the AWGN channel is given by [2]

$$ P_s \le \frac{1}{2} \sum_{j=1}^{n} A_j\, \mathrm{erfc}\!\left(\sqrt{Rj\frac{E_b}{N_0}}\right) \qquad (2.5) $$

where R is the code rate, A_j is the number of codewords of weight j and E_b/N_0 is the SNR per bit. The union bound is obtained by assuming that the events in which the received sequence is closer in Euclidean distance to a codeword of weight j are independent, so that the probability of error is at most the sum of the probabilities of these events. A drawback to the exact computation of the union bound is that the weight distribution A_j, 0 ≤ j ≤ n, of the code is required. Except for a small number of cases, the complete weight distribution of many codes is not known due to complexity limitations. Since A_j = 0 for 1 ≤ j < d, where d is the minimum distance of the code, we can express (2.5) as

$$ P_s \le \frac{1}{2} \sum_{j=d}^{n} A_j\, \mathrm{erfc}\!\left(\sqrt{Rj\frac{E_b}{N_0}}\right) \qquad (2.6) $$

$$ P_s \le \frac{1}{2} A_d\, \mathrm{erfc}\!\left(\sqrt{Rd\frac{E_b}{N_0}}\right) + \frac{1}{2} \sum_{j=d+1}^{n} A_j\, \mathrm{erfc}\!\left(\sqrt{Rj\frac{E_b}{N_0}}\right) \qquad (2.7) $$

A lower bound on the probability of error can be obtained if it is assumed that error events occur only when the received sequence is closer in Euclidean distance to codewords at distance d from the correct codeword,

$$ P_s \ge \frac{1}{2} A_d\, \mathrm{erfc}\!\left(\sqrt{Rd\frac{E_b}{N_0}}\right) \qquad (2.8) $$

³ This can be viewed as the code having only one term at the covering radius, with all other terms at t + 1.

where

$$ \frac{1}{2} \sum_{j=d+1}^{n} A_j\, \mathrm{erfc}\!\left(\sqrt{Rj\frac{E_b}{N_0}}\right) = 0. \qquad (2.9) $$

As such,

$$ \frac{1}{2} A_d\, \mathrm{erfc}\!\left(\sqrt{Rd\frac{E_b}{N_0}}\right) \le P_s \le \frac{1}{2} \sum_{j=d}^{n} A_j\, \mathrm{erfc}\!\left(\sqrt{Rj\frac{E_b}{N_0}}\right) \qquad (2.10) $$

Therefore, the practical soft decision performance of a binary code lies between the upper and lower union bounds. It is instructive to observe the union bound performance for actual codes, using their computed weight distributions, as the SNR per bit E_b/N_0 increases. By allowing E_b/N_0 to become large (and P_s to decrease), simulations for several codes suggest that at a certain intersection value of E_b/N_0 the upper bound equals the lower bound. Consider Figs. 2.3, 2.4 and 2.5, which show the frame error rate against the SNR per bit for three types of codes. The upper bounds in the figures are obtained using the complete weight distribution of the codes with Eq. (2.5). The lower bounds are obtained using only the number of codewords of minimum weight with Eq. (2.8). It can be observed that as E_b/N_0 becomes large, the upper bound meets and equals the lower bound. The significance of this observation is that for E_b/N_0 values above the point where the two bounds intersect, the performance of the codes under soft decision decoding is completely determined by the lower bound (or, equivalently, the upper bound). In this region where the bounds agree, when errors occur they do so because the received sequence is closer to codewords a distance d away from the correct codeword. The actual performance of the codes below this region lies somewhere between the upper and lower bounds. As we have seen earlier, the two bounds agree when the sum in (2.9) approaches 0. Using the approximation erfc(x) < e^{−x²} of the complementary error function, the condition becomes

$$ \frac{1}{2} \sum_{j=d+1}^{n} A_j\, e^{-Rj\frac{E_b}{N_0}} \approx 0. \qquad (2.11) $$

Clearly, the sum approximates to zero if each term in the sum also approximates to zero.
It is safe to assume that the term A_j erfc(√(Rj E_b/N_0)) decreases as j increases, since erfc(√(Rj E_b/N_0)) decreases exponentially with j while A_j grows at most binomially (in most cases). The size of the gap between the lower and upper bounds is also

Fig. 2.3 Extended BCH code lower and upper union bound performance, frame error rate (FER) against E_b/N_0: (a) extended BCH code (128, 29); (b) extended BCH code (128, 64); (c) extended BCH code (128, 85); (d) extended BCH code (128, 120)

determined by these terms. Each term A_j e^{−Rj E_b/N_0} becomes small if one or both of the following conditions are met:
(a) Some of the A_j, j > d, are zero. This is common in low rate binary codes with a small number of codewords.
(b) The product Rj E_b/N_0 for j > d becomes very large.
Observing Figs. 2.3, 2.4 and 2.5, it can be seen that low rate codes, for which R = k/n is small, have some A_j = 0, j > d, and as such, even at small values of E_b/N_0, the gaps

Fig. 2.4 BCH code lower and upper union bound performance, frame error rate (FER) against E_b/N_0: (a) BCH code (127, 22); (b) BCH code (127, 36); (c) BCH code (127, 50); (d) BCH code (127, 92)

between the upper and lower bounds are small. As an example, consider the low rate (127, 22, 47) BCH code in Fig. 2.4a, which has A_j = 0 for j ∈ {49, ..., 54} ∪ {57, ..., 62} ∪ {65, ..., 70} ∪ {73, ..., 78} ∪ {81, ..., 126}. For the high rate codes, R is large, so the product Rj E_b/N_0 becomes very large and the gaps between the upper and lower bounds are again small. Figure 2.6 compares bounded distance decoding and full decoding with maximum likelihood soft decision decoding of the (63, 39) and (63, 36) BCH codes. It can be seen from the figure that whilst the probability of error for maximum likelihood

Fig. 2.5 Reed-Muller code lower and upper union bound performance, frame error rate (FER) against E_b/N_0: (a) Reed-Muller code (128, 29); (b) Reed-Muller code (128, 99); (c) Reed-Muller code (256, 37); (d) Reed-Muller code (256, 163)

hard decision decoding is smaller than that of bounded distance decoding for all values of E_b/N_0, the upper bound on the probability of error for maximum likelihood soft decision decoding agrees with the lower bound beyond a certain value of E_b/N_0. This suggests that for soft decision decoding, the probability of error can be accurately determined by the lower union bound from a certain value of E_b/N_0 onwards. Computing the lower union bound from (2.10) requires only knowledge of the minimum distance d of the code and the multiplicity A_d of the minimum weight codewords. In practice, A_d is much easier to obtain than the complete weight distribution of the code.
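The union bounds (2.5) and (2.8) are simple to evaluate once a weight distribution is available. A sketch using the (7, 4, 3) Hamming code, whose weight distribution A_3 = 7, A_4 = 7, A_7 = 1 is well known (this code is chosen purely for illustration; it is not one of the codes plotted in Figs. 2.3-2.5):

```python
# Upper union bound (2.5) and lower union bound (2.8) for the (7, 4, 3)
# Hamming code under BPSK on the AWGN channel.
from math import erfc, sqrt

A = {3: 7, 4: 7, 7: 1}   # non-zero codeword weights and multiplicities
n, k, d = 7, 4, 3
R = k / n                # code rate

def union_bounds(ebno_db):
    g = 10 ** (ebno_db / 10)   # Eb/N0 as a linear ratio
    upper = 0.5 * sum(Aj * erfc(sqrt(R * j * g)) for j, Aj in A.items())
    lower = 0.5 * A[d] * erfc(sqrt(R * d * g))
    return lower, upper

for db in (2, 6, 10):
    lo, up = union_bounds(db)
    print(db, lo, up)
```

At 2 dB the two bounds are clearly separated, while by 10 dB they agree to within a fraction of a percent, mirroring the convergence visible in the figures.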

Fig. 2.6 BCH code: bounded distance, full and maximum likelihood soft decision decoding: (a) BCH code (63, 39) union bounds; (b) BCH code (63, 39) BDD and full decoding; (c) BCH code (63, 36) union bounds; (d) BCH code (63, 36) BDD and full decoding

2.3.1 Performance Assuming a Binomial Weight Distribution

Evaluating the performance of long codes with many codewords using the union upper bound is difficult, since one needs to compute the complete weight distribution of the codes, and computing the weight distribution of a binary code is known to be NP-complete [3]. For many good linear binary codes, however, the weight distribution closely approximates to a binomial distribution. Let (E_b/N_0)_δ be defined by

Fig. 2.7 Union bounds using binomial and actual weight distributions (WD) for best known codes: (a) (127, 30, 37) binomial and actual weight distributions; (b) (127, 120, 3) binomial and actual weight distributions; (c) (70, 30, 16) binomial and actual weight distributions; (d) (255, 235, 6) binomial and actual weight distributions

$$ \frac{1}{2} A_d\, \mathrm{erfc}\!\left(\sqrt{Rd\frac{E_b}{N_0}}\right)\Bigg|_{E_b/N_0=(E_b/N_0)_\delta} = \frac{1}{2} \sum_{j=d}^{n} A_j\, \mathrm{erfc}\!\left(\sqrt{Rj\frac{E_b}{N_0}}\right)\Bigg|_{E_b/N_0=(E_b/N_0)_\delta} \qquad (2.12) $$

Hence, (E_b/N_0)_δ is the SNR per bit at which the difference between the upper and lower union bounds for the code becomes very small. It is worth noting that equality in (2.12) is only possible as E_b/N_0 approaches infinity, since lim_{x→∞} erfc(x) = 0. To find (E_b/N_0)_δ for a binary code (n, k, d), we simply assume a binomial weight distribution for the code, so that

$$ A_i = \frac{2^k}{2^n} \binom{n}{i} \qquad (2.13) $$

Fig. 2.8 Union bounds using binomial and actual weight distributions (WD) for the (255, 120, 40) best known code

and compute an E_b/N_0 value that satisfies (2.12). It must be noted that the (E_b/N_0)_δ obtained using this approach is only an estimate; its accuracy depends on how closely the weight distribution of the code approximates to a binomial, and on how small the difference P_upper − P_lower between the upper and lower union bounds is. Consider Fig. 2.7, which shows the upper and lower union bounds using both binomial weight distributions and the actual weight distributions of the codes. From Fig. 2.7a, it can be seen that for the low rate code (127, 30, 37), the performance of the code using the binomial approximation of the weight distribution does not agree with the performance using the actual weight distribution at low values of E_b/N_0. Interestingly, Fig. 2.7b-d show that as the rate of the codes increases, the actual weight distribution of the codes approaches a binomial. The difference in the performance of the codes using the binomial approximation and the actual weight distribution decreases as E_b/N_0 increases. Figure 2.8 shows the performance of the (255, 120, 40) code using a binomial weight distribution. An estimate of (E_b/N_0)_δ from the figure is 5.2 dB. Thus, for E_b/N_0 ≥ 5.2 dB, we can estimate the performance of the (255, 120, 40) code under maximum likelihood soft decision decoding in the AWGN channel using the lower union bound.
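The procedure can be sketched in a few lines: assume the binomial weight distribution (2.13) and scan E_b/N_0 for the point at which the two union bounds effectively merge. The 1% closeness criterion below is an arbitrary assumption; the chapter reads approximately 5.2 dB off Fig. 2.8 by eye, and a stricter numerical criterion like this one will give a somewhat larger value:

```python
# Estimate (Eb/N0)_delta for a (255, 120, 40) code using the binomial
# weight distribution approximation (2.13).
from math import comb, erfc, sqrt

n, k, d = 255, 120, 40
R = k / n
A = {j: 2**k / 2**n * comb(n, j) for j in range(d, n + 1)}  # (2.13)

def bounds(ebno_db):
    g = 10 ** (ebno_db / 10)
    lower = 0.5 * A[d] * erfc(sqrt(R * d * g))                            # (2.8)
    upper = 0.5 * sum(Aj * erfc(sqrt(R * j * g)) for j, Aj in A.items())  # (2.5)
    return lower, upper

delta_db = None
for step in range(0, 200):          # scan 0.0 .. 19.9 dB in 0.1 dB steps
    lo, up = bounds(step / 10)
    if lo > 0 and up <= 1.01 * lo:  # bounds agree to within 1%
        delta_db = step / 10
        break
print(delta_db)
```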

Fig. 2.9 Performance of self-dual codes: (a) extended BCH code (128, 64, 22); (b) extended QR code (104, 52, 20); (c) extended QR code (136, 68, 24); (d) extended QR code (168, 84, 24); (e) extended QR code (80, 40, 16); (f) extended QR code (48, 24, 12). Each plot shows soft decision decoding (SD), bounded distance decoding (BDD) and unencoded transmission

2.3.2 Performance of Self-dual Codes

A self-dual code C has the property that it is its own dual, C = C⊥. Self-dual codes are always half rate, with parameters (n, n/2, d). These codes are known to meet the Gilbert-Varshamov bound, and some of the best known codes are self-dual codes. Self-dual codes form a subclass of formally self-dual codes, which have the property that W(C) = W(C⊥), where W(C) denotes the weight distribution of C. The weight distribution of certain types of formally self-dual codes can be computed without enumerating all the codewords of the code; for this reason, these codes can readily be used for analytical purposes. The fact that self-dual codes all have the same code rate and good properties makes them ideal for evaluating the performance of codes of varying length. Consider Fig. 2.9, which shows the performance of binary self-dual (and formally self-dual) codes of different lengths using the upper and lower union bounds with actual weight distributions, bounded distance decoding and unencoded transmission.

Fig. 2.10 Coding gain against code length for self-dual codes at FER of 10^-10 and 10^-20, for both soft decision decoding (SDD) and bounded distance decoding (BDD); the plotted codes are (24, 12, 8), (48, 24, 12), (80, 40, 16), (104, 52, 20), (128, 64, 22), (136, 68, 24) and (168, 84, 24)

Figure 2.10

shows the coding gain of the self-dual codes at frame error rates (FER) of 10^-10 and 10^-20 for soft decision decoding (SDD) and bounded distance decoding (BDD). The coding gain represents the difference in dB between the SDD/BDD performance and unencoded transmission; it is a measure of the power saving, in dB, obtainable from a coded system relative to an unencoded system at a certain probability of error. The SDD performance of the codes with lengths 168, 136 and 128 at FER of 10^-10 is obtained from the union upper bound, because the upper and lower bounds do not agree at this FER; the coding gain for these cases is therefore a lower bound. It is instructive to note that the difference between the coding gains for SDD and BDD at the two FER values increases with the length of the code. At a FER of 10^-20, SDD gives 3.36 dB coding gain over BDD for the code of length 168 and 2.70 dB for the code of length 24. At a FER of 10^-10, SDD gives 3.70 dB coding gain over BDD for the code of length 168 and 2.44 dB for the code of length 24.

2.4 Summary

In this chapter, we discussed the performance of codes under hard and soft decision decoding. For hard decision decoding, the performance of codes on the binary symmetric channel was discussed, and numerically evaluated results comparing the bounded distance decoder to the full decoder were presented for a range of codes whose coset leader weight distribution is known. It was shown that as the SNR per information bit increases there is still an observable difference between bounded distance and full decoders. Lower and upper bounds for decoding on the BSC were also given for cases where the covering radius of the code is known. For soft decision decoding, the performance of a wide range of specific codes was evaluated numerically using the union bounds. The upper and lower union bounds were shown to converge for all codes as the SNR per information bit increases.
It was apparent that, for surprisingly low values of E_b/N_0, the performance of a linear code can be predicted using only knowledge of the multiplicity of codewords of minimum weight. It was also shown that for codes whose weight distribution is difficult to compute, a binomial weight distribution can be used instead.

References

1. Moon, T.K.: Error Correction Coding: Mathematical Methods and Algorithms. Wiley, New Jersey (2005)
2. Proakis, J.: Digital Communications, 4th edn. McGraw-Hill, New York (2001)
3. Vardy, A.: The intractability of computing the minimum distance of a code. IEEE Trans. Inf. Theory 43, 1759-1766 (1997)

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
