Error Control Codes

Tarmo Anttalainen

Abstract: This paper gives a brief introduction to error control coding. It introduces block codes, convolutional codes and trellis coded modulation (TCM). Only binary codes are considered in this brief introduction.

Contents

Introduction
1. Block Codes
1.1. Binary Block Codes
1.1.1. Minimum or Free Hamming Distance
1.1.2. Syndromes
1.1.3. Error Detection
1.1.4. Weight Distribution
1.1.5. Probability of Undetected Error
1.1.6. Error Correction
1.1.7. Standard Array
1.1.8. Syndrome Decoding
2. Convolutional Codes
2.1. Encoder Description
2.2. Convolutional Encoding and Decoding
2.3. Recursive Systematic Convolutional Code
3. Trellis Coded Modulation
3.1. Encoder Description
3.2. Mapping by Set Partitioning
4. Conclusions
Index
References

Introduction

The history of error control codes began in 1950, when a class of single error correcting block codes was introduced by Hamming [7, p 399]. The correcting capability of Hamming codes is quite weak, but they are still used in many applications, such as TV teletext and Bluetooth. During the 1960s convolutional codes were introduced, and Viterbi decoding made them practical to implement. In the beginning of the 1980s trellis coded modulation (TCM), which combines convolutional codes and modulation, was invented. With the help of TCM the data rate of voice band modems has increased from 9.6 kbit/s to 33.6 kbit/s. All three error control methods are introduced in this paper. The latest major invention in the area of error control is the Turbo code, but it is not discussed here.

1. Block Codes

Block codes are described by two parameters: n, the length of the code word, and k, the number of information symbols encoded into each code word. The redundant information of n-k symbols is used for Forward Error Correction (FEC), or for error detection if Automatic Repeat Request (ARQ), also called Backward Error Correction (BEC), is in use. How codewords are generated for each set of information bits is defined by the generator polynomial or generator matrix of the code.

1.1. Binary Block Codes

Let V be the set of all possible combinations of n symbols (in the binary case, a set of bits) x_i, called n-tuples, where i = 1, ..., 2^n [1]. In the binary case these symbols may get the values 0 or 1.

V = {x_1, x_2, ..., x_2^n}    (1.1.1)

We call a subset C of V a code, and the selected n-tuples of C we call codewords. We use M as the number of code words in C. Note that error control requires redundancy, which means that all possible combinations of n bits (the whole V) are not used as code words. If all code words have a constant length of n, we talk about a block code with block length n. For a certain code we select M = 2^k of the vectors in V as code words. For transmission of k information bits we need M codewords, each representing one of the 2^k possible sets of k information bits. Block codes encode k information bits into n-bit codewords. Encoding is done block by block, and each block is independent of the preceding and following blocks. There is a one-to-one relationship between each set of k information bits and one of the codewords.

Code Rate

We write a block code as an (n, k) code and define the code rate of a block code as

R_c = k/n    (1.1.2)

The code rate is always smaller than one; it tells the fraction of information in the transmitted data, and 1 - R_c tells the fraction of redundancy in the transmitted data. In many error correction

codes in use the code rate is in the order of 0.5 to make the performance good enough. If only error detection is needed, code rates close to 1 give good enough performance.

Hamming Distance

The Hamming distance d(x, y) of two code words x and y is the number of places in which they differ [3]. For example, if c_1 = 10110 and c_2 = 00101, then d(c_1, c_2) = d(10110, 00101) = 3.

1.1.1. Minimum or Free Hamming Distance

The minimum Hamming distance, d_free, is the smallest number of places in which any two code words of a code differ [1; 3]:

d_free = min d(c_1, c_2)    (1.1.3)

where c_1 and c_2 range over all pairs of different code words of the code. The smallest of all possible distances is taken as the minimum or free distance. The free distance plays an important role in error control coding because it tells the minimum number of errors that may change one code word into another.

Systematic Codes

For systematic codes the first k bits of the transmitted code word equal the information bits; that is, the first k bits of the n-bit code word contain the information bits as they are, and the remaining n-k bits are redundant bits for error control.

Error Correction and Detection Capability of Block Codes

When a code word in error is received, the task of the error correcting decoder is to find the codeword that most probably was the transmitted one. For that we assume that the closest word, with the smallest Hamming distance to the received one, is the best choice. This is a reasonable assumption because the occurrence of a smaller number of errors has a higher probability than a higher number of errors (in an operational system). If the received code word is equal to one of the error free code words, the decoder is not able to correct, or even detect, that errors have occurred. Generally, if t errors occur, the decoder is always able to correct them if

d_free >= 2t + 1    (1.1.4)

Sometimes error correction is possible although the inequality above is not satisfied, but t-error correction is not guaranteed if d_free < 2t + 1. A code is able to detect an error if the received word is not one of the code words. This is the case if fewer errors than the minimum distance have occurred. Up to l bit errors are always detected if

d_free >= l + 1    (1.1.5)

If l errors occur, and Equation 1.1.5 is valid, it is still certain that the code word has not changed into another code word, because all code words differ in a higher number of bit positions than l, i.e., d_free is larger than l.
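These distance measures are easy to check numerically. The following sketch (plain Python; the codewords are illustrative, not taken from the paper) computes the Hamming distance of Equation 1.1.3 and the resulting free distance:

from itertools import combinations

def hamming_distance(x, y):
    """Number of places in which the words x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def free_distance(code):
    """Smallest Hamming distance over all pairs of different codewords."""
    return min(hamming_distance(c1, c2) for c1, c2 in combinations(code, 2))

# Illustrative (5, 2) code with four codewords; d_free = 3, so t = 1 error
# can always be corrected (d_free >= 2t + 1) and l = 2 errors detected.
code = [(0,0,0,0,0), (1,0,1,1,0), (0,1,0,1,1), (1,1,1,0,1)]
print(hamming_distance((1,0,1,1,0), (0,0,1,0,1)))  # -> 3
print(free_distance(code))                          # -> 3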

Structure of Linear Block Codes

Code words of linear block codes are n-tuples of elements that in the binary case are from GF(2), i.e., bits with the value 0 or 1. The general properties of linear codes are, according to the definition [3, p46]: the sum of two codewords gives one of the codewords, and the all-zero vector is one of the code words (the sum of any codeword and itself).

Hamming weight and free distance

The Hamming weight w(c) of a codeword c is equal to the number of nonzero places, or components, in the code word [3, p46]. We saw above that the free distance d_free is an important measure when we study the error control capability of a code. Let c be a code word of a linear code; naturally c - c (or c + c) is the all-zero code word. Now, by evaluating all non-zero code words, we get the Hamming weight w(c) of every code word, equal to the number of its non-zero places.

Let us take two binary codewords X and Z. The Hamming distance between these two codeword vectors is [5, p 48]

d(X, Z) = w(X + Z)    (1.1.6)

For example, if X = [10110] and Z = [00101], then X + Z = [10011], where we have mod-2 added the words bit by bit. In the case of a linear code X + Z = Y is also a codeword. The distance between X and Z therefore equals the weight of another codeword Y, i.e.,

d(X, Z) = w(X + Z) = w(Y)    (1.1.7)

Thus, when we calculate the distances between all pairs of codewords, we actually evaluate the weight of another codeword, which is the sum of the two under study. When the code word Z is the all-zero vector, X + Z = X and the evaluation actually gives the weights of all non-zero codewords. In the case of a linear code all distances between codewords equal the weight of some code word of the code. Then the weights of the codewords give all Hamming distances of the code. Now the free or minimum distance is given by the minimum Hamming weight of the code:

d_free = min w(c) = w(c)_min    (1.1.8)

where c is any code word except the all-zero word, and we take the minimum weight over all code words. Now we can rewrite Equations 1.1.4 and 1.1.5:

d_free = w(c)_min >= 2t + 1    (1.1.9)
d_free = w(c)_min >= l + 1    (1.1.10)

where t is the number of errors that the code can always correct and l is the maximum number that is always detected.

Matrix representation of linear block codes

Generally, linear block codes are defined by the generator matrix G of the code [3, p47]:

G = [ g_11  g_12  ...  g_1n
      g_21  g_22  ...  g_2n
      ...
      g_k1  g_k2  ...  g_kn ]    (1.1.11)

where each row contains a row vector g_i; for example, g_1 = (g_11, g_12, ..., g_1n). The matrix has k rows, which equals the number of information bits encoded into each codeword, and the number of columns n equals the length of the codewords.

Encoding

When an information vector is

i = [i_1 i_2 ... i_k]    (1.1.12)

then the code words, c = [c_1 c_2 ... c_n], are given by

c = i G    (1.1.13)

that is,

c = [i_1 g_11 + i_2 g_21 + ... + i_k g_k1, i_1 g_12 + i_2 g_22 + ... + i_k g_k2, ..., i_1 g_1n + i_2 g_2n + ... + i_k g_kn]    (1.1.14)

Let us assume now that the information words and code words are binary, i.e., sequences of binary symbols, bits from GF(2). Any code word is a linear combination of the row vectors of G, because the information bit places with a logical 1 define which rows are added to make up a codeword. The rows of the matrix have to be linearly independent; that is, there is no combination of rows whose sum results in the all-zero word. Otherwise different information words would produce equal codewords. We can see the rows as the basis of a vector space, where each row represents the basis function of one dimension. The number of dimensions is the number of information bits k, which equals the number of rows in the generator matrix. Codewords are vectors in this space.

Two codes are equivalent if and only if their generator matrices are related [3] by:
1. column permutations, and
2. elementary row operations.

Equivalent codes have similar performance, but the set of code words may be different; the set of code words always equals the permuted set of code words of an equivalent code. The code is not changed (the codes are equal, i.e., the set of codewords remains the same) under elementary row operations. Elementary row operations on a generator matrix are as follows [3]:
1. interchange of any two rows;
2. multiplication of any row by a nonzero field element (in the binary case only by 1);
3. replacement of any row by the sum of itself and (a multiple of) any other row.

Elementary row operations change the mapping of information words to code words, but the performance of the code remains the same because the set of code words is unchanged. Any generator matrix of an (n, k) linear code can be changed by row operations and column permutations into the systematic form [3, p 49][6, p47]:

G = [I_k | P] =
[ 1 0 ... 0 | p_11  p_12  ...  p_1,n-k
  0 1 ... 0 | p_21  p_22  ...  p_2,n-k
  ...
  0 0 ... 1 | p_k1  p_k2  ...  p_k,n-k ]    (1.1.15)

where I_k is the k x k identity matrix and P is a k x (n-k) matrix that determines the n-k redundant bits, or parity check bits. Every linear code has an equivalent systematic code [3, p5]. A generator matrix in systematic form generates a systematic linear block code in which the first k bits of each code word are identical to the information bits, and the remaining n-k bits of each code word are linear combinations of the information bits.

Example 1.1.1

Let us take a generator matrix of a simple systematic binary linear (5, 3) code [3]:

G = [ 1 0 0 1 1
      0 1 0 1 0
      0 0 1 0 1 ]

If the information vector is i = [1 0 1], the encoded codeword becomes

c = [1 0 1] G = [1 0 1 1 0]

Singleton bound

To derive an upper bound on d_free we can put any linear block code into systematic form. The maximum number of non-zero elements in any row of P cannot exceed n-k. Then the number of non-zero elements in any row of G cannot exceed 1 + n - k. Since all rows of G are valid codewords [5; 3],

d_free = w(c)_min <= 1 + n - k    (1.1.16)

This is known as the Singleton bound, and we can say, without any knowledge about the generator or parity check matrices, that the free distance can never exceed 1 + n - k. Note that usually equation (1.1.16) gives a very optimistic value for d_free. One code that meets the Singleton bound is the binary repetition code [1, p396]:

c_1 = [0, 0, ..., 0]
c_2 = [1, 1, ..., 1]

In this case k = 1 and d_free = d(c_1, c_2) = n = 1 + n - k. Codes that meet the Singleton bound are called maximum distance separable (MDS) codes, and the repetition code is the only binary MDS code. The non-binary Reed-Solomon codes are also MDS codes.
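A sketch of the encoding rule c = iG with all arithmetic modulo 2, using the generator matrix of Example 1.1.1 (the matrix entries above are an assumption, since the original values were lost to extraction; any systematic G = [I | P] works the same way):

import numpy as np

# Assumed (5, 3) systematic generator matrix, G = [I | P], as in Example 1.1.1.
G = np.array([[1, 0, 0, 1, 1],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1]])

def encode(i, G):
    """Encode the information vector i into the codeword c = iG over GF(2)."""
    return np.mod(i @ G, 2)

i = np.array([1, 0, 1])
print(encode(i, G))  # -> [1 0 1 1 0]: first k bits equal the information bits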

Parity Check Matrix

The decoder in the receiver checks whether the received codeword is an original codeword or whether it is in error. For this it needs the parity check matrix H, which gives, for any error free codeword c,

c H^T = 0    (1.1.17)

The decoder computes according to (1.1.17), and if the result is the all-zero vector, most probably no errors have occurred. Naturally H must be compatible with the G that is used for generation of the codewords. To derive the parity check matrix we start with the generator matrix in systematic form. We use the generator matrix in the encoder to make up the codewords, and the encoder calculates the codewords as

c = i G = i [I_k | P] = [i | i P] = [c_k  c_n-k]    (1.1.18)

where [c_k  c_n-k] represents a codeword in systematic form divided into two parts. The first k bits of the codeword are identical to the information bits, and the second part is the parity check section containing n-k bits. For a systematic codeword i = c_k, and we can see from the equation above that

c_n-k = i P = c_k P    (1.1.19)

where c_k represents the first k bits of the codeword. Now we may write -c_k P + c_n-k = 0, which can be written in another form as

[c_k  c_n-k] [ -P
               I_n-k ] = 0    (1.1.20)

Now if we compare formulas (1.1.17) and (1.1.20), we notice that we have got the transpose of the parity check matrix as

H^T = [ -P
        I_n-k ]    (1.1.21)

which has n rows and n-k columns. The parity check matrix we get in the form

H = [-P^T  I_n-k]    (1.1.22)

which has n-k rows and n columns. Note that -P^T equals P^T when we are dealing with binary codes, where the elements are from GF(2). Because c H^T = 0 holds for all code words, and a code word may be equal to any row of G, we get [3]

G H^T = [I_k | P] [ -P
                    I_n-k ] = -P + P = 0

where 0 is the k x (n-k) matrix in which all elements equal zero. The parity check matrix we have found is valid because it fulfills the requirement of Equation 1.1.17.

Example 1.1.2

The generator matrix of the (5, 3) systematic linear block code in Example 1.1.1 was:

G = [ 1 0 0 1 1
      0 1 0 1 0
      0 0 1 0 1 ] = [I_3 | P]

Now, according to (1.1.22), we may write the corresponding parity check matrix as

H = [-P^T  I_n-k] = [ 1 1 0 1 0
                      1 0 1 0 1 ]

This has n-k = 2 rows and n = 5 columns. To check a received code word, for example c = [1 0 1 1 0], which corresponds to the information vector i = [1 0 1], the decoder computes

c H^T = [1 0 1 1 0] [ 1 1
                      1 0
                      0 1
                      1 0
                      0 1 ] = [0 0] = 0

The code word is detected to be error free. If we computed G H^T, we would get a zero matrix.

1.1.2. Syndromes

As we saw above, the parity check matrix H is directly related to the generator matrix G used in the encoder. The decoder multiplies the received word by the transpose of the parity check matrix, and if the word is one of the error free codewords c, the decoder gets:

c H^T = c [ -P
            I_n-k ] = [0 0 ... 0]    (1.1.23)

If the received word is in error (and not equal to any of the error free code words), we write it as c'. Then multiplication according to (1.1.23) gives at least one non-zero element in the resulting vector. We call this vector, which has as many elements as there are rows in the parity check matrix (or columns in H^T), the syndrome, s, given by

s = c' H^T    (1.1.24)

where c' now represents a received word, which may be error free or in error. If the syndrome s = 0, i.e., all elements of the syndrome equal zero, the received word is error free, or errors have changed it into another codeword, in which case the errors are undetectable. Otherwise errors are indicated by the presence of non-zero elements in s. For error detection it is enough to check whether there are one or more non-zero elements in the syndrome. Error correction can also be based on the syndrome. To develop a decoding method we introduce the n-bit error vector e, whose nonzero elements mark the positions of the transmission errors in c. For instance, if the transmitted codeword is c = [1 0 1 1 0] and the received word in error is c' = [1 1 1 1 0], then e = [0 1 0 0 0]. In general

c' = c + e    (1.1.25)

and in the case of binary codes we can write (in the finite field GF(2) the additive inverse element of 1 is 1, so -1 = 1 and -e = e)

c = c' - e = c' + e    (1.1.26)

We can think of this in such a way that a second error in the same bit location cancels the original error, and the resulting code word is the original one. If we now substitute (1.1.25) into (1.1.24), we obtain

s = c' H^T = (c + e) H^T = c H^T + e H^T = e H^T    (1.1.27)

We see that the syndrome depends only on the error pattern; it does not depend on the transmitted codeword.

Example 1.1.3

Let us take a systematic (7, 4) Hamming code defined by the generator matrix [1, p 395]:

G = [I_4 | P] = [ 1 0 0 0 1 1 0
                  0 1 0 0 0 1 1
                  0 0 1 0 1 1 1
                  0 0 0 1 1 0 1 ]

The corresponding parity check matrix becomes

H = [-P^T  I_3] = [ 1 0 1 1 1 0 0
                    1 1 1 0 0 1 0
                    0 1 1 1 0 0 1 ]

We see that all columns of H are different and contain at least one non-zero element. This is one characteristic of Hamming codes, and it produces a unique syndrome for every single error case. The transpose of H is:

H^T = [ 1 1 0
        0 1 1
        1 1 1
        1 0 1
        1 0 0
        0 1 0
        0 0 1 ]

The syndrome is now given by s = c' H^T = e H^T. Decoding with the help of the syndrome is discussed in Section 1.1.8.

1.1.3. Error Detection

A linear block code detects all error patterns with a smaller number of errors than d_free [1]. If e is a code word, the errors are not noticed. There are 2^k - 1 undetectable error patterns (the same as the number of non-zero codewords), but 2^n - 1 non-zero error patterns. Hence the number of detectable error patterns is

2^n - 1 - (2^k - 1) = 2^n - 2^k

Usually the number of undetectable error patterns is much smaller than the total number of possible error patterns. For example, for the (7, 4) Hamming code in Example 1.1.3 there are 2^4 - 1 = 15 undetectable error patterns and 2^7 - 2^4 = 112 detectable error patterns [1].

Cyclic Redundancy Check (CRC) is the most popular code designed specially for error detection. It is used together with Automatic Repeat Request (ARQ) protocols, where errors are detected and frames in error are retransmitted. This is a much more efficient error control scheme than the Forward Error Correction (FEC) we discuss here.

Example 1.1.4

Let us assume that the bit error rate of the channel is BER = 10^-6 and frames, each containing 1000 bits, are transmitted. The probability of a given number of errors we get with the help of the Poisson distribution:

P(i) = (m^i / i!) e^-m

where i is a certain number of errors in a frame, and we want to find the probability that exactly i errors occur. The average number of errors in the frame is m, and in our case that is m = 1000 * 10^-6 = 0.001. Now the probabilities P(i) for i errors are:

P(0) = e^-0.001 = 0.999
P(1) = 0.001 e^-0.001 = 10^-3
P(2) = (0.001^2 / 2) e^-0.001 = 5 * 10^-7
P(3) = (0.001^3 / 6) e^-0.001 = 1.7 * 10^-10
etc.

We see from the results that approximately one frame in a thousand contains one error, and one frame in two million (or a little more) has more than one error (P(2) + P(3) + P(4) + ...).

Error correction of a single error requires information about which of the bits in the frame is in error, and this requires 10 redundant bits (2^10 = 1024 is enough to tell the location of the bit in error). The residual frame error probability is approximately 10^-6 (the same order as the probability of more than one error in a frame) when all single error cases are corrected.

Error detection of a single error requires only one parity bit. A parity bit is able to detect all single bit errors, and any odd number of errors. In this case the residual frame error probability is approximately (1/2) * 10^-6, that is, approximately the same as the probability of a frame with two errors. The performance of error detection using only a single redundant bit is even better than that of error correction with 10 redundant bits! However, there are applications that do not tolerate the variable delay caused by ARQ, and for them FEC is the only choice. From now on we concentrate on error correction only.
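The Poisson figures of Example 1.1.4 can be reproduced with a few lines; a minimal sketch:

from math import exp, factorial

def poisson(i, m):
    """Probability of exactly i errors in a frame with error mean m."""
    return (m ** i / factorial(i)) * exp(-m)

m = 1000 * 1e-6  # 1000-bit frames at BER 10^-6 -> 0.001 errors per frame
for i in range(4):
    print(i, poisson(i, m))
# -> P(0) ~ 0.999, P(1) ~ 1e-3, P(2) ~ 5e-7, P(3) ~ 1.7e-10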

1.1.4. Weight Distribution

Consider a block code C and let A_i be the number of codewords of weight i. The set {A_0, A_1, ..., A_n} is called the weight distribution of code C. The weight distribution can be expressed as a weight enumerator polynomial [1, p397]:

A(z) = A_0 + A_1 z + A_2 z^2 + ... + A_n z^n    (1.1.28)

Example 1.1.5

For the codewords of the Hamming code in Example 1.1.3, the number of zero-weight codewords is A_0 = 1, the number of weight-one codewords is A_1 = 0, etc. Altogether A_0 = 1, A_1 = 0, A_2 = 0, A_3 = 7, A_4 = 7, A_5 = 0, A_6 = 0, A_7 = 1. Hence the weight enumerator polynomial becomes [1, p397]

A(z) = 1 + 7z^3 + 7z^4 + z^7

1.1.5. Probability of Undetected Error

The code cannot detect that errors have occurred if the received word happens to be equal to one of the codewords, i.e., when c + e = c'. The probability of an undetected error is then [1, p397]

P_e(U) = P(e is a nonzero codeword) = sum_{i=1}^{n} A_i P(w(e) = i)    (1.1.29)

The error probability P(w(e) = i) depends on the coding channel (the portion of a communication system seen by the coding system). The simplest coding channel is the binary symmetric channel (BSC), where the probability that a received bit c'_i is not the same as the transmitted codeword bit c_i (the bit error probability, BER) is

P(c'_i != c_i) = p = 1 - P(c'_i = c_i)    (1.1.30)

For a BSC,

P(w(e) = i) = p^i (1-p)^(n-i)    (1.1.31)

and hence the probability of an undetected error becomes

P_e(U) = sum_{i=1}^{n} A_i p^i (1-p)^(n-i)    (1.1.32)

where A_i is the number of codewords with i ones.

Example 1.1.6

The Hamming code of Examples 1.1.3 and 1.1.5 has an undetected error probability of

P_e(U) = 7p^3 (1-p)^4 + 7p^4 (1-p)^3 + p^7

For a raw channel bit error rate of p = 10^-2 we get P_e(U) = 7 * 10^-6. Hence, the undetected error rate can be very small even for a fairly simple block code [1, p397].

1.1.6. Error Correction

As explained in Section 1.1, a linear block code can correct all error patterns of t or fewer errors if

d_free >= 2t + 1    (1.1.33)

Then the number of errors that is always corrected successfully is

t = floor((d_free - 1) / 2)    (1.1.34)

where floor(x) stands for the largest integer contained in x. A code is usually capable of correcting many error patterns of more than t errors. For a BSC, the probability of codeword error, i.e., unsuccessful error correction, is [1, p398]

P(E) <= 1 - P(t or fewer errors) = 1 - sum_{i=0}^{t} C(n, i) p^i (1-p)^(n-i)    (1.1.35)

where C(n, i) is the binomial coefficient.
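The weight distribution of Equation 1.1.28 and the undetected error probability of Equation 1.1.32 can be evaluated by enumerating all 2^k codewords. A sketch for the (7, 4) Hamming code, using the assumed generator matrix of Example 1.1.3:

import numpy as np
from itertools import product

# Assumed systematic (7, 4) Hamming generator matrix, G = [I | P].
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]])
k, n = G.shape

# Weight distribution A_i: the number of codewords of Hamming weight i.
A = [0] * (n + 1)
for info in product([0, 1], repeat=k):
    c = np.mod(np.array(info) @ G, 2)
    A[int(c.sum())] += 1
print(A)  # -> [1, 0, 0, 7, 7, 0, 0, 1], as in Example 1.1.5

def p_undetected(p):
    """Undetected error probability on a BSC, Equation 1.1.32."""
    return sum(A[i] * p**i * (1 - p)**(n - i) for i in range(1, n + 1))

print(p_undetected(1e-2))  # ~ 7e-6, as in Example 1.1.6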

13 Table.. Standard Array. c c c 3... c e c +e c 3 +e... c +e e c +e c 3 +e... c +e e n- c +e n- c 3 +e n-... c +e n- Each row in standard array consists of all received words that would result from the corresponding error pattern in the first column. Each row is called a coset and the first left-most word, error pattern, is called a coset leader. The table contains all possible received n-bit words, error free words and words in error, and coset leader gives an error vector. Example..5 Let us construct the standard array for the (5, ) systematic code with generator matrix given by G = = [ I P ] We can easily see that the minimum distance d min = 3. Code has only 4 different code words and the minimum weight is 3. The standard array is given in Table... Table.. Standard array for the (5, ) code [6, p447]. Code words Coset leaders consist all zero error pattern, all error patterns of weight, and two error patterns of weight. There are many more double error patterns but in the table there is only room for two of them. Actually the number of rows equals the number of syndromes that is n- = 3 = 8. Double error patterns selected according to procedure explained above give syndromes that are different from the syndromes of single error patterns. We might choose other double error patterns instead of ones written in the table above but, if we follow the procedure above, their syndromes would be unique and table would still contain each n-tuple only once. Note that standard array contains all possible received words, error free codewords and words in error, all binary words of length n. The number of rows is n- and number of columns is and hence the number of words in the table is n- * = n, i.e., all n-tuples are present. If we store standard array in decoder, it would chec to which column the received word be- Tarmo Anttalainen Page /4/

belongs and takes the uppermost word in that column as the corrected code word. The coset leader represents the most probable error pattern. If the actual error pattern does not equal a coset leader, erroneous decoding will result. However, it is usually not reasonable to store the standard array, because for efficient codes (long codes) it is very large. A decoding principle that requires less memory, syndrome decoding, is presented below.

1.1.8. Syndrome Decoding

The syndrome has as many elements as the codewords have redundant bits, that is, n-k. This equals the number of columns in the submatrix P of G and the number of rows of H (or the number of columns of H^T). This restricts the number of different syndromes to 2^(n-k). The number of different error vectors equals the total number of different n-bit words minus the number of code words, that is, 2^n - 2^k. This is usually much higher than the number of syndromes, and thus syndromes are not unique for all possible error patterns. For example, in the case of the (7, 4) Hamming code the number of detectable error patterns is 2^n - 2^k = 112, and the number of syndromes is only 2^(n-k) = 8. This means that all error cases (all that are not the same as code words) are detected, but only 7 of them can be corrected by the syndrome decoder. The error patterns which we selected as coset leaders in the standard array give unique syndromes, and those are used for error correction as follows:

1. Compute the syndrome s = c' H^T.
2. Locate the coset leader e_l for which s = e_l H^T.
3. Decode c' into c: c = c' + e_l.

The calculation in step 2 can be done by using a simple look-up table, as shown in Figure 1.1.1.

Figure 1.1.1 Table-lookup decoder: a syndrome calculator computes s from the received code word, a table maps s to the error pattern e, and the decoded code word is c' + e [5, p485].

When e corresponds to a single error in the jth bit of the code word, s is identical to the jth row of H^T. Therefore, to provide a distinct syndrome for each single error pattern and for the error free pattern, the rows of H^T (or columns of H) must all be different (every pair of rows must be linearly independent) and each of them must contain at least one nonzero element.
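A sketch of the three decoding steps above for a single error correcting code, with the look-up table of Figure 1.1.1 built from the single error patterns (the matrices are again the assumed ones of Example 1.1.3):

import numpy as np

# Assumed (7, 4) Hamming code: G = [I | P], H = [P^T | I].
P = np.array([[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])
n = G.shape[1]

# Look-up table for step 2: syndrome -> coset leader (single error patterns).
table = {(0, 0, 0): np.zeros(n, dtype=int)}
for j in range(n):
    e = np.zeros(n, dtype=int)
    e[j] = 1
    table[tuple(np.mod(e @ H.T, 2))] = e

def decode(r):
    """Steps 1-3: compute s = rH^T, look up e_l, return c = r + e_l mod 2."""
    s = tuple(np.mod(r @ H.T, 2))
    return np.mod(r + table[s], 2)

c = np.mod(np.array([1, 0, 1, 1]) @ G, 2)  # an error free codeword
r = c.copy()
r[5] ^= 1                                   # introduce a single error
print(np.array_equal(decode(r), c))         # -> True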

15 contains two linearly independent vectors) and each of them must contain at least one nonzero element. Example..6 For the (7, 4) Hamming code of previous examples we may compute syndromes for all single error vectors [ ], [ ],.... [ ] according to s = e H T and we get table below. There are n- - = 3 - = 7 single error patterns and each corresponding one of the syndromes. The syndromes equal rows in H T. Table.. Syndromes for the (7, 4) Hamming code. e s We see that all single error patterns have unique syndrome and this code is able to correct all of them. Let us now suppose that all zero code word is transmitted and received word contains two errors such that e = [ ]. The decoder calculates s = e H T = [ ] and the table gives error pattern e = [ ] assuming that the last bit has been in error. The decoder inverts the last bit that actually was not in error and the decoded word includes then three errors, one generated by erroneous correction by the decoder. This code can decode properly only single error cases. As we saw, the number of nonzero syndromes n- - defines how many different error patterns we can correct in the decoder and this depends on the number of redundant bits, n-, of the code. In n-bit word the number of different j-error patters is n j = n! j!( n j )! (..36) Hence to correct up to t errors and n must satisfy n- n n n n n - n + t = + t (..37) where right hand side equals number of different non-zero syndromes and right hand side gives the number of all error patterns up to t errors. It simply states that each correctable error pattern must have unique syndrome. In the case of single error correcting codes, such as Hamming code, equation reduces to n- - n Equation..37 gives the relationship between bloc length n, number of parity bits n- and number of correctable error patterns. Tarmo Anttalainen Page 4 /4/

2. Convolutional Codes

Unlike in the case of block codes, the input symbols (usually binary, one bit per symbol) of convolutional codes are not grouped into blocks; instead, each input bit has influence on a span of output bits. When we say that a certain encoder produces an (n, k, K) convolutional code, we express that for k input bits, n output bits are generated, giving a code rate of k/n. K is related to the encoder's memory, which is K - 1, measured in terms of k-bit input symbols. Convolutional encoding may be a continuous process, but in many applications subsequent data blocks are encoded independently. For example, in GSM each speech frame is encoded independently using a convolutional encoder.

2.1. Encoder Description

The encoder of a binary rate 1/n convolutional code can be seen as a finite-state machine (FSM). The encoder consists of a ν-stage shift register connected to modulo-2 adders and a multiplexer that converts the adder outputs into a serial data stream. The constraint length K of a convolutional code is defined by the number of shifts through the FSM over which a single input data bit can affect the encoder output. For an encoder having a ν-stage shift register the constraint length is

K = ν + 1    (2.1.1)

Figure 2.1.1 shows a simple rate-1/2 binary convolutional encoder with a constraint length of K = 3.

Figure 2.1.1 Binary convolutional encoder, R_c = 1/2, K = 3 [1].

The binary convolutional encoder can be generalized to a rate-k/n binary convolutional code by using k shift registers and n modulo-2 adders. For a rate-k/n code, the k-bit information vector a_l = (a_l^(1), ..., a_l^(k)) is input to the encoder at time instant l and generates the n-bit code vector b_l = (b_l^(1), ..., b_l^(n)), as shown in Figure 2.1.2. The first k-bit register stage is drawn with a dashed line to indicate that it is needed only for multiplexing purposes (the other registers are needed to store previous input vectors; the present one need not necessarily be stored).

Figure 2.1.2 General binary convolutional encoder for the CC(n, k, K) code: a K-stage shift register of k-bit blocks feeding n modulo-2 adders, whose outputs are multiplexed into the n-bit output vector b_l, according to [7, p359].

A convolutional encoder can be described by a set of impulse responses, {g_i^(j)}, where g_i^(j) is the jth output sequence b^(j) that results from the ith input sequence a^(i) = (1, 0, 0, 0, ...). The impulse response can have a duration of at most K and has the form g_i^(j) = (g_{i,0}^(j), g_{i,1}^(j), g_{i,2}^(j), ..., g_{i,K-1}^(j)). Sometimes the {g_i^(j)} are called generator sequences. For the encoder in Figure 2.1.1 they are [1]

g^(1) = (1, 1, 1)
g^(2) = (1, 0, 1)

Figure 2.1.3 shows a simple rate k/n = 2/3, constraint length K = 2 convolutional encoder. For this encoder the generator sequences are [1]:

g_1^(1) = (1, 1)    g_1^(2) = (0, 1)    g_1^(3) = (1, 1)
g_2^(1) = (0, 1)    g_2^(2) = (1, 0)    g_2^(3) = (1, 0)

where, for example, g_1^(2) = (0, 1) gives the sequence of second output bits that results when the impulse sequence is applied to the first input, i.e., a^(1) = (1, 0, 0, ...) and a^(2) = (0, 0, 0, ...).
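A sketch of shift-register encoding driven directly by the generator sequences, here for the rate-1/2, K = 3 encoder of Figure 2.1.1 with g^(1) = (1, 1, 1) and g^(2) = (1, 0, 1); each consecutive output pair is one branch label b^(1) b^(2):

def conv_encode(bits, generators=((1, 1, 1), (1, 0, 1))):
    """Rate-1/n convolutional encoder: each generator taps the current bit
    and the K-1 previous bits held in the shift register (zero initial state)."""
    K = len(generators[0])
    state = [0] * (K - 1)          # shift register, most recent bit first
    out = []
    for b in bits:
        window = [b] + state       # current input followed by the memory
        for g in generators:
            out.append(sum(gi * wi for gi, wi in zip(g, window)) % 2)
        state = [b] + state[:-1]   # shift the register by one position
    return out

print(conv_encode([1, 0, 1, 1]))   # -> [1, 1, 1, 0, 0, 0, 0, 1]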

Figure 2.1.3 Binary convolutional encoder, R_c = 2/3, K = 2 [1].

The jth output b^(j), corresponding to the ith input sequence a^(i), is the discrete convolution b^(j) = a^(i) * g_i^(j), where * denotes modulo-2 convolution. The time domain convolution can be replaced by polynomial multiplication in a D-transform domain, so that

b_i^(j)(D) = a^(i)(D) g_i^(j)(D)

where a^(i)(D) = sum_l a_l^(i) D^l is the ith input data polynomial, b_i^(j)(D) = sum_l b_{i,l}^(j) D^l is the jth output polynomial corresponding to the ith input, and g_i^(j)(D) = sum_{l=0}^{K-1} g_{i,l}^(j) D^l is the associated generator polynomial. The jth output sequence becomes

b^(j)(D) = sum_{i=1}^{k} b_i^(j)(D) = sum_{i=1}^{k} a^(i)(D) g_i^(j)(D)

The corresponding matrix representation is as follows:

[b^(1)(D), ..., b^(n)(D)] = [a^(1)(D), ..., a^(k)(D)] G(D)

where

G(D) = [ g_1^(1)(D)  ...  g_1^(n)(D)
         ...
         g_k^(1)(D)  ...  g_k^(n)(D) ]

is the generator matrix of the code. After multiplexing the outputs, the final codeword has the polynomial representation

b(D) = sum_{j=1}^{n} D^(j-1) b^(j)(D^n)

Example 2.1.1

The generator matrix for the code in Figure 2.1.1 is

G(D) = [1 + D + D^2, 1 + D^2]

It tells, for example, that the second bit in each output block is the sum of the present input bit and the input bit two time instants earlier. For the convolutional encoder in Figure 2.1.3 the generator matrix is

G(D) = [ 1 + D    D    1 + D
         D        1    1     ]

which defines that, for example, the second output bit in each output block is generated as the sum of the second bit of the present input block and the first bit of the previous input block, see Figure 2.1.3.

Systematic convolutional codes are those where the first k encoder output sequences, b^(1), ..., b^(k), are equal to the encoder input sequences a^(1), ..., a^(k), i.e., the first k bits in each output block are equal to the k-bit input block.

2.2. Convolutional Encoding and Decoding

A convolutional encoder is a finite state machine (FSM), so its operation can be described by a state diagram and a trellis diagram. The state of the encoder is defined by the shift register contents. For a k/n code, the ith shift register contains ν_i information bits. The state of the encoder at time instant l is defined by all bits stored in the shift registers:

σ_l = (a_{l-1}^(1), ..., a_{l-ν1}^(1); ...; a_{l-1}^(k), ..., a_{l-νk}^(k))

where, for example, a_{l-1}^(1) is the first bit of the previous information block. For a rate 1/n code, the encoder state is

σ_l = (a_{l-1}, ..., a_{l-ν})

The total encoder memory size is

ν_Tot = sum_{i=1}^{k} ν_i

and then the total number of states is N_S = 2^ν_Tot. The state diagram for the encoder of Figure 2.1.1 is shown in Figure 2.2.4. The states are labeled S^(i), where

i = sum_{j=1}^{ν_Tot} c_j 2^(j-1)

and c_j represents the contents of the jth one-bit memory of the encoder. The register contents are also shown in Figure 2.2.4 (from left to right as in Figure 2.1.1).
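The FSM view is a one-step function from (state, input) to (output, next state); tabulating it over the four states reproduces the state diagram of Figure 2.2.4. A sketch, with the same assumed generators as above:

def step(state, bit, generators=((1, 1, 1), (1, 0, 1))):
    """One state transition of the K = 3 encoder. The state is the tuple
    (previous input bit, the bit before that)."""
    window = (bit,) + state
    out = tuple(sum(g * w for g, w in zip(gen, window)) % 2
                for gen in generators)
    return out, (bit,) + state[:-1]

for s in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    for b in (0, 1):
        out, nxt = step(s, b)
        print(f"state {s}, input {b} -> output {out}, next state {nxt}")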

Figure 2.2.4 State diagram for the convolutional encoder of Figure 2.1.1; each transition is labeled input/output [1, p43].

In general, a k/n code has 2^k branches leaving each state. The branches are labeled a/b = (a^(1), a^(2), ..., a^(k) / b^(1), b^(2), ..., b^(n)), i.e., k-bit input / n-bit output. As an example, if the encoder in Figure 2.2.4 is in state S^(1) = (1 0) and the input is a = 1, the encoder produces the output b = (0 1) and the next state will be S^(3) = (1 1).

Convolutional codes are generated by linear adders, and they are linear codes for which the sum of any two codewords is another codeword and the all-zeros sequence is one of the codewords. Then the weight distribution provides the information about the distance properties of the code. For performance analysis we may therefore assume that the all-zero codeword was transmitted, and we analyze only the error situations that make the decoder leave the all-zero state of the state diagram. For that we can construct a modified state diagram by splitting the all-zero state into an input state and an output state, as shown in Figure 2.2.5.

Figure 2.2.5 Modified state diagram for the binary convolutional encoder of Figure 2.1.1; the all-zero state is split into input and output states and the branches carry labels of the form D^i N^j L [1].

We label the branches as D^i N^j L, where i represents the number of ones in the encoder's output block and j represents the number of ones in the input block corresponding to the particular transition. L acts as a counter of transitions. Let X_s be a variable that represents the accumulated weight of the paths that enter state S^(s). Then we can write equations that define how a state is reached from the other states, that is,

X_1 = D^2 NL X_in + NL X_2

X_3 = DNL X_1 + DNL X_3
X_2 = DL X_1 + DL X_3
X_out = D^2 L X_2

Solving this group of equations yields the transfer function

T(D, N, L) = X_out / X_in = D^5 N L^3 / (1 - DNL(L + 1)) = D^5 N L^3 + D^6 N^2 L^4 (L+1) + D^7 N^3 L^5 (L+1)^2 + ...

The first term of the transfer function tells that the shortest path leaving the zero state and entering it again has a length of 3 hops, and its Hamming distance is 5, which is also the minimum distance d_free. The transfer function can be simplified if we are only interested in the distance properties of the code. Then we can set L = N = 1 and get

T(D) = D^5 + 2D^6 + 4D^7 + ...

which gives the weight distribution.

Instead of the state diagram, a trellis diagram is often used to describe both the encoding and decoding processes of convolutional codes. To draw the trellis diagram for the encoder of Figure 2.1.1, we write all states in a column at each time instant J, as shown in Figure 2.2.6. Each state corresponds to one row in the trellis. Initially the encoder is in the all-zero state and only two branches are possible. The encoder generates 00 if 0 is the input to be encoded, and 11 if 1 is encoded, as seen in Figure 2.2.6.

Figure 2.2.6 Trellis diagram for the binary convolutional encoder of Figure 2.1.1; the branches are labeled by input symbol and state transition [1, p45].

The trellis explains encoding very clearly: we simply follow the path in the trellis according to the information sequence to be encoded. In Figure 2.2.6 an example input sequence is encoded; it generates the output sequence that we get by following the corresponding path through the trellis. Each input sequence has a unique path through the trellis. We may now compare the transfer function derived above with the trellis diagram in Figure 2.2.6. We can easily see that there is a path corresponding to the term D^5 N L^3 in the transfer function (3

hops, distance 5, one input bit with the value 1). Also the paths corresponding to the other terms can be found in the trellis.

Hard-decision Decoding and the Viterbi Algorithm

To explain the decoding process we first assume that hard-decision decoding is used. This means that the decoder receives a sequence of binary elements, i.e., values 0 and 1. The Viterbi decoder computes a path metric for each survivor path in the trellis. In the hard-decision case this metric is the Hamming distance between the path and the received sequence. We illustrate Viterbi decoding with a simple example. In Figure 2.2.7 the first bits of an example received sequence are 11. The decoder knows that, according to the trellis, there are only two possible outputs that may be transmitted when encoding starts at the all-zero state, and they are 00 and 11. The decoder records path metric 0 for state S^(1) at time instant J=1, because the output bits of that transition are the same as the received bits. If the path S^(0) - S^(0) was followed by the encoder, the sequence 00 was transmitted and two errors have occurred; the decoder records metric 2 for the path entering state S^(0) at time instant J=1.

Figure 2.2.7 Decoding example for the CC(2, 1, 3) code of Figure 2.1.1, showing the path metrics at each time instant, the received sequence, the corrected sequence and the decoded information.

At time instant J=2 the metric for the path S^(0) - S^(0) - S^(0), as an example, is 4, because this path corresponds to the transmitted sequence 00 00 but the received sequence was 11 11. At time instant J=3 two branches enter each state. The decoder computes the path metrics of both possible paths entering a state and terminates the path with the higher metric. The paths that remain are called survivor paths, and there are four survivor paths in our example. At each later time instant four paths are terminated, and four paths and their metrics are recorded. At the end of the data block a decision is made as to which of the four survivor paths was the transmitted one. In our example the decision is made at time instant J=5, and the path shown by the bold line is chosen. The resulting error corrected data sequence is shown in Figure 2.2.7, as well as the decoded information sequence. Note that if the decision were made at time instant J=1, 2 or 3, another, most probably wrong, path would have been selected.
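A compact hard-decision Viterbi sketch for the same code, reusing step() and conv_encode() from the sketches above; the path metric is the accumulated Hamming distance and one survivor is kept per state:

def viterbi_decode(received):
    """Hard-decision Viterbi decoding of a rate-1/2, K = 3 code.
    received is a flat bit list, two channel bits per information bit."""
    survivors = {(0, 0): (0, [])}   # state -> (path metric, decoded bits)
    for t in range(0, len(received), 2):
        r = received[t:t + 2]
        nxt = {}
        for state, (metric, bits) in survivors.items():
            for b in (0, 1):
                out, ns = step(state, b)
                m = metric + sum(o != x for o, x in zip(out, r))
                if ns not in nxt or m < nxt[ns][0]:
                    nxt[ns] = (m, bits + [b])   # keep only the better path
        survivors = nxt
    return min(survivors.values(), key=lambda v: v[0])[1]

tx = conv_encode([1, 0, 1, 1, 0, 0])  # two tail zeros end in the zero state
rx = tx.copy()
rx[2] ^= 1                             # one channel error
print(viterbi_decode(rx))              # -> [1, 0, 1, 1, 0, 0]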

We see from Figure 2.2.7 that when there are no errors after J=2, the distances of the wrong paths increase while the distance of the correct path remains the same. Its metric equals the number of errors that have occurred. The transfer function gives free distance 5 for our example code, and the corresponding path in the trellis is S^(0) - S^(1) - S^(2) - S^(0). Its weight is 5, and we know that if there are two errors in the received sequence this code never fails. Actually, the longer the sequence we decode, the more errors it tolerates.

Interleaving

The weak point of convolutional codes is that they fail if errors occur in bursts. Decoding relies on adjacent and preceding bits in the sequence, and the right path may be terminated if there are many errors in a short period of time. As an example, we see from Figure 2.2.7 that if all three first bits in the sequence were in error, the correct path would be terminated at time instant J=3 and decoding would fail, no matter how long a sequence we would decode. This is why interleaving is used together with convolutional codes. Interleaving distributes errors over the sequence to be decoded and essentially improves the performance of convolutional codes. For example, in GSM the encoded speech frames are mixed in such a way that errors in two subsequent bits at the air interface cause errors that are 8 bits apart in the sequence to be decoded.

Tail bits

Convolutional codes are often used for coding of independent data blocks. For example, in GSM each speech frame is encoded independently. Encoding starts from the all-zero state, and the block is terminated by adding zero tail bits at the end of the data block to be encoded, so that the encoder returns to the all-zero state. In our example above, two zeros are enough to force the encoder to the all-zero state. Then the decoder may always choose the path that starts from the all-zero state and returns to it at the end of the data block.

Soft-decision decoding

In soft-decision decoding the input of the decoder is a sequence of quantized symbols, not just bits. The decoding principle is similar to the one illustrated above, but the path metrics are computed more accurately. These so-called confidence measures are used for the selection of survivor paths the same way as Hamming distances are used in hard-decision decoding. Soft-decision decoding gives about 2 dB better performance than hard-decision decoding.

2.3. Recursive Systematic Convolutional Code

It is possible to construct a recursive systematic convolutional (RSC) encoder from every rate R_c = 1/n feed-forward non-systematic convolutional encoder, such that the weight distributions of the codes are identical. A rate 1/n code uses generator polynomials g^(1)(D), ..., g^(n)(D), one for each output bit. The output sequences are described as

b^(j)(D) = a(D) g^(j)(D), j = 1, ..., n

To obtain a systematic code, we need to have b~^(1)(D) = a(D), i.e., the first bit in each n-bit block equals the input bit. If we divide both sides by g^(1)(D) we get:

b~^(1)(D) = b^(1)(D) / g^(1)(D) = a(D), for j = 1, and

b~^(j)(D) = b^(j)(D) / g^(1)(D) = a(D) g^(j)(D) / g^(1)(D), j = 2, ..., n

Often the g^(j)(D) are called the feed-forward polynomials, and g^(1)(D) is called the feedback polynomial.

Example 2.3.1

The generator polynomials of the rate 1/2 convolutional encoder in Figure 2.1.1 are

g^(1)(D) = 1 + D + D^2
g^(2)(D) = 1 + D^2

According to the procedure above we may derive the generators of the corresponding RSC encoder, and they are

g~^(1)(D) = 1
g~^(2)(D) = g^(2)(D) / g^(1)(D) = (1 + D^2) / (1 + D + D^2)

The corresponding RSC encoder is shown in Figure 2.3.1.

Figure 2.3.1 Binary recursive systematic convolutional encoder, R_c = 1/2, K = 3; the systematic output b~^(1) equals the input a, the feedback polynomial is g^(1)(D) = 1 + D + D^2 and the feed-forward polynomial is g^(2)(D) = 1 + D^2 [1, p47].

The weight distribution of an RSC code can be obtained by constructing the state diagram and computing the transfer function the same way as we did earlier for the feed-forward encoder. For the encoder in Figure 2.3.1 the transfer function is

T(D, N, L) = (D^5 N^3 L^3 + D^6 N^2 L^4 - D^6 N^4 L^4) / (1 - DNL - DNL^2 - D^2 L^3 + D^2 N^2 L^3) = D^5 N^3 L^3 + D^6 N^2 L^4 + D^6 N^4 L^5 + ...

If we now set N = L = 1, the weight distribution becomes

T(D) = D^5 + 2D^6 + 4D^7 + ...

This is the same as the weight distribution of the feed-forward encoder that we derived earlier. When we compare the transfer functions we see that the input weights (the exponents of N) are very different. For example, a weight 5 codeword is generated by the feed-forward encoder when the input weight is 1. The RSC encoder requires input weight 3 to generate a distance 5 codeword, i.e., to leave the all-zero state and to return back there. This property is used in Turbo codes.
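A sketch of the RSC encoder of Figure 2.3.1: the register is driven by the feedback sum (1 + D + D^2), the first output is the input bit itself and the parity output is the feed-forward sum (1 + D^2):

def rsc_encode(bits):
    """Recursive systematic rate-1/2 encoder with feedback polynomial
    1 + D + D^2 and feed-forward polynomial 1 + D^2."""
    s1 = s2 = 0                   # the two shift register stages
    out = []
    for a in bits:
        f = (a + s1 + s2) % 2     # feedback bit entering the register
        parity = (f + s2) % 2     # feed-forward output 1 + D^2
        out += [a, parity]        # systematic bit, then parity bit
        s1, s2 = f, s1
    return out

print(rsc_encode([1, 0, 0]))      # -> [1, 1, 0, 1, 0, 1]

Note that a single 1 at the input keeps the feedback register active, whereas an input of weight 3 such as (1, 1, 1) drives the encoder back to the all-zero state with output weight 5, in line with the D^5 N^3 L^3 term above.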

Both feed-forward and RSC convolutional codes are time invariant, i.e., if the input sequence a(D) produces the output sequence b(D), then the input sequence D^i a(D) produces the output D^i b(D). We will see that this is not valid for Turbo codes. Note that both generated codewords, b(D) and D^i b(D), have the same weight.

3. Trellis Coded Modulation

3.1. Encoder Description

Convolutional codes with good performance have quite a low code rate, and they are not attractive for bandwidth limited applications. Trellis coded modulation (TCM) extends the signal constellation and uses the extra bits for error control. It makes the error rate of the information data smaller, although the raw bit error rate in the channel increases because of the reduced Euclidean distances. TCM has three basic features:

1. An expanded signal constellation is used, larger than would be needed for uncoded transmission. The additional signal points are used for redundancy that is inserted without sacrificing data rate or bandwidth.
2. The expanded signal constellation is partitioned such that the Euclidean distance is maximized inside each signal point subset.
3. Convolutional encoding is used so that only certain sequences are allowed.

Figure 3.1.1 shows the basic structure of Ungerboeck's trellis encoder. The n-bit information vector a_l = (a_l^(1), a_l^(2), ..., a_l^(n)) is transmitted at epoch l. At each epoch, m <= n information bits are encoded by a convolutional encoder into m+r code bits, which select one of the 2^(m+r) subsets of the 2^(n+r)-point signal constellation. The number of bits added to each transmitted symbol by trellis coding is r.

Figure 3.1.1 Ungerboeck trellis encoder: the information bits a^(1), ..., a^(m) enter a binary m/(m+r) convolutional encoder whose output bits b^(1), ..., b^(m+r) select a subset of signals, while the uncoded bits a^(m+1), ..., a^(n) select the signal point x within the subset [1].

The uncoded n-m information bits select one signal from the 2^(n-m)-signal subset. Figure 3.1.2 shows a 4-state trellis encoder for 8-PSK, with n = 2, r = 1 and m = 1.

Figure 3.1.2 Ungerboeck trellis encoder for 8-PSK: a 2-bit input, a rate-1/2 convolutional encoder producing b^(1) and b^(2), an uncoded bit b^(3), and a mapping to the 8-PSK constellation point x [1, p49].

We could transmit pairs of information bits as they are with 4-PSK, or with the 8-PSK needed after the trellis encoder of Figure 3.1.2. We will see that 8-PSK, where the 4 additional signal points are used for redundancy, gives better performance. We can see uncoded 4-PSK as a one-state system where subsequent signals may be any of the four signals D_0, D_2, D_4, D_6, and the next state is the same as the previous one. Its trellis contains four parallel transitions, as shown in Figure 3.1.3, and the receiver has to choose, without any help from coding, which of those was the transmitted one.

Figure 3.1.3 Trellis diagram for uncoded 4-PSK: a single state with the four parallel transitions D_0, D_2, D_4, D_6.

The convolutional encoder in Figure 3.1.2 has four states, and its trellis is shown in Figure 3.1.4. The convolutionally encoded bits define the transitions in the trellis, and because there is one uncoded bit associated with each transition, there are actually two parallel branches for each transition in the trellis.

Figure 3.1.4 Trellis diagram for the 4-state trellis code; each state has four outgoing branches (two transitions with two parallel branches each), labeled with the input symbol and the corresponding signal subsets C_0, ..., C_3.

3.2. Mapping by Set Partitioning

The critical step in the design of Ungerboeck's codes is the method of mapping the outputs of the convolutional encoder to the points of the expanded signal constellation. Figure 3.2.1 shows the partitioning of the 8-PSK constellation. The equivalent uncoded system is 4-PSK, where the Euclidean distance is sqrt(2) = 1.414. Note that the minimum Euclidean distance is increased by the partitioning step by step: the full 8-PSK constellation A has minimum distance 0.765, the subsets B_0 and B_1 obtained with b_1 have minimum distance sqrt(2) = 1.414, the four subsets C_0, ..., C_3 selected by b_2 have minimum distance 2, and the uncoded bit b_3 finally selects one of the signals D_0, ..., D_7.

Figure 3.2.1 Set partitioning for an 8-PSK signal constellation.
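The subset distances quoted in Figure 3.2.1 (0.765 for the full 8-PSK set A, sqrt(2) = 1.414 after the first split, 2 after the second) can be verified numerically; a sketch with unit-energy signal points:

from math import cos, sin, pi, dist

# Unit-energy 8-PSK points, numbered 0..7 as in Figure 3.2.1.
pts = [(cos(2 * pi * i / 8), sin(2 * pi * i / 8)) for i in range(8)]

def min_distance(subset):
    """Smallest Euclidean distance between distinct points of a subset."""
    return min(dist(pts[a], pts[b]) for a in subset for b in subset if a < b)

print(min_distance(range(8)))      # A: full 8-PSK        -> 0.765
print(min_distance([0, 2, 4, 6]))  # B_0: after one split -> 1.414
print(min_distance([0, 4]))        # C_0: two splits      -> 2.0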

From the trellis diagram we see that the first encoded bit b_1 defines the next state in the trellis: for b_1 = 0 the next state is either S^(0) or S^(1), and for b_1 = 1 it is S^(2) or S^(3), no matter which was the previous state. This corresponds to the partition from A into B_0 and B_1 in Figure 3.2.1. The second encoded bit b_2 defines which transition takes place, corresponding to the next step in the partition. For example, from state S^(0) only two output bit combinations are possible; they correspond to the subsets C_0 and C_2, which are associated with the states on the left-hand side of Figure 3.1.4. Finally, the uncoded bit b_3 selects a signal D_x from the subset of two signals, as shown in Figure 3.2.1.

Note that the minimum distance of the paths leaving one state and remerging with the same state later is 2.141. For example, the path S^(0) - S^(1) - S^(2) - S^(0) has this minimum distance to the all-zero path. The first transition with input zero corresponds to signal 0 or 4 (depending on the uncoded bit) in the constellation of Figure 3.2.1. With input 1 the signal will be either 2 or 6, and the minimum distance between signals 0 or 4 and 2 or 6 is sqrt(2) = 1.414. The second transition, from state S^(1) to state S^(2), generates either signal 1 or 5, and their distance to the all-zero path (signal 0 or 4) is 0.765. The third transition merges the path back to the all-zero state and generates either signal 2 or 6, and their distance to the all-zero state signals is again sqrt(2) = 1.414. Now we have the minimum Euclidean distance between different paths in the trellis, and it is

d_min = sqrt(1.414^2 + 0.765^2 + 1.414^2) = sqrt(4.586) = 2.141

On the other hand, each transition has two parallel branches corresponding to the value of the uncoded bit. Two different but parallel paths in the trellis may then differ only in one transition, where they use different parallel branches. From Figure 3.2.1 we see that the Euclidean distance between parallel transitions is 2. For example, the transition S^(0) - S^(1) occurs only if the coded bit values correspond to signals 2 and 6; which one of these two is chosen depends on the uncoded bit. As explained above, the weak point (the shortest minimum distance) is the decision between two signals on opposite sides of the signal constellation; convolutional encoding has made all other distances higher.

At high signal-to-noise ratio (SNR) in an AWGN channel the bit error rate performance is dominated by minimum Euclidean distance error events. The pairwise error probability between two coded sequences x and x' separated by the Euclidean distance d_min is [1]

P(x -> x') = Q( sqrt( d_min^2 / (4 N_0) ) )

The asymptotic coding gain gives the performance improvement in dB at high SNR. It is defined by

G_a = 10 log10( (d_min,coded^2 / E_av,coded) / (d_min,uncoded^2 / E_av,uncoded) ) dB

where E_av is the average energy per symbol in the signal constellation. Our comparison of uncoded 4-PSK and coded 8-PSK leads to the coding gain of

G_a = 10 log10( 2^2 / (sqrt(2))^2 ) = 10 log10(2) = 3 dB

The average energy per symbol is the same for both systems. A change from uncoded 4-PSK to uncoded 8-PSK increases the data rate by one third but makes the system 5.3 dB worse because of the decreased Euclidean distance. But if we use the increased transmission data rate for error protection (TCM), we get a 3 dB improvement! This is a result of convolutional encoding which


Physical-Layer Network Coding Using GF(q) Forward Error Correction Codes

Physical-Layer Network Coding Using GF(q) Forward Error Correction Codes Physical-Layer Network Coding Using GF(q) Forward Error Correction Codes Weimin Liu, Rui Yang, and Philip Pietraski InterDigital Communications, LLC. King of Prussia, PA, and Melville, NY, USA Abstract

More information

Spreading Codes and Characteristics. Error Correction Codes

Spreading Codes and Characteristics. Error Correction Codes Spreading Codes and Characteristics and Error Correction Codes Global Navigational Satellite Systems (GNSS-6) Short course, NERTU Prasad Krishnan International Institute of Information Technology, Hyderabad

More information

ECE 6640 Digital Communications

ECE 6640 Digital Communications ECE 6640 Digital Communications Dr. Bradley J. Bazuin Assistant Professor Department of Electrical and Computer Engineering College of Engineering and Applied Sciences Chapter 8 8. Channel Coding: Part

More information

Single Error Correcting Codes (SECC) 6.02 Spring 2011 Lecture #9. Checking the parity. Using the Syndrome to Correct Errors

Single Error Correcting Codes (SECC) 6.02 Spring 2011 Lecture #9. Checking the parity. Using the Syndrome to Correct Errors Single Error Correcting Codes (SECC) Basic idea: Use multiple parity bits, each covering a subset of the data bits. No two message bits belong to exactly the same subsets, so a single error will generate

More information

PROJECT 5: DESIGNING A VOICE MODEM. Instructor: Amir Asif

PROJECT 5: DESIGNING A VOICE MODEM. Instructor: Amir Asif PROJECT 5: DESIGNING A VOICE MODEM Instructor: Amir Asif CSE4214: Digital Communications (Fall 2012) Computer Science and Engineering, York University 1. PURPOSE In this laboratory project, you will design

More information

ECE 476/ECE 501C/CS Wireless Communication Systems Winter Lecture 9: Error Control Coding

ECE 476/ECE 501C/CS Wireless Communication Systems Winter Lecture 9: Error Control Coding ECE 476/ECE 501C/CS 513 - Wireless Communication Systems Winter 2005 Lecture 9: Error Control Coding Chapter 8 Coding and Error Control From: Wireless Communications and Networks by William Stallings,

More information

TABLE OF CONTENTS CHAPTER TITLE PAGE

TABLE OF CONTENTS CHAPTER TITLE PAGE TABLE OF CONTENTS CHAPTER TITLE PAGE DECLARATION ACKNOWLEDGEMENT ABSTRACT ABSTRAK TABLE OF CONTENTS LIST OF TABLES LIST OF FIGURES LIST OF ABBREVIATIONS i i i i i iv v vi ix xi xiv 1 INTRODUCTION 1 1.1

More information

EE 435/535: Error Correcting Codes Project 1, Fall 2009: Extended Hamming Code. 1 Introduction. 2 Extended Hamming Code: Encoding. 1.

EE 435/535: Error Correcting Codes Project 1, Fall 2009: Extended Hamming Code. 1 Introduction. 2 Extended Hamming Code: Encoding. 1. EE 435/535: Error Correcting Codes Project 1, Fall 2009: Extended Hamming Code Project #1 is due on Tuesday, October 6, 2009, in class. You may turn the project report in early. Late projects are accepted

More information

n Based on the decision rule Po- Ning Chapter Po- Ning Chapter

n Based on the decision rule Po- Ning Chapter Po- Ning Chapter n Soft decision decoding (can be analyzed via an equivalent binary-input additive white Gaussian noise channel) o The error rate of Ungerboeck codes (particularly at high SNR) is dominated by the two codewords

More information

Maximum Likelihood Sequence Detection (MLSD) and the utilization of the Viterbi Algorithm

Maximum Likelihood Sequence Detection (MLSD) and the utilization of the Viterbi Algorithm Maximum Likelihood Sequence Detection (MLSD) and the utilization of the Viterbi Algorithm Presented to Dr. Tareq Al-Naffouri By Mohamed Samir Mazloum Omar Diaa Shawky Abstract Signaling schemes with memory

More information

FREDRIK TUFVESSON ELECTRICAL AND INFORMATION TECHNOLOGY

FREDRIK TUFVESSON ELECTRICAL AND INFORMATION TECHNOLOGY 1 Information Transmission Chapter 5, Block codes FREDRIK TUFVESSON ELECTRICAL AND INFORMATION TECHNOLOGY 2 Methods of channel coding For channel coding (error correction) we have two main classes of codes,

More information

Robust Reed Solomon Coded MPSK Modulation

Robust Reed Solomon Coded MPSK Modulation ITB J. ICT, Vol. 4, No. 2, 2, 95-4 95 Robust Reed Solomon Coded MPSK Modulation Emir M. Husni School of Electrical Engineering & Informatics, Institut Teknologi Bandung, Jl. Ganesha, Bandung 432, Email:

More information

Intuitive Guide to Principles of Communications By Charan Langton Coding Concepts and Block Coding

Intuitive Guide to Principles of Communications By Charan Langton  Coding Concepts and Block Coding Intuitive Guide to Principles of Communications By Charan Langton www.complextoreal.com Coding Concepts and Block Coding It s hard to work in a noisy room as it makes it harder to think. Work done in such

More information

IMPERIAL COLLEGE of SCIENCE, TECHNOLOGY and MEDICINE, DEPARTMENT of ELECTRICAL and ELECTRONIC ENGINEERING.

IMPERIAL COLLEGE of SCIENCE, TECHNOLOGY and MEDICINE, DEPARTMENT of ELECTRICAL and ELECTRONIC ENGINEERING. IMPERIAL COLLEGE of SCIENCE, TECHNOLOGY and MEDICINE, DEPARTMENT of ELECTRICAL and ELECTRONIC ENGINEERING. COMPACT LECTURE NOTES on COMMUNICATION THEORY. Prof. Athanassios Manikas, version Spring 22 Digital

More information

Channel Coding/Decoding. Hamming Method

Channel Coding/Decoding. Hamming Method Channel Coding/Decoding Hamming Method INFORMATION TRANSFER ACROSS CHANNELS Sent Received messages symbols messages source encoder Source coding Channel coding Channel Channel Source decoder decoding decoding

More information

EE521 Analog and Digital Communications

EE521 Analog and Digital Communications EE521 Analog and Digital Communications Questions Problem 1: SystemView... 3 Part A (25%... 3... 3 Part B (25%... 3... 3 Voltage... 3 Integer...3 Digital...3 Part C (25%... 3... 4 Part D (25%... 4... 4

More information

Lecture 17 Components Principles of Error Control Borivoje Nikolic March 16, 2004.

Lecture 17 Components Principles of Error Control Borivoje Nikolic March 16, 2004. EE29C - Spring 24 Advanced Topics in Circuit Design High-Speed Electrical Interfaces Lecture 17 Components Principles of Error Control Borivoje Nikolic March 16, 24. Announcements Project phase 1 is posted

More information

Basics of Error Correcting Codes

Basics of Error Correcting Codes Basics of Error Correcting Codes Drawing from the book Information Theory, Inference, and Learning Algorithms Downloadable or purchasable: http://www.inference.phy.cam.ac.uk/mackay/itila/book.html CSE

More information

Department of Electronics and Communication Engineering 1

Department of Electronics and Communication Engineering 1 UNIT I SAMPLING AND QUANTIZATION Pulse Modulation 1. Explain in detail the generation of PWM and PPM signals (16) (M/J 2011) 2. Explain in detail the concept of PWM and PAM (16) (N/D 2012) 3. What is the

More information

Lecture 3 Data Link Layer - Digital Data Communication Techniques

Lecture 3 Data Link Layer - Digital Data Communication Techniques DATA AND COMPUTER COMMUNICATIONS Lecture 3 Data Link Layer - Digital Data Communication Techniques Mei Yang Based on Lecture slides by William Stallings 1 ASYNCHRONOUS AND SYNCHRONOUS TRANSMISSION timing

More information

WITH the introduction of space-time codes (STC) it has

WITH the introduction of space-time codes (STC) it has IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 59, NO. 6, JUNE 2011 2809 Pragmatic Space-Time Trellis Codes: GTF-Based Design for Block Fading Channels Velio Tralli, Senior Member, IEEE, Andrea Conti, Senior

More information

Forward Error Correction for experimental wireless ftp radio link over analog FM

Forward Error Correction for experimental wireless ftp radio link over analog FM Technical University of Crete Department of Computer and Electronic Engineering Forward Error Correction for experimental wireless ftp radio link over analog FM Supervisor: Committee: Nikolaos Sidiropoulos

More information

A Survey of Advanced FEC Systems

A Survey of Advanced FEC Systems A Survey of Advanced FEC Systems Eric Jacobsen Minister of Algorithms, Intel Labs Communication Technology Laboratory/ Radio Communications Laboratory July 29, 2004 With a lot of material from Bo Xia,

More information

EFFECTIVE CHANNEL CODING OF SERIALLY CONCATENATED ENCODERS AND CPM OVER AWGN AND RICIAN CHANNELS

EFFECTIVE CHANNEL CODING OF SERIALLY CONCATENATED ENCODERS AND CPM OVER AWGN AND RICIAN CHANNELS EFFECTIVE CHANNEL CODING OF SERIALLY CONCATENATED ENCODERS AND CPM OVER AWGN AND RICIAN CHANNELS Manjeet Singh (ms308@eng.cam.ac.uk) Ian J. Wassell (ijw24@eng.cam.ac.uk) Laboratory for Communications Engineering

More information

Error Control Coding. Aaron Gulliver Dept. of Electrical and Computer Engineering University of Victoria

Error Control Coding. Aaron Gulliver Dept. of Electrical and Computer Engineering University of Victoria Error Control Coding Aaron Gulliver Dept. of Electrical and Computer Engineering University of Victoria Topics Introduction The Channel Coding Problem Linear Block Codes Cyclic Codes BCH and Reed-Solomon

More information

ANALYSIS OF ADSL2 s 4D-TCM PERFORMANCE

ANALYSIS OF ADSL2 s 4D-TCM PERFORMANCE ANALYSIS OF ADSL s 4D-TCM PERFORMANCE Mohamed Ghanassi, Jean François Marceau, François D. Beaulieu, and Benoît Champagne Department of Electrical & Computer Engineering, McGill University, Montreal, Quebec

More information

Performance comparison of convolutional and block turbo codes

Performance comparison of convolutional and block turbo codes Performance comparison of convolutional and block turbo codes K. Ramasamy 1a), Mohammad Umar Siddiqi 2, Mohamad Yusoff Alias 1, and A. Arunagiri 1 1 Faculty of Engineering, Multimedia University, 63100,

More information

Serially Concatenated Coded Continuous Phase Modulation for Aeronautical Telemetry

Serially Concatenated Coded Continuous Phase Modulation for Aeronautical Telemetry Serially Concatenated Coded Continuous Phase Modulation for Aeronautical Telemetry c 2008 Kanagaraj Damodaran Submitted to the Department of Electrical Engineering & Computer Science and the Faculty of

More information

A GSM Simulation Platform using MATLAB

A GSM Simulation Platform using MATLAB A GSM Simulation Platform using MATLAB Mr. Suryakanth.B*, Mr. Shivarudraiah.B*, Mr. Sree Harsha H.N** *Asst Prof, Dept of ECE, BMSIT Bangalore, India **Asst Prof, Dept of EEE, CMR Institute of Technology,

More information

Error Detection and Correction: Parity Check Code; Bounds Based on Hamming Distance

Error Detection and Correction: Parity Check Code; Bounds Based on Hamming Distance Error Detection and Correction: Parity Check Code; Bounds Based on Hamming Distance Greg Plaxton Theory in Programming Practice, Spring 2005 Department of Computer Science University of Texas at Austin

More information

Digital Transmission using SECC Spring 2010 Lecture #7. (n,k,d) Systematic Block Codes. How many parity bits to use?

Digital Transmission using SECC Spring 2010 Lecture #7. (n,k,d) Systematic Block Codes. How many parity bits to use? Digital Transmission using SECC 6.02 Spring 2010 Lecture #7 How many parity bits? Dealing with burst errors Reed-Solomon codes message Compute Checksum # message chk Partition Apply SECC Transmit errors

More information

Error-Correcting Codes

Error-Correcting Codes Error-Correcting Codes Information is stored and exchanged in the form of streams of characters from some alphabet. An alphabet is a finite set of symbols, such as the lower-case Roman alphabet {a,b,c,,z}.

More information

Performance Evaluation of Error Correcting Techniques for OFDM Systems

Performance Evaluation of Error Correcting Techniques for OFDM Systems Performance Evaluation of Error Correcting Techniques for OFDM Systems Yasir Javed Qazi Email: p060059@gmail.com Safwan Muhammad Email:safwan.mu11@gmail.com Jawad Ahmed Malik Email: reply.jawad@gmail.com

More information

Notes 15: Concatenated Codes, Turbo Codes and Iterative Processing

Notes 15: Concatenated Codes, Turbo Codes and Iterative Processing 16.548 Notes 15: Concatenated Codes, Turbo Codes and Iterative Processing Outline! Introduction " Pushing the Bounds on Channel Capacity " Theory of Iterative Decoding " Recursive Convolutional Coding

More information

ECE 6640 Digital Communications

ECE 6640 Digital Communications ECE 6640 Digital Communications Dr. Bradley J. Bazuin Assistant Professor Department of Electrical and Computer Engineering College of Engineering and Applied Sciences Chapter 8 8. Channel Coding: Part

More information

ELG 5372 Error Control Coding. Lecture 10: Performance Measures: BER after decoding

ELG 5372 Error Control Coding. Lecture 10: Performance Measures: BER after decoding ELG 532 Error Control Coding Lecture 10: Performance Measures: BER after decoding Error Correction Performance Review The robability of incorrectly decoding a received word is the robability that the error

More information

Bit-Interleaved Coded Modulation for Delay-Constrained Mobile Communication Channels

Bit-Interleaved Coded Modulation for Delay-Constrained Mobile Communication Channels Bit-Interleaved Coded Modulation for Delay-Constrained Mobile Communication Channels Hugo M. Tullberg, Paul H. Siegel, IEEE Fellow Center for Wireless Communications UCSD, 9500 Gilman Drive, La Jolla CA

More information

Digital Communication Systems ECS 452

Digital Communication Systems ECS 452 Digital Communication Systems ECS 452 Asst. Prof. Dr. Prapun Suksompong prapun@siit.tu.ac.th 5. Channel Coding 1 Office Hours: BKD, 6th floor of Sirindhralai building Tuesday 14:20-15:20 Wednesday 14:20-15:20

More information

Multilevel RS/Convolutional Concatenated Coded QAM for Hybrid IBOC-AM Broadcasting

Multilevel RS/Convolutional Concatenated Coded QAM for Hybrid IBOC-AM Broadcasting IEEE TRANSACTIONS ON BROADCASTING, VOL. 46, NO. 1, MARCH 2000 49 Multilevel RS/Convolutional Concatenated Coded QAM for Hybrid IBOC-AM Broadcasting Sae-Young Chung and Hui-Ling Lou Abstract Bandwidth efficient

More information

Contents Chapter 1: Introduction... 2

Contents Chapter 1: Introduction... 2 Contents Chapter 1: Introduction... 2 1.1 Objectives... 2 1.2 Introduction... 2 Chapter 2: Principles of turbo coding... 4 2.1 The turbo encoder... 4 2.1.1 Recursive Systematic Convolutional Codes... 4

More information

Space engineering. Space data links - Telemetry synchronization and channel coding. ECSS-E-ST-50-01C 31 July 2008

Space engineering. Space data links - Telemetry synchronization and channel coding. ECSS-E-ST-50-01C 31 July 2008 ECSS-E-ST-50-01C Space engineering Space data links - Telemetry synchronization and channel coding ECSS Secretariat ESA-ESTEC Requirements & Standards Division Noordwijk, The Netherlands Foreword This

More information

Disclaimer. Primer. Agenda. previous work at the EIT Department, activities at Ericsson

Disclaimer. Primer. Agenda. previous work at the EIT Department, activities at Ericsson Disclaimer Know your Algorithm! Architectural Trade-offs in the Implementation of a Viterbi Decoder This presentation is based on my previous work at the EIT Department, and is not connected to current

More information

Lab/Project Error Control Coding using LDPC Codes and HARQ

Lab/Project Error Control Coding using LDPC Codes and HARQ Linköping University Campus Norrköping Department of Science and Technology Erik Bergfeldt TNE066 Telecommunications Lab/Project Error Control Coding using LDPC Codes and HARQ Error control coding is an

More information

International Journal of Computer Trends and Technology (IJCTT) Volume 40 Number 2 - October2016

International Journal of Computer Trends and Technology (IJCTT) Volume 40 Number 2 - October2016 Signal Power Consumption in Digital Communication using Convolutional Code with Compared to Un-Coded Madan Lal Saini #1, Dr. Vivek Kumar Sharma *2 # Ph. D. Scholar, Jagannath University, Jaipur * Professor,

More information

Synchronization of Hamming Codes

Synchronization of Hamming Codes SYCHROIZATIO OF HAMMIG CODES 1 Synchronization of Hamming Codes Aveek Dutta, Pinaki Mukherjee Department of Electronics & Telecommunications, Institute of Engineering and Management Abstract In this report

More information

Capacity-Approaching Bandwidth-Efficient Coded Modulation Schemes Based on Low-Density Parity-Check Codes

Capacity-Approaching Bandwidth-Efficient Coded Modulation Schemes Based on Low-Density Parity-Check Codes IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 49, NO. 9, SEPTEMBER 2003 2141 Capacity-Approaching Bandwidth-Efficient Coded Modulation Schemes Based on Low-Density Parity-Check Codes Jilei Hou, Student

More information

Improvement Of Block Product Turbo Coding By Using A New Concept Of Soft Hamming Decoder

Improvement Of Block Product Turbo Coding By Using A New Concept Of Soft Hamming Decoder European Scientific Journal June 26 edition vol.2, No.8 ISSN: 857 788 (Print) e - ISSN 857-743 Improvement Of Block Product Turbo Coding By Using A New Concept Of Soft Hamming Decoder Alaa Ghaith, PhD

More information

code V(n,k) := words module

code V(n,k) := words module Basic Theory Distance Suppose that you knew that an English word was transmitted and you had received the word SHIP. If you suspected that some errors had occurred in transmission, it would be impossible

More information

Bit-Interleaved Coded Modulation: Low Complexity Decoding

Bit-Interleaved Coded Modulation: Low Complexity Decoding Bit-Interleaved Coded Modulation: Low Complexity Decoding Enis Aay and Ender Ayanoglu Center for Pervasive Communications and Computing Department of Electrical Engineering and Computer Science The Henry

More information

Know your Algorithm! Architectural Trade-offs in the Implementation of a Viterbi Decoder. Matthias Kamuf,

Know your Algorithm! Architectural Trade-offs in the Implementation of a Viterbi Decoder. Matthias Kamuf, Know your Algorithm! Architectural Trade-offs in the Implementation of a Viterbi Decoder Matthias Kamuf, 2009-12-08 Agenda Quick primer on communication and coding The Viterbi algorithm Observations to

More information

IN 1993, powerful so-called turbo codes were introduced [1]

IN 1993, powerful so-called turbo codes were introduced [1] 206 IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 16, NO. 2, FEBRUARY 1998 Bandwidth-Efficient Turbo Trellis-Coded Modulation Using Punctured Component Codes Patrick Robertson, Member, IEEE, and

More information

LDPC Decoding: VLSI Architectures and Implementations

LDPC Decoding: VLSI Architectures and Implementations LDPC Decoding: VLSI Architectures and Implementations Module : LDPC Decoding Ned Varnica varnica@gmail.com Marvell Semiconductor Inc Overview Error Correction Codes (ECC) Intro to Low-density parity-check

More information

Simulink Modeling of Convolutional Encoders

Simulink Modeling of Convolutional Encoders Simulink Modeling of Convolutional Encoders * Ahiara Wilson C and ** Iroegbu Chbuisi, *Department of Computer Engineering, Michael Okpara University of Agriculture, Umudike, Abia State, Nigeria **Department

More information

Turbo coding (CH 16)

Turbo coding (CH 16) Turbo coding (CH 16) Parallel concatenated codes Distance properties Not exceptionally high minimum distance But few codewords of low weight Trellis complexity Usually extremely high trellis complexity

More information

High-Rate Non-Binary Product Codes

High-Rate Non-Binary Product Codes High-Rate Non-Binary Product Codes Farzad Ghayour, Fambirai Takawira and Hongjun Xu School of Electrical, Electronic and Computer Engineering University of KwaZulu-Natal, P. O. Box 4041, Durban, South

More information

Low Complexity Decoding of Bit-Interleaved Coded Modulation for M-ary QAM

Low Complexity Decoding of Bit-Interleaved Coded Modulation for M-ary QAM Low Complexity Decoding of Bit-Interleaved Coded Modulation for M-ary QAM Enis Aay and Ender Ayanoglu Center for Pervasive Communications and Computing Department of Electrical Engineering and Computer

More information

Near-Optimal Low Complexity MLSE Equalization

Near-Optimal Low Complexity MLSE Equalization Near-Optimal Low Complexity MLSE Equalization Abstract An iterative Maximum Likelihood Sequence Estimation (MLSE) equalizer (detector) with hard outputs, that has a computational complexity quadratic in

More information

MATHEMATICS IN COMMUNICATIONS: INTRODUCTION TO CODING. A Public Lecture to the Uganda Mathematics Society

MATHEMATICS IN COMMUNICATIONS: INTRODUCTION TO CODING. A Public Lecture to the Uganda Mathematics Society Abstract MATHEMATICS IN COMMUNICATIONS: INTRODUCTION TO CODING A Public Lecture to the Uganda Mathematics Society F F Tusubira, PhD, MUIPE, MIEE, REng, CEng Mathematical theory and techniques play a vital

More information

Chapter 2 Soft and Hard Decision Decoding Performance

Chapter 2 Soft and Hard Decision Decoding Performance Chapter 2 Soft and Hard Decision Decoding Performance 2.1 Introduction This chapter is concerned with the performance of binary codes under maximum likelihood soft decision decoding and maximum likelihood

More information

Constellation Labeling for Linear Encoders

Constellation Labeling for Linear Encoders IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 47, NO. 6, SEPTEMBER 2001 2417 Constellation Labeling for Linear Encoders Richard D. Wesel, Senior Member, IEEE, Xueting Liu, Member, IEEE, John M. Cioffi,

More information

DESIGN OF CHANNEL CODING METHODS IN HV PLC COMMUNICATIONS

DESIGN OF CHANNEL CODING METHODS IN HV PLC COMMUNICATIONS DESIGN OF CHANNEL CODING MEHODS IN HV PLC COMMUNICAIONS Aljo Mujčić, Nermin Suljanović, Matej Zajc, Jurij F. asič University of Ljubljana, Faculty of Electrical Engineering, Digital Signal Processing Laboratory

More information

Implementation of Different Interleaving Techniques for Performance Evaluation of CDMA System

Implementation of Different Interleaving Techniques for Performance Evaluation of CDMA System Implementation of Different Interleaving Techniques for Performance Evaluation of CDMA System Anshu Aggarwal 1 and Vikas Mittal 2 1 Anshu Aggarwal is student of M.Tech. in the Department of Electronics

More information

Error Correction with Hamming Codes

Error Correction with Hamming Codes Hamming Codes http://www2.rad.com/networks/1994/err_con/hamming.htm Error Correction with Hamming Codes Forward Error Correction (FEC), the ability of receiving station to correct a transmission error,

More information

International Journal of Digital Application & Contemporary research Website: (Volume 1, Issue 7, February 2013)

International Journal of Digital Application & Contemporary research Website:   (Volume 1, Issue 7, February 2013) Performance Analysis of OFDM under DWT, DCT based Image Processing Anshul Soni soni.anshulec14@gmail.com Ashok Chandra Tiwari Abstract In this paper, the performance of conventional discrete cosine transform

More information

TSTE17 System Design, CDIO. General project hints. Behavioral Model. General project hints, cont. Lecture 5. Required documents Modulation, cont.

TSTE17 System Design, CDIO. General project hints. Behavioral Model. General project hints, cont. Lecture 5. Required documents Modulation, cont. TSTE17 System Design, CDIO Lecture 5 1 General project hints 2 Project hints and deadline suggestions Required documents Modulation, cont. Requirement specification Channel coding Design specification

More information

Chapter 4 Cyclotomic Cosets, the Mattson Solomon Polynomial, Idempotents and Cyclic Codes

Chapter 4 Cyclotomic Cosets, the Mattson Solomon Polynomial, Idempotents and Cyclic Codes Chapter 4 Cyclotomic Cosets, the Mattson Solomon Polynomial, Idempotents and Cyclic Codes 4.1 Introduction Much of the pioneering research on cyclic codes was carried out by Prange [5]inthe 1950s and considerably

More information

Nonlinear Multi-Error Correction Codes for Reliable MLC NAND Flash Memories Zhen Wang, Mark Karpovsky, Fellow, IEEE, and Ajay Joshi, Member, IEEE

Nonlinear Multi-Error Correction Codes for Reliable MLC NAND Flash Memories Zhen Wang, Mark Karpovsky, Fellow, IEEE, and Ajay Joshi, Member, IEEE IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, VOL. 20, NO. 7, JULY 2012 1221 Nonlinear Multi-Error Correction Codes for Reliable MLC NAND Flash Memories Zhen Wang, Mark Karpovsky, Fellow,

More information

C802.16a-02/76. IEEE Broadband Wireless Access Working Group <

C802.16a-02/76. IEEE Broadband Wireless Access Working Group < Project IEEE 802.16 Broadband Wireless Access Working Group Title Convolutional Turbo Codes for 802.16 Date Submitted 2002-07-02 Source(s) Re: Brian Edmonston icoding Technology

More information

Optimal Power Allocation for Type II H ARQ via Geometric Programming

Optimal Power Allocation for Type II H ARQ via Geometric Programming 5 Conference on Information Sciences and Systems, The Johns Hopkins University, March 6 8, 5 Optimal Power Allocation for Type II H ARQ via Geometric Programming Hongbo Liu, Leonid Razoumov and Narayan

More information

COMBINED TRELLIS CODED QUANTIZATION/CONTINUOUS PHASE MODULATION (TCQ/TCCPM)

COMBINED TRELLIS CODED QUANTIZATION/CONTINUOUS PHASE MODULATION (TCQ/TCCPM) COMBINED TRELLIS CODED QUANTIZATION/CONTINUOUS PHASE MODULATION (TCQ/TCCPM) Niyazi ODABASIOGLU 1, OnurOSMAN 2, Osman Nuri UCAN 3 Abstract In this paper, we applied Continuous Phase Frequency Shift Keying

More information

SECTION 4 CHANNEL FORMAT TYPES AND RATES. 4.1 General

SECTION 4 CHANNEL FORMAT TYPES AND RATES. 4.1 General SECTION 4 CHANNEL FORMAT TYPES AND RATES 4.1 General 4.1.1 Aircraft system-timing reference point. The reference timing point for signals generated and received by the AES shall be at the antenna. 4.1.2

More information

Exercises to Chapter 2 solutions

Exercises to Chapter 2 solutions Exercises to Chapter 2 solutions 1 Exercises to Chapter 2 solutions E2.1 The Manchester code was first used in Manchester Mark 1 computer at the University of Manchester in 1949 and is still used in low-speed

More information

Combined Transmitter Diversity and Multi-Level Modulation Techniques

Combined Transmitter Diversity and Multi-Level Modulation Techniques SETIT 2005 3rd International Conference: Sciences of Electronic, Technologies of Information and Telecommunications March 27 3, 2005 TUNISIA Combined Transmitter Diversity and Multi-Level Modulation Techniques

More information

Digital to Digital Encoding

Digital to Digital Encoding MODULATION AND ENCODING Data must be transformed into signals to send them from one place to another Conversion Schemes Digital-to-Digital Analog-to-Digital Digital-to-Analog Analog-to-Analog Digital to

More information