Turbo coding (Ch. 16)
Parallel concatenated codes
- Distance properties: not exceptionally high minimum distance, but few codewords of low weight
- Trellis complexity: usually extremely high
- Decoding: suboptimum (but close to ML) iterative (turbo) decoding
- Performance: low error probability at SNRs close to the Shannon limit
History
- Shannon (1948): the channel's SNR (AWGN channel) determines the capacity C, in the sense that for code rates R < C we can have error-free transmission
- For each code rate R we can compute the Shannon limit
- Difficult to approach the Shannon limit by classical methods. But...
- Gallager (1961) and Tanner (1981) laid the groundwork with low-density parity-check codes and code graphs
- Berrou, Glavieux, and Thitimajshima invented turbo codes in 1993
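As a minimal sketch of "for each code rate R we can compute the Shannon limit": on the real AWGN channel with Gaussian signalling, capacity per dimension is C = (1/2) log2(1 + 2R·Eb/N0), and setting C = R gives the smallest Eb/N0 at which rate-R transmission can be reliable. The function name below is my own; the slides do not prescribe one.

```python
import math

def shannon_limit_ebn0_db(R):
    """Minimum Eb/N0 (in dB) for reliable transmission at rate R
    (information bits per real dimension) on the AWGN channel,
    obtained from C = (1/2) * log2(1 + 2*R*Eb/N0) >= R."""
    ebn0 = (2.0 ** (2.0 * R) - 1.0) / (2.0 * R)
    return 10.0 * math.log10(ebn0)

print(shannon_limit_ebn0_db(0.5))   # 0.0 dB for a rate-1/2 code
print(shannon_limit_ebn0_db(1e-9))  # tends to -1.59 dB as R -> 0
```

The R → 0 value, 10·log10(ln 2) ≈ −1.59 dB, is the "ultimate" Shannon limit often quoted alongside turbo-code results.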
Encoding
- Encode the information by a systematic encoder, usually a recursive systematic rate-1/2 convolutional encoder
- Reorder (interleave) the information bits
- Encode the permuted information bits again, using a recursive systematic encoder (may be the same); delete the systematic bits this time
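The three steps above can be sketched as follows. The component code is assumed here to be the common (7,5)_8 recursive systematic encoder (feedback 1 + D + D², feedforward 1 + D²); the slides do not fix a particular generator, and trellis termination is omitted for brevity.

```python
def rsc_parity(bits):
    """Parity stream of a recursive systematic convolutional encoder
    with feedback 1+D+D^2 and feedforward 1+D^2 (assumed generators)."""
    s1 = s2 = 0
    parity = []
    for u in bits:
        a = u ^ s1 ^ s2        # feedback sum enters the shift register
        parity.append(a ^ s2)  # feedforward taps: 1 and D^2
        s1, s2 = a, s1
    return parity

def turbo_encode(info, perm):
    """Rate-1/3 parallel concatenation: systematic bits, parity of
    encoder 1, and parity of encoder 2 on the interleaved bits
    (the second encoder's systematic bits are deleted)."""
    p1 = rsc_parity(info)
    p2 = rsc_parity([info[perm[i]] for i in range(len(info))])
    return info, p1, p2

u, p1, p2 = turbo_encode([1, 0, 1, 1], [2, 0, 3, 1])
```

Each information bit thus contributes one systematic and two parity bits, giving the rate 1/3 noted in the remarks that follow.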
Example, more detailed
[Figure: detailed encoder block diagram]
Remarks
- Starting with rate-1/2 component codes we get approximately rate 1/3
- Can be punctured (parity or information bits) to adjust the rate
- Can add more interleavers and component codes to lower the rate
- Large information blocks give better distance properties and a better-working decoding algorithm
- Simple component codes (ν = 4?) are best for moderate BERs
- Interleaver design is difficult, and there is no known technique to design the best one. Design criteria are:
  - Implementation complexity
  - Performance at low SNR (pseudorandom-like)
  - Performance at high SNR (high minimum distance)
- Disadvantage: delay in decoding
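To illustrate puncturing to adjust the rate: a rate-1/3 turbo codeword can be brought to rate 1/2 by keeping all systematic bits and alternating between the two parity streams. The alternating pattern below is one common choice, not one prescribed by the slides.

```python
def puncture_to_rate_half(systematic, p1, p2):
    """Puncture a rate-1/3 turbo codeword to rate 1/2: keep every
    systematic bit, and keep p1 on even positions, p2 on odd ones
    (an assumed, commonly used puncturing pattern)."""
    out = []
    for l, u in enumerate(systematic):
        out.append(u)
        out.append(p1[l] if l % 2 == 0 else p2[l])
    return out

# 4 information bits -> 8 transmitted bits: rate 1/2
codeword = puncture_to_rate_half([1, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0])
```

Half of each parity stream is deleted, so each decoder sees channel values for only every other of its parity bits.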
Example
[Figure: BER curve with waterfall region and error-floor region; ν = 4, K = 65536]
Distance properties of turbo codes
- The classical coding approach is to maximize the minimum distance
- New approach: few codewords with low weights
- Recall: in a feedforward encoder, a low-weight codeword is usually generated by a low-weight input sequence
- In a feedback encoder, a low-weight codeword is usually generated by an input information sequence that is a multiple of the feedback polynomial; often higher input weights
- Spectral thinning
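The feedback-encoder property can be checked numerically. With the (assumed) feedback polynomial 1 + D + D², a weight-1 input drives the encoder into a cycle it never leaves, so parity weight keeps growing with the block length, while the weight-3 input 1 + D + D² (a multiple of the feedback polynomial) returns the encoder to the zero state and yields a low-weight codeword.

```python
def rsc_parity(bits):
    """Parity of a recursive systematic encoder with feedback
    1+D+D^2 and feedforward 1+D^2 (assumed generators)."""
    s1 = s2 = 0
    out = []
    for u in bits:
        a = u ^ s1 ^ s2
        out.append(a ^ s2)
        s1, s2 = a, s1
    return out

n = 12
impulse = [1] + [0] * (n - 1)          # weight-1 input
multiple = [1, 1, 1] + [0] * (n - 3)   # equals the feedback polynomial 1+D+D^2

print(sum(rsc_parity(impulse)))   # parity weight 9: grows with n
print(sum(rsc_parity(multiple)))  # parity weight 2: encoder re-terminates
```

For the interleaved code this matters because a weight-1 input looks the same after any permutation; only the low-weight multiples of the feedback polynomial need to be broken up by the interleaver.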
Spectral thinning: Example
[Figure: weight-spectrum examples]
Spectral thinning: Remarks
- Requires a feedback encoder
  - A single-one input to a feedforward encoder: only a local weight-gain effect
  - A single-one input to a feedback encoder: gains weight (at least) until the next input one is seen
- Requires an interleaver to make the code time-varying
- Stronger effect for longer block lengths; the weight spectrum becomes similar to that of random codes
- Moderate effect on the minimum distance
Interleavers for turbo codes
- Goal: input patterns that produce low-weight words in one component code should map through the interleaver to patterns that produce high-weight words in the other component code
- Interleavers with traditional structure are usually bad for turbo codes
- Interleavers with a random-like structure achieve the above goal to a larger extent
- Interleavers that are pseudorandom with constraints on spreading properties, and with additional constraints based on the particular component encoders, have provided good results
- But such random-like interleavers may be hard to implement in an efficient manner
- Dithered relative prime (DRP) and quadratic permutation polynomial (QPP) interleavers are easy to implement and have very good properties as well
Block interleaver: Example
- The critical input sequence is (1 + D^5)D^l
Effects of the block interleaver
[Figure: how the block interleaver maps the critical input patterns]
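A classical block interleaver (a "traditional structure" in the sense of the previous slide) writes the sequence into an array row by row and reads it out column by column. Its regularity is exactly the problem: short critical patterns such as (1 + D^5)D^l map to equally structured patterns at the output, so both component codes can see a bad input at once. The function below is a minimal sketch.

```python
def block_interleave(bits, rows, cols):
    """Classical block interleaver: write the sequence into a
    rows x cols array row by row, read it out column by column."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

# A 2x3 example: row-wise [[0,1,2],[3,4,5]] read column-wise
print(block_interleave(list(range(6)), 2, 3))  # [0, 3, 1, 4, 2, 5]
```

Note how input positions a fixed distance apart stay a fixed distance apart at the output, which is the regularity the spread-constrained random-like interleavers avoid.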
Pseudorandom interleavers
- Your favourite (pseudo)random generator together with table lookup
- Quadratic congruence: c_m ≡ km(m+1)/2 (mod K), 0 ≤ m < K, where k is an odd integer, generates an index mapping function c_m → c_{m+1} (mod K)
- Example with K = 4 and k = 1: (c_m) = (0, 1, 3, 2), and the interleaver is defined by (1, 3, 0, 2). This pattern can also be shifted cyclically
- Statistical properties are similar to those of random interleavers when K is a power of 2
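The quadratic-congruence construction above can be written out directly; this sketch reproduces the K = 4, k = 1 example from the slide.

```python
def qc_interleaver(K, k):
    """Interleaver from the quadratic congruence
    c_m = k*m*(m+1)/2 mod K: position c_m is mapped to c_{m+1}."""
    c = [(k * m * (m + 1) // 2) % K for m in range(K)]  # m*(m+1) is even, so //2 is exact
    pi = [0] * K
    for m in range(K):
        pi[c[m]] = c[(m + 1) % K]  # index mapping c_m -> c_{m+1}
    return pi

print(qc_interleaver(4, 1))  # [1, 3, 0, 2], as in the example
```

For this to define a permutation, (c_m) must itself visit every index exactly once, which holds in the power-of-2 case the slide singles out.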
Turbo decoding
[Figure: turbo decoder block diagram with the channel output feeding the component decoders]
Turbo decoding
- Channel reliability: L_c = 4E_s/N_0
- L^(1)(u_l) = ln [ P(u_l = +1 | r_1, L_a^(1)) / P(u_l = -1 | r_1, L_a^(1)) ]
[Figure: SISO decoder 1 and SISO decoder 2 exchanging a priori and extrinsic information]
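The bookkeeping behind the diagram can be sketched in a few lines: over BPSK on the AWGN channel, each SISO output splits as L(u_l) = L_c·r_l + L_a(u_l) + L_e(u_l), and only the extrinsic part L_e is passed on as the other decoder's a priori value. The function names are my own; the SISO decoders themselves (e.g. BCJR) are not implemented here.

```python
def channel_llr(r, es_n0):
    """Channel LLR of a received BPSK value r: L_c * r, L_c = 4*Es/N0."""
    return 4.0 * es_n0 * r

def extrinsic(L_total, L_channel, L_apriori):
    """Extrinsic information: the SISO output minus its channel and
    a priori parts; only this is handed to the other decoder."""
    return L_total - L_channel - L_apriori

# One half-iteration, schematically: SISO 1's extrinsic output becomes
# SISO 2's a priori input (L_total would come from a BCJR pass).
Lc_r = channel_llr(0.8, 1.0)             # 3.2
L_a2 = extrinsic(5.0, Lc_r, 0.0)         # a priori for SISO 2
```

Subtracting the channel and a priori terms is what keeps each decoder from feeding its own input back to itself across iterations.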