Incremental Redundancy and Feedback at Finite Blocklengths
Richard Wesel, Kasra Vakilinia, Adam Williamson
Munich Workshop on Coding and Modulation, July 30-31, 2015
Lower Bound on Benefit of Feedback

[Figure: Expected throughput R_t vs. average blocklength λ for the BI-AWGN channel, SNR = 2 dB, capacity = 0.6422. Curves: BI-AWGN capacity; VLF random-coding lower bound with stop feedback (Polyanskiy, Poor, and Verdu 2011); maximum rate for a fixed-length code with no feedback at frame error rate 10^-3 (Polyanskiy, Poor, and Verdu 2010); points R_t = k/λ for k = 16, 32, 64, 96, 192, 288.]
How well can a non-binary LDPC code do?

Rate-0.8 protograph and rate-0.75 protograph. We will lift these protographs to produce GF(256) LDPC codes for k = 96 (rate-0.8) and k = 192 and 288 (rate-0.75) input bits.
VERY incremental redundancy

1. Send the initial NB-LDPC codeword.
2. If the CRC checks, we are done.
3. If not, request transmission of a specific bit that helps the least reliable variable node.
4. Go to step 2.
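The steps above can be sketched as a simple loop. This is a hypothetical sketch: `send` and `decode_and_crc` stand in for the real NB-LDPC decoder and the feedback link, and the cap on extra bits is mine, not from the talk.

```python
def incremental_redundancy(send, decode_and_crc, max_extra_bits=500):
    """Sketch of the single-bit incremental redundancy protocol.

    `send` transmits the initial codeword or one requested bit;
    `decode_and_crc` runs the decoder and returns (True, message)
    on a CRC pass, or (False, weak_vn) with the index of the least
    reliable variable node.  Both callables are hypothetical stand-ins.
    """
    send("initial codeword")                          # step 1
    for _ in range(max_extra_bits):
        ok, info = decode_and_crc()
        if ok:                                        # step 2: CRC checks
            return info
        send(("extra bit for variable node", info))   # step 3
    return None  # give up after the cap (not part of the slide's protocol)
```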
Observation: the NB-LDPC decoder never visits two wrong codewords.

[Diagram: three decoder outcomes with α + β + γ = 1. State 1: decoder converges to a wrong codeword (probability α). State 2: decoder does not converge (probability β). State 3: decoder converges to the correct codeword (probability γ).]
Choose CRC length to guarantee FER = 10^-3.

[Diagram: the information block is padded with L_crc zeros and divided by the CRC divisor to produce the L_crc-bit CRC, which is appended; the information block plus CRC is then NB-LDPC encoded into an N-symbol message.]

Requirement: α · 2^(-L_crc) < 10^-3.
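The condition α · 2^(-L_crc) < 10^-3 pins down the shortest usable CRC. A minimal sketch (the function name and the example α = 0.1 are mine; note a 7-bit CRC would be consistent with the k = 96 vs. k = 89 gap in the later plots):

```python
def min_crc_length(alpha, fer_target=1e-3):
    """Smallest L_crc with alpha * 2**(-L_crc) < fer_target, where alpha is
    the probability the decoder converges to a wrong codeword (State 1)."""
    L = 0
    while alpha * 2.0 ** (-L) >= fer_target:
        L += 1
    return L
```

For example, a hypothetical α = 0.1 requires L_crc = 7, since 0.1 · 2^-7 ≈ 7.8 · 10^-4 < 10^-3 while 0.1 · 2^-6 exceeds the target.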
NB-LDPC with single-bit increments performs as well as the random-coding lower bound, in a fair comparison.

[Figure: Expected throughput R_t vs. average blocklength λ, BI-AWGN channel, SNR = 2 dB, capacity = 0.6422. Curves: BI-AWGN capacity; VLF random-coding lower bound; maximum rate for a fixed-length code with no feedback; R_t = k/λ for k = 16, 32, 64, 89, 185, 281; VLF CRC m = ∞ NB-LDPC points.]
But what if I don't want to send feedback after every bit?

Suppose my communication system can only tolerate m rounds of feedback. How many bits should I transmit in each increment? Can m incremental transmissions get close to the performance of an infinite number of single-bit increments?
How many bits does it take to decode successfully?

[Figure: p.d.f. of the blocklength N_S until the decoder converges to the correct codeword; VLFT simulation vs. inverse Gaussian approximation.]

From simulation of a k = 96 system with a 120-bit (rate-0.8) NB-LDPC code. N_S is the number of bits transmitted until the first successful decoding. R_S = k/N_S is the instantaneous rate at the first successful decoding.
Rate histogram follows a Gaussian.

[Figure: p.d.f. of the rate until the decoder converges to the correct codeword; VLFT simulation vs. Gaussian approximation.]

f_{R_S}(r) = (1/√(2π σ_S²)) · exp(−(r − μ_S)² / (2σ_S²))

Histogram of R_S, the highest rate supporting successful decoding. Polyanskiy's normal approximation is a practical reality! In this example μ_S = 0.63 and σ_S = 0.057. Note that capacity is C = 0.6422 > μ_S.
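A minimal sketch of this Gaussian approximation, using the μ_S and σ_S values read from the slide's example (the function name is mine):

```python
import math

MU_S, SIGMA_S = 0.63, 0.057  # example values from the k = 96 simulation

def f_RS(r, mu=MU_S, sigma=SIGMA_S):
    """Gaussian approximation to the pdf of R_S, the highest rate that
    still decodes successfully."""
    return math.exp(-(r - mu) ** 2 / (2 * sigma ** 2)) / (
        sigma * math.sqrt(2 * math.pi)
    )
```

The peak height 1/(σ_S √(2π)) ≈ 7 is consistent with the histogram's vertical axis.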
m Transmissions: The Accumulation Cycle

Let a maximum of m transmissions form an accumulation cycle. Performance depends on the cumulative blocklengths N_1, N_2, …, N_m. Note that since we end with a finite blocklength, there will be some probability that the communication does not conclude. When that happens, the received transmissions are forgotten and we start over.
Computing probability of initial success

[Figure: p.d.f. of the blocklength until the decoder converges to the correct codeword; VLFT simulation vs. inverse Gaussian approximation.]

f_{N_S}(n) = (k / (n² √(2π σ_S²))) · exp(−(k/n − μ_S)² / (2σ_S²))

Suppose N_1, the initial blocklength, is 140 bits. The probability P(N_S ≤ N_1) of successful decoding is shown above.
Computing P(N_S ≤ N_1) is easier using the rate histogram.

[Figure: p.d.f. of the rate until the decoder converges to the correct codeword; VLFT simulation vs. Gaussian approximation.]

P(N_S ≤ N_1) can also be seen on the rate histogram:

P(N_S ≤ N_1) = Q((k/N_1 − μ_S) / σ_S).
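A minimal sketch of this computation under the Gaussian rate model, using the slide's example parameters (k = 96, μ_S = 0.63, σ_S = 0.057); the function names are mine:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(Z > x), via the complementary
    error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def p_success_by(N1, k=96, mu=0.63, sigma=0.057):
    """P(N_S <= N1) = P(R_S >= k/N1) under the Gaussian rate model."""
    return Q((k / N1 - mu) / sigma)
```

For N_1 = 140 this gives a first-transmission success probability of roughly 0.16: the rate k/N_1 ≈ 0.686 sits about one σ_S above μ_S.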
Expected blocklength E[N]

[Figure: rate p.d.f. partitioned at the thresholds k/N_1 > k/N_2 > … > k/N_5.]

E[N] = N_1 · Q((k/N_1 − μ_S)/σ_S)
     + Σ_{i=2}^{m} N_i · [Q((k/N_i − μ_S)/σ_S) − Q((k/N_{i−1} − μ_S)/σ_S)]
     + N_m · [1 − Q((k/N_m − μ_S)/σ_S)]
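This expectation is easy to evaluate numerically. A sketch under the Gaussian rate model with the slide's example parameters (function names mine):

```python
import math

def Q(x):
    """Gaussian tail probability via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def expected_blocklength(Ns, k=96, mu=0.63, sigma=0.057):
    """E[N] for cumulative blocklengths Ns = [N_1, ..., N_m]; the final
    term charges N_m to the (rare) lost accumulation cycle."""
    q = [Q((k / n - mu) / sigma) for n in Ns]
    total = Ns[0] * q[0]
    for i in range(1, len(Ns)):
        total += Ns[i] * (q[i] - q[i - 1])
    total += Ns[-1] * (1 - q[-1])
    return total
```

Sanity check: with a single transmission the terms telescope, so E[N] = N_1 exactly; adding an early decoding opportunity before a long final block reduces E[N].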
Expected throughput per block E[K]

E[K] = k · Q((k/N_m − μ_S)/σ_S) ≈ k, since we will always choose N_m large enough that the probability of a lost accumulation cycle is small.
Optimizing the throughput rate R_T

R_T = E[K]/E[N] ≈ k/E[N]. So we just need to choose N_1, N_2, …, N_m to minimize E[N].
Let's take some derivatives. Write x_i = (k/N_i − μ_S)/σ_S, so that ∂x_1/∂N_1 = −k/(σ_S N_1²). Then

∂E[N]/∂N_1 = Q(x_1) + (N_1 − N_2) · Q'(x_1) · ∂x_1/∂N_1.

Setting ∂E[N]/∂N_1 = 0, and using Q'(x) = −φ(x) with φ the standard normal density,

N_2 = N_1 + (σ_S N_1² / (k φ(x_1))) · Q(x_1).
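A sketch of this first step, solving for N_2 given N_1 under the Gaussian rate model with the slide's example parameters (function names mine):

```python
import math

def Q(x):
    """Gaussian tail probability via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def phi(x):
    """Standard normal density, so that Q'(x) = -phi(x)."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def next_after_first(N1, k=96, mu=0.63, sigma=0.057):
    """N_2 from the zero of dE[N]/dN_1, including the chain-rule
    factor dx/dN = -k / (sigma * N^2)."""
    x1 = (k / N1 - mu) / sigma
    return N1 + sigma * N1 ** 2 * Q(x1) / (k * phi(x1))
```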
For i > 2, with x_j = (k/N_j − μ_S)/σ_S,

∂E[N]/∂N_{i−1} = Q(x_{i−1}) − Q(x_{i−2}) + (N_{i−1} − N_i) · Q'(x_{i−1}) · ∂x_{i−1}/∂N_{i−1}.
The sweet spot

[Figure: ∂E[N]/∂N_2 as a function of the second blocklength N_2 (143 to 163), crossing zero at the optimal value; sketch of the ordering N_{i−2} ≤ N_{i−1} ≤ N_i.]
We actually choose N_i to make N_{i−1} optimal.

[Figure: the same plot of ∂E[N]/∂N_2 vs. N_2; the zero crossing identifies the N_i that makes the previously chosen N_{i−1} a stationary point of E[N].]
For i > 2, setting ∂E[N]/∂N_{i−1} = 0 in

∂E[N]/∂N_{i−1} = Q(x_{i−1}) − Q(x_{i−2}) + (N_{i−1} − N_i) · Q'(x_{i−1}) · ∂x_{i−1}/∂N_{i−1}

(with x_j = (k/N_j − μ_S)/σ_S) gives

N_i = N_{i−1} + (σ_S N_{i−1}² / (k φ(x_{i−1}))) · [Q(x_{i−1}) − Q(x_{i−2})].
Sequential Differential Approximation

So, for each choice of N_1, the remaining blocklengths can be selected so that ∂E[N]/∂N_i = 0 for i ∈ {1, …, m−1}. This gives the same set of blocklengths as exhaustive search (within one bit) and essentially the same optimal throughputs. It is easy to check the set of interesting N_1 values.
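The SDA recursion can be sketched in a few lines. This is a sketch under the Gaussian rate model, with the slide's example parameters (k = 96, μ_S = 0.63, σ_S = 0.057); a real design would also sweep N_1, and the function names are mine:

```python
import math

def Q(x):
    """Gaussian tail probability via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def phi(x):
    """Standard normal density, so that Q'(x) = -phi(x)."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def sda_blocklengths(N1, m, k=96, mu=0.63, sigma=0.057):
    """Sequential Differential Approximation: given N_1, pick N_2..N_m so
    that each partial derivative dE[N]/dN_i (for i < m) is zero."""
    Ns = [float(N1)]
    q_prev = 0.0  # Q term for N_{i-2}; zero when solving for N_2
    for _ in range(m - 1):
        n = Ns[-1]
        x = (k / n - mu) / sigma
        dQdN = phi(x) * k / (sigma * n ** 2)  # d/dn of Q((k/n - mu)/sigma)
        Ns.append(n + (Q(x) - q_prev) / dQdN)
        q_prev = Q(x)
    return Ns
```

Each step depends only on the previous two blocklengths, which is what makes the sequential search so much cheaper than an exhaustive search over all m lengths.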
Looks like 10 (maybe 20) ≈ ∞.

[Figure: two plots vs. m = 2, 3, …, 20. Top: VLFT λ for each m against the m = ∞ value (λ between roughly 150 and 190). Bottom: VLFT R_T for each m against the m = ∞ value (R_T between roughly 0.5 and 0.64). Both curves approach the m = ∞ level as m grows.]
CRC-based Feedback with Non-binary LDPC Code

[Figure: Expected throughput R_t vs. average blocklength λ, BI-AWGN channel, SNR = 2 dB, capacity = 0.6422. Curves: BI-AWGN capacity; VLF random-coding lower bound; maximum rate for a fixed-length code with no feedback; R_t = k/λ for k = 16, 32, 64, 89, 185, 281; VLF CRC m = ∞ NB-LDPC; VLF CRC m = 5 NB-LDPC.]
Some two-phase feedback schemes

[Figure: same axes as the previous slide, with k = 16, 32, 64, 96, 192, 288. Curves: BI-AWGN capacity; VLF random-coding lower bound; maximum rate for a fixed-length code with no feedback; VLF CRC m = ∞ NB-LDPC; VLF CRC m = 5 NB-LDPC; VLF two-phase m = 5 NB-LDPC; VLF two-phase m = 5 1024-CC; VLF two-phase m = 5 64-CC.]
Percentage of Capacity Perspective

[Figure: % throughput of BI-AWGN capacity (70 to 100%) vs. expected latency λ, SNR = 2 dB, capacity = 0.6422. Curves: VLF lower bound; VLF CRC m = ∞ NB-LDPC; VLF two-phase m = 5 NB-LDPC; VLF CRC m = 5 NB-LDPC; VLF two-phase m = 5 64-CC.]
Conclusions

Sequential Differential Approximation (SDA) is a general tool that can find the optimal incremental transmission sizes for a wide class of incremental-redundancy feedback schemes.

Today we used it to show that short NB-LDPC codes with a CRC and incremental redundancy can achieve 89% of capacity with 5 transmissions (94% with infinite transmissions). We expect a significant portion of that gap to be closed by using 10 transmissions.