A Short Course on Polar Coding: Theory and Applications
Prof. Erdal Arıkan
Electrical-Electronics Engineering Department, Bilkent University, Ankara, Turkey
Center for Wireless Communications, University of Oulu, May 2016

Table of Contents
L1: Information theory review
L2: Gaussian channel
L3: Algebraic coding
L4: Probabilistic coding
L5: Channel polarization
L6: Polar coding
L7: Origins of polar coding
L8: Coding for bandlimited channels
L9: Polar codes for selected applications

Lecture 1: Information theory review
Objective: Establish notation and review the channel coding theorem.
Reference for this part: T. Cover and J. Thomas, Elements of Information Theory, 2nd ed., Wiley, 2006.

Notation and conventions
Upper-case letters X, U, Y, ... denote random variables; lower-case letters x, u, y, ... denote realization values.
Script letters X, Y denote alphabets.
X^N = (X_1, ..., X_N) denotes a vector of random variables; X_i^j = (X_i, ..., X_j) denotes a sub-vector of X^N. Similar notation applies to realizations: x^N and x_i^j.
P_X(x) denotes the probability mass function (PMF) of a discrete random variable X; we also write X ~ P_X(x).
Likewise, we use the standard notation P_{X,Y}(x,y) and P_{X|Y}(x|y) to denote the joint and conditional PMFs of pairs of discrete random variables.
For simplicity, we drop the subscripts and write P(x), P(x,y), etc., when there is no risk of ambiguity.

Entropy
The entropy of X ~ P(x) is defined as
H(X) = −Σ_{x∈X} P(x) log P(x).
H(X) is a non-negative concave function of the PMF P_X.
H(X) = 0 iff X is deterministic.
H(X) ≤ log|X|, with equality iff P_X is uniform over X.

Binary entropy function
For X ~ Bern(p), i.e., X = 1 with probability p and X = 0 with probability 1 − p, the entropy is given by
H(X) = H(p) = −p log₂(p) − (1 − p) log₂(1 − p).
(The figure plots the binary entropy function H(p) against p ∈ [0,1]; it equals 0 at p = 0 and p = 1 and peaks at 1 bit at p = 1/2.)
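As a quick numerical companion to the entropy definitions above, here is a minimal sketch in Python (the function name `binary_entropy` is mine, not from the slides):

```python
import math

def binary_entropy(p: float) -> float:
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

# A fair coin has maximum entropy: H(0.5) = 1 bit.
print(binary_entropy(0.5))   # 1.0
print(binary_entropy(0.11))  # ~0.5 bit
```

Note that H(p) = H(1 − p), matching the symmetric shape of the plotted curve.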

Joint entropy
The joint entropy of (X,Y) ~ P(x,y) is
H(X,Y) = −Σ_{(x,y)∈X×Y} P(x,y) log P(x,y).
The conditional entropy of X given Y is
H(X|Y) = H(X,Y) − H(Y) = −Σ_{(x,y)∈X×Y} P(x,y) log P(x|y).
H(X|Y) ≥ 0, with equality iff X is a function of Y.
H(X|Y) ≤ H(X), with equality iff X and Y are independent.

Fano's inequality
For any pair of jointly distributed random variables (X,Y) over a common alphabet X, the probability of error P_e = Pr(X ≠ Y) satisfies
H(X|Y) ≤ H(P_e) + P_e log(|X| − 1),
and hence
P_e ≥ (H(X|Y) − 1) / log|X|.
Thus, if H(X|Y) is bounded away from zero, so is P_e.

Chain rule
For any pair of random variables (X,Y),
H(X,Y) = H(X) + H(Y|X) = H(Y) + H(X|Y),
and H(X,Y) ≤ H(X) + H(Y), with equality iff X and Y are independent.

Chain rule - II
For any random vector X^N = (X_1, ..., X_N),
H(X^N) = H(X_1) + H(X_2|X_1) + ... + H(X_N|X^{N−1}) = Σ_{i=1}^N H(X_i|X^{i−1}) ≤ Σ_{i=1}^N H(X_i),
with equality iff X_1, ..., X_N are independent.
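The chain rule H(X,Y) = H(X) + H(Y|X) is easy to verify numerically. A minimal sketch (the joint PMF is an illustrative example of my own, not from the slides):

```python
import math

def H(dist):
    """Entropy in bits of a distribution given as an iterable of probabilities."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Joint PMF P(x, y) on a 2x2 alphabet (illustrative numbers).
P = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

H_XY = H(P.values())
H_X = H([0.5, 0.5])                   # marginal of X: (0.4+0.1, 0.2+0.3)
# H(Y|X) = sum_x P(x) H(Y | X=x); conditionals are (0.8,0.2) and (0.4,0.6)
H_Y_given_X = 0.5 * H([0.8, 0.2]) + 0.5 * H([0.4, 0.6])

# Chain rule: H(X,Y) = H(X) + H(Y|X)
print(abs(H_XY - (H_X + H_Y_given_X)) < 1e-12)  # True
```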

Mutual information
For any (X,Y) ~ P(x,y), the mutual information between them is defined as
I(X;Y) = H(X) − H(X|Y).
Alternatively,
I(X;Y) = H(Y) − H(Y|X), or I(X;Y) = H(X) + H(Y) − H(X,Y).

Mutual information bounds
We have 0 ≤ I(X;Y) ≤ min{H(X), H(Y)}, with
I(X;Y) = 0 iff X and Y are independent;
I(X;Y) = min{H(X), H(Y)} iff X is a function of Y or vice versa.

Conditional mutual information
For any three-part ensemble (X,Y,Z) ~ P(x,y,z), the mutual information between X and Y conditional on Z is defined as
I(X;Y|Z) = H(X|Z) − H(X|Y,Z).
Alternatively,
I(X;Y|Z) = H(Y|Z) − H(Y|X,Z) = H(X|Z) + H(Y|Z) − H(X,Y|Z).
Examples exist for both I(X;Y|Z) < I(X;Y) and I(X;Y|Z) > I(X;Y).

Chain rule of mutual information
For any ensemble (X^N, Y) ~ P(x_1, ..., x_N, y), we have
I(X^N;Y) = I(X_1;Y) + I(X_2;Y|X_1) + ... + I(X_N;Y|X^{N−1}) = Σ_{i=1}^N I(X_i;Y|X^{i−1}).
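The identity I(X;Y) = H(X) + H(Y) − H(X,Y) gives a direct way to compute mutual information from a joint PMF. A minimal sketch with illustrative numbers of my own:

```python
import math

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Joint PMF on X x Y (illustrative numbers).
P = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

P_X = [sum(p for (x, y), p in P.items() if x == x0) for x0 in (0, 1)]
P_Y = [sum(p for (x, y), p in P.items() if y == y0) for y0 in (0, 1)]

# I(X;Y) = H(X) + H(Y) - H(X,Y)
I = H(P_X) + H(P_Y) - H(P.values())
print(I)  # ~0.125 bits; positive, and zero iff X and Y are independent
```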

Data processing theorem
If X → Y → Z form a Markov chain, i.e., if P(z|y,x) = P(z|y) for all x, y, z, then
I(X;Z) ≤ I(X;Y).
Proof: Use the chain rule to expand I(X;Y,Z) in two different ways:
I(X;Y,Z) = I(X;Y) + I(X;Z|Y) = I(X;Y), since I(X;Z|Y) = 0 by the Markov property;
I(X;Y,Z) = I(X;Z) + I(X;Y|Z) ≥ I(X;Z).

Discrete memoryless channels (DMC)
A DMC is a conditional probability assignment {W(y|x) : x ∈ X, y ∈ Y} for two discrete alphabets X, Y.
We write W : X → Y, or simply W, to denote a DMC.
X is called the channel input alphabet, Y is called the channel output alphabet, and W is called the channel transition probability matrix.

Channel coding
Channel coding is an operation to achieve reliable communication over an unreliable channel. It has two parts:
an encoder that maps messages to codewords, and
a decoder that maps channel outputs back to messages.

Block code
Given a channel W : X → Y, a block code with length N and rate R is such that
the message set consists of the integers {1, ..., M = 2^{NR}};
the codeword for each message m is a sequence x^N(m) of length N over X^N;
the decoder operates on channel output blocks y^N over Y^N and produces an estimate m̂ of the transmitted message m;
the performance is measured by the probability of frame (block) error, also called the frame error rate (FER), defined as P_e = Pr(m̂ ≠ m), where the transmitted message m is assumed equiprobable over the message set and m̂ denotes the decoder output.

Channel capacity
The capacity C(W) of a DMC W : X → Y is defined as the maximum of I(X;Y) over all probability assignments of the form P_{X,Y}(x,y) = Q(x)W(y|x), where Q is an arbitrary probability assignment over the channel input alphabet X; briefly,
C(W) = max_Q I(X;Y).

Channel capacity theorem
For any fixed rate R < C(W) and any ǫ > 0, there exist block coding schemes with rate R and P_e < ǫ, provided the code block length N can be chosen as large as desired.

Lecture 2: Additive White Gaussian Noise (AWGN) channel
Objective: Review the basic AWGN channel.
Topics: the discrete-time and continuous-time Gaussian channel; signaling over a Gaussian channel; the union bound.
Reference for this part: David Forney, Lecture Notes for the Course "Principles of Digital Communication II", Spring 2005 (available online).
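For a binary symmetric channel, the maximization C(W) = max_Q I(X;Y) can be carried out by brute force over the input distribution; this is a sketch of my own (not from the slides), recovering the familiar closed form 1 − H(p):

```python
import math

def H2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)

def mutual_info_bsc(q, p):
    """I(X;Y) for a BSC with crossover p and input distribution P(X=1) = q."""
    # I = H(Y) - H(Y|X), with P(Y=1) = q(1-p) + (1-q)p and H(Y|X) = H2(p)
    return H2(q*(1-p) + (1-q)*p) - H2(p)

p = 0.1
# Brute-force the maximization over the input distribution Q on a grid.
C = max(mutual_info_bsc(q/1000, p) for q in range(1001))
print(C, 1 - H2(p))  # both ~0.531: the maximizing Q is uniform
```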

Discrete-time (DT) AWGN channel
The input at time i is a real number x_i; the output is given by
y_i = x_i + z_i,
where the noise sequence {z_i} over the entire time frame is i.i.d. Gaussian N(0, σ²).

Capacity of the DT-AWGN channel
If a block code {x^N(m) : m ∈ M} is employed subject to the power constraint
Σ_{i=1}^N x_i²(m) ≤ NP, for all m ∈ M,
the capacity is given by
C = (1/2) log₂(1 + P/σ²) bits.

Continuous-time (CT) AWGN channel
This is a waveform channel whose output is given by
y(t) = x(t) + w(t),
where x(t) is the channel input and w(t) is white Gaussian noise with power spectral density N₀/2.

Capacity of the CT-AWGN channel
If signaling over the CT-AWGN channel is restricted to waveforms x(t) that are time-limited to [0,T], band-limited to [0,W], and power-limited to P, i.e.,
∫₀^T x²(t) dt ≤ PT,
then the capacity is given by
C [b/s] = W log₂(1 + P/(N₀W)) bits/sec.
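The two capacity formulas above are one-liners; a minimal sketch (function names and the example numbers are mine):

```python
import math

def c_dt_awgn(P, sigma2):
    """Capacity of the discrete-time AWGN channel, bits per channel use."""
    return 0.5 * math.log2(1 + P / sigma2)

def c_ct_awgn(P, N0, W):
    """Capacity of the band-limited continuous-time AWGN channel, bits/sec."""
    return W * math.log2(1 + P / (N0 * W))

print(c_dt_awgn(P=1.0, sigma2=1.0))        # 0.5 bit/use at 0 dB SNR
print(c_ct_awgn(P=1e-6, N0=1e-12, W=1e3))  # ~9967 b/s: 1 kHz channel at 30 dB SNR
```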

DT model for the CT-AWGN channel
By Nyquist theory, each use of the CT-AWGN channel with signals of duration T and bandwidth W gives rise to 2WT independent DT-AWGN channels. It is customary to use the DT channels in pairs, as the in-phase and quadrature components of a complex number.
Accordingly, the capacity of the two-dimensional (2D) DT-AWGN channels derived from a CT-AWGN channel is given by
C_2D = log₂(1 + E_s/N₀) bits/2D, or bits/s/Hz,
where E_s is the signal energy per 2D,
E_s = P/W J/2D, or J/Hz.

Signal-to-noise ratio
The primary parameters of an AWGN channel are the signal bandwidth W (Hz), the signal power P (Watt), and the noise power spectral density N₀/2 (Joule).
Capacity equals C [b/s] = W log₂(1 + P/(N₀W)). Define SNR = P/(N₀W) to write C [b/s] = W log₂(1 + SNR).
Writing SNR = (P/2W)/(N₀/2), SNR can be interpreted as the signal energy per real dimension divided by the noise energy per real dimension.
For 2D complex signalling, one may write SNR = (P/W)/N₀ and interpret SNR as the signal energy per 2D divided by the noise energy per 2D.

Signal energy per 2D: E_s
Definition: E_s = P/W (Joules).
E_s can be interpreted as the signal energy per two dimensions: for 2D (complex) signalling E_s is the energy per signal; for 1D (real) signalling, E_s/2 is the energy per signal.
Note that SNR = E_s/N₀, and one may write C [b/2D] = log₂(1 + E_s/N₀).

Spectral efficiency ρ and data rate R
ρ is defined as the number of bits per two dimensions sent over the AWGN channel; units: bits per two dimensions (b/2D).
R is defined as the number of bits per second sent over the AWGN channel; units: bits/sec (b/s).
Since there are W two-dimensional signaling intervals per second (2D/s), we have R = ρW.
Since ρ = R/W, the units of ρ can also be expressed as b/s/Hz (bits per second per Hertz).

Normalized SNR
Shannon's law says that for reliable communication one has to have ρ < log₂(1 + SNR), or
SNR > 2^ρ − 1.
This motivates the definition
SNR_norm = SNR / (2^ρ − 1).
The Shannon limit now reads
SNR_norm > 1 (0 dB).
The value of SNR_norm (in dB) for an operational system measures its gap to capacity, indicating how much room there is for improvement.

Another measure of signal-to-noise ratio: E_b/N₀
The energy per bit is defined as E_b = E_s/ρ, and the signal-to-noise ratio per information bit as
E_b/N₀ = E_s/(ρN₀) = SNR/ρ.
Shannon's limit written in terms of E_b/N₀ becomes
E_b/N₀ > (2^ρ − 1)/ρ.
The function (2^ρ − 1)/ρ is an increasing function of ρ > 0; as ρ → 0, it approaches ln 2 ≈ 0.69 (−1.59 dB), which is called the ultimate Shannon limit on E_b/N₀.

Power-limited and band-limited regimes
Operation over an AWGN channel is classified as power-limited if SNR ≪ 1 and band-limited if SNR ≫ 1.
The Shannon limit on the spectral efficiency can be approximated as
ρ < log₂(1 + SNR) ≈ SNR log₂ e for SNR ≪ 1, and ≈ log₂ SNR for SNR ≫ 1.
In the power-limited regime, the Shannon limit on ρ is doubled by doubling the SNR (a 3 dB increase); in the band-limited regime, doubling the SNR increases the Shannon limit by only 1 b/2D.

Band-limited regime
(The "Capacity and Bandwidth Tradeoff" figure plots capacity in b/s against SNR in dB for two bandwidths.)
Doubling the bandwidth almost doubles the capacity in the deep band-limited regime. Doubling the bandwidth has a small effect if the SNR is low (power-limited regime).
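The claim that (2^ρ − 1)/ρ decreases toward ln 2 ≈ −1.59 dB as ρ → 0 can be checked directly; a minimal sketch (function names are mine):

```python
import math

def ebn0_limit(rho):
    """Shannon limit on Eb/N0 at spectral efficiency rho (b/2D): (2^rho - 1)/rho."""
    return (2**rho - 1) / rho

def db(x):
    return 10 * math.log10(x)

# (2^rho - 1)/rho shrinks toward ln 2 as rho -> 0.
for rho in (2.0, 1.0, 0.1, 0.001):
    print(rho, db(ebn0_limit(rho)))

print(db(math.log(2)))  # the ultimate Shannon limit, about -1.59 dB
```

At ρ = 2 the limit is 3/2 (1.76 dB), the benchmark used for uncoded 2-PAM later in the lecture.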

Power-limited regime
(The "Capacity and Bandwidth Tradeoff" figure plots capacity in b/s against bandwidth W in dBHz for two values of P/N₀.)
Doubling the SNR almost doubles the capacity in the deep power-limited regime. Doubling the SNR increases the capacity by not more than 1 b/2D in the band-limited regime.

Signal constellations
An N-dimensional signal constellation of size M is a set A = {a_1, ..., a_M} ⊂ R^N, where each element a_j = (a_j1, ..., a_jN) ∈ R^N is called a signal point.
The average energy of the constellation is defined as
E(A) = (1/M) Σ_{j=1}^M ||a_j||² = (1/M) Σ_{j=1}^M Σ_{i=1}^N a_ji².
The minimum squared distance is defined as
d²_min(A) = min_{i≠j} ||a_i − a_j||².
The average number of nearest neighbors K_min(A) is defined as the average number of nearest neighbors (at distance d_min(A)) per signal point.

Signal constellation parameters
Some important derived parameters for each constellation are:
bit rate (nominal spectral efficiency) ρ = (2/N) log₂ M (b/2D);
average energy per two dimensions E_s = (2/N) E(A) (J/2D);
average energy per bit E_b = E(A)/log₂(M) = E_s/ρ (J/b);
energy-normalized figures of merit such as d²_min(A)/E(A), d²_min(A)/E_s, or d²_min(A)/E_b, which are independent of scale.

Uncoded 2-PAM
A = {−α, α}.
N = 1, M = 2, ρ = 2, E(A) = α², E_s = 2α², E_b = α².
SNR = E_s/N₀ = 2α²/N₀; SNR_norm = SNR/3.
d_min = 2α, K_min = 1, d²_min/E_s = 2.
Probability of bit error:
P_b(E) = Q(√SNR) = ∫_{√SNR}^∞ (1/√(2π)) e^{−u²/2} du.

Uncoded 2-PAM performance
(The figure plots P_b(E) against E_b/N₀ in dB for uncoded 2-PAM, together with the Shannon limit at ρ = 2 and the ultimate Shannon limit; the potential coding gain is about 7.8 dB.)
Spectral efficiency: ρ = 2 b/2D.
Shannon limit: E_b/N₀ > (2^ρ − 1)/ρ = 3/2 (1.76 dB).
The target P_b(E) = 10⁻⁵ is achieved at E_b/N₀ = 9.6 dB.
The potential coding gain is 9.6 − 1.76 ≈ 7.8 dB.
The ultimate coding gain, with ρ → 0, is 9.6 − (−1.59) ≈ 11.2 dB.

Uncoded M-PAM
Signal set: A = α{±1, ±3, ..., ±(M−1)}.
Parameters:
ρ = 2 log₂ M b/2D
E(A) = α²(M² − 1)/3 J/1D
E_s = 2E(A) = 2α²(M² − 1)/3 J/2D
SNR = E_s/N₀ = 2α²(M² − 1)/(3N₀)
SNR_norm = SNR/(2^ρ − 1) = 2α²/(3N₀)
The probability of symbol error, P_s(E), is given by
P_s(E) = (2(M − 1)/M) Q(α/σ) ≈ 2Q(α/σ) = 2Q(√(3 SNR_norm)),
where σ² = N₀/2.

Uncoded M-PAM performance
(The figure plots P_s(E) against SNR_norm in dB for uncoded M-PAM with M ≫ 1, together with the Shannon limit.)
This curve is valid for any M-PAM with M ≫ 1.
The target P_s(E) = 10⁻⁵ is achieved at SNR_norm ≈ 8.4 dB.
The Shannon limit is SNR_norm = 1 (0 dB).

Uncoded 4-QAM
A = {(−α,−α), (−α,α), (α,−α), (α,α)}.
Parameters: N = 2, M = 4, ρ = 2, E(A) = 2α², E_s = 2α², E_b = α², d_min = 2α, K_min = 2, d²_min/E_s = 2.
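The 2-PAM benchmark above (P_b ≈ 10⁻⁵ near E_b/N₀ = 9.6 dB) is easy to reproduce, since for 2-PAM P_b(E) = Q(√(2E_b/N₀)). A minimal sketch (function names are mine):

```python
import math

def Q(x):
    """Gaussian tail function Q(x) = P(N(0,1) > x), via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pb_2pam(ebn0_db):
    """Bit error probability of uncoded 2-PAM: Q(sqrt(2 Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)
    return Q(math.sqrt(2 * ebn0))

# The slides' benchmark: P_b = 1e-5 is reached near Eb/N0 = 9.6 dB.
print(pb_2pam(9.6))  # ~1e-5
```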

Uncoded M×M-QAM
The signal constellation is A = A_{M-PAM} × A_{M-PAM}.
Parameters:
ρ = log₂ M² = 2 log₂ M b/2D
E(A) = 2α²(M² − 1)/3 J/2D
E_s = E(A) = 2α²(M² − 1)/3 J/2D
SNR = E_s/N₀ = 2α²(M² − 1)/(3N₀)
SNR_norm = SNR/(2^ρ − 1) = 2α²/(3N₀)
The probability of symbol error, P_s(E), is given by (see notes)
P_s(E) ≈ 4Q(√(3 SNR_norm)).

Uncoded QAM performance
(The figure plots P_s(E) against SNR_norm in dB for uncoded QAM, together with the Shannon limit.)
The curve is valid for M×M-QAM with M ≫ 1.
The target P_s(E) = 10⁻⁵ is achieved at SNR_norm = 8.4 dB; the gap to the Shannon limit is 8.4 dB.

Cartesian product constellations
Given a constellation A, define a new constellation A′ as the Kth Cartesian power of A:
A′ = A^K = A × A × ... × A (K times).
E.g., 4-QAM is the second Cartesian power of 2-PAM.
The parameters of A′ are related to those of A as follows:
N′ = KN, M′ = M^K, E(A′) = K E(A), K′_min = K K_min, E′_s = E_s, E′_b = E_b, d′_min = d_min, ρ′ = ρ.

MAP and ML decision rules
Consider transmission over an AWGN channel using a constellation A = {a_1, ..., a_M}. Suppose in each use of the system a signal a_j ∈ A is selected with probability p(a_j) and sent over the channel. Given the channel output y, the receiver needs to make a decision â on which of the signal points a_j was sent. There are various decision rules.
The Maximum A-Posteriori Probability (MAP) rule sets â_MAP = argmax_{a∈A} [p(a|y)] = argmax_{a∈A} [p(a)p(y|a)/p(y)].
The Maximum Likelihood (ML) rule sets â_ML = argmax_{a∈A} [p(y|a)].
The ML and MAP rules are equivalent for the important special case where p(a_j) = 1/M for all j.

Minimum distance decision rule
Given an observation y, the Minimum Distance (MD) decision rule is defined as
â_MD = argmin_{a∈A} ||y − a||.
On an AWGN channel the ML rule is equivalent to the MD rule. This is because on an AWGN channel, with input-output relation y = a + n, the transition probability density is given by
p(y|a) = (πN₀)^{−N/2} e^{−||y−a||²/N₀}.
Thus, the ML rule â_ML = argmax_{a∈A} [p(y|a)] simplifies to â_ML = argmin_{a∈A} ||y − a||.

Decision regions
Consider a decision rule for a given N-dimensional constellation A of size M. Let R_j ⊆ R^N be the set of observation points y ∈ R^N which are decided as a_j. For a complete decision rule, the decision regions partition the observation space:
R^N = ∪_{j=1}^M R_j;  R_j ∩ R_i = ∅ for i ≠ j.
Conversely, any partition of R^N into M regions defines a decision rule for N-dimensional signal constellations of size M.

Probability of decision error
Let E be the decision error event. For a receiver with decision regions R_j, the conditional probability of E given that a_j is sent is
Pr(E|a_j) = Pr(y ∉ R_j | a_j),
while the average probability of error equals
Pr(E) = Σ_{j=1}^M p(a_j) Pr(E|a_j).
The MAP rule minimizes Pr(E).

Decision regions under the MD decision rule
Under the MD decision rule, the decision regions are given by
R_j = {y ∈ R^N : ||y − a_j||² ≤ ||y − a_i||² for all i ≠ j}.
The regions R_j are also called the Voronoi regions. Each region R_j is the intersection of the M − 1 pairwise decision regions R_ji defined as
R_ji = {y ∈ R^N : ||y − a_j||² ≤ ||y − a_i||²};
in other words, R_j = ∩_{i≠j} R_ji.
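The MD rule is a nearest-point search over the constellation. A minimal sketch (the function name and test points are mine), using the 4-QAM constellation from earlier slides with α = 1:

```python
def md_decide(y, constellation):
    """Minimum-distance rule: return the signal point closest to observation y."""
    return min(constellation,
               key=lambda a: sum((yi - ai)**2 for yi, ai in zip(y, a)))

# 4-QAM with alpha = 1.
qam4 = [(-1, -1), (-1, 1), (1, -1), (1, 1)]

print(md_decide((0.9, -1.2), qam4))   # (1, -1)
print(md_decide((-0.1, 0.2), qam4))   # (-1, 1)
```

Each call implicitly evaluates the Voronoi partition: the returned point is the one whose Voronoi region contains y.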

Probability of error under the MD rule on AWGN
Under any rigid motion (translation or rotation) of a constellation A, the Voronoi regions also move in the same way. Under the MD decision rule, on any additive AWGN channel we have
Pr(E|a_j) = 1 − ∫_{R_j} p(y|a_j) dy = 1 − ∫_{R_j − a_j} p_N(n) dn.
This probability of error is invariant under rigid motions. (Proof is left as an exercise.) (Is this true for any additive noise?) Likewise, Pr(E) is invariant under rigid motions.
If the mean m = (1/M) Σ_j a_j of a constellation A is not zero, we may translate it by −m to reduce the mean energy from E(A) to E(A) − ||m||² without changing Pr(E).

Probability of decision error for some constellations
For 2-PAM,
Pr(E) = Q(√(2E_b/N₀)), where Q(x) = ∫_x^∞ (1/√(2π)) e^{−u²/2} du.
For 4-QAM,
Pr(E) = 1 − (1 − Q(√(2E_b/N₀)))² ≈ 2Q(√(2E_b/N₀)).
One can express exact error probabilities for M-PAM and (M×M)-QAM in terms of the Q function. (Exercise.) However, for general constellations it becomes impractical to determine the exact error probability; often one uses bounds and approximations instead of the exact forms.

Pairwise error probabilities
We consider MD decision rules and AWGN channels here. The pairwise error probability Pr(a_j → a_i) is defined as the probability that, conditional on a_j being transmitted, the received point y is closer to a_i than to a_j. In other words,
Pr(a_j → a_i) = Pr(||y − a_i|| ≤ ||y − a_j|| | a_j).
Recalling the pairwise error regions R_ji = {y ∈ R^N : ||y − a_j||² ≤ ||y − a_i||²}, it can be shown that
Pr(a_j → a_i) = ∫_{||a_i − a_j||/2}^∞ (1/√(πN₀)) e^{−x²/N₀} dx = Q(||a_i − a_j|| / √(2N₀)).

The union bound
The conditional probability of error is bounded (under the MD decision rule on an AWGN channel) as
Pr(E|a_j) ≤ Σ_{i≠j} Pr(a_j → a_i) = Σ_{i≠j} Q(||a_i − a_j|| / √(2N₀)).
This leads to
Pr(E) ≤ (1/M) Σ_{j=1}^M Σ_{i≠j} Q(||a_i − a_j|| / √(2N₀)).
One may also use the approximation
Pr(E) ≈ K_min(A) Q(d_min(A) / √(2N₀)).
The union bound is tight at sufficiently high SNR.
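The union bound and the nearest-neighbor approximation K_min Q(d_min/√(2N₀)) can be compared numerically; a sketch of my own for 4-QAM (d_min = 2, K_min = 2 with α = 1):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound(constellation, N0):
    """Average union bound: (1/M) sum_j sum_{i != j} Q(d_ij / sqrt(2 N0))."""
    M = len(constellation)
    total = 0.0
    for j, aj in enumerate(constellation):
        for i, ai in enumerate(constellation):
            if i != j:
                total += Q(math.dist(ai, aj) / math.sqrt(2 * N0))
    return total / M

qam4 = [(-1.0, -1.0), (-1.0, 1.0), (1.0, -1.0), (1.0, 1.0)]
N0 = 0.2
ub = union_bound(qam4, N0)
approx = 2 * Q(2 / math.sqrt(2 * N0))   # K_min * Q(d_min / sqrt(2 N0))
print(ub, approx)  # the union bound lies slightly above the approximation
```

At this moderately high SNR the diagonal-neighbor terms are negligible, which is why the two numbers nearly coincide (the bound is "tight at sufficiently high SNR").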

Lecture 3: Algebraic coding
Objective: Introduce the rationale for coding and discuss some important algebraic codes.
Topics: Why coding? Some important algebraic codes: Reed-Muller codes, Reed-Solomon codes, BCH codes.

Motivation for coding
Simple constellations such as PAM and QAM are far from delivering Shannon's promise; they have a large gap to the Shannon limit.
Signaling schemes such as orthogonal, bi-orthogonal, and simplex achieve Shannon capacity when one can expand the bandwidth indefinitely; however, after a certain point they become impractical both in terms of complexity per bit and bandwidth limitations.
Shannon's proof shows that in the power-limited regime, the key to achieving capacity is to begin with a simple 1D or 2D constellation A, consider Cartesian powers A^N of increasingly high order, and select a subset A′ ⊆ A^N to improve the minimum distance of the constellation at the expense of spectral efficiency.

Coding and modulation
(The figure shows the standard system diagram: binary data enters a channel encoder, passes through a binary interface to a modulator, crosses the channel, and returns through a demodulator and channel decoder to binary data.)

Coding and modulation
Design codes in a finite field F, taking advantage of the algebraic structure to simplify encoding and decoding.
Algebraic codes typically map a binary data sequence u^K ∈ F_2^K into a codeword x^N over F_{2^m} for some m.
Modulation maps F_{2^m} into a signal set A ⊂ R^n for some n (typically n = 1, 2). For example, if A = {−α, α}, one may use the mapping 0 → −α and 1 → α.
Goal: Design codes that have large minimum distances in the Hamming metric on F_2^N, and modulate them to have correspondingly large Euclidean distances.

Spectral efficiency with coding and modulation
For a typical 2D signal set A ⊂ R² (such as a QAM scheme) and a binary code of rate K/N, the spectral efficiency is
ρ = (K/N) log₂|A| (b/2D).
Thus, coding reduces the spectral efficiency of the uncoded constellation by a factor of K/N. It is hoped that coding will make up for the deficit in spectral efficiency by improving the distance profile of the signal set.

Binary block codes
Definition: A binary block code of length n is any subset C ⊆ {0,1}^n of the set of all binary n-tuples of length n.
Definition: A code C is called linear if C is a subspace of the vector space F_2^n.

Generators of a binary linear block code
Let C ⊆ F_2^n be a binary linear code. Since C is a vector space, it has a dimension k, and there exists a set of basis vectors G = {g_1, ..., g_k} that generate C in the sense that
C = {Σ_{j=1}^k a_j g_j : a_j ∈ F_2, 1 ≤ j ≤ k}.
Such a code C is called an (n,k) binary linear code, and G is called a set of generators of C.
An encoder for a code C with generators G can be implemented as a matrix multiplication x = aG, where G is the generator matrix whose ith row is g_i, a ∈ F_2^k is the information word, and x is the codeword.
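The encoding map x = aG over F_2 is just row selection and mod-2 addition. A minimal sketch (the particular (7,4) Hamming generator matrix is one common choice, used here for illustration):

```python
def encode(a, G):
    """Encode information word a (list of 0/1 bits) as x = aG over F_2."""
    n = len(G[0])
    x = [0] * n
    for ai, gi in zip(a, G):
        if ai:
            x = [(xj + gj) % 2 for xj, gj in zip(x, gi)]
    return x

# Generator matrix of a (7,4) Hamming code (one common systematic choice).
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1],
     [0, 0, 0, 1, 1, 0, 1]]

print(encode([1, 0, 1, 1], G))  # [1, 0, 1, 1, 1, 0, 0]
```

Because G is systematic ([I | P]), the first four codeword bits repeat the information word and the last three are parity checks.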

The Hamming weight
Definition: For x ∈ F_2^n, the Hamming weight of x is defined as
w_H(x) = the number of ones in x.
The Hamming weight has the following properties:
Non-negativity: w_H(x) ≥ 0, with equality iff x = 0.
Symmetry: w_H(−x) = w_H(x).
Triangle inequality: w_H(x + y) ≤ w_H(x) + w_H(y).

The Hamming distance
Definition: For x, y ∈ F_2^n, the Hamming distance between x and y is defined as
d_H(x,y) = w_H(x − y).
The Hamming distance has the following properties for any x, y, z ∈ F_2^n:
Non-negativity: d_H(x,y) ≥ 0, with equality iff x = y.
Symmetry: d_H(x,y) = d_H(y,x).
Triangle inequality: d_H(x,y) ≤ d_H(x,z) + d_H(z,y).
Thus, the Hamming distance is a metric in the mathematical sense of the word, and the space F_2^n with this metric is called the Hamming space.

Distance invariance
Theorem: The set of Hamming distances d_H(x,y) from any codeword x ∈ C to all codewords y ∈ C is independent of x, and is equal to the set of Hamming weights w_H(y) of all codewords y ∈ C.
Proof: The set of distances from x is {d_H(x,y) : y ∈ C}. This set can be written as {w_H(x + y) : y ∈ C}, and x + C = C for a linear code (why?). Taking x = 0 completes the proof.

Minimum distance
Definition: The minimum distance d of a code C is defined as the minimum of d_H(x,y) over all x, y ∈ C with x ≠ y.
Remark: For a linear code, the minimum distance d equals the minimum of w_H(x) over all non-zero codewords x ∈ C.
Remark: We refer to an (n,k) code with minimum distance d as an (n,k,d) code. For example, the (n,1) repetition code has d = n and is an (n,1,n) code.
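For a linear code, the remark above says d equals the minimum weight over non-zero codewords, which for small codes can be checked by enumerating all 2^k codewords. A sketch of my own, using a standard (7,4) Hamming generator matrix as the illustrative example:

```python
from itertools import product

def codewords(G):
    """Yield all codewords of the binary linear code generated by the rows of G."""
    k = len(G)
    for a in product([0, 1], repeat=k):
        yield tuple(sum(ai * gi for ai, gi in zip(a, col)) % 2
                    for col in zip(*G))

def min_distance(G):
    """For a linear code, d = minimum Hamming weight over non-zero codewords."""
    return min(sum(c) for c in codewords(G) if any(c))

# A (7,4) Hamming code generator (an illustrative choice); d should be 3.
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1],
     [0, 0, 0, 1, 1, 0, 1]]
print(min_distance(G))  # 3
```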

Euclidean images of binary codes
Binary codes C are mapped to signal constellations by the mapping s : F_2^n → R^n which takes x to s(x) with
s_i = −α if x_i = 0, and s_i = α if x_i = 1.

Minimum distances
When a code C is mapped to a signal constellation s(C) by the mapping s defined above, Hamming distances translate to Euclidean distances as follows:
||s(x) − s(y)||² = 4α² d_H(x,y).
Thus, the minimum code distance translates to a minimum signal distance of
d²_min(s(C)) = 4α² d_H(C) = 4α² d.

Nominal coding gain, union bound
When a code C is mapped to a signal constellation s(C), the nominal coding gain of the constellation is given by
γ_c(s(C)) = d²_min(s(C)) / (4E_b) = kd/n.
Every signal has the same number of nearest neighbors, K_min(x) = N_d, the number of codewords of minimum weight d.
Union bound:
P_b(E) ≈ K_b(s(C)) Q(√(γ_c(s(C)) 2E_b/N₀)) = (N_d/k) Q(√(2dR E_b/N₀)),
where R = k/n is the code rate.

Decision rules
Minimum distance (MD) decoding: Given a received vector r ∈ R^n, find the signal point s(x), over all x ∈ C, such that ||r − s(x)||² is minimized.
Hard-decision decoding: Given a received vector r ∈ R^n, quantize r into y ∈ F_2^n and find the codeword x ∈ C closest to y in the Hamming metric.
Erasure-and-error decoding: Map the received word r into a word y ∈ {0, 1, ?}^n and find the codeword x closest to y, ignoring the erased coordinates (where y_k = ?).
Generalized minimum distance (GMD) decoding: Apply erasure-and-error decoding, erasing successively s = d − 1, d − 3, ... positions, using the reliability metric |r_k| to prioritize erasure locations; pick the best candidate.

Hard-decision decoding
Hard decisions are obtained by the mapping r → y with
y = 1 if r > 0, and y = 0 if r ≤ 0.

Performance of some early codes
(The figure shows the performance of some well-known codes under hard-decision decoding.)
Performance is limited both by the short block lengths and by hard-decision decoding.

Reed-Muller codes (Reed, 1954), (Muller, 1954)
For every m ≥ 0 and 0 ≤ r ≤ m, there exists an RM code RM(r,m) of length n = 2^m. Define the RM codes with extreme parameters as follows:
RM(m,m) = {0,1}^n, with (n,k,d) = (2^m, 2^m, 1);
RM(0,m) = {0^n, 1^n}, with (n,k,d) = (2^m, 1, n);
RM(−1,m) = {0^n}, with (n,k,d) = (2^m, 0, ∞).
Define the remaining RM codes for m ≥ 1 and 0 < r < m recursively by
RM(r,m) = {(u, u + v) : u ∈ RM(r, m−1), v ∈ RM(r−1, m−1)}.

Generator matrices of RM codes
Let
U_1 = [1 0; 1 1],  U_m = [U_{m−1} 0; U_{m−1} U_{m−1}], m ≥ 2.
The generator matrix of RM(r,m) is the submatrix of U_m consisting of the rows of Hamming weight 2^{m−r} or greater.
For any m, the matrix U_m has C(m,r) rows of Hamming weight 2^{m−r}, for 0 ≤ r ≤ m.
This construction of RM codes is called the Plotkin construction.
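The Plotkin recursion RM(r,m) = {(u, u+v)} above can be implemented directly for small codes; a sketch of my own, verified on RM(1,3), which is the (8,4,4) extended Hamming code:

```python
from itertools import product

def rm(r, m):
    """Reed-Muller code RM(r, m) as a set of tuples, via the Plotkin recursion."""
    n = 2 ** m
    if r < 0:
        return {tuple([0] * n)}          # RM(-1, m) = {0^n}
    if r >= m:
        return set(product([0, 1], repeat=n))  # RM(m, m) = all n-tuples
    left = rm(r, m - 1)
    right = rm(r - 1, m - 1)
    return {u + tuple((ui + vi) % 2 for ui, vi in zip(u, v))
            for u in left for v in right}

code = rm(1, 3)
n = 8
k = len(code).bit_length() - 1          # |C| = 2^k
d = min(sum(c) for c in code if any(c)) # minimum weight of a linear code
print(n, k, d)  # 8 4 4
```

This matches the general parameters (n, k, d) = (2^m, Σ_{i≤r} C(m,i), 2^{m−r}) for r = 1, m = 3.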

Properties of RM codes
RM(r,m) is a binary linear block code with parameters
(n, k, d) = (2^m, Σ_{i=0}^r C(m,i), 2^{m−r}).
The dimensions satisfy the relation k(r,m) = k(r, m−1) + k(r−1, m−1).
The codes are nested: RM(r−1, m) ⊂ RM(r, m).
The minimum distance of RM(r,m) is d = 2^{m−r} if r ≥ 0.
The number of nearest neighbors (minimum-weight codewords) is given by
N_d = 2^r ∏_{i=0}^{m−r−1} (2^{m−i} − 1)/(2^{m−r−i} − 1).

Tableaux of RM codes
(The figure shows the tableau of RM codes. Figure credit: Forney and Costello, Proc. IEEE, June 2007.)

Coding gains of various RM codes
RM(m−1,m) are single parity-check codes with nominal coding gain 2k/n, which goes to 2 (3 dB) as n → ∞. However, N_d = 2^m(2^m − 1)/2 and K_b ≈ 2^m, which limits the effective coding gain.
RM(m−2,m) are Hamming codes extended by an overall parity check. These codes have d = 4; the nominal coding gain is 4k/n, which goes to 6 dB as n → ∞. The actual coding gain is severely limited, since N_d = 2^m(2^m − 1)(2^m − 2)/24 and K_b is large.
RM(1,m) (first-order RM codes) have parameters (2^m, m+1, 2^{m−1}). They have a nominal coding gain of (m+1)/2, which goes to infinity; these codes can achieve the Shannon limit as m → ∞. RM(1,m) generates the bi-orthogonal signal set of dimension 2^m and size 2^{m+1}.

Decoding algorithms for RM codes
Majority-logic decoding (Reed, 1954): a form of successive-cancellation (SC) decoding; sub-optimal but fast.
Soft-decision SC decoding (Schnabl and Bossert, 1995): superior to Reed's algorithm, but slower.
ML decoding using trellis representations: feasible for small code sizes.

Linear codes over finite fields
An (n,k) linear code C over a finite field F_q is a k-dimensional subspace of the vector space F_q^n = (F_q)^n of all n-tuples over F_q. For q = 2, this reduces to our previous definition of binary linear codes.
As a linear subspace, C has k linearly independent codewords (g_1, ..., g_k) that generate C, in the sense that
C = {Σ_{j=1}^k a_j g_j : a_j ∈ F_q, 1 ≤ j ≤ k}.
Thus C has q^k distinct codewords.

Reed-Solomon (RS) codes
Introduced by Irving S. Reed and Gustave Solomon in 1960.
Can be defined over any finite field F_q; an (n,k) RS code over F_q exists for any k ≤ n ≤ q.
Encoding: Given k data symbols (f_0, ..., f_{k−1}) over F_q,
form the polynomial f(z) = f_0 + f_1 z + ... + f_{k−1} z^{k−1};
evaluate f(z) at each field element β_i, 1 ≤ i ≤ q, namely compute f(β_i) = Σ_{j=0}^{k−1} f_j β_i^j, to obtain the code symbols (f(β_1), ..., f(β_q));
truncate if necessary to obtain a code of length n < q.

Properties of RS codes
Maximum distance separable (MDS): an (n,k) RS code has d_min = n − k + 1, meeting the Singleton bound with equality.
Typically constructed over F_q with q = 2^m, with each symbol consisting of m bits.
Very effective at correcting burst errors confined to a small number of symbols.
Major applications: consumer electronics, and the outer code in concatenated coding schemes.
Decoding is usually by hard decisions: the Berlekamp-Massey algorithm can correct any pattern of t ≤ (n − k)/2 errors; the Sudan-Guruswami (1999) algorithm can go beyond the minimum distance bound.
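The evaluation-style RS encoding above is a few lines of code. A sketch of my own over the prime field F_7 for simplicity (the slides typically use F_{2^m}, which needs polynomial field arithmetic), illustrating the MDS property on a tiny (6,2) code with d_min = n − k + 1 = 5:

```python
q = 7  # prime field F_7, so ordinary modular arithmetic suffices

def rs_encode(f, n):
    """Evaluate the message polynomial f(z) = f0 + f1*z + ... at n distinct field elements."""
    assert n <= q
    return [sum(fj * pow(beta, j, q) for j, fj in enumerate(f)) % q
            for beta in range(n)]

# A (6, 2) RS code over F_7: d_min = n - k + 1 = 5 (MDS).
c1 = rs_encode([2, 3], 6)   # f(z) = 2 + 3z
c2 = rs_encode([2, 4], 6)   # a message differing in one coefficient
print(c1)  # [2, 5, 1, 4, 0, 3]
print(c2)  # [2, 6, 3, 0, 4, 1]
# Two distinct codewords agree in at most k - 1 = 1 position:
print(sum(a == b for a, b in zip(c1, c2)))  # 1 (they agree only at z = 0)
```

The last line is the MDS property in action: two distinct degree-<k polynomials can agree on at most k − 1 evaluation points, so codewords differ in at least n − k + 1 positions.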

RS code application: the G.975 optical transmission standard
The ITU-T G.975 standard (year 2000) for long-distance submarine optical transmission systems specified the RS(255,239) code as the forward error correction (FEC) method.
In bits, this is a (2040, 1912) code with rate R ≈ 0.937.
This RS code has d_min = 17 (in bytes) and can correct any pattern of up to 8 byte errors.
The BER requirement in this application is on the order of 10⁻¹².
Data throughputs of 10-100 Gbps are supported.
G.975 RS codes continue to serve, but are lately being superseded by more powerful proprietary solutions ("3rd generation FEC") that use soft-decision decoders and provide better coding gains with higher redundancy.

Performance of the RS(255,239) code
(The figure shows the BER performance under hard-decision decoding: BER versus E_b/N₀ in dB for RS(255,239) and for uncoded transmission.)

Performance of the RS(255,239) code: input BER versus output BER
(The figure plots the decoder output BER against the channel input BER for RS(255,239).)

RS coding with concatenation
Over memoryless channels such as the AWGN channel, powerful codes may be obtained by concatenating an inner code, consisting of q = 2^m codewords or signal points, with an outer code over F_q.
The inner code is typically a binary block or convolutional code; the outer code is typically an RS code.

Interleaving
In a concatenated coding scheme, an error in the inner code appears as a burst of errors to the outer code. To make the symbol errors made by the inner decoder look memoryless, interleaving is used: a two-dimensional array is prepared, where outer coding is applied to the rows and inner coding is applied to the columns. When an error occurs in the inner code, a column is affected, which appears only as a single symbol error in each outer codeword.

RS concatenated code application: NASA standard
In the 1970s NASA used an RS/CC concatenated code. The inner code is a convolutional code with rate 1/2 and 64 states. The outer code is an RS(255,223) code over F_256. The code has an overall code rate of 0.437 and a coding gain of 7.3 dB at a BER of 10^-6.

Performance of NASA concatenated code
(Figure credit: Forney and Costello, Proc. IEEE, June 2007.)
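The row/column idea above is easy to demonstrate: transmit a small array column by column, hit it with a burst that wipes out one whole column, and observe that each row (one outer codeword) suffers only a single symbol error. A toy sketch with illustrative sizes, not from the standard:

```python
# Toy block interleaver: outer code on rows, inner code on columns.
rows, cols = 4, 6
data = list(range(rows * cols))
array = [data[r * cols:(r + 1) * cols] for r in range(rows)]

# Transmit column by column; a burst of length `rows` hits one column...
tx = [array[r][c] for c in range(cols) for r in range(rows)]
burst = set(tx[4:8])    # wipes out exactly one column (4 symbols)

# ...but each row (outer codeword) sees at most one corrupted symbol.
per_row = [sum(1 for x in row if x in burst) for row in array]
print(per_row)   # [1, 1, 1, 1]
```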

Lecture 4: Probabilistic approach to coding
Objective: Review codes based on random-looking structures. Topics: convolutional codes, turbo codes, low-density parity-check (LDPC) codes.

Convolutional codes
Introduced by Peter Elias in 1955. In the example, a data sequence, represented by a polynomial u(D), is multiplied by fixed generator polynomials to obtain two codeword polynomials y_1(D) = g_1(D)u(D), y_2(D) = g_2(D)u(D).

State diagram representation
For an encoder with memory ν, the number of states is 2^ν. (Figure: the state diagram for the above example.)

Trellis representation
Including time in the state, we obtain the trellis diagram representation. Code performance improves with the size of the state diagram, but decoding complexity also increases.
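The polynomial-multiplication view of convolutional encoding can be sketched directly. Here the generators are the classic ν = 2 pair g₁(D) = 1 + D + D², g₂(D) = 1 + D² (octal 7 and 5), chosen for illustration rather than taken from the lecture's figure:

```python
# Minimal rate-1/2 convolutional encoder: two output streams, each the
# GF(2) product of the data polynomial with one generator polynomial.
g1, g2 = 0b111, 0b101   # taps for g1(D) = 1+D+D^2, g2(D) = 1+D^2

def conv_encode(bits, memory=2):
    state = 0
    out = []
    for b in bits + [0] * memory:              # zero-tail termination
        state = ((state << 1) | b) & 0b111     # shift register: b + 2 past bits
        out.append(bin(state & g1).count("1") % 2)   # parity of tapped bits
        out.append(bin(state & g2).count("1") % 2)
    return out

print(conv_encode([1, 0, 1]))   # [1,1, 1,0, 0,0, 1,0, 1,1]
```

Input 101 produces the familiar output 11 10 00 10 11 for this code, the standard textbook trace.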

Maximum-likelihood decoding of convolutional codes
ML decoding is equivalent to finding a shortest path from the beginning to the end of the trellis. It is a dynamic programming problem, with complexity exponential in the encoder memory. The trellis is usually truncated to make the search more manageable.

Decoder error events
Errors occur when a path diverging from the correct path appears more likely to the ML decoder. d_free is defined as the minimum Hamming distance between any two distinct paths through the trellis.

Union bound
The union bound for a rate-R convolutional code:

P_b ≈ K_b Q( sqrt(2 γ_c E_b/N_0) ),

where K_b is the average density of errored bits on an error path of weight d_free, and γ_c = d_free · R is the nominal coding gain. The union bound is tight at high SNR.

Union bound example
Rate-1/2 convolutional code with 64 states (ν = 6). (Figure: BER vs E_b/N_0 for the theoretical upper bound under ML decoding, simulated unquantized ML decoding, and uncoded transmission.)

Effective coding gain: γ_eff
The effective coding gain for a coding system on an AWGN channel with 2-PAM modulation is defined as

γ_eff = (E_b/N_0)_{uncoded 2-PAM} − (E_b/N_0)_{coded 2-PAM},

where the E_b/N_0 values (in dB) are those required to achieve a target BER.

Best known convolutional codes
(Tables: for the best rate-1/2, rate-1/3, and rate-1/4 binary convolutional codes, the columns ν, d_free, γ_c (dB), K_b, and γ_eff (dB); the numeric entries are garbled in the source. Here ν = log_2(number of states), and γ_eff is calculated at P_b = 10^-6.)
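A sketch of how γ_eff falls out of the union bound: find the E_b/N_0 at which the bound meets the target BER and compare with uncoded 2-PAM, P_b = Q(sqrt(2 E_b/N_0)). The values d_free = 10 and K_b = 36 used below for a 64-state rate-1/2 code are assumed for illustration:

```python
# Effective coding gain at a target BER, estimated from the leading-term
# union bound Pb ~= Kb * Q(sqrt(2 * gamma_c * Eb/N0)).
from math import erfc, sqrt, log10

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def required_ebn0_db(pb_target, gamma_c=1.0, Kb=1.0):
    """Eb/N0 (dB) at which Kb * Q(sqrt(2*gamma_c*Eb/N0)) hits pb_target."""
    lo, hi = 0.01, 100.0              # bisection on linear Eb/N0
    for _ in range(80):
        mid = (lo + hi) / 2
        if Kb * Q(sqrt(2 * gamma_c * mid)) > pb_target:
            lo = mid
        else:
            hi = mid
    return 10 * log10((lo + hi) / 2)

uncoded = required_ebn0_db(1e-6)                  # about 10.5 dB
coded = required_ebn0_db(1e-6, gamma_c=10 * 0.5, Kb=36)   # assumed code
print(round(uncoded - coded, 2), "dB effective coding gain (union-bound estimate)")
```

Note how the K_b multiplier shaves the effective gain below the nominal γ_c = 10·log10(5) ≈ 7 dB.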

Performance of convolutional codes
Binary antipodal signalling; performance of rate-1/3 and rate-1/2 codes (from Clark & Cain, Springer, 1981).

Tailbiting convolutional codes
To eliminate the overhead due to truncation, one may use a tailbiting convolutional code: look at the final state, and start the encoder in that state.

Application: WiMAX standard
The IEEE 802.16e (WiMAX) standard specifies a mandatory tailbiting convolutional code with rate 1/2 and generator polynomials g_1(D) = 1 + D + D^2 + D^3 + D^7, g_2(D) = 1 + D^2 + D^3 + D^6 + D^7. Codes of various other rates (2/3, 3/4, 5/6) are obtained by puncturing this code. (Table: per-rate d_free, puncturing patterns X and Y, and encoder output sequences; entries garbled in the source.)

WiMAX convolutional code and modulation options
(Table: modulation, rate, payload options in bytes, and spectral efficiency in bits/2D — e.g. QPSK rate 1/2 with payloads 6, 12, 18, 24, 30, 36 bytes; QPSK 3/4 with 9, 18, 27, ...; 16-QAM and 64-QAM options at rates 1/2 through 3/4; remaining entries garbled in the source.)

Effect of length on performance
BER performance is insensitive to code length; FER performance deteriorates with code length. (Figures: BER and FER vs E_b/N_0 for rate 1/2, QPSK, payloads of 6, 12, 18, 24, 30, 36 bytes, decoding depth 6. Simulations by Iterative Solutions Coded Modulation Library, 2007.)

Turbo codes
Invented in the early 1990s by Claude Berrou. Created by concatenating two (or more) codes with an interleaver between the codes. At least one of the encoders is systematic. Each constituent code has its own decoder, and the decoders exchange soft information with each other in an iterative manner.

Turbo code with parallel concatenation of convolutional codes
The convolutional codes are in recursive systematic form to facilitate the exchange of soft information. (Figure credit: Forney and Costello, Proc. IEEE, June 2007.)

Turbo decoder
The turbo decoder for a parallel concatenated turbo code uses two separate decoders that exchange soft information. (Figure credit: Forney and Costello, Proc. IEEE, June 2007.)

Turbo code performance
Turbo codes improved the state of the art by a wide margin! (Figure credit: Forney and Costello, Proc. IEEE, June 2007.)

WiMAX Convolutional Turbo Codes (CTC)
IEEE 802.16e (WiMAX) specifies a CTC with constituent codes of rate 2/3 ("duobinary").

WiMAX CTC Adaptive Modulation and Coding (AMC)
The WiMAX CTC offers a number of AMC options with various payload sizes:

Rate — Modulation — Spect. eff. (b/2D) — Payload options (bytes)
1/2 — QPSK — 1 — 12, 24, 36, 48, 60, 72, 96, 108, 120
3/4 — QPSK — 1.5 — 9, 18, 27, 36, 45, 54
1/2 — 16-QAM — 2 — 24, 48, 72, 96, 120
3/4 — 16-QAM — 3 — 18, 36, 54
1/2 — 64-QAM — 3 — 36, 72, 108
2/3 — 64-QAM — 4 — 36, 72
3/4 — 64-QAM — 4.5 — 36, 72
5/6 — 64-QAM — 5 — 36, 72

WiMAX CTC performance: QPSK, rate 1/2
The figure shows the WiMAX CTC performance at half rate with QPSK (4-QAM) modulation, with payloads ranging from 6 to 120 bytes. (The Shannon limit is E_b/N_0 = 0.188 dB.)

WiMAX CTC performance vs spectral efficiency
The figure shows the WiMAX CTC performance as the spectral efficiency ranges over 1, 1.5, 2, 3, 4, 4.5, 5 b/2D. (Curves: (120,60) QPSK, (72,54) QPSK, (120,60) 16-QAM, (72,54) 16-QAM, (108,54) 64-QAM, (72,48) 64-QAM, (72,54) 64-QAM, (72,60) 64-QAM; all AWGN. Simulations by Iterative Solutions Coded Modulation Library, 2007.)

CCSDS (space telemetry) turbo code standard (1999)

CCSDS turbo code payload and frame size options
The CCSDS turbo code supports a wide range of payload and frame sizes, as shown in the table (all lengths are in bits; numeric entries garbled in the source). Note that there are 8 bits of termination.

CCSDS turbo code performance
The CCSDS turbo code provides a performance leap over the previous standard... but has an error floor. (Figure credit: Forney and Costello, Proc. IEEE, June 2007.)

Low-Density Parity-Check (LDPC) codes
Invented in the 1960s by Robert Gallager. The codewords are defined as the solutions of the equation xH^T = 0, where H is a sparse parity-check matrix.

Belief Propagation (BP) decoding algorithm
Gallager gave a low-complexity decoding algorithm based on passing log-likelihood ratios (LLRs), or "beliefs", along the branches of a graph. The BP decoding algorithm converges after a number of iterations that is roughly logarithmic in the code block length. The BP algorithm is well suited to parallel implementation, which makes LDPC codes preferable in applications requiring high throughput and low latency.

LDPC performance
Rate-1/2, length-10^7 LDPC codes with symbol degree bound d_l. (Figure credit: Forney and Costello, Proc. IEEE, June 2007.)
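The defining condition xH^T = 0 (mod 2) is easy to check directly. A minimal sketch with a small illustrative (8,4) parity-check matrix (an example H chosen here, not one from the lecture):

```python
# Codeword membership test x H^T = 0 over GF(2) for a small (8,4) code.
from itertools import product

H = [
    [1, 1, 0, 1, 1, 0, 0, 0],
    [1, 0, 1, 1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0, 0, 1, 0],
    [1, 1, 1, 0, 0, 0, 0, 1],
]   # systematic-like form [A | I], so the rows are linearly independent

def is_codeword(x):
    """True iff every parity check (row of H) is satisfied mod 2."""
    return all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0 for row in H)

codewords = [c for c in product((0, 1), repeat=8) if is_codeword(c)]
print(len(codewords))   # 2^(8-4) = 16: a 4-dimensional solution space
```

For a practical LDPC code, H is large and sparse; the same membership condition holds, but decoding exploits the sparsity via BP rather than enumeration.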

Application: WiMAX LDPC codes
WiMAX offers a number of LDPC code alternatives: rates 5/6, 3/4, 2/3, and 1/2, at lengths 2304 and 576. These codes may require a maximum of 30-100 iterations for best performance. LDPC codes are not very suitable for rate adaptation.

WiMAX LDPC performance
The figure shows the performance of WiMAX LDPC coding and modulation options (max-log-MAP decoding). (Curves: r = 5/6, 3/4 (class A), 2/3 (class A), 1/2 at L = 2304, and the same rates at L = 576. Simulations by Iterative Solutions Coded Modulation Library, 2007.)

WiMAX LDPC performance
The figure shows the effect of the min-sum approximation on LDPC code performance. (Curves: r = 1/2, L = 2304, full BP vs min-sum.)

WiMAX LDPC performance
The figure shows the effect of the number of iterations on LDPC code performance (max-log-MAP). (Curves: r = 1/2, L = 576, max 30 iterations vs max 100 iterations.)

WiMAX LDPC/CTC performance comparison
The figure shows the relative performance of WiMAX LDPC and CTC codes. (FER curves: Turbo (576,288), Turbo (960,480), LDPC (2304,1152), LDPC (576,288).)

WiMAX CTC/CC performance comparison
The figure shows the relative performance of WiMAX CTC and WiMAX CC codes. (Curves: CC(120,60) QPSK, CC(72,36) QPSK, CTC(120,60) QPSK, CTC(72,36) QPSK. Simulations by Iterative Solutions Coded Modulation Library, 2007.)

Summary
Turbo and LDPC codes solve the coding problem for most engineering purposes. Convolutional codes still have a place for very short payloads (up to about 100 bits) that need to be protected well (control channels). LDPC codes perform better at long block lengths, where high reliability and high throughput are required (optical channels, video channels). Turbo codes are superior for applications where packet sizes are moderate and the reliability requirement is not too high (voice applications). Algebraic codes (RS and BCH in particular) have a role as external codes in concatenated schemes.

Lecture 5: Channel polarization
Objective: Explain channel polarization. Topics: channel codes as polarizers of information; low-complexity polarization by channel combining and splitting; the main polarization theorem; rate of polarization.

The channel
Let W : X → Y be a binary-input discrete memoryless channel with input alphabet X = {0,1}, output alphabet Y, and transition probabilities W(y|x), x ∈ X, y ∈ Y.

Symmetry assumption
Assume that the channel W has input-output symmetry. Examples: the BSC(ǫ), which flips its input with probability ǫ, and the BEC(ǫ), which erases its input (output "?") with probability ǫ.

Capacity
For channels with input-output symmetry, the capacity is given by C(W) = I(X;Y), with X uniform on {0,1}. We use base-2 logarithms.
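For the two example channels the capacity formula evaluates in closed form: C(BEC(ǫ)) = 1 − ǫ and C(BSC(ǫ)) = 1 − H(ǫ), where H is the binary entropy function from Lecture 1. A quick numerical check:

```python
# Closed-form capacities of the two symmetric example channels,
# C(W) = I(X;Y) with a uniform input and base-2 logarithms.
from math import log2

def H2(p):
    """Binary entropy function."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def capacity_bsc(eps):
    return 1 - H2(eps)

def capacity_bec(eps):
    return 1 - eps

print(capacity_bsc(0.11))   # close to 0.5: H(0.11) is nearly 0.5 bits
print(capacity_bec(0.5))    # exactly 0.5
```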

The main idea
The channel coding problem is trivial for two types of channels: perfect channels, C(W) = 1, and useless channels, C(W) = 0. The idea is to transform ordinary W into such extreme channels.

The method: aggregate and redistribute capacity
Original channels (uniform capacities) → combine into a vector channel W_vec → split into N new (polarized) channels.

Combining
Begin with N copies of W; use a 1-1 mapping G_N : {0,1}^N → {0,1}^N to create a vector channel W_vec : U^N → Y^N, with X^N = G_N(U^N) transmitted over the N copies of W.

Conservation of capacity
The combining operation is lossless: take U_1,...,U_N i.i.d. uniform on {0,1}; then X_1,...,X_N are i.i.d. uniform on {0,1} and C(W_vec) = I(U^N;Y^N) = I(X^N;Y^N) = N·C(W).

Splitting
C(W_vec) = I(U^N;Y^N) = Σ_{i=1}^N I(U_i; Y^N, U^{i−1}) = Σ_{i=1}^N C(W_i),

where the bit-channels are defined as W_i : U_i → (Y^N, U^{i−1}).

Polarization is commonplace
Polarization is the rule, not the exception: a random permutation G_N : {0,1}^N → {0,1}^N is a good polarizer with high probability. This is equivalent to Shannon's random coding approach.

Random polarizers: stepwise, isotropic
(Figure: bit-channel capacities vs bit-channel index for a random polarizer.) Isotropy: any redistribution order is as good as any other.

The complexity issue
Random polarizers lack structure and are too complex to implement. We need a low-complexity polarizer, and may sacrifice the stepwise, isotropic properties of random polarizers in return for less complexity.

Basic module for a low-complexity scheme
Combine two copies of W via (X_1,X_2) = (U_1 ⊕ U_2, U_2) = (U_1,U_2)G_2, and split to create two bit-channels:

W⁻ : U_1 → (Y_1,Y_2) and W⁺ : U_2 → (Y_1,Y_2,U_1).

The first bit-channel W⁻
W⁻ : U_1 → (Y_1,Y_2), with U_2 random; C(W⁻) = I(U_1; Y_1,Y_2).

The second bit-channel W⁺
W⁺ : U_2 → (Y_1,Y_2,U_1); C(W⁺) = I(U_2; Y_1,Y_2,U_1).

Capacity conserved but redistributed unevenly
Conservation: C(W⁻) + C(W⁺) = 2C(W). Extremization: C(W⁻) ≤ C(W) ≤ C(W⁺), with equality iff C(W) equals 0 or 1.

Extremality of the BEC
H(U_1 | Y_1 Y_2) ≤ H(X_1|Y_1) + H(X_2|Y_2) − H(X_1|Y_1)·H(X_2|Y_2), with equality iff W is a BEC.

Extremality of the BSC (Mrs. Gerber's lemma)
Let H⁻¹ : [0,1] → [0,1/2] be the inverse of the binary entropy function H(p) = −p log(p) − (1−p) log(1−p), 0 ≤ p ≤ 1/2. Then

H(U_1 | Y_1 Y_2) ≥ H( H⁻¹(H(X_1|Y_1)) ∗ H⁻¹(H(X_2|Y_2)) ),

with equality iff W is a BSC (here a ∗ b = a(1−b) + b(1−a) denotes binary convolution).

Notation
The two channels created by the basic transform, (W,W) → (W⁻, W⁺), will also be denoted W_1 = W⁻ and W_2 = W⁺. Likewise, we write W⁻⁻, W⁻⁺ for the descendants of W⁻, and W⁺⁻, W⁺⁺ for the descendants of W⁺.

For the size-4 construction:

... duplicate the basic transform ... obtain a pair of W⁻ and a pair of W⁺ ... apply the basic transform on each pair ... decode in the indicated order: U_1, U_3, U_2, U_4.

... obtain the four new bit-channels W⁻⁻, W⁻⁺, W⁺⁻, W⁺⁺.

Overall size-4 construction
(Figure: U_1,...,U_4 mapped through the transform to X_1,...,X_4 and transmitted over four copies of W.)

Rewire for standard-form size-4 construction
(Figure: the same circuit redrawn with inputs in natural order.)

Size-8 construction
(Figure: the corresponding circuit for N = 8, inputs U_1,...,U_8, outputs Y_1,...,Y_8.)

Polarization of a BEC
Polarization is easy to analyze when W is a BEC: if W is a BEC(ǫ), then so are W⁻ and W⁺, with erasure probabilities

ǫ⁻ = 2ǫ − ǫ² and ǫ⁺ = ǫ²,

respectively. In particular, the first bit channel W⁻ and the second bit channel W⁺ are both BECs.

Polarization for BEC(1/2): N = 16
(Figure: capacities of the 16 bit channels vs bit-channel index.)
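The erasure-probability recursion ǫ → (2ǫ − ǫ², ǫ²) makes these figures easy to reproduce. A short sketch for W = BEC(1/2); note that each step preserves the average erasure probability (capacity conservation) while pushing individual values toward 0 and 1:

```python
# BEC polarization: apply the recursion eps -> (2 eps - eps^2, eps^2)
# n times, producing the N = 2^n bit-channel erasure probabilities.
def polarize(eps, n):
    e = [eps]
    for _ in range(n):
        e = [f for x in e for f in (2 * x - x * x, x * x)]
    return e

for n in (4, 10, 16):
    e = polarize(0.5, n)
    good = sum(1 for x in e if x < 1e-3) / len(e)        # C(W_i) > 0.999
    bad = sum(1 for x in e if x > 1 - 1e-3) / len(e)     # C(W_i) < 0.001
    mean_cap = 1 - sum(e) / len(e)                       # always C(W) = 0.5
    print(n, round(good, 3), round(bad, 3), round(mean_cap, 3))
```

The fractions of near-perfect and near-useless channels grow with n, each heading toward C(W) = 1/2 and 1 − C(W) = 1/2, as the polarization theorem below states.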

Polarization for BEC(1/2): N = 32, 64, 128, 256
(Figures: capacities of the bit channels vs bit-channel index for N = 32, 64, 128, and 256.)

Polarization for BEC(1/2): N = 512, 1024
(Figures: capacities of the bit channels vs bit-channel index for N = 512 and 1024.)

Polarization martingale
(Figure: the tree of capacities C(W), C(W⁻), C(W⁺), C(W⁻⁻), C(W⁻⁺), ... generated by repeated application of the basic transform.)

Theorem (Polarization, A. 2007)
The bit-channel capacities {C(W_i)} polarize: for any δ ∈ (0,1), as the construction size N grows,

[no. of channels with C(W_i) > 1 − δ] / N → C(W)

and

[no. of channels with C(W_i) < δ] / N → 1 − C(W).

Theorem (Rate of polarization, A. and Telatar, 2008)
The above theorem holds with δ ≈ 2^{−√N}.

Lecture 6: Polar coding
Objective: Introduce polar coding. Topics: code construction, encoding, decoding, performance.

Polar code example: W = BEC(1/2), N = 8, rate 1/2
The bit-channel capacities and their ranks determine which inputs carry data and which are frozen:

U_i — I(W_i) — rank — role
U_1 — 0.0039 — 8 — frozen
U_2 — 0.1211 — 7 — frozen
U_3 — 0.1914 — 6 — frozen
U_4 — 0.6836 — 4 — data
U_5 — 0.3164 — 5 — frozen
U_6 — 0.8086 — 3 — data
U_7 — 0.8789 — 2 — data
U_8 — 0.9961 — 1 — data
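The table can be reproduced by running the BEC recursion for n = 3 levels; a minimal sketch:

```python
# Bit-channel capacities I(W_i) for W = BEC(1/2), N = 8, via the BEC
# recursion (eps-, eps+) = (2e - e^2, e^2); the output order matches
# the U_1, ..., U_8 indexing of the example.
def bec_bit_channels(eps, n):
    e = [eps]
    for _ in range(n):                       # n = log2(N) polarization steps
        e = [f for x in e for f in (2 * x - x * x, x * x)]
    return [1 - x for x in e]                # capacities I(W_i) = 1 - eps_i

I = bec_bit_channels(0.5, 3)
print([round(c, 4) for c in I])
# [0.0039, 0.1211, 0.1914, 0.6836, 0.3164, 0.8086, 0.8789, 0.9961]
best = sorted(range(8), key=lambda i: I[i], reverse=True)[:4]
print(sorted(i + 1 for i in best))   # data indices [4, 6, 7, 8]; rest frozen
```

Selecting the K = 4 best bit-channels recovers exactly the data set {U_4, U_6, U_7, U_8} of the example.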

Encoding complexity
Theorem: Encoding complexity for polar coding is O(N log N).
Proof: The polar coding transform can be represented as a graph with N(1 + log N) variables. The graph has 1 + log N levels, with N variables at each level. Computation begins at the source level and can be carried out level by level. Space complexity is O(N), time complexity O(N log N).

Encoding: an example
(Figure: the size-8 encoding graph with frozen inputs U_1, U_2, U_3, U_5 and free inputs U_4, U_6, U_7, U_8; the figure is repeated as the computation advances one level at a time.)
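The transform itself is a few lines when written recursively: transform each half, then combine as (top ⊕ bottom, bottom). Over GF(2) the transform is its own inverse, which makes a convenient correctness check. A sketch:

```python
# O(N log N) polar transform over GF(2), written recursively.
def polar_transform(u):
    if len(u) == 1:
        return u[:]
    half = len(u) // 2
    top = polar_transform(u[:half])      # transform of first half
    bot = polar_transform(u[half:])      # transform of second half
    return [t ^ b for t, b in zip(top, bot)] + bot

u = [1, 0, 1, 1, 0, 0, 1, 0]
x = polar_transform(u)
print(x)                          # [0, 1, 1, 1, 1, 0, 1, 0]
print(polar_transform(x) == u)    # True: the transform is an involution
```

The recursion does N/2 XORs at each of log2(N) levels, matching the O(N log N) count in the theorem.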

Encoding: an example (continued)
(Figure: the level-by-level computation of the encoding example, completed.)

Successive Cancellation Decoding (SCD)
Theorem: The complexity of successive cancellation decoding for polar codes is O(N log N). Proof: given below.

SCD: exploit the structure x = (a ⊕ b, a)
For N = 8, write the codeword as x^8 = (a ⊕ b, a), where b = (b_1,...,b_4) is determined by (u_1,...,u_4) and a = (a_1,...,a_4) by (u_5,...,u_8).

First phase: treat a as noise, decode (u_1, u_2, u_3, u_4).

End of first phase
The first-phase decisions (û_1,...,û_4) determine the estimates b̂ = (b̂_1,...,b̂_4).

Second phase: treat b̂ as known, decode (u_5, u_6, u_7, u_8)
With b̂ known, (u_5,...,u_8) are decoded through a = (a_1,...,a_4) from (y_1,...,y_8, b̂).

First phase in detail / Equivalent channel model
In the first phase, with (a_1,...,a_4) treated as noise, each b_i is seen through an equivalent channel formed from a pair of channel outputs.

First, second, third, and fourth copies of W⁻
The equivalent channels of the first phase are b_1 → (y_1,y_5), b_2 → (y_2,y_6), b_3 → (y_3,y_7), b_4 → (y_4,y_8): four independent copies of the bit-channel W⁻.

Decoding on W⁻: b = (w ⊕ t, t)
The four copies of W⁻ are decoded by applying the same idea recursively: write b = (w ⊕ t, t), with u_1 → w_1 → b_1 = w_1 ⊕ t_1 → (y_1,y_5), u_2 → w_2 → b_2 = w_2 ⊕ t_2 → (y_2,y_6), u_3 → t_1 → b_3 = t_1 → (y_3,y_7), and u_4 → t_2 → b_4 = t_2 → (y_4,y_8).

Decoding on W⁻⁻
u_1 → w_1 → (y_1,y_3,y_5,y_7) and u_2 → w_2 → (y_2,y_4,y_6,y_8).

Decoding on W_1 = W⁻⁻⁻
u_1 → (y_1,y_2,...,y_8). Compute

L = W_1(y_1,...,y_8 | u_1 = 0) / W_1(y_1,...,y_8 | u_1 = 1)

and set û_1 = u_1 if u_1 is frozen; otherwise û_1 = 0 if L > 1, and û_1 = 1 else.

Decoding on W_2 = W⁻⁻⁺
u_2 → (y_1,...,y_8, û_1). Compute

L = W_2(y_1,...,y_8, û_1 | u_2 = 0) / W_2(y_1,...,y_8, û_1 | u_2 = 1)

and set û_2 = u_2 if u_2 is frozen; otherwise û_2 = 0 if L > 1, and û_2 = 1 else.

Decoding on W⁻⁺
With û_1 known: w_1 is observed through (y_1,y_3,y_5,y_7) and w_2 through (y_2,y_4,y_6,y_8).

Complexity of successive cancellation decoding
Let C_N be the complexity of decoding a code of length N. The decoding problem of size N for W is reduced to two decoding problems of size N/2, for W⁻ and for W⁺. So, for some constant k,

C_N = 2·C_{N/2} + kN,

which gives C_N = O(N log N).

Performance of polar codes
Theorem (Probability of error, A. and Telatar, 2008): For any binary-input symmetric channel W, the probability of frame error for polar coding at rate R < C(W), using codes of length N, is bounded as P_e(N,R) ≤ 2^{−N^0.49} for sufficiently large N. A more refined version of this result has been given by S. H. Hassani, R. Mori, T. Tanaka, and R. L. Urbanke (2011).

Construction complexity
Theorem (Construction complexity): Polar codes can be constructed in time O(N·poly(log N)). This result has been developed in a sequence of papers by R. Mori and T. Tanaka (2009), I. Tal and A. Vardy (2011), and R. Pedarsani, S. H. Hassani, I. Tal, and E. Telatar (2011).

Gaussian approximation
Trifonov (2012) introduced a Gaussian approximation technique for constructing polar codes. Dai et al. (2015) studied various refinements of the Gaussian approximation for polar code construction. These methods work extremely well, although a satisfactory explanation of why they work is still missing.

Example of Gaussian approximation
Polar code construction and performance estimation by Gaussian approximation. (Figure: FER vs E_s/N_0 for a Polar(65536,61440) code under BPSK, showing the threshold SNR at the target FER from the Gaussian approximation, the Shannon limit, the BPSK Shannon limit, and the gaps to ultimate and BPSK capacity.)

Polar coding summary
Given W, N = 2^n, and R < I(W), a polar code can be constructed such that it has construction complexity O(N·poly(log N)), encoding complexity N log N, successive-cancellation decoding complexity N log N, and frame error probability P_e(N,R) = o(2^{−N^β}) for any β < 1/2.

Performance improvement for polar codes
Concatenation to improve minimum distance; list decoding to improve SC decoder performance.

Concatenation
Method — Reference:
Block turbo coding with polar constituents — AKMOP (2009)
Generalized concatenated coding with polar inner codes — AM (2009)
Reed-Solomon outer, polar inner — BJE (2010)
Polar outer, block inner — SH (2010)
Polar outer, LDPC inner — EP (ISIT 2011)
(AKMOP: A., Kim, Markarian, Özgür, Poyraz; AM (GCC): A., Markarian; BJE: Bakshi, Jaggi, and Effros; SH: Seidl and Huber; EP: Eslami and Pishro-Nik.)

Overview of decoders for polar codes
Successive cancellation decoding: a depth-first search method with complexity roughly N log N. Sufficient to prove that polar codes achieve capacity; equivalent to an earlier algorithm by Schnabl and Bossert (1995) for RM codes. Simple, but not powerful enough to challenge LDPC and turbo codes at short to moderate lengths.
List decoding: a breadth-first search algorithm with limited branching (known as beam search in AI). First proposed by Tal and Vardy (2011) for polar codes; list decoding was used earlier by Dumer and Shabunov (2006) for RM codes. Complexity grows as O(LN log N) for a list size L, but hardware implementation becomes problematic as L grows, due to sorting and memory management.
Sphere decoding ("British Museum search" with branch and bound; starts decoding from the opposite side).

List decoder for polar codes
First produce L candidate decisions, then pick the most likely word from the list. Complexity O(LN log N).

Polar code performance: successive cancellation decoder
(Figure: FER vs E_s/N_0 for P(2048,1024), 4-QAM, list size 1, no CRC.)

Polar code performance: improvement by list decoding, List-32
(Figure adds the list-32 curve.)

Polar code performance: improvement by list decoding, List-1024
(Figure adds the list-1024 curve.)

Polar code performance: comparison with ML bound
(Figure adds the ML bound for P(2048,1024), 4-QAM.)

Polar code performance: introducing CRC improves performance at high SNR
(Figure: P(2048,1024), 4-QAM: list-1, list-32, list-1024, the ML bound, and list-32 with CRC-16.)

Polar code performance: comparison with dispersion bound
(Figure: the same curves plus the dispersion bound for (2048,1024).)

Polar codes vs WiMAX turbo codes
Comparable performance is obtained with List-32 + CRC. (Figure: P(1024,512), 4-QAM, list-1, list-32, and list-32 with CRC-16, the dispersion bound for (1024,512), and the WiMAX CTC (960,480).)

Polar codes vs WiMAX LDPC codes
Better performance is obtained with List-32 + CRC. (Figure: P(2048,1024), 4-QAM, list-1, list-32, and list-32 with CRC-16, the dispersion bound for (2048,1024), and the WiMAX LDPC(2304,1152) with max 100 iterations.)

Polar codes vs DVB-S2 LDPC codes
LDPC (16200,13320) vs Polar (16384,13421); rates ≈ 0.82; BPSK-AWGN channel. (Figure: frame error rate of the polar list decoder, N = 16384, R = 37/45, for list-1, list-32, and list-32 with CRC, against the DVB-S2 16200, rate-37/45 LDPC code, vs E_b/N_0 (dB).)

Polar codes vs IEEE 802.11ad LDPC codes
Park (2014) gives a performance comparison. (Park's result on LDPC conflicts with reference IEEE 802.11-10/0432r2. Whether there exists an error floor as shown needs to be confirmed independently.)

Source: Youn Sung Park, "Energy-Efficient Decoders of Near-Capacity Channel Codes," PhD dissertation, The University of Michigan, 2014.

Summary of performance comparisons
The successive cancellation decoder is simplest, but inherently sequential, which limits throughput. The BP decoder improves throughput and, with careful design, performance. The list decoder significantly improves performance at low SNR. Adding a CRC to list decoding improves performance significantly at high SNR with little extra complexity. Overall, polar codes under list-32 decoding with CRC offer performance comparable to the codes used in present wireless standards.

Implementation performance metrics
Implementation performance is measured by chip area (mm²), throughput (Mbits/sec), energy efficiency (nJ/bit), and hardware efficiency (Mb/s/mm²).

Successive cancellation decoder comparisons
(Table: decoder type SC [1], SC [2], BP [3]²; block length; technology 90 nm, 65 nm, 65 nm; area (mm²); voltage (V); frequency (MHz); power (mW); throughput (Mb/s)¹; energy-per-bit (pJ/b); hardware efficiency (Mb/s/mm²). Numeric entries garbled in the source.)
[1] O. Dizdar and E. Arıkan, arXiv preprint, 2014.
[2] Y. Fan and C.-Y. Tsui, "An efficient partial-sum network architecture for semi-parallel polar codes decoder implementation," IEEE Transactions on Signal Processing, vol. 62, no. 12, June 2014.
[3] C. Zhang, B. Yuan, and K. K. Parhi, "Reduced-latency SC polar decoder architectures," arXiv.org, 2012.
¹ Throughput of 730 Mb/s calculated by technology conversion metrics. ² Performance at 4 dB SNR with an average number of iterations of 6.57.

BP decoder comparisons
(Table: decoding type and scheduling — SCD with folded HPPSN [1], specialized SC [2], circular unidirectional BP [3], all-ON fully parallel BP [4], circular unidirectional reduced-complexity BP [4] — with block length, rate, technology (CMOS; Altera Stratix IV), process (nm), core area (mm²), supply (V), frequency (MHz), power (mW), iterations, throughput (Mb/s)¹, energy efficiency (pJ/b), energy efficiency per iteration (pJ/b/iter), and area efficiency (Mb/s/mm²), also normalized to 45 nm according to the ITRS roadmap. Numeric entries garbled in the source.)
¹ Throughput obtained by disabling the BP early-stopping rules, for fair comparison.
[1] Y.-Z. Fan and C.-Y. Tsui, "An efficient partial-sum network architecture for semi-parallel polar codes decoder implementation," IEEE Transactions on Signal Processing, vol. 62, no. 12, June 2014.
[2] G. Sarkis, P. Giard, A. Vardy, C. Thibeault, and W. J. Gross, "Fast polar decoders: Algorithm and implementation," IEEE Journal on Selected Areas in Communications, vol. 32, no. 5, May 2014.
[3] Y. S. Park, "Energy-efficient decoders of near-capacity channel codes," PhD dissertation, The University of Michigan, October 2014.
[4] A. D. G. Biroli, G. Masera, and E. Arıkan, "High-throughput belief propagation decoder architectures for polar codes," submitted, 2015.

Lecture 7: Origins of polar coding
Objective: Relate polar codes to the probabilistic approach in coding. Topics: sequential decoding and cutoff rates; methods for boosting the cutoff rate; Pinsker's scheme; Massey's scheme; polar coding as a method to boost the cutoff rate to capacity.

57

Goals

Show how polar coding originated from attempts to boost the cutoff rate of sequential decoding
In particular, focus on two papers:
Pinsker (1965), "On the complexity of decoding"
Massey (1981), "Capacity, cutoff rate, and coding for a direct-detection optical channel"

Outline

A basic fact about search
Sequential decoding
Pinsker's scheme
Massey's scheme
Polarization

L7: Origins of polar coding Relation to cutoff rates and sequential decoding 3-4/40

Pointwise search: 2-D or 2 x 1-D?

An item is placed at random in a 2-D square grid with M bins: (X,Y) uniform over {1,...,sqrt(M)}^2.
Loss models:
Correlated loss model: X, Y both forgotten with probability ε
Independent loss model: X, Y each forgotten independently with probability ε
2-D search: may ask "Is (X,Y) = (x,y)?" and receive a Yes/No answer. The number of questions until finding (X,Y) is a random variable G_XY.
1-D search: may ask "Is X = x?" or "Is Y = y?" and again receive a Yes/No answer. The number of questions until finding X and Y is G_X + G_Y.

Search complexities

Correlated loss:
  E[G_XY] = (1 - ε) + ε M/2
  E[G_X] + E[G_Y] = 2[(1 - ε) + ε sqrt(M)/2]
Independent loss:
  E[G_XY] = (1 - ε)^2 + 2ε(1 - ε) sqrt(M)/2 + ε^2 M/2
  E[G_X] + E[G_Y] = 2[(1 - ε) + ε sqrt(M)/2]

Which type of search is better for minimizing complexity?

L7: Origins of polar coding 5-6/40
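The expected-question-count formulas above are easy to check numerically. The sketch below (function names and parameter choices are mine) implements them and runs a small Monte Carlo for the 1-D search; with M near 1/ε^2 bins, the 2-D search under correlated loss already needs dozens of questions on average while the 1-D search stays at a handful.

```python
import random

# Grid with M = s*s bins; (X, Y) uniform over {1,...,s}^2, so M = s^2.
def e_2d_correlated(eps, s):
    # Both coordinates lost together w.p. eps; a blind 2-D scan takes ~M/2 questions.
    return (1 - eps) + eps * s * s / 2

def e_2d_independent(eps, s):
    # Each coordinate lost independently w.p. eps.
    return (1 - eps) ** 2 + 2 * eps * (1 - eps) * s / 2 + eps ** 2 * s * s / 2

def e_1d(eps, s):
    # Ask "Is X = x?" / "Is Y = y?" separately; the loss model no longer matters.
    return 2 * ((1 - eps) + eps * s / 2)

def simulate_1d(eps, s, trials, rng):
    questions = 0
    for _ in range(trials):
        for _coord in (0, 1):
            if rng.random() < eps:              # coordinate forgotten: blind scan
                questions += rng.randint(1, s)  # item position is uniform
            else:
                questions += 1                  # remembered: one confirming question
    return questions / trials

eps, s = 0.01, 100   # M = 10^4 bins, i.e. M on the order of 1/eps^2
est = simulate_1d(eps, s, 100000, random.Random(1))
print(e_2d_correlated(eps, s), e_1d(eps, s), est)
```

For these values the 2-D correlated-loss search averages about 51 questions while the 1-D search averages about 3, matching the cutoff discussion that follows.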

58

Search complexities, cutoff

                      Correlated loss           Independent loss
E[G_XY]               O(1) if M = o(1/ε)        O(1) if M = o(1/ε^2)
E[G_X] + E[G_Y]       O(1) if M = o(1/ε^2)      O(1) if M = o(1/ε^2)

Cutoff: the point beyond which search complexity is not O(1) but grows with M
The 1-D search cutoff is better than the 2-D search cutoff under the correlated loss model
The cutoffs are the same under the independent loss model

Search complexity: Conclusions drawn

In order to reduce the complexity of pointwise search for an object under noisy observations:
Define object features and search feature by feature
Define the features so that the observation noise across them has positive correlation

L7: Origins of polar coding 7-8/40

Convolutional codes, Sequential decoding, ...

Convolutional codes were invented by P. Elias (1955)
Sequential decoding by J. M. Wozencraft (1957)
Fano's algorithm (1963), US Patent 3,457,562 (1969)
SD enjoyed popularity in the 1960s
First coding system in space
Viterbi algorithm (1967)
SD lost ground to the Viterbi algorithm in the 1970s and never recovered

Sequential decoding: the algorithm

SD is a search algorithm for the correct path in a tree code

L7: Origins of polar coding 9-10/40

59

Sequential decoding: the metric

SD uses a metric to distinguish the correct path from the incorrect ones
Fano's metric:
  Γ(y^n, x^n) = log [ P(y^n|x^n) / P(y^n) ] - nR
where n is the path length, x^n the candidate path, y^n the received sequence, and R the code rate.

Sequential decoding: the cutoff rate

SD achieves arbitrarily reliable communication at constant average complexity per bit at rates below a (computational) cutoff rate R_comp
For a channel with transition probabilities W(y|x), R_comp equals
  R_0 = max_Q -log Σ_y [ Σ_x Q(x) sqrt(W(y|x)) ]^2
Achievability: Wozencraft (1957), Reiffen (1962), Fano (1963), Stiglitz and Yudkin (1964)
Converse: Jacobs and Berlekamp (1967)
Refinements: Wozencraft and Jacobs (1965), Savage (1966), Gallager (1968), Jelinek (1968), Forney (1974), Arıkan (1986)

L7: Origins of polar coding 11-12/40

Rules of the game: pointwise, no look-ahead

SD visits nodes at level N in a certain order
It forgets what it saw beyond level N upon backtracking
Let G_N be the number of nodes searched (visited) at level N until the correct node is found
Let R be the code rate
There exist codes s.t. E[G_N] is upper-bounded by roughly 1 + 2^{N(R - R_0)}
For any code of rate R, E[G_N] is lower-bounded by roughly 2^{N(R - R_0)}

R_0 as an error exponent

Random coding exponent, (N,R) codes: P_e ≤ 2^{-N E_r(R)}
Union bound: P_e ≤ 2^{-N(R_0 - R)}
[Figure: E_r(R) vs R; at low rates the exponent follows the union-bound line of slope -1 through R_0, and it vanishes at capacity.]

L7: Origins of polar coding 13-14/40
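The R_0 expression can be evaluated directly for any discrete memoryless channel. A minimal sketch (helper names are mine; the maximizing Q is taken as uniform, which is optimal for symmetric channels such as the BSC):

```python
import math

def cutoff_rate(W, Q):
    """R0 = -log2 sum_y (sum_x Q(x) * sqrt(W[x][y]))^2 for a DMC W[x][y]."""
    num_outputs = len(W[0])
    total = 0.0
    for y in range(num_outputs):
        inner = sum(Q[x] * math.sqrt(W[x][y]) for x in range(len(W)))
        total += inner * inner
    return -math.log2(total)

def bsc(p):
    # Binary symmetric channel with crossover probability p
    return [[1 - p, p], [p, 1 - p]]

p = 0.1
r0 = cutoff_rate(bsc(p), [0.5, 0.5])
closed_form = 1 - math.log2(1 + 2 * math.sqrt(p * (1 - p)))
print(r0, closed_form)  # both ~ 0.3219 bits
```

For the BSC the sum collapses to the well-known closed form R_0 = 1 - log2(1 + 2 sqrt(p(1-p))), which the sketch reproduces.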

60

R_0 as a figure of merit

For a while, R_0 appeared as a realistic goal
A figure of merit in the design of modulation schemes:
Wozencraft and Jacobs, Principles of Communication Engineering, 1965
Wozencraft and Kennedy, "Modulation and demodulation for probabilistic coding," IEEE Trans. Inform. Theory, 1966
Massey, "Coding and modulation in digital communications," Zürich, 1974
Forney gives a first-hand account of this situation in his 1995 Shannon Lecture

R_0 vs C

Fano (1963) wrote: "The author does not know of any channel for which R_comp is less than (1/2)C, but no definite lower bound to R_comp has yet been found."
An example came in 1981 that showed R_0 could be arbitrarily small as a fraction of C
But in fact a paradoxical result had already come from Pinsker (1965) that showed the flaky nature of R_0

L7: Origins of polar coding 15-16/40

Boosting the cutoff rate

Goal: Find SD schemes with R_comp larger than R_0
R_0 is a fundamental limit if one follows the rules of the game:
Single searcher
No look-ahead
To boost the cutoff rate, change one or both of these rules:
Use multiple sequential decoders
Provide look-ahead

Pinsker's scheme (1965)

Block coding just below capacity: K/N ≈ C(W)
N large, block error rate small: P_e ≈ 2^{-O(N)}
Each SD sees a memoryless BSC with R_0 near 1
Boosts the cutoff rate to capacity

L7: Origins of polar coding 17-18/40

61

A scheme that doesn't work

[Figure: block diagram; cutoff rate = R_0 (derived vector channel).]
No improvement in cutoff rate

Equivalent scheme

[Figure: equivalent block diagram of the derived vector channel.]

L7: Origins of polar coding 19-20/40

A conservation law for the cutoff rate

Parallel channels theorem (Gallager, 1965):
  R_0(derived vector channel) ≤ N R_0(W)
Cleaning up the channel by pre-/post-processing can only hurt R_0
Shows that boosting the cutoff rate requires more than one sequential decoder

Channel splitting to boost cutoff rate (Massey, 1981)

Begin with a quaternary erasure channel (QEC)

L7: Origins of polar coding 21-22/40

62

Channel splitting to boost cutoff rate (Massey, 1981)

Relabel the inputs
Split the QEC into two binary erasure channels (BECs)
The BECs are fully correlated: erasures occur jointly

L7: Origins of polar coding 23-24/40

Capacity, cutoff rate for one QEC vs two BECs

Ordinary coding of the QEC (E - QEC - D):
  C(QEC) = 2(1 - ε)
  R_0(QEC) = log [ 4 / (1 + 3ε) ]
Independent coding of the BECs (E - BEC - D, twice):
  C(BEC) = 1 - ε
  R_0(BEC) = log [ 2 / (1 + ε) ]

Cutoff rate improvement by splitting

[Figure: capacity and cutoff rate (bits) vs erasure probability ε, showing the cutoff rate of the QEC, the cutoff rate of the BEC, the sum cutoff rate after splitting, and the capacity of the QEC.]

L7: Origins of polar coding 25-26/40
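The closed forms above make the splitting gain a one-line computation. A sketch (function names are mine) comparing the sum cutoff rate of the two BECs against the cutoff rate of the original QEC:

```python
import math

def r0_qec(eps):
    # Cutoff rate (bits) of the quaternary erasure channel with erasure prob. eps
    return math.log2(4 / (1 + 3 * eps))

def r0_bec(eps):
    # Cutoff rate (bits) of the binary erasure channel
    return math.log2(2 / (1 + eps))

for eps in (0.1, 0.3, 0.5):
    gain = 2 * r0_bec(eps) - r0_qec(eps)   # = log2((1+3*eps)/(1+eps)^2) >= 0
    print(eps, round(gain, 4))
```

Algebraically, 2 R_0(BEC) - R_0(QEC) = log2[(1+3ε)/(1+ε)^2], which is nonnegative on [0,1] since 1+3ε ≥ 1+2ε+ε^2; the gain is zero only at ε = 0 and ε = 1.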

63

Why does Massey's scheme work?

Why do we have 2 R_0(BEC) ≥ R_0(QEC)?
Let G_N denote the number of guesses at level N until finding the correct node
The joint decoder has quadratic complexity:
  G_N(QEC) = G_N(BEC_1) G_N(BEC_2) = G_N(BEC_1)^2   (correlated erasures)
  E[G_N(QEC)] = E[G_N(BEC_1)^2] ≥ (E[G_N(BEC_1)])^2
The second moment of G_N(BEC) becomes exponentially large already at a rate below R_0(BEC).

Comparison of Pinsker's and Massey's schemes

Pinsker:
Construct a superchannel by combining independent copies of a given DMC
Split the superchannel into correlated subchannels
Ignore correlations between the subchannels; encode and decode them independently
Can be used universally; can achieve capacity; not practical

Massey:
Split the given DMC into correlated subchannels
Ignore correlations between the subchannels; encode and decode them independently
Applicable only to specific channels; cannot achieve capacity; practical

L7: Origins of polar coding 27-28/40

Prescription for a new scheme

Consider small constructions
Retain independent encoding for the subchannels
Do not ignore correlations between subchannels at the expense of capacity
This points to multi-level coding and successive cancellation decoding

Notation

Let V : F_2 = {0,1} → Y be an arbitrary binary-input memoryless channel
Let (X,Y) be an input-output ensemble for channel V with X uniform on F_2
The (symmetric) capacity is defined as
  I(V) = I(X;Y) = Σ_{y∈Y} Σ_{x∈F_2} (1/2) V(y|x) log [ V(y|x) / ( (1/2) V(y|0) + (1/2) V(y|1) ) ]
The (symmetric) cutoff rate is defined as
  R_0(V) = R_0(X;Y) = -log Σ_{y∈Y} [ Σ_{x∈F_2} (1/2) sqrt(V(y|x)) ]^2

L7: Origins of polar coding 29-30/40
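Both quantities just defined can be computed directly from the transition probabilities. A sketch (helper names are mine) for a binary-input DMC given as two output PMFs, checking the general fact R_0(V) ≤ I(V):

```python
import math

def symmetric_capacity(W):
    """I(W) in bits for a binary-input DMC; W[x] is the output PMF given input x."""
    cap = 0.0
    for y in range(len(W[0])):
        q = 0.5 * (W[0][y] + W[1][y])   # output distribution under uniform input
        for x in (0, 1):
            if W[x][y] > 0:
                cap += 0.5 * W[x][y] * math.log2(W[x][y] / q)
    return cap

def symmetric_cutoff_rate(W):
    """R0(W) = 1 - log2(1 + Z(W)), with Z the Bhattacharyya parameter."""
    Z = sum(math.sqrt(W[0][y] * W[1][y]) for y in range(len(W[0])))
    return 1 - math.log2(1 + Z)

bsc = [[0.9, 0.1], [0.1, 0.9]]
print(symmetric_capacity(bsc), symmetric_cutoff_rate(bsc))
```

For the BSC(0.1) this gives I ≈ 0.531 bits and R_0 ≈ 0.322 bits; the gap between the two is exactly what the boosting schemes of this lecture try to close.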

64

Basic module for a low-complexity scheme

Combine two copies of W:
  U_1 → X_1 → W → Y_1
  U_2 → X_2 → W → Y_2
via G_2, and split to create two bit-channels
  W_1 : U_1 → (Y_1, Y_2)
  W_2 : U_2 → (Y_1, Y_2, U_1)

The first bit-channel W_1

W_1 : U_1 → (Y_1, Y_2), with U_2 random
C(W_1) = I(U_1; Y_1, Y_2)

L7: Origins of polar coding 31-32/40

The second bit-channel W_2

W_2 : U_2 → (Y_1, Y_2, U_1)
C(W_2) = I(U_2; Y_1, Y_2, U_1)

The 2x2 transformation is information lossless

With independent, uniform U_1, U_2:
  I(W_1) = I(U_1; Y_1 Y_2),  I(W_2) = I(U_2; Y_1 Y_2 U_1).
Thus,
  I(W_1) + I(W_2) = I(U_1 U_2; Y_1 Y_2) = 2 I(W),
and
  I(W_1) ≤ I(W) ≤ I(W_2).

L7: Origins of polar coding 33-34/40

65

The 2x2 transformation creates cutoff rate

With independent, uniform U_1, U_2,
  R_0(W_1) = R_0(U_1; Y_1 Y_2),  R_0(W_2) = R_0(U_2; Y_1 Y_2 U_1).

Theorem (2015)
Correlation helps create cutoff rate:
  R_0(W_1) + R_0(W_2) ≥ 2 R_0(W),
with equality iff W is a perfect channel, I(W) = 1, or a pure noise channel, I(W) = 0.

Cutoff rates start polarizing:
  R_0(W_1) ≤ R_0(W) ≤ R_0(W_2)

Cutoff Rate Polarization Theorem (2016)
The cutoff rates {R_0(U_i; Y^N U^{i-1})} of the channels created by the recursive transformation converge to their extremal values, i.e.,
  (1/N) #{ i : R_0(U_i; Y^N U^{i-1}) ≈ 1 } → I(W)
and
  (1/N) #{ i : R_0(U_i; Y^N U^{i-1}) ≈ 0 } → 1 - I(W).
Remark: {I(U_i; Y^N U^{i-1})} also polarize.

L7: Origins of polar coding 35-36/40

Sequential decoding with successive cancellation

Use the recursive construction to generate N bit-channels with cutoff rates R_0(U_i; Y^N U^{i-1}), 1 ≤ i ≤ N.
Encode the bit-channels independently using convolutional coding
Decode the bit-channels one by one using sequential decoding and successive cancellation
The achievable sum cutoff rate is
  Σ_{i=1}^{N} R_0(U_i; Y^N U^{i-1}),
which approaches N I(W) as N increases.

Final step: Doing away with sequential decoding

Due to polarization, the rate loss is negligible if one does not use the bad bit-channels
The rate of polarization is strong enough that a vanishing frame error rate can be achieved even if the good bit-channels are used uncoded
The resulting system has no convolutional encoding and no sequential decoding, only successive cancellation decoding

L7: Origins of polar coding 37-38/40
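The cutoff-rate creation of the 2x2 step can be seen concretely on a BEC, where one polarization step maps BEC(ε) to the pair BEC(2ε - ε^2) and BEC(ε^2). A sketch (function names are mine):

```python
import math

def r0_bec(eps):
    # Cutoff rate (bits) of a BEC with erasure probability eps
    return math.log2(2 / (1 + eps))

def polar_step_bec(eps):
    # One 2x2 step: W -> (W1, W2) = (BEC(2*eps - eps^2), BEC(eps^2))
    return 2 * eps - eps ** 2, eps ** 2

eps = 0.5
minus, plus = polar_step_bec(eps)
created = r0_bec(minus) + r0_bec(plus) - 2 * r0_bec(eps)
print(created)  # > 0: the transform strictly creates cutoff rate
```

At ε = 0.5 the created cutoff rate is about 0.04 bits per step, while the capacities (1 - ε per channel) are exactly conserved, mirroring the theorem above.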

66

Polar coding

To communicate at rate R < I(W):
Pick N, and K = NR good indices i such that I(U_i; Y^N U^{i-1}) is high,
let the transmitter set U_i to uncoded binary data for good indices, and set U_i to random but publicly known values for the rest,
let the receiver decode the U_i successively: U_1 from Y^N; U_i from (Y^N, Û^{i-1}).

Polar coding complexity and performance

Theorem (2007)
With the particular one-to-one mapping described here and with successive cancellation decoding, polar codes are I(W)-achieving,
encoding complexity is O(N log N),
decoding complexity is O(N log N),
probability of error decays like 2^{-sqrt(N)} (with E. Telatar, 2008).

L7: Origins of polar coding 39-40/40

Lecture 8: Coding for bandlimited channels

Objective: To discuss coding for bandlimited channels in general and with polar coding in particular
Topics
Bit-interleaved coded modulation (BICM)
Multi-level coding and modulation (MLCM)
Lattice coding
Direct polarization approach

L8: Coding for bandlimited channels 1-2/37
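The two ingredients just described, the transform u·G_N and the choice of good indices, fit in a few lines for the BEC, where the Bhattacharyya parameters obey the exact recursion z → (2z - z^2, z^2). A sketch (function names are mine; the index ordering follows the natural, non-bit-reversed convention):

```python
def polar_transform(u):
    """Compute x = u * F^(kron n) over GF(2), F = [[1,0],[1,1]], len(u) = 2^n."""
    if len(u) == 1:
        return u[:]
    half = len(u) // 2
    a, b = u[:half], u[half:]
    left = polar_transform([ai ^ bi for ai, bi in zip(a, b)])  # (u_a + u_b) G
    right = polar_transform(b)                                 # u_b G
    return left + right

def bec_bhattacharyya(eps, n):
    """Bhattacharyya parameters of the N = 2^n bit-channels of a BEC(eps)."""
    zs = [eps]
    for _ in range(n):
        # split every channel into its minus (2z - z^2) and plus (z^2) child
        zs = [f for z in zs for f in (2 * z - z * z, z * z)]
    return zs

zs = bec_bhattacharyya(0.5, 3)                      # N = 8 bit-channels
good = sorted(range(8), key=lambda i: zs[i])[:4]    # K = 4 most reliable indices
print([round(z, 4) for z in zs], sorted(good))
print(polar_transform([0, 0, 0, 1]))                # last row of G_4: [1, 1, 1, 1]
```

For BEC(0.5) and N = 8 this recovers the classic good-index set {3, 5, 6, 7} (0-based), and the erasure "mass" Σ z_i is conserved by the recursion, reflecting capacity conservation.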

67

The AWGN Channel

The AWGN channel is a continuous-time channel
  Y(t) = X(t) + N(t)
such that the input X(t) is a random process bandlimited to W, subject to a power constraint E[X^2(t)] ≤ P, and N(t) is white Gaussian noise with power spectral density N_0/2.

Capacity

Shannon's formula gives the capacity of the AWGN channel as
  C = W log_2(1 + P/(N_0 W))  (bits/s)

L8: Coding for bandlimited channels 3-4/37

Signal Design Problem

The continuous-time and real-number interface of the AWGN channel is inconvenient for digital communications.
Need to convert from continuous to discrete time
Need to convert from real numbers to a binary interface

Discrete Time Model

An AWGN channel of bandwidth W gives rise to 2W independent discrete-time channels per second with input-output mapping
  Y = X + N
X is a random variable with zero mean and energy E[X^2] ≤ P/2W
N is Gaussian noise with zero mean and energy N_0/2.
It is customary to normalize signal energies to joules per two dimensions and define E_s = P/W (joules/2D) as the signal energy per two dimensions. One defines the signal-to-noise ratio as E_s/N_0.

L8: Coding for bandlimited channels 5-6/37
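Shannon's formula is a one-liner worth keeping at hand; the numeric values below are illustrative choices of mine, not from the slides:

```python
import math

def awgn_capacity(bandwidth_hz, power_w, n0):
    """Shannon capacity in bits/s: C = W * log2(1 + P / (N0 * W))."""
    return bandwidth_hz * math.log2(1 + power_w / (n0 * bandwidth_hz))

# 1 MHz of bandwidth at SNR P/(N0*W) = 15 gives exactly 4 bits/s/Hz.
print(awgn_capacity(1e6, 15.0, 1e-6))  # 4.0 Mb/s
```

Doubling the bandwidth at fixed P and N_0 raises capacity, but less than proportionally, since the per-hertz SNR halves.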

68

Capacity

The capacity of the discrete-time AWGN channel is given by
  C = (1/2) log_2(1 + E_s/N_0)  (bits/D),
achieved by i.i.d. Gaussian inputs X ~ N(0, E_s/2) per dimension.

Signal Design Problem

Now we need a digital interface instead of real-valued inputs.
Select a subset A ⊂ R^n as the signal set or modulation alphabet.
Finding a signal set with good Euclidean distance properties and other desirable features is the signal design problem.
Typically, the dimension n is 1 or 2.

L8: Coding for bandlimited channels 7-8/37

Separation of coding and modulation

Each constellation A has a capacity C_A (bits/D), which is a function of E_s/N_0.
The spectral efficiency ρ (bits/D) has to satisfy
  ρ < C_A(E_s/N_0)
at the operating E_s/N_0.
The spectral efficiency is the product of two terms:
  ρ = R · log_2(|A|) / dim(A),
where R (dimensionless) is the rate of the FEC.
For a given ρ, there are many choices w.r.t. R and A.

Cutoff rate: A simple measure of reliability

Each constellation A has a cutoff rate R_{0,A} (bits/D), a function of E_s/N_0, such that through random coding one can guarantee the existence of coding and modulation schemes with probability of frame error
  P_e < 2^{-N [ R_{0,A}(E_s/N_0) - ρ ]},
where N is the frame length in modulation symbols.

L8: Coding for bandlimited channels 9-10/37
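The discrete-time capacity formula also yields the Shannon limit on E_b/N_0 for a given spectral efficiency, a quantity the later performance plots are measured against. A small sketch (function names are mine):

```python
import math

def capacity_bits_per_2d(es_over_n0):
    # C = log2(1 + Es/N0) bits per two dimensions
    return math.log2(1 + es_over_n0)

def min_ebn0_db(rho):
    # With Es = rho * Eb, the Shannon limit is Eb/N0 = (2^rho - 1) / rho
    return 10 * math.log10((2 ** rho - 1) / rho)

print(min_ebn0_db(1e-9))  # rho -> 0 recovers the ultimate limit, about -1.59 dB
print(min_ebn0_db(3.0))   # minimum Eb/N0 for 3 b/2D
```

As ρ → 0 the expression tends to ln 2, i.e. -1.59 dB, and it grows monotonically with ρ, which is why high-order constellations need progressively more E_b/N_0.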

69

Sequential decoding and cutoff rate

Sequential decoding (Wozencraft, 1957) is a decoding algorithm for convolutional codes that can achieve spectral efficiencies as high as the cutoff rate at constant average complexity per decoded bit.
The difference between cutoff rate and capacity at high E_s/N_0 is less than 3 dB.
This was regarded as the solution of the coding and modulation problem in the early 70s, and interest in the problem waned. (See Forney's 1995 Shannon Lecture for this story.)
Polar coding grew out of attempts to improve the cutoff rate of channels by simple combining and splitting operations.

M-ary Pulse Amplitude Modulation

A 1-D signal set with A = {±α, ±3α, ..., ±(M-1)α}.
Average energy: E_s = 2α^2 (M^2 - 1)/3  (joules/2D)
Consider the capacity and cutoff rate.

L8: Coding for bandlimited channels 11-12/37

Capacity of M-PAM

[Figure: capacity (bits) vs E_s/N_0 (dB) for PAM-2 through PAM-128, against the Shannon limit.]
M-PAM is good enough from a capacity viewpoint.

Cutoff rate of M-PAM

[Figure: cutoff rate (bits) vs E_s/N_0 (dB) for PAM-2 through PAM-1024, against the Shannon capacity, the Shannon cutoff rate, and the Gaussian-input cutoff rate.]
M-PAM is satisfactory also in terms of cutoff rate.

L8: Coding for bandlimited channels 13-14/37
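The M-PAM alphabet and its average-energy formula can be checked directly. A sketch (function names are mine):

```python
def pam_alphabet(M, alpha=1.0):
    # {+/- alpha, +/- 3*alpha, ..., +/- (M-1)*alpha}
    return [alpha * (2 * m - M + 1) for m in range(M)]

def avg_energy_per_2d(M, alpha=1.0):
    # Symbols are equiprobable; energy is quoted per two dimensions
    points = pam_alphabet(M, alpha)
    return 2 * sum(a * a for a in points) / len(points)

M = 8
print(pam_alphabet(M))        # [-7.0, -5.0, ..., 5.0, 7.0]
print(avg_energy_per_2d(M))   # equals 2*(M^2 - 1)/3 = 42.0 for alpha = 1
```

The brute-force average matches the closed form E_s = 2α^2 (M^2 - 1)/3 for every M, confirming the reconstructed formula.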

70

Conventional approach

Given a target spectral efficiency ρ and a target error rate P_e at a specific E_s/N_0:
select M large enough so that the M-PAM capacity is close enough to the Shannon capacity at the given E_s/N_0,
apply coding external to modulation to achieve the desired P_e.
Such separation of coding and modulation was first challenged successfully by Ungerboeck (1980).
However, with the advent of powerful codes at affordable complexity, there is a return to the conventional design methodology.

How does it work in practice?

WiMAX CTC codes: fixed spectral efficiency, different modulations
[Figure: FER vs E_s/N_0 (dB) for CTC(576,432) with 16-QAM and CTC(864,432) with 64-QAM.]
Spectral efficiency = 3 b/2D for both cases. It takes 144 symbols to carry the payload in both cases.
Gap to Shannon about 3 dB at FER 1E-3.
Provides a coding gain of 4.8 dB over uncoded transmission.
Theory and practice don't match here!

L8: Coding for bandlimited channels 15-16/37

Why change modulation instead of just the code rate?

Suppose we fix the modulation as 64-QAM and wish to deliver data at spectral efficiencies 1, 2, 3, 4, 5 b/2D.
We would need a coding scheme that works well at rates 1/6, 1/3, 1/2, 2/3, 5/6.
The inability to deliver high-quality coding over a wide range of rates forces one to change the order of modulation.
The difficulty here is practical: it is a challenge to have a coding scheme that works well over all rates from 0 to 1.

Alternative: Fixed code, variable modulation

[Figure: WiMAX, the same rate-3/4 code CTC(576,432) with 4-QAM, 16-QAM, and 64-QAM; spectral efficiencies 1.5, 3, and 4.5 b/2D; FER vs E_s/N_0 (dB).]
The gap to the Shannon limit widens slightly with increasing modulation order, but in general there is good agreement.

L8: Coding for bandlimited channels 17-18/37

71

Polar coding and modulation

Polar codes can be applied to modulation in at least three different ways:
Direct polarization
Multi-level techniques
Polar lattices
BICM

Direct Method

Idea: Given a system with q-ary modulation, treat it as an ordinary q-ary input memoryless channel and apply a suitable polarization transform.
A theory of q-ary polarization exists:
Şasoğlu, E., E. Telatar, and E. Arıkan, "Polarization for arbitrary discrete memoryless channels," IEEE ITW, 2009.
Sahebi, A. G. and S. S. Pradhan, "Multilevel polarization of polar codes over arbitrary discrete memoryless channels," IEEE Allerton, 2011.
Park, W. and A. Barg, "Polar codes for q-ary channels," IEEE Trans. Inform. Theory, 2013.

L8: Coding for bandlimited channels 19-20/37

The difficulty with the direct approach is the complexity of decoding.
G. Montorsi's ADBP is a promising approach for reducing the complexity here.

Multi-Level Modulation (Imai and Hirakawa, 1977)

Represent (if possible) each channel input symbol as a vector X = (X_1, X_2, ..., X_r); then the capacity can be written as a sum of capacities of smaller channels by the chain rule:
  I(X;Y) = I(X_1, X_2, ..., X_r; Y) = Σ_{i=1}^{r} I(X_i; Y | X_1, ..., X_{i-1}).
This splits the original channel into r parallel channels, which are encoded independently and decoded using successive cancellation decoding.
Polarization is a natural complement to MLM.

L8: Coding for bandlimited channels 21-22/37
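The chain-rule split can be computed exactly for small discrete channels. A sketch (helper names are mine) which, applied to the QEC of Lecture 7 with erasure probability 0.2 and natural 2-bit labels, recovers Massey's two fully correlated BECs, each of capacity 0.8:

```python
import itertools
import math

def mutual_information(pxy):
    """I(X;Y) in bits from a joint PMF given as {(x, y): prob}."""
    px, py = {}, {}
    for (x, y), p in pxy.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in pxy.items() if p > 0)

def layer_capacities(W, labels):
    """I(X_1;Y), I(X_2;Y|X_1), ... for a channel W[x][y] with bit labels."""
    r = len(labels[0])
    caps = []
    for i in range(r):
        total = 0.0
        for prefix in itertools.product((0, 1), repeat=i):
            xs = [x for x in range(len(W)) if tuple(labels[x][:i]) == prefix]
            pxy = {}
            for x in xs:                       # uniform inputs within the prefix
                for y in range(len(W[0])):
                    key = (labels[x][i], y)
                    pxy[key] = pxy.get(key, 0.0) + W[x][y] / len(xs)
            total += mutual_information(pxy)
        caps.append(total / 2 ** i)            # average over equiprobable prefixes
    return caps

# QEC(0.2): inputs 0..3, output symbol 4 is the erasure; natural 2-bit labels.
eps = 0.2
qec = [[(1 - eps) if y == x else (eps if y == 4 else 0.0) for y in range(5)]
       for x in range(4)]
labels = [(0, 0), (0, 1), (1, 0), (1, 1)]
caps = layer_capacities(qec, labels)
print(caps)  # two layers, each 1 - eps = 0.8 bits, summing to C(QEC) = 1.6
```

The same routine applied to a quantized PAM channel gives the per-layer capacities plotted in the 8-PAM example that follows.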

72

Polar coding with multi-level modulation

Already a well-studied subject:
Arıkan, E., "Polar Coding," Plenary Talk, ISIT 2011.
Seidl, M., Schenk, A., Stierstorfer, C., and Huber, J. B., "Polar-coded modulation," IEEE Trans. Comm., 2013.
Seidl, M., Schenk, A., Stierstorfer, C., and Huber, J. B., "Multilevel polar-coded modulation," IEEE ISIT, 2013.
Ionita, Corina, et al., "On the design of binary polar codes for high-order modulation," IEEE GLOBECOM, 2014.
Beygi, L., Agrell, E., Kahn, J. M., and Karlsson, M., "Coded modulation for fiber-optic networks," IEEE Sig. Proc. Mag., 2014.

Example: 8-PAM as 3 bit channels

PAM signals are selected by three bits (b_1, b_2, b_3): bit b_1 selects within 2-PAM, bit b_2 refines to 4-PAM, bit b_3 refines to 8-PAM
Three layers of binary channels are created
Each layer is encoded independently
Layers are decoded in the order b_3, b_2, b_1

L8: Coding for bandlimited channels 23-24/37

Polarization across layers by natural labeling

[Figure: layer 1, 2, and 3 capacities and their sum vs SNR (dB), against the Shannon limit.]
Most coding work needs to be done at the least significant bits.

Performance comparison: Polar vs. Turbo

Turbo code: WiMAX CTC, duobinary, memory 3; QAM over the AWGN channel; Gray mapping; BICM. Simulator: Coded Modulation Library.
Polar code: standard construction; successive cancellation decoding; QAM over the AWGN channel; natural mapping; multi-level PAM.

L8: Coding for bandlimited channels 25-26/37

73

Example: 8-PAM as 3 bit channels (repeated)

Multi-layering jump-starts polarization

[Figure: layer 1, 2, and 3 capacities and their sum vs SNR (dB), against the Shannon limit.]

L8: Coding for bandlimited channels 27-28/37

4-QAM, Rate 1/2

[Figure: FER vs E_b/N_0 (dB) for Polar(512,256) and Polar(1024,512) vs CTC(480,240) and CTC(960,480), all with 4-QAM.]

16-QAM, Rate 3/4

[Figure: FER vs E_b/N_0 (dB) for Polar(512,384) vs CTC(192,144), CTC(384,288), and CTC(576,432), all with 16-QAM.]

L8: Coding for bandlimited channels 29-30/37

74

64-QAM, Rate 5/6

[Figure: FER vs E_b/N_0 (dB) for Polar(768,640) and Polar(384,320) vs CTC(576,480), all with 64-QAM.]

Complexity comparison: 64-QAM, Rate 5/6

[Table: average decoding time in milliseconds per codeword (ms/cw) at two E_b/N_0 operating points for CTC(576,432), Polar(768,640), and Polar(384,320).]
Polar codes show a complexity advantage against CTC codes.
Both decoders were implemented as MATLAB mex functions. The polar decoder is a successive cancellation decoder; the CTC decoder is a public-domain decoder (CML). Profiling was done with the MATLAB Profiler. An iteration limit was set for the CTC decoder; the average number of iterations was 3.3 at the second operating point. The CTC decoder used a linear approximation to log-MAP, while the polar decoder used exact log-MAP.

L8: Coding for bandlimited channels 31-32/37

Lattices and polar coding

Yan, Ling, and Liu explored the connection between lattices and polar coding:
Yan, Yanfei, and Cong Ling, "A construction of lattices from polar codes," IEEE ITW, 2012.
Yan, Yanfei, Ling Liu, Cong Ling, and Xiaofu Wu, "Construction of capacity-achieving lattice codes: Polar lattices," arXiv preprint arXiv:1411.0187, 2014.
Yan et al. used Barnes-Wall lattice constructions such as
  BW_16 = RM(1,4) + 2 RM(3,4) + 4 (Z^16)
as a template for constructing polar lattices of the type
  P_16 = P(1,4) + 2 P(3,4) + 4 (Z^16)
and demonstrated by simulations that polar lattices perform better.

L8: Coding for bandlimited channels 33-34/37

75

BICM

BICM [Zehavi, 1992], [Caire, Taricco, Biglieri, 1998] is the dominant technique in modern wireless standards such as LTE.
As in MLM, BICM splits the channel input symbols into a vector X = (X_1, X_2, ..., X_r), but strives to do so such that
  I(X;Y) = I(X_1, X_2, ..., X_r; Y) = Σ_{i=1}^{r} I(X_i; Y | X_1, ..., X_{i-1}) ≈ Σ_{i=1}^{r} I(X_i; Y).

BICM vs Multi-Level Modulation

Why has BICM won over MLM and other techniques in practice?
MLM is provably capacity-achieving; BICM is suboptimal, but the rate penalty is tolerable.
MLM has to do delicate rate-matching at individual layers, which is difficult with turbo and LDPC codes.
BICM is well matched to the iterative decoding methods used with turbo and LDPC codes.
MLM suffers extra latency due to multi-stage decoding (mitigated in part by the lack of need for protecting the upper layers by long codes).
With MLM, the overall code is split into shorter codes, which weakens performance (one may mix and match the block lengths of each layer to alleviate this problem).

L8: Coding for bandlimited channels 35-36/37

76 Lecture 9 Polar codes for selected applications Objective: Review the literature on polar coding for selected applications Topics 6 GHz wireless Optical access networks Millimeter ave 6 GHz Communications 7 GHz of bandwidth available (57-64 GHz allocated in the US) Free-space path loss (4πd/λ) 2 is high at λ = 5 mm but compensated by large antenna arrays. Propagation range limited severely by O 2 absorption. Cells confined to rooms. 5G Ultra reliable low latency communications (URLLC) Machine type communications (MTC) 5G channel coding at Gb/s throughput L9: Polar codes for selected applications 2/27 L9: Polar codes for selected applications 6 GHz ireless 3/27 Millimeter ave 6 GHz Communications Recent IEEE 82..ad i-fi standard operates at 6 GHz ISM band and uses an LDPC code with block length 672 bits, rates /2, 5/8, 3/4, 3/6. Two papers compare polar codes that study polar coding for 6 GHz applications: Z. ei, B. Li, and C. Zhao, On the polar code for the 6 GHz millimeter-wave systems, EURASIP, JCN, 25. Youn Sung Park, Energy-Effcient Decoders of Near-Capacity Channel Codes, PhD Dissertation, The University of Michigan, 24. Millimeter ave 6 GHz Communications ei et al compare polar codes with the LDPC codes used in the standard using a nonlinear channel model ei, B. Li, and C. Zhao, On the polar code for the 6 GHz millimeter-wave systems, EURASIP, JCN, 25. L9: Polar codes for selected applications 6 GHz ireless 4/27 L9: Polar codes for selected applications 6 GHz ireless 5/27

77 Millimeter ave 6 GHz Communications ei et al compare polar codes with the LDPC codes used in the standard using a nonlinear channel model Millimeter ave 6 GHz Communications ei et al compare polar codes with the LDPC codes used in the standard using a nonlinear channel model ei, B. Li, and C. Zhao, On the polar code for the 6 GHz millimeter-wave systems, EURASIP, JCN, 25. ei, B. Li, and C. Zhao, On the polar code for the 6 GHz millimeter-wave systems, EURASIP, JCN, 25. L9: Polar codes for selected applications 6 GHz ireless 6/27 L9: Polar codes for selected applications 6 GHz ireless 7/27 Polar codes vs IEEE 82.ad LDPC codes Polar codes vs IEEE 82.ad LDPC codes Park (24) gives the following performance comparison. (Park s result on LDPC conflicts with reference IEEE 82.-/432r2. hether there exists an error floor as shown needs to be confirmed independently.) In terms of implementation complexity and throughput, Park (24) gives the following figures. Source: Youn Sung Park, Energy-Efficient Decoders of Near-Capacity Channel Codes, PhD Dissertation, The University of Michigan, 24. Source: Youn Sung Park, Energy-Effcient Decoders of Near-Capacity Channel Codes, PhD Dissertation, The University of Michigan, 24. L9: Polar codes for selected applications 6 GHz ireless 8/27 L9: Polar codes for selected applications 6 GHz ireless 9/27

78 Optical access/transport network Polar codes for optical access/transport - Gb/s at E-2 BER OTU4 ( Gb/s Ethernet) and ITU G.975. standards use Reed-Solomon (RS) codes The challenge is to provide high reliability at low hardware complexity. There have been some studies of polar codes fore optical transmission. A. Eslami and H. Pishro-Nik, A practical approach to polar codes, ISIT 2. (Considers a polar-ldpc concatenated code and compares it with OTU4 RS codes.) Z. u and B. Lankl, Polar codes for low-complexity forward error correction in optical access networks, ITG-Fachbericht 248: Photonische Netze - 5, , Leipzig. (Compares polar codes with G.975. RS codes.) L. Beygi, E. Agrell, J. M. Kahn, and M. Karlsson, Coded modulation for fiber-optic networks, IEEE Sig. Proc. Mag., Mar. 24. (Coded modulation for optical transport.) L9: Polar codes for selected applications Optical access /27 L9: Polar codes for selected applications Optical access /27 Comparison of polar codes with G.975. RS codes Comparison of polar codes with G.975. RS codes Source: Z. u and B. Lankl, above reference. Source: Z. u and B. Lankl, above reference. L9: Polar codes for selected applications Optical access 2/27 L9: Polar codes for selected applications Optical access 3/27

79 Coded modulation for fiber-optic communication Coded modulation: BICM approach Split the 2 q ary channel into q bit channels and decode them independently. Main reference for this part is the paper: L. Beygi, E. Agrell, J. M. Kahn, and M. Karlsson, Coded modulation for fiber-optic networks, IEEE Sig. Proc. Mag., Mar. 24. Data rates Gb/s and beyond BER E-5 Channel model: Self-interfering nonlinear distortion, additive Gaussian noise Figure source: Beygi, L., et al, Coded modulation for fiber-optic networks, IEEE Sig. Proc. Mag., Mar. 24. L9: Polar codes for selected applications Optical access 4/27 L9: Polar codes for selected applications Optical access 5/27 Coded modulation: Multi-level approach Split the 2 q ary channel into q bit channels and decode them successively. Coded modulation: BICM approach Split the 2 q ary channel into q bit channels and decode them independently. Figure source: Beygi, L., et al, Coded modulation for fiber-optic networks, IEEE Sig. Proc. Mag., Mar. 24. Figure source: Beygi, L., et al, Coded modulation for fiber-optic networks, IEEE Sig. Proc. Mag., Mar. 24. L9: Polar codes for selected applications Optical access 6/27 L9: Polar codes for selected applications Optical access 7/27

Coded modulation: TCM approach. Split the 2^q-ary channel into two classes and encode the low-order channels using a trellis hand-crafted for large Euclidean distance and ML-decoded.

Coded modulation: 2^q-ary coding. No splitting; 2^q-ary processing throughout; too complex.

Coded modulation: polar approach. Split the 2^q-ary channel into good, mediocre, and bad bit channels; apply coding only to the mediocre channels.

Coded modulation: performance comparison. [Figure source for this part: Beygi, L., et al., above reference.]
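The good/mediocre/bad split behind the polar approach can be illustrated with a toy Monte-Carlo estimate of the raw bit-level error rates of natural-labeled 8-ASK over AWGN. The constellation, noise level, and bucket thresholds here are assumptions for illustration only, not the construction of the referenced paper.

```python
import numpy as np

rng = np.random.default_rng(1)
Q, SIGMA, N = 3, 0.6, 100_000
LEVELS = np.arange(-7, 8, 2, dtype=float)   # natural-labeled 8-ASK

# Monte-Carlo estimate of each bit level's raw error rate under a
# minimum-distance symbol decision over AWGN.
s = rng.integers(0, 8, N)
y = LEVELS[s] + SIGMA * rng.normal(size=N)
s_hat = np.abs(y[:, None] - LEVELS[None, :]).argmin(axis=1)
bers = [float(np.mean(((s >> b) & 1) != ((s_hat >> b) & 1))) for b in range(Q)]

def classify(ber: float) -> str:
    """Bucket a bit level; the thresholds are arbitrary, for illustration."""
    if ber < 0.02:
        return "good: leave uncoded"
    if ber > 0.06:
        return "bad: freeze (transmit known bits)"
    return "mediocre: apply coding here"

for b, ber in enumerate(bers):
    print(f"bit level {b}: BER ~ {ber:.4f} -> {classify(ber)}")
```

With natural labeling the least significant bit flips at every decision boundary and the most significant bit at only one, so the three levels land in three different quality buckets, mirroring the idea of spending coding effort only where it pays.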


More information

Layered Space-Time Codes

Layered Space-Time Codes 6 Layered Space-Time Codes 6.1 Introduction Space-time trellis codes have a potential drawback that the maximum likelihood decoder complexity grows exponentially with the number of bits per symbol, thus

More information

designing the inner codes Turbo decoding performance of the spectrally efficient RSCC codes is further evaluated in both the additive white Gaussian n

designing the inner codes Turbo decoding performance of the spectrally efficient RSCC codes is further evaluated in both the additive white Gaussian n Turbo Decoding Performance of Spectrally Efficient RS Convolutional Concatenated Codes Li Chen School of Information Science and Technology, Sun Yat-sen University, Guangzhou, China Email: chenli55@mailsysueducn

More information

ECE 476/ECE 501C/CS Wireless Communication Systems Winter Lecture 9: Error Control Coding

ECE 476/ECE 501C/CS Wireless Communication Systems Winter Lecture 9: Error Control Coding ECE 476/ECE 501C/CS 513 - Wireless Communication Systems Winter 2005 Lecture 9: Error Control Coding Chapter 8 Coding and Error Control From: Wireless Communications and Networks by William Stallings,

More information

Computer Science 1001.py. Lecture 25 : Intro to Error Correction and Detection Codes

Computer Science 1001.py. Lecture 25 : Intro to Error Correction and Detection Codes Computer Science 1001.py Lecture 25 : Intro to Error Correction and Detection Codes Instructors: Daniel Deutch, Amiram Yehudai Teaching Assistants: Michal Kleinbort, Amir Rubinstein School of Computer

More information

Basics of Error Correcting Codes

Basics of Error Correcting Codes Basics of Error Correcting Codes Drawing from the book Information Theory, Inference, and Learning Algorithms Downloadable or purchasable: http://www.inference.phy.cam.ac.uk/mackay/itila/book.html CSE

More information

IMPERIAL COLLEGE of SCIENCE, TECHNOLOGY and MEDICINE, DEPARTMENT of ELECTRICAL and ELECTRONIC ENGINEERING.

IMPERIAL COLLEGE of SCIENCE, TECHNOLOGY and MEDICINE, DEPARTMENT of ELECTRICAL and ELECTRONIC ENGINEERING. IMPERIAL COLLEGE of SCIENCE, TECHNOLOGY and MEDICINE, DEPARTMENT of ELECTRICAL and ELECTRONIC ENGINEERING. COMPACT LECTURE NOTES on COMMUNICATION THEORY. Prof. Athanassios Manikas, version Spring 22 Digital

More information

Constellation Shaping for LDPC-Coded APSK

Constellation Shaping for LDPC-Coded APSK Constellation Shaping for LDPC-Coded APSK Matthew C. Valenti Lane Department of Computer Science and Electrical Engineering West Virginia University U.S.A. Mar. 14, 2013 ( Lane Department LDPCof Codes

More information

New Forward Error Correction and Modulation Technologies Low Density Parity Check (LDPC) Coding and 8-QAM Modulation in the CDM-600 Satellite Modem

New Forward Error Correction and Modulation Technologies Low Density Parity Check (LDPC) Coding and 8-QAM Modulation in the CDM-600 Satellite Modem New Forward Error Correction and Modulation Technologies Low Density Parity Check (LDPC) Coding and 8-QAM Modulation in the CDM-600 Satellite Modem Richard Miller Senior Vice President, New Technology

More information