Lecture 13 February 23


EE/Stats 376A: Information theory    Winter 2017
Lecture 13, February 23
Lecturer: David Tse    Scribe: David L, Tong M, Vivek B

13.1 Outline

Polar codes.

13.1.1 Reading

CT: 8.1, 8.3-8.6, 9.1, 9.2

13.2 Recap: Polar Coding Introduction

Last time, we modified the repetition coding scheme to obtain a capacity-achieving coding scheme, as shown in Figure 13.1. The modified scheme takes in a two-bit message (U, V) and transmits X1 = U and X2 = U ⊕ V in two uses of the channel W.

Figure 13.1: The coding scheme on the left is a plain-vanilla repetition code for two uses of the channel W. It is modified (right) by adding an extra bit V to the second use of the channel.

Equivalence: As shown in Figure 13.1, the modified coding scheme is equivalent to transmitting U via a channel W+ and transmitting V via a channel W-, where W+ has higher capacity than W and W- has lower capacity than W. Even though the underlying physical channels do not actually transform or split, for ease of exposition we shall loosely say "channel W splits into channels W+ and W-", written W → (W+, W-), to refer to this equivalence, and we shall refer to W+ and W- as the bit channels.

13.2.1 Example: BEC(p)

In the previous lecture, we explicitly characterized the bit channels W+ and W- for W = BEC(p) (refer to Figure 13.2). In particular, we showed that

    C(W+) + C(W-) = 2 C(W).
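This conservation of capacity is easy to check numerically. The following sketch (the helper names are my own, not from the notes) uses the split formulas derived last lecture and shown in Figure 13.2, namely W+ = BEC(p^2) and W- = BEC(1 - (1-p)^2):

```python
def bec_capacity(p):
    """Capacity of a binary erasure channel BEC(p)."""
    return 1 - p

def split_bec(p):
    """Erasure probabilities of the two bit channels of BEC(p).

    W+ is erased only when both channel uses are erased:  BEC(p^2).
    W- is erased unless both channel uses survive:        BEC(1 - (1-p)^2).
    """
    return p * p, 1 - (1 - p) ** 2

p_plus, p_minus = split_bec(0.5)
print(bec_capacity(p_plus), bec_capacity(p_minus))   # 0.75 0.25
# Capacity is conserved: C(W+) + C(W-) = 2 C(W)
print(bec_capacity(p_plus) + bec_capacity(p_minus))  # 1.0
```

The sum of the two bit-channel capacities equals 2 C(W) for every p, not just p = 1/2; the split only redistributes capacity between a better and a worse channel.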

Figure 13.2: W+ = BEC(p^2) and W- = BEC(1 - (1-p)^2). From V's point of view, the scheme behaves like a BEC(1 - (1-p)^2) channel, which has less capacity than BEC(p); from U's point of view (once V is known), it behaves like a BEC(p^2) channel, which has more.

Sanity check for the equivalence: by the chain rule,

    I(U, V; Y1, Y2) = I(V; Y1, Y2) + I(U; Y1, Y2 | V) = (1-p)^2 + (1 - p^2) = 2(1 - p),

which is exactly twice the capacity of BEC(p), as expected.

In the next section, we give the idea behind polar codes and show that they achieve capacity.

13.3 Idea

Note: some of the figures in this section are reproduced from Erdal Arikan's slides.^1

Without loss of generality, we restrict ourselves to symmetric channels, whose capacity satisfies 0 <= C(W) <= 1. We know that channel coding is trivial for two types of channels:

1. Noiseless channel: a channel with no noise, i.e., C(W) = 1.
2. Useless channel: a channel with zero capacity, i.e., C(W) = 0.

The main idea behind polar codes is to transform the message bits in such a way that a C(W) fraction of them are effectively passed through a noiseless channel, while the remaining 1 - C(W) fraction are effectively passed through a useless channel (Figure 13.3). Polar codes achieve this by successively applying the transformation W → (W+, W-) until most of the bit channels are either noiseless or useless.

Figure 13.3: Idea behind polar codes: of the n message bits, roughly nC are carried over (effectively) noiseless channels and roughly n(1-C) over useless channels.

13.3.1 Successive splitting

Up until now, we have applied a transformation that effectively splits a pair of channels into a W+ and a W- channel. We now apply the same transformation to pairs of W+ channels and pairs of W- channels, splitting them further into W++, W+- and W-+, W-- channels respectively. As illustrated in Figure 13.4, this is equivalent to splitting four uses of W into the four bit channels W++, W+-, W-+ and W--.

^1 https://simons.berkeley.edu/sites/default/files/docs/2691/slidesarikan.pdf

Figure 13.4: (a) Splitting pairs of W+ and W- channels into W++, W+- and W-+, W-- respectively. (b) The overall transformation from four uses of W to the four bit channels.

Applying the same transformation to pairs of the four channels above splits them further into 8 different bit channels; the transformation is shown in Figure 13.5.

Figure 13.5: Third stage of the transformation, with 8 channels, transformed bits U, codeword bits X, and outputs Y.

As we keep splitting further, the bit channels, denoted W_i^(n), converge to either a noiseless or a useless channel. This statement is made precise in the following theorem.

Theorem 1. As the number of channels grows, the capacities of the bit channels converge to either 0 or 1, in the following proportions: for every δ > 0,

    |{ i : C(W_i^(n)) > 1 - δ }| / n  →  C(W),
    |{ i : C(W_i^(n)) < δ }| / n      →  1 - C(W),        (13.1)

where n is the total number of original channels.

Proof. Out of the scope of this class; the interested reader can refer to the original paper [Arikan].

Corollary 1. Equations (13.1) directly imply that the capacities of all the W_i^(n) channels gravitate to either 0 or 1, i.e.,

    |{ i : δ < C(W_i^(n)) < 1 - δ }| / n  →  0.

This phenomenon of polarization of the capacities is illustrated in Figures 13.6 and 13.7.

Figure 13.6: Plot of the capacities of the bit channels as we apply the polar code transformation in successive stages. Note that there are n = 2^k bit channels at the k-th stage.

Figure 13.7: Capacities of the bit channels for a BEC(1/2) with n = 64 (left) and n = 1024 (right). Observe that the capacities concentrate near 0 or 1 as n increases.
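For the BEC, the polarization shown in Figure 13.7 can be reproduced directly, since each split acts on the erasure probability q as q → (2q - q^2, q^2). A small sketch (function names are mine):

```python
def polarize_bec(p, stages):
    """Erasure probabilities of the 2**stages bit channels of BEC(p):
    each stage maps q to the pair (2q - q^2, q^2) for (W-, W+)."""
    q = [p]
    for _ in range(stages):
        q = [v for w in q for v in (2 * w - w * w, w * w)]
    return q

caps = [1 - e for e in polarize_bec(0.5, 10)]   # n = 1024, as in Figure 13.7
delta = 0.05
n = len(caps)
near_one = sum(c > 1 - delta for c in caps) / n
near_zero = sum(c < delta for c in caps) / n
print(near_one, near_zero)   # both approach C(W) = 0.5 as n grows
```

The average capacity is preserved exactly at every stage, so the channels can only separate toward 0 and 1; with more stages both fractions approach C(W) = 1/2, as Theorem 1 predicts.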

13.3.2 Why study polar codes?

In the last few lectures, we showed that random codes achieve capacity, but their encoding and decoding are inefficient because they lack structure. Keeping this in mind, we added structure by restricting ourselves to random linear codes; this made encoding efficient while still achieving capacity. However, that structure is destroyed by the channel noise, which makes decoding inefficient for general linear codes. Polar codes overcome this problem by splitting the channels into noiseless and useless ones. The noiseless channels preserve the encoding structure, so the information sent over them can be decoded efficiently. The next logical step is therefore to design an encoding scheme that transmits as much information as possible over the noiseless channels. For example, polar codes split 512 uses of BEC(1/2) into 256 channels with C ≈ 1 and 256 channels with C ≈ 0, and our goal is to transmit information only over the channels with C ≈ 1.

13.4 Encoding

13.4.1 Linear representation of polar codes

2nd stage: splitting 4 channels

Let us first consider the case of 4 channels. From Figure 13.8 it is easy to see that the codeword bits for message bits U1, U2, U3 and U4 are

    X1 = U1 ⊕ U2 ⊕ U3 ⊕ U4
    X2 = U2 ⊕ U4
    X3 = U3 ⊕ U4
    X4 = U4.

We can express this in the linear form A4 U = X (arithmetic mod 2):

    [ 1 1 1 1 ] [ U1 ]   [ X1 ]
    [ 0 1 0 1 ] [ U2 ] = [ X2 ]
    [ 0 0 1 1 ] [ U3 ]   [ X3 ]
    [ 0 0 0 1 ] [ U4 ]   [ X4 ]

We see that the top-left, top-right and bottom-right 2x2 blocks of A4 are all equal to

    [ 1 1 ]
    [ 0 1 ],

and this is a direct consequence of the successive splitting (Section 13.3.1) of the channels.
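This block structure can be checked programmatically. A numpy sketch (the helper name `polar_generator` is mine) that builds A_n for any power of two directly from the block recursion:

```python
import numpy as np

def polar_generator(n):
    """Build A_n from the recursion A_{2m} = [[A_m, A_m], [0, A_m]], A_1 = [1]."""
    A = np.array([[1]], dtype=np.uint8)
    while A.shape[0] < n:
        m = A.shape[0]
        zero = np.zeros((m, m), dtype=np.uint8)
        A = np.block([[A, A], [zero, A]])
    return A

A4 = polar_generator(4)
print(A4)
# [[1 1 1 1]
#  [0 1 0 1]
#  [0 0 1 1]
#  [0 0 0 1]]

# X = A4 U (mod 2): for U = (1, 0, 1, 1), X2 = U2 ⊕ U4 = 1, X3 = U3 ⊕ U4 = 0, ...
U = np.array([1, 0, 1, 1])
print((A4 @ U) % 2)   # [1 1 0 1]
```

Starting the recursion from A_1 = [1] reproduces the 2x2 block, the matrix A4 above, and every larger power-of-two generator.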

Figure 13.8

3rd stage: splitting 8 channels

Similarly, we can extend the above calculation to eight channels and obtain a linear transformation A8 U = X:

    [ 1 1 1 1 1 1 1 1 ] [ U1 ]   [ X1 ]
    [ 0 1 0 1 0 1 0 1 ] [ U2 ]   [ X2 ]
    [ 0 0 1 1 0 0 1 1 ] [ U3 ]   [ X3 ]
    [ 0 0 0 1 0 0 0 1 ] [ U4 ] = [ X4 ]
    [ 0 0 0 0 1 1 1 1 ] [ U5 ]   [ X5 ]
    [ 0 0 0 0 0 1 0 1 ] [ U6 ]   [ X6 ]
    [ 0 0 0 0 0 0 1 1 ] [ U7 ]   [ X7 ]
    [ 0 0 0 0 0 0 0 1 ] [ U8 ]   [ X8 ]

which can be verified from Figure 13.9. Here too we have a recursive structure:

    A8 = [ A4  A4 ]
         [ 0   A4 ].        (13.2)

Up until this point, we have only established an equivalence between the splitting of the channels and the linear transformation from message bits U to codeword bits X. How do we obtain a working coding scheme from this? The answer lies in the capacities of the bit channels corresponding to U1, ..., U8.

13.4.2 Encoding

For n = 8 and W = BEC(1/2), the effective capacities of the bit channels are listed in Figure 13.9. The encoding scheme sends data over the nC = 4 highest-capacity bit channels and sends no information (i.e., fixed 0s) over the rest. Note that the bit channel capacities are not monotonic in the bit index: here the four highest-capacity channels are those of U8, U7, U6 and U4, so these are the bits used to carry data.
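The capacities behind this choice can be recomputed from the BEC splitting rule. The sketch below (helper names are mine) reproduces the I(W_i) column of Figure 13.9 and selects the data bits:

```python
def bit_channel_capacities(p, stages):
    """Capacities of the 2**stages bit channels of BEC(p), in bit order U1, U2, ...
    Each stage maps erasure probability q to (2q - q^2, q^2) for (W-, W+)."""
    q = [p]
    for _ in range(stages):
        q = [v for w in q for v in (2 * w - w * w, w * w)]
    return [1 - v for v in q]

caps = bit_channel_capacities(0.5, 3)            # n = 8, W = BEC(1/2)
print([round(c, 4) for c in caps])
# [0.0039, 0.1211, 0.1914, 0.6836, 0.3164, 0.8086, 0.8789, 0.9961]

ranked = sorted(range(8), key=lambda i: -caps[i])
data_bits = sorted(i + 1 for i in ranked[:4])     # 1-based bit indices
print(data_bits)                                  # [4, 6, 7, 8]: freeze U1, U2, U3, U5
print(sum(caps[i] for i in ranked[:4]) / sum(caps))   # about 0.842
```

The selected bits are U4, U6, U7 and U8, matching the figure, and at this short block length the four best channels carry only about 84% of the total capacity.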

Figure 13.9: Diagram of the polar code transformation for W = BEC(1/2), n = 8. The bit channel capacities are shown in the I(W_i) column, and their ranks, in descending order of capacity, in the Rank column. The bits corresponding to high-capacity channels are the ones used to send information (marked "data"); no information is sent on the bits corresponding to low-capacity channels (marked "frozen").

In the above example, the sum of the capacities of the top four bit channels is only about 84% of the total capacity; a block length of n = 8 is not large enough for the asymptotics of equations (13.1) to kick in. In practice, therefore, larger block lengths such as n = 1024 are used.

Encoding complexity

Since encoding a polar code is a linear transformation (Section 13.4.1), it can be carried out in O(n^2) steps. However, as shown in the previous subsection, the generator matrix of a polar code has an interesting recursive structure, and by exploiting it the running time can be improved to O(n log n).

13.5 Decoding

In the two-channel case, decoding the W+ channel (which carries U) requires knowledge of Y1, Y2 and V. Thus we must first decode the W- channel, which depends only on Y1 and Y2, to determine V, and only then decode W+. Notice that the less reliable channel W- is decoded before W+. Similarly, in the four-channel case (see Figure 13.8), we must first decode U1 (the least reliable channel), then U2, then U3, and finally U4. An alternative argument: the value of U4 corresponds to the repetition code embedded in X1, X2, X3 and X4, so it clearly must be decoded last.
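The decoding order for one splitting stage can be made concrete for the erasure channel. A sketch under the recap's convention X1 = U, X2 = U ⊕ V (the ERASED marker and the function name are my own):

```python
ERASED = None   # my own marker for an erased BEC output

def sc_decode_pair(y1, y2):
    """One splitting stage, recap convention X1 = U, X2 = U ⊕ V.

    V (the W- bit) is decoded first, from (Y1, Y2) alone: V = Y1 ⊕ Y2,
    so it is erased unless both outputs survive.
    U (the W+ bit) is decoded next, reusing the estimate of V:
    U = Y1 directly, or U = Y2 ⊕ V when only Y2 survives.
    """
    v = y1 ^ y2 if y1 is not ERASED and y2 is not ERASED else ERASED
    if y1 is not ERASED:
        u = y1
    elif y2 is not ERASED and v is not ERASED:
        u = y2 ^ v
    else:
        u = ERASED
    return u, v

# U = 1, V = 1 gives X1 = 1, X2 = 0; with no erasures both bits come back:
print(sc_decode_pair(1, 0))       # (1, 1)
# If Y2 is erased, V is lost but U still comes through Y1:
print(sc_decode_pair(1, ERASED))  # (1, None)
```

In the full decoder, V is often a frozen bit known to be 0; U is then recoverable whenever either output survives, which is exactly the W+ = BEC(p^2) behavior.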

Let us go through a detailed example for the n = 8, BEC(1/2) case shown in Figure 13.9. As per Section 13.4.2, we encode the message by freezing (sending 0 on) U1, U2, U3 and U5, because these bit channels have the lowest capacities. The message is sent only over U4, U6, U7 and U8, the high-capacity bit channels. In the following, we step through decoding the outputs Y1, ..., Y8 to recover the message.

1. Decode U1 (frozen): Û1 = 0.
2. Decode U2 (frozen): Û2 = 0.
3. Decode U3 (frozen): Û3 = 0.
4. Decode U4 (data!): we use Y1, ..., Y8 and U1, U2, U3 to decode this bit. Let the decoded bit be denoted Û4.
5. Decode U5 (frozen): Û5 = 0.
6. Decode U6 (data!): we use Y1, ..., Y8 and U1, ..., U5 to decode this bit. We know U1, U2, U3 and U5 since they are frozen, and we substitute the estimate Û4 for U4. Let the decoded bit be denoted Û6.
7. Continue in the same fashion for U7 and U8.

So in general, we need to know the previous bits Û1, ..., Û(i-1) in order to decode Ui. From this bit-by-bit decoding scheme, we conclude that decoding a message of length n requires O(nm) steps, where m is the time required to decode each bit. To find m, we look at the k-th splitting stage, shown in Figure 13.10.

Figure 13.10: The k-th splitting stage.

Decoding comprises two steps: (i) decoding Ua from Ya, Yb, and then (ii) decoding Ub from Ya, Yb, Ua. This procedure has to be repeated down through the k-1 lower splitting stages. Thus the complexity of decoding at the k-th stage is twice the complexity of decoding at the (k-1)-th stage, which implies that m equals 2^k = n. The total running time of this naive recursive algorithm is therefore O(n^2). As with encoding, the recursive structure of the generator matrix can be exploited to reuse computations and reduce the running time from O(n^2) to O(n log n).

With encoding and decoding both running in O(n log n) time, polar codes can be applied in practice. In fact, polar codes have been incorporated into the latest 5G wireless standard.
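The whole pipeline (butterfly encoding, successive cancellation, frozen bits) can be sketched end to end for the erasure channel. This is my own minimal implementation under the X = A_n U convention of Section 13.4, with frozen bits fixed to 0 and erasures marked by a sentinel; it is not code from the notes:

```python
ERASED = None   # my marker for an erased BEC output

def xor_e(a, b):
    """XOR that propagates erasures."""
    return a ^ b if a is not ERASED and b is not ERASED else ERASED

def encode(u):
    """X = A_n U (mod 2) via O(n log n) butterfly stages (n a power of two)."""
    x = list(u)
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x[j] = xor_e(x[j], x[j + h])
        h *= 2
    return x

def sc_decode(y, frozen):
    """Successive cancellation decoding over the BEC.

    y: channel outputs for X = A_n U (ERASED where the channel erased).
    frozen[i]: True if bit U_{i+1} is frozen to 0.
    Returns the estimates (U1, ..., Un); ERASED marks a decoding failure.
    """
    n = len(y)
    if n == 1:
        return [0] if frozen[0] else [y[0]]
    h = n // 2
    # W- step: since X_top ⊕ X_bot = A_{n/2} U_top, the pairs Y_i ⊕ Y_{i+h}
    # give a noisy view of the top half, which is decoded first.
    z = [xor_e(y[i], y[i + h]) for i in range(h)]
    u_top = sc_decode(z, frozen[:h])
    # W+ step: peel off the decoded top half; each bit of A_{n/2} U_bot
    # now has two looks, y[i+h] and y[i] ⊕ (A_{n/2} u_top)_i.
    s = encode(u_top)
    w = [y[i + h] if y[i + h] is not ERASED else xor_e(y[i], s[i])
         for i in range(h)]
    u_bot = sc_decode(w, frozen[h:])
    return u_top + u_bot

# n = 8, BEC(1/2): freeze U1, U2, U3, U5; data on U4, U6, U7, U8 (Figure 13.9)
frozen = [True, True, True, False, True, False, False, False]
u = [0, 0, 0, 1, 0, 0, 1, 1]    # data bits (U4, U6, U7, U8) = (1, 0, 1, 1)
y = encode(u)
y[2] = y[5] = ERASED            # the channel erases two of the 8 outputs
print(sc_decode(y, frozen))     # [0, 0, 0, 1, 0, 0, 1, 1] -- message recovered
```

Even with two of the eight outputs erased, the frozen bits and the earlier decisions supply enough side information to recover all four data bits; a decoding failure would show up as ERASED entries in the output.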

Bibliography

[Arikan] Arikan, Erdal. "Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels." IEEE Transactions on Information Theory 55.7 (2009): 3051-3073.