
Rab Nawaz
PhD Scholar (BL )
School of Information Science and Technology
University of Science and Technology of China, Hefei
Submitted to Prof. Zhang Wenyi

Contents

1. A Simple Derivation of the Coding Theorem and some Applications
   1.1 Shannon Source Coding Theorem
   1.2 Derivation of the Coding Theorem
   1.3 For Memoryless Channels
   1.4 Continuous Channels
2. Multiaccess Channels
   2.1 Some Basics about Multiaccess Channels w.r.t. Information Theory
   2.2 Multi-access Information Theory
   2.3 Multiaccess Coding Theorem
3. Broadcast Channel
   3.1 Introduction
   3.2 Mathematical Model of Broadcast Channel
   3.3 Broadcast Channel Model
4. More on Broadcast Channel
   Concluding Remarks
5. The Capacity of the Gaussian Interference Channel under Strong Interference
6. The Capacity Region of the Discrete Memoryless Interference Channel with Strong Interference
7. Capacity Theorem for the Relay Channel
8. A Proof of the Data Compression Theorem of Slepian and Wolf for Ergodic Sources
   8.1 Data Compression
   8.2 A Proof of the Data Compression Theorem
   8.3 Encoding Technique
9. Basic Limits on Protocol Information in Data Communication Networks
   9.1 Introduction
   9.2 Some Assumptions in terms of Data Communication
   9.3 Two Common Cases of Understanding
   9.4 Basic Model (Assumptions) and Calculation
   9.5 Strategies for Minimizing Protocol Information with a Delay Constraint: Description through Flow Chart
Acknowledgement
References (Papers discussed)

1. A Simple Derivation of the Coding Theorem and some Applications [1]

1.1 Shannon Source Coding Theorem

In information theory, Shannon's source coding theorem (or noiseless coding theorem) establishes the limits of possible data compression and the operational meaning of the Shannon entropy. The source coding theorem shows that it is impossible to compress data so that the code rate (average number of bits per symbol) is less than the Shannon entropy of the source without it being virtually certain that information will be lost. However, it is possible to get the code rate arbitrarily close to the Shannon entropy with negligible probability of loss.

In information theory, systems are modeled by a transmitter, a channel, and a receiver. The transmitter produces messages that are sent through the channel; the channel modifies the message in some way; and the receiver attempts to infer which message was sent. In this context, entropy (more specifically, Shannon entropy) is the expected value (average) of the information contained in each message. 'Messages' can be modeled by any flow of information.

The logarithm of the probability distribution is useful as a measure of entropy because it is additive for independent sources. For instance, the entropy of a single coin toss is 1 shannon, whereas that of m tosses is m shannons. Generally, you need log2(n) bits to represent a variable that can take one of n values if n is a power of 2. If these values are equally probable, the entropy (in shannons) is equal to the number of bits.

1.2 Derivation of the Coding Theorem

For discrete memoryless channels, the strongest known form of the theorem was stated by Fano [2]. In this result, the minimum probability of error Pe for codes of block length N is bounded, for any rate below capacity, between the limits

In this expression, E_L(R) and E(R) are positive functions of the channel transition probabilities and of the rate R; o(N) is a function going to 0 with increasing N.
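
As an illustrative aside (not part of [1]'s derivation), the entropy claims of Section 1.1 can be checked numerically. The following Python sketch verifies that a fair coin toss carries 1 shannon, that m independent tosses carry m shannons, and that a uniform variable over n = 2^k values has entropy k bits:

```python
import math

def shannon_entropy(probs):
    """Entropy in bits (shannons) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin toss carries exactly 1 shannon of information.
coin = shannon_entropy([0.5, 0.5])

# m independent tosses carry m shannons: entropy is additive
# for independent sources.
m = 8
m_tosses = m * shannon_entropy([0.5, 0.5])

# A uniform variable over n = 2^4 = 16 values needs exactly 4 bits.
n = 16
uniform = shannon_entropy([1.0 / n] * n)
```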
For a range of rates immediately below channel capacity, E_L(R) = E(R).

Let X_N be the set of all sequences of length N that can be transmitted on a given channel, and let Y_N be the set of all sequences of length N that can be received. We assume that both X_N and Y_N are finite sets. Let Pr(y | x), for y in Y_N and x in X_N, be the conditional probability of receiving sequence y given that x was transmitted. We assume that we have a code consisting of M code words; that is, a mapping of the integers from 1 to M into a set of code words x1, ..., xM, where xm is in X_N, 1 <= m <= M. We assume that maximum-likelihood decoding is performed at the receiver; that is, the decoder decodes the output sequence y into the integer m if (1) (2)

Now let Pem be the probability of decoding error when xm is transmitted. A decoding error will be made if a y is received such that (2) is not satisfied. Thus we can express Pem as (3) where we define the function (4) (5)

We shall now upper-bound Pem by upper-bounding the function (6). The reason for using (6) is not at all obvious intuitively, but we can at least establish its validity by noting that the right-hand side of (6) is always non-negative; moreover, when (2) is violated, some term in the numerator is greater than or equal to the denominator, so the fraction is greater than or equal to 1, and raising the fraction to the power rho keeps it greater than or equal to 1. Substituting (6) in (3), we have (7).

Equation (7) yields a bound on Pem for a particular set of code words. Aside from certain special cases, this bound is too complicated to be useful if the number of code words is large. We will simplify (7) by averaging over an appropriately chosen ensemble of codes. Let us suppose that we define a probability measure P(x) on the set X_N of possible input sequences to the channel. (8) (9) Since the code words are chosen with probability P(x),

(10)

Since the right-hand side of (10) is independent of m, we can substitute (10) for both the m and the m' terms in (9). Since the summation in (9) is over M - 1 terms, this yields (11). Equation (11) bounds the ensemble error probability for any discrete channel.

1.3 For Memoryless Channels

We shall now assume that the channel is memoryless, so as to simplify the bound in (11). Let x1, ..., xn, ..., xN be the individual letters in an input sequence x, and let y1, ..., yn, ..., yN be the letters in a sequence y. By a memoryless channel we mean a channel that satisfies (12). Now we restrict the class of ensembles of codes under consideration to those in which each letter of each code word is chosen independently of all other letters with a probability measure p(x); (13). Substituting (12) and (13) in (11), we get (14). Equation (14) can be written in the simplified form (15). Note that the bracketed term in (15) is a product of sums and is equal to the bracketed term in (14) by the usual arithmetic rule for multiplying products of sums,

(16) Note: (17)

If we now upper-bound M - 1 by M = e^(NR), where R is the code rate in nats per channel symbol, (17) can be rewritten as (18). Since the right-hand side of (18) is independent of m, it is a bound on the ensemble probability of decoding error and is independent of the probabilities with which the code words are used.

Application-Specific Modeling Note: The binary symmetric channel, in which two inputs and two outputs are modeled, was first treated by Elias. The author also extends this work to model very noisy channels, in the sense that the probability of receiving a given output is almost independent of the input. A parallel channel is modeled through two discrete memoryless channels, the first having K inputs and J outputs with transition probabilities P_jk, the second having I inputs and L outputs with transition probabilities Q_li.

1.4 Continuous Channels

A time-discrete, amplitude-continuous channel is a channel whose input and output alphabets are the set of real numbers. It is usually necessary or convenient to impose a constraint on the code words of such channels to reflect the physical power limitations of the transmitter. In addition, a detailed analysis of the Gaussian noise channel is derived from the baseline requirements and the equations above; the details are presented in [1].
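
To make the random-coding bound of this section concrete, the following Python sketch evaluates Gallager's E0 function and the resulting ensemble bound Pe <= exp(-N(E0(rho) - rho*R)), 0 <= rho <= 1, for a binary symmetric channel with uniform inputs. The crossover probability, rate, and block length below are illustrative choices, not values from [1]:

```python
import math

def gallager_e0(rho, p_in, trans):
    """E0(rho, p) = -ln sum_y [ sum_x p(x) P(y|x)^(1/(1+rho)) ]^(1+rho),
    in nats, for input distribution p_in and channel matrix trans[x][y]."""
    s = 1.0 / (1.0 + rho)
    total = 0.0
    for y in range(len(trans[0])):
        inner = sum(p_in[x] * trans[x][y] ** s for x in range(len(p_in)))
        total += inner ** (1.0 + rho)
    return -math.log(total)

def random_coding_bound(N, R, p_in, trans):
    """Ensemble error-probability bound: minimize exp(-N (E0(rho) - rho R))
    over rho in [0, 1] on a coarse grid; R is in nats per symbol."""
    rhos = [i / 100.0 for i in range(101)]
    return min(math.exp(-N * (gallager_e0(r, p_in, trans) - r * R))
               for r in rhos)

# BSC with crossover 0.05 (capacity ~0.49 nats); rate 0.3 nats < capacity,
# so the bound decays exponentially with block length N.
bsc = [[0.95, 0.05], [0.05, 0.95]]
bound = random_coding_bound(200, 0.3, [0.5, 0.5], bsc)
```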

2. Multiaccess Channels [2]

2.1 Some Basics about Multiaccess Channels w.r.t. Information Theory

In information theory, the multiple-access channel (MAC) is a model for communication scenarios in which several transmitters wish to communicate with a common receiver. These channels are also called multiaccess channels. Among multiterminal networks, multiple-access channels are those for which the strongest results are known, including an exact capacity result for the discrete memoryless setting with an arbitrary number of senders. A typical multiaccess communication system is shown in Figure 1.

Figure 1: Multi-access communication system

There are multiple transmitters and a single receiver. The received signal is corrupted both by noise and by mutual interference between the transmitters. Each of the transmitters is fed by an information source, and each information source generates a sequence of messages, with successive messages arriving at random instants of time. There is usually some small amount of feedback from the receiver to the transmitters, but this feedback will not be our main focus. Our major focus, rather, is on the interference, the noise, and the random, or bursty, message arrivals. This type of model is appropriate for the uplink of a satellite network, for a radio network where there is one central repeater, and for the traffic to the central node on a multidrop telephone line.

The beginning of the collision resolution approach to multiaccess communication came in 1970 with Abramson's ALOHA network. When a message (or packet) arrived at a transmitter, it would simply be transmitted, ignoring all other transmitters in the network.
If another transmitter was transmitting in an overlapping interval, interference would prevent the message from being correctly received: the cyclic redundancy check (CRC) would not check, no acknowledgment would be sent, and the transmitter would try again later; the later time would be pseudo-randomly chosen to avoid the certainty of another collision if both transmitters waited the same time. The information-theoretic approach to multiaccess began in 1973 with a coding theorem developed by Ahlswede and Liao. Here the noise and interference aspects of multiaccess channels are appropriately modeled, but the random arrivals of the messages are ignored.

2.2 Multi-access Information Theory

The coding theorems of information theory treat the question of how much data can be reliably communicated from one point, or set of points, to another point. The class of channels to be considered is illustrated in Figure 2. Each unit of time, the first transmitter sends a symbol x from an alphabet X and the second transmitter sends a symbol w from an alphabet W. There is an output alphabet Y and a transition probability assignment P(y | x, w) determining the probability of receiving each y in Y for each choice of inputs.

Figure 2: Multi-access channel with two transmitters

The channel is memoryless in the sense that if x = (x1, ..., xN) and w = (w1, ..., wN) represent the inputs to transmitters one and two, respectively, over N successive time units, then the probability of receiving y = (y1, ..., yN) for the given x, w is

We assume for the time being that the alphabets are all discrete, but it will soon be obvious that this can be generalized in the same way as for single-input channels. As indicated in Figure 2, there are two independent sources which are encoded independently into the two channel inputs. Consider block coding with a given block length N, using M code words for transmitter 1 and L code words for transmitter 2; each code word is a sequence of N channel inputs. For convenience we refer to a code with these parameters as an (N, M, L) code. The rates of the two sources are defined as (1)

Each N units of time, source 1 generates an integer m uniformly distributed from 1 to M, and source 2 independently generates an integer l uniformly distributed from 1 to L. The transmitters send xm and wl, respectively, and the corresponding channel output y enters the decoder, where it is mapped into decoded estimates of m and l. The decoding is correct if both estimates are correct; otherwise a decoding error occurs. (2)

2.3 Multiaccess Coding Theorem

Let Q1(x) and Q2(w) be probability assignments on the X and W alphabets, respectively, and consider an ensemble of (N, M, L) codes where each code word xm, 1 <= m <= M, is independently selected according to the probability assignment (3), with X ~ Q1(x) and W ~ Q2(w), and each code word wl is independently selected according to (7). The achievable rates satisfy

R1 + R2 <= I(X, W; Y)    (4)
R1 <= I(X; Y | W)    (5)
R2 <= I(W; Y | X)    (6)

For each code in the ensemble, the decoder uses maximum-likelihood decoding, and we want to upper-bound the expected value of the error probability. (8) (9)

Now consider Pe1. We first condition this probability on a particular message l entering the second encoder and on a choice of code with a particular wl transmitted at the second input. Given wl, we can view the channel as a single-input channel with input x and with transition probabilities

A maximum-likelihood decoder for that single-input channel will make an error (or be ambiguous) if

Since this event must occur whenever a type 1 error occurs, the probability of a type 1 error, conditional on wl being sent, is upper-bounded by the probability of error or ambiguity on the above single-input channel. (10) Taking the expected value of (10) over wl, and then using the product form of Q1, Q2, and P again, (11) (12) (13)

(The remainder of this derivation is given in hand-written form in the original report.)
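
The three mutual-information bounds of the multiaccess coding theorem can be evaluated numerically for any small channel. The following Python sketch (illustrative, not from [2]) computes them for the classic binary adder channel Y = X + W with uniform inputs, for which the region is R1 <= 1, R2 <= 1, R1 + R2 <= 1.5 bits:

```python
import math
from collections import defaultdict

def H(dist):
    """Entropy in bits of a distribution given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def mac_bounds(q1, q2, chan):
    """The bounds R1 <= I(X;Y|W), R2 <= I(W;Y|X), R1+R2 <= I(X,W;Y),
    in bits/use, for independent inputs X ~ q1, W ~ q2 and channel
    chan[x][w][y] = P(y | x, w)."""
    joint = {}
    for x, px in enumerate(q1):
        for w, pw in enumerate(q2):
            for y, pyxw in enumerate(chan[x][w]):
                joint[(x, w, y)] = px * pw * pyxw

    def marg(keep):  # marginal over the kept coordinate indices
        d = defaultdict(float)
        for k, p in joint.items():
            d[tuple(k[i] for i in keep)] += p
        return d

    h_xw, h_xwy = H(marg([0, 1])), H(joint)
    i1 = h_xw + H(marg([1, 2])) - H(marg([1])) - h_xwy   # I(X; Y | W)
    i2 = h_xw + H(marg([0, 2])) - H(marg([0])) - h_xwy   # I(W; Y | X)
    i_sum = h_xw + H(marg([2])) - h_xwy                  # I(X, W; Y)
    return i1, i2, i_sum

# Binary adder channel: Y = X + W, with Y in {0, 1, 2}.
adder = [[[1, 0, 0], [0, 1, 0]],
         [[0, 1, 0], [0, 0, 1]]]
r1, r2, rsum = mac_bounds([0.5, 0.5], [0.5, 0.5], adder)
```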


3. Broadcast Channel [3]

3.1 Introduction

A communication channel, or simply channel, refers either to a physical transmission medium, such as a wire, or to a logical connection over a multiplexed medium, such as a radio channel, in telecommunications and computer networking. A channel is used to convey an information signal, for example a digital bit stream, from one or several senders (transmitters) to one or several receivers. A channel has a certain capacity for transmitting information, often measured by its bandwidth in Hz or its data rate in bits per second. In information theory, a channel refers to a theoretical channel model with certain error characteristics. In this more general view, a storage device is also a kind of channel, which can be sent to (written) and received from (read). In broadcasting, a channel is a designated radio frequency (or, equivalently, wavelength) assigned by a competent frequency assignment authority for the operation of a particular radio station, television station, or television channel. Broadcasting is the distribution of audio or video content to a dispersed audience via any electronic mass communication medium, typically one using the electromagnetic spectrum (radio waves), in a one-to-many model.

3.2 Mathematical Model of Broadcast Channel

We model a scenario in which a single source attempts to communicate information simultaneously to several receivers. Thus several different channels with a common input alphabet are specified. We shall determine the families of simultaneously achievable transmission rates for many extreme classes of channels. Upper and lower bounds on the capacity region will be found, and it will be shown that the family of theoretically achievable rates dominates the family of rates achievable by previously known time-sharing.
The model is also applicable to the situation of compound channels, where the transmitter does not know the true channel characteristics but wishes to transmit at an interesting rate to the receiver. The broadcast channel is modeled on the next page.

3.3 Broadcast Channel Model


4. More on Broadcast Channel [4]

A broadcast channel has one sender and many receivers. The object is to broadcast information to the receivers. The information may be independent or nested. We shall treat broadcast channels with two receivers, as shown in Fig. 1.

Fig. 1: Broadcast channel

A broadcast channel consists of an input alphabet X, two output alphabets Y1 and Y2, and a probability transition function p(y1, y2 | x). The broadcast channel is said to be memoryless if

A code for a broadcast channel with independent information consists of an encoder

and two decoders,

It is often the case in practice that one received signal is a degraded, or corrupted, version of the other; one receiver may be farther away or downstream.

Concluding Remarks

One of the coding ideas used in achieving good rate regions is superposition, in which one layers, or superimposes, the information intended for each of the receivers. The receiver can then peel off the information in layers. To achieve superposition, one introduces auxiliary random variables that act as virtual signals. These virtual signals participate in the construction of the code but are not actually sent. One useful idea used in the proof of capacity for the deterministic broadcast channel is random binning of the outputs Y1 and Y2. Another technique is Marton's introduction of correlated auxiliary random variables. Marton's region is the largest known achievable rate region for the general broadcast channel, but the capacity region remains unknown.
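
The superposition idea is easiest to see on the degraded Gaussian broadcast channel, where the layering reduces to a power split. The Python sketch below (a standard illustration, not taken verbatim from [4]) computes the superposition rate pair for assumed power and noise values:

```python
import math

def superposition_rates(P, N1, N2, alpha):
    """Superposition-coding rate pair (bits/use) for a degraded Gaussian
    broadcast channel with noise powers N1 < N2. A fraction alpha of the
    power P carries the strong receiver's layer; the weak receiver treats
    that layer as extra noise, while the strong receiver first peels off
    the weak receiver's layer and then decodes its own."""
    assert N1 < N2 and 0.0 <= alpha <= 1.0
    r1 = 0.5 * math.log2(1 + alpha * P / N1)
    r2 = 0.5 * math.log2(1 + (1 - alpha) * P / (alpha * P + N2))
    return r1, r2

# Illustrative values: total power 10, noise powers 1 and 4, 20% of
# the power allocated to the strong receiver's layer.
r1, r2 = superposition_rates(P=10.0, N1=1.0, N2=4.0, alpha=0.2)
```

With alpha = 1 all power goes to the strong receiver and the pair collapses to the single-user point (C1, 0), as expected.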

5. The Capacity of the Gaussian Interference Channel under Strong Interference [5]

The capacity region of an interference channel in which two separate messages are sent has been obtained only for very strong interference and for some rather trivial cases. We will analyze the problem for a wider class of channels with strong interference. A Gaussian interference channel takes the following form after suitable normalization: (1)

where xi, yi, and ni (i = 1, 2) are sampled values of the input signal, the output signal, and the superposed noise, respectively. Interference does not reduce the capacity when it is very strong, that is, when the interference coefficients alpha and beta simultaneously satisfy the following conditions: (2)

In that case the capacity region is a rectangle with side lengths C1 and C2, where C1 and C2 are the capacities of the component channels when there is no interference. The capacity region G is obtained when the following two inequalities are simultaneously satisfied, and the region is the intersection of the two capacity regions G1 and G2 of the two multiple-access channels. G can be expressed by the following inequalities when the conditions (4) hold: (3) (4) (5)

Let the encoding functions f1(i) and f2(j) map the message i generated at the first information source and the message j generated at the second source into codewords x1i and x2j, both of length n; let the decoding functions g1(y1) and g2(y2) map the received words y1 and y2 into i and j if y1 and y2 belong to the decoding sets D1i and D2j, respectively. A code for the Gaussian two-user channel (1) consists of M1 codewords x1i, M2 codewords x2j, and decoding sets such that (6)

where the codewords are chosen under the following power constraints: (7) (8)

The typical shape of the capacity region is depicted in Fig. 1. The dotted curve represents the rate pairs obtained by frequency-division multiplexing (FDM). The rate pair of FDM is expressed in the following way: (9)

Strong interference can be expressed as

(10)

The above can be written in the simplified form (11). Conditions (10) and (11) characterize strong and very strong interference, respectively. It should be noted that these inequalities compare the amount of information flowing in the desired directions with that flowing in the undesired directions. In other words, both conditions represent the situation where leakage occurs.
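
As a numerical illustration (using Carleial's common normalization y1 = x1 + a*x2 + n1, y2 = b*x1 + x2 + n2 with unit noise power, which is an assumption here rather than the exact parametrization of [5]), very strong interference can be checked directly against the powers:

```python
import math

def awgn_capacity(snr):
    """C = (1/2) log2(1 + SNR), bits per channel use."""
    return 0.5 * math.log2(1 + snr)

def very_strong_interference(P1, P2, a, b):
    """Very-strong-interference test for the normalized Gaussian IC
    y1 = x1 + a*x2 + n1, y2 = b*x1 + x2 + n2 with unit noise power:
    each receiver can decode the interfering codeword first (treating
    its own signal as noise) with no rate loss, so the capacity region
    is the full rectangle C1 x C2."""
    return a ** 2 >= 1 + P1 and b ** 2 >= 1 + P2

P1, P2 = 5.0, 5.0
C1, C2 = awgn_capacity(P1), awgn_capacity(P2)
strong = very_strong_interference(P1, P2, a=3.0, b=3.0)  # 9 >= 6: holds
weak = very_strong_interference(P1, P2, a=1.0, b=1.0)    # 1 < 6: fails
```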

6. The Capacity Region of the Discrete Memoryless Interference Channel with Strong Interference [6]

The discrete memoryless interference channel with strong interference is a discrete memoryless interference channel with inputs X1 and X2 and corresponding outputs Y1 and Y2 which satisfy

and

for all product probability distributions on the inputs. The capacity region of this channel coincides with the capacity region C of the model where both messages are required at both receiving terminals. This region can be expressed as the union of the rate pairs (R1, R2) satisfying (a) (b) (c) (d) (e) where Q is a time-sharing parameter of cardinality 4, and the union is over all probability distributions of the form set by the channel.

Let W1 and W2 be two independent information sources uniformly distributed over the integer sets {1, ..., M1} and {1, ..., M2}, respectively. Encoder 1 maps W1 into codeword X1 and encoder 2 maps W2 into codeword X2. The interference channel consists of four finite alphabets and conditional probability distributions, and a code for this channel is a set of two encoding functions

and two decoding functions

such that

(f) (g) (h)

Note: the achievability and converse proofs are given as hand-written calculations in the original report.


7. Capacity Theorem for the Relay Channel [7]

In information theory, a relay channel is a probability model of the communication between a sender and a receiver aided by one or more intermediate relay nodes. A discrete memoryless single-relay channel can be modelled as four finite sets X1, X2, Y1, and Y, together with a conditional probability distribution p(y, y1 | x1, x2) on these sets. The probability distribution of the choice of symbols selected by the encoder and the relay encoder is represented by p(x1, x2).
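
The two cuts in the cut-set (max-flow min-cut) upper bound for this model, C <= max over p(x1, x2) of min{ I(X1, X2; Y), I(X1; Y, Y1 | X2) }, can be evaluated for a toy channel. The sketch below (an illustration, not from [7]) uses a noiseless two-hop relay where Y1 = X1 and Y = X2, for which both cuts evaluate to 1 bit under uniform inputs:

```python
import math
from collections import defaultdict

def H(dist):
    """Entropy in bits of {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def cutset_terms(p_x1x2, chan):
    """The two cut values I(X1,X2;Y) and I(X1; Y,Y1 | X2) for a fixed
    input distribution p_x1x2[(x1,x2)] and relay channel
    chan[(x1,x2)][(y,y1)] = p(y, y1 | x1, x2)."""
    joint = defaultdict(float)          # p(x1, x2, y, y1)
    for (x1, x2), px in p_x1x2.items():
        for (y, y1), pc in chan[(x1, x2)].items():
            joint[(x1, x2, y, y1)] += px * pc

    def marg(keep):
        d = defaultdict(float)
        for k, p in joint.items():
            d[tuple(k[i] for i in keep)] += p
        return d

    # broadcast cut: I(X1,X2; Y) = H(X1,X2) + H(Y) - H(X1,X2,Y)
    i_bc = H(marg([0, 1])) + H(marg([2])) - H(marg([0, 1, 2]))
    # multiple-access cut: I(X1; Y,Y1 | X2)
    i_mac = (H(marg([0, 1])) + H(marg([1, 2, 3]))
             - H(marg([1])) - H(joint))
    return i_bc, i_mac

# Toy two-hop relay: the relay observes the source perfectly (Y1 = X1)
# and the destination hears only the relay (Y = X2); uniform inputs.
p_in = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
chan = {(x1, x2): {(x2, x1): 1.0} for x1 in (0, 1) for x2 in (0, 1)}
bc, mac = cutset_terms(p_in, chan)
```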


8. A Proof of the Data Compression Theorem of Slepian and Wolf for Ergodic Sources [8]

8.1 Data Compression

In signal processing, data compression, source coding, or bit-rate reduction involves encoding information using fewer bits than the original representation. Compression can be either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy; no information is lost. Lossy compression reduces bits by removing unnecessary or less important information. The process of reducing the size of a data file is referred to as data compression. In the context of data transmission, it is called source coding (encoding done at the source of the data before it is stored or transmitted), as opposed to channel coding. Compression is useful because it reduces the resources required to store and transmit data. Computational resources are consumed in the compression process and, usually, in the reversal of the process (decompression). Figure 1 shows the data compression process.

Fig. 1: Data compression process and its elements
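
The defining property of lossless compression, that redundancy is removed but the data is recovered exactly, is easy to demonstrate with Python's standard zlib module (an illustration, unrelated to the Slepian-Wolf setting itself):

```python
import zlib

# Highly redundant data compresses well, and decompression restores
# the original byte-for-byte: no information is lost.
original = b"abcabcabcabcabcabcabcabc" * 100
compressed = zlib.compress(original, 9)   # level 9 = best compression
restored = zlib.decompress(compressed)
```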

8.2 A Proof of the Data Compression Theorem


8.3 Encoding Technique

Let A be the set of typical (x, y) n-sequences. We shall prove that (Rx, Ry) = (H(X|Y), H(Y)) is achievable. The achievability of the pair (H(X), H(Y|X)) follows by an identical argument. The remainder of the boundary follows by time-sharing these two schemes.
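
The corner point (H(X|Y), H(Y)) is a simple computation for any joint pmf. The Python sketch below (illustrative, not from [8]) evaluates it for a doubly symmetric binary source where Y agrees with a fair X with probability 0.9, so that H(Y) = 1 and H(X|Y) equals the binary entropy of 0.1:

```python
import math

def H(probs):
    """Entropy in bits of a list of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def slepian_wolf_corner(joint):
    """Corner point (Rx, Ry) = (H(X|Y), H(Y)) of the Slepian-Wolf
    region for a joint pmf joint[x][y]: Y is sent at its full entropy,
    while X is binned down to its conditional entropy given Y."""
    py = [sum(joint[x][y] for x in range(len(joint)))
          for y in range(len(joint[0]))]
    hy = H(py)
    hxy = H([p for row in joint for p in row])   # H(X, Y)
    return hxy - hy, hy                          # H(X|Y) = H(X,Y) - H(Y)

# X ~ Bernoulli(1/2), Y = X with probability 0.9.
joint = [[0.45, 0.05], [0.05, 0.45]]
rx, ry = slepian_wolf_corner(joint)
```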


9. Basic Limits on Protocol Information in Data Communication Networks [9]

9.1 Introduction

We examine the limitations on the amount of protocol information that must be transmitted in a data communication network to keep track of source and receiver addresses and of the starting and stopping of messages. Assuming Poisson message arrivals between each communicating source-receiver pair, we find a lower bound on the required protocol information per message. This lower bound is the sum of two terms: one for the message length information, which depends only on the distribution of message lengths, and one for the message start information, which depends only on the product of the source-receiver pair arrival rate and the expected delay for transmitting the message.

9.2 Some Assumptions in terms of Data Communication

A data communication network, for the purposes of this paper, is a finite collection of nodes; a finite collection of noiseless two-way communication links, each connecting some pair of nodes; and a finite collection of sources and receivers, each source and each receiver being connected to a node. We view each source as being paired with a receiver (typically at a different node), and the purpose of the network is to transmit messages from each source to its paired receiver. Messages are assumed to arrive from a source at randomly chosen instants of time and consist of binary strings of random length. Any communication network must not only transmit the messages (data) from source to receiver but must also transmit a certain amount of control information indicating, for example, the beginning, the end, and the destination of each message. It is customary in data networks to refer to such control information as protocol information and to refer to the conventions for representing it as protocols. In simple words, a protocol is a source code for representing control information.
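
The message-length term of the lower bound in 9.1 is just the entropy of the length distribution. The Python sketch below evaluates it for a hypothetical geometric length distribution with mean 8 (the distribution is an illustrative assumption, not one used in [9]):

```python
import math

def length_info_bits(length_pmf):
    """Entropy (bits) of the message-length distribution: the
    message-length term of the protocol-information lower bound,
    i.e. the bits per message needed to convey where a message ends."""
    return -sum(p * math.log2(p) for p in length_pmf.values() if p > 0)

# Hypothetical geometric length distribution with mean 8, truncated
# at 199 with the residual tail mass folded into a single atom.
q = 1.0 / 8.0
pmf = {k: q * (1 - q) ** (k - 1) for k in range(1, 200)}
pmf[200] = 1.0 - sum(pmf.values())
bits = length_info_bits(pmf)   # about 4.35 bits per message
```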
Our major objective in this paper is to define and calculate the amount of protocol information (in the information-theoretic sense) required in the type of network described above. We shall find, rather surprisingly, that this information is related solely to the starting and stopping of messages and has nothing to do with addressing.

9.3 Two Common Cases of Understanding

The following examples provide some idea of the wide variety of ways in which protocol information can be represented in different types of networks. In a message switching network, each message is generally preceded by a binary encoding of the source and receiver address and of the message length. Messages, with their preceding protocol bits, are queued at the individual nodes and passed from one node to another by a variety of routing algorithms which will be of no concern here. In a packet switching network, the messages are first divided into packets of some fixed length. Each packet is preceded by an encoding of the source and receiver address and the position of the packet within the message. The

final packet, of course, might have fewer message bits than the others, and this information must also be encoded. The packets are transmitted through the network independently and are reconstructed into a message at the receiver's node. The amount of protocol information is somewhat greater than for message switching, but the delay in transmission is frequently reduced.

Another possible system is to use multiplexers on each link of the network and to assign to each source-receiver pair a fixed fraction of the capacity of the links on some path from source to receiver. Frequently in such systems there is a special string of bits called an idle character, which is sent repeatedly when there is no message to be sent, and a special string of bits called a flag, which precedes and follows each message. The messages themselves are slightly encoded to prevent the occurrence of the flag character within the encoded message. The idle characters and flags should be regarded as protocol codewords which together indicate the beginning and the end of each message. Such a system is usually either very inefficient in its use of link capacities or subject to very long queues, since it has no flexibility to allocate the network resources to messages as needed. If all the messages from a source are of equal length and they arrive equally spaced in time, the multiplexer becomes very efficient and, in fact, it is possible to eliminate all control characters.

In comparing the multiplexer approach with the message switching or packet switching approach, we see that the former transmits message start information but no addresses, while the latter transmits addresses but no message start information. A simple network communication model is shown in Fig. 1.

Figure 1: Simple network communication model
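
The flag-and-stuffing scheme described above can be sketched concretely. The following Python example implements HDLC-style bit stuffing (a standard technique consistent with, but not taken from, [9]): a 0 is inserted after every run of five 1s so the flag pattern can never occur inside a message, and the flags themselves carry the message start/stop protocol information:

```python
FLAG = "01111110"   # HDLC-style flag delimiting each frame

def stuff(bits):
    """Insert a 0 after every run of five 1s, so six consecutive 1s
    (and hence the flag) can never appear inside the payload."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")
            run = 0
    return "".join(out)

def unstuff(bits):
    """Undo stuff(): drop the 0 that follows every run of five 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            i += 1          # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

def frame(message_bits):
    """Flags plus stuffed bits are the protocol overhead here: they
    encode message start/stop information, not addresses."""
    return FLAG + stuff(message_bits) + FLAG

msg = "0111111011111100"
framed = frame(msg)
recovered = unstuff(framed[len(FLAG):-len(FLAG)])
```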

9.4 Basic Model (Assumptions) and Calculation


9.5 Strategies for Minimizing Protocol Information with a Delay Constraint: Description through Flow Chart

Acknowledgement

I am really thankful to Prof. Zhang Wenyi for his kind support during this course. He is a very kind, humble, supportive, and hardworking professor, with a very sound grasp of the theoretical aspects of information theory. He taught the course by selecting some good papers from the IEEE Transactions on Information Theory and discussed all the nitty-gritty of those papers in great detail in class. I have learnt a lot in this course that I never learnt before.

References (Papers discussed)

[1] Gallager, R. G., "A simple derivation of the coding theorem and some applications," IEEE Transactions on Information Theory, vol. 11, no. 1, 1965.
[2] Gallager, R. G., "A perspective on multiaccess channels," IEEE Transactions on Information Theory, vol. 31, no. 2, 1985.
[3] Cover, T. M., "Broadcast channels," IEEE Transactions on Information Theory, vol. 18, no. 1, 1972.
[4] Cover, T. M., "Comments on broadcast channels," IEEE Transactions on Information Theory, vol. 44, no. 6, 1998.
[5] Sato, H., "The capacity of the Gaussian interference channel under strong interference (Corresp.)," IEEE Transactions on Information Theory, vol. 27, no. 6, 1981.
[6] Costa, M. H. M., and A. El Gamal, "The capacity region of the discrete memoryless interference channel with strong interference," IEEE Transactions on Information Theory, vol. 33, no. 5, 1987.
[7] Cover, T. M., and A. El Gamal, "Capacity theorems for the relay channel," IEEE Transactions on Information Theory, vol. 25, no. 5, 1979.
[8] Cover, T., "A proof of the data compression theorem of Slepian and Wolf for ergodic sources (Corresp.)," IEEE Transactions on Information Theory, vol. 21, no. 2, 1975.
[9] Gallager, R. G., "Basic limits on protocol information in data communication networks," IEEE Transactions on Information Theory, vol. 22, no. 4, 1976.


Information Theory and Communication Optimal Codes Information Theory and Communication Optimal Codes Ritwik Banerjee rbanerjee@cs.stonybrook.edu c Ritwik Banerjee Information Theory and Communication 1/1 Roadmap Examples and Types of Codes Kraft Inequality

More information

Communication Theory II

Communication Theory II Communication Theory II Lecture 13: Information Theory (cont d) Ahmed Elnakib, PhD Assistant Professor, Mansoura University, Egypt March 22 th, 2015 1 o Source Code Generation Lecture Outlines Source Coding

More information

Lab/Project Error Control Coding using LDPC Codes and HARQ

Lab/Project Error Control Coding using LDPC Codes and HARQ Linköping University Campus Norrköping Department of Science and Technology Erik Bergfeldt TNE066 Telecommunications Lab/Project Error Control Coding using LDPC Codes and HARQ Error control coding is an

More information

On the Capacity Region of the Vector Fading Broadcast Channel with no CSIT

On the Capacity Region of the Vector Fading Broadcast Channel with no CSIT On the Capacity Region of the Vector Fading Broadcast Channel with no CSIT Syed Ali Jafar University of California Irvine Irvine, CA 92697-2625 Email: syed@uciedu Andrea Goldsmith Stanford University Stanford,

More information

Multimedia Systems Entropy Coding Mahdi Amiri February 2011 Sharif University of Technology

Multimedia Systems Entropy Coding Mahdi Amiri February 2011 Sharif University of Technology Course Presentation Multimedia Systems Entropy Coding Mahdi Amiri February 2011 Sharif University of Technology Data Compression Motivation Data storage and transmission cost money Use fewest number of

More information

Module 3: Physical Layer

Module 3: Physical Layer Module 3: Physical Layer Dr. Associate Professor of Computer Science Jackson State University Jackson, MS 39217 Phone: 601-979-3661 E-mail: natarajan.meghanathan@jsums.edu 1 Topics 3.1 Signal Levels: Baud

More information

MATHEMATICS IN COMMUNICATIONS: INTRODUCTION TO CODING. A Public Lecture to the Uganda Mathematics Society

MATHEMATICS IN COMMUNICATIONS: INTRODUCTION TO CODING. A Public Lecture to the Uganda Mathematics Society Abstract MATHEMATICS IN COMMUNICATIONS: INTRODUCTION TO CODING A Public Lecture to the Uganda Mathematics Society F F Tusubira, PhD, MUIPE, MIEE, REng, CEng Mathematical theory and techniques play a vital

More information

Module 8: Video Coding Basics Lecture 40: Need for video coding, Elements of information theory, Lossless coding. The Lecture Contains:

Module 8: Video Coding Basics Lecture 40: Need for video coding, Elements of information theory, Lossless coding. The Lecture Contains: The Lecture Contains: The Need for Video Coding Elements of a Video Coding System Elements of Information Theory Symbol Encoding Run-Length Encoding Entropy Encoding file:///d /...Ganesh%20Rana)/MY%20COURSE_Ganesh%20Rana/Prof.%20Sumana%20Gupta/FINAL%20DVSP/lecture%2040/40_1.htm[12/31/2015

More information

6. FUNDAMENTALS OF CHANNEL CODER

6. FUNDAMENTALS OF CHANNEL CODER 82 6. FUNDAMENTALS OF CHANNEL CODER 6.1 INTRODUCTION The digital information can be transmitted over the channel using different signaling schemes. The type of the signal scheme chosen mainly depends on

More information

Entropy, Coding and Data Compression

Entropy, Coding and Data Compression Entropy, Coding and Data Compression Data vs. Information yes, not, yes, yes, not not In ASCII, each item is 3 8 = 24 bits of data But if the only possible answers are yes and not, there is only one bit

More information

Time division multiplexing The block diagram for TDM is illustrated as shown in the figure

Time division multiplexing The block diagram for TDM is illustrated as shown in the figure CHAPTER 2 Syllabus: 1) Pulse amplitude modulation 2) TDM 3) Wave form coding techniques 4) PCM 5) Quantization noise and SNR 6) Robust quantization Pulse amplitude modulation In pulse amplitude modulation,

More information

Coding Techniques and the Two-Access Channel

Coding Techniques and the Two-Access Channel Coding Techniques and the Two-Access Channel A.J. Han VINCK Institute for Experimental Mathematics, University of Duisburg-Essen, Germany email: Vinck@exp-math.uni-essen.de Abstract. We consider some examples

More information

Information flow over wireless networks: a deterministic approach

Information flow over wireless networks: a deterministic approach Information flow over wireless networks: a deterministic approach alman Avestimehr In collaboration with uhas iggavi (EPFL) and avid Tse (UC Berkeley) Overview Point-to-point channel Information theory

More information

DEGRADED broadcast channels were first studied by

DEGRADED broadcast channels were first studied by 4296 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 54, NO 9, SEPTEMBER 2008 Optimal Transmission Strategy Explicit Capacity Region for Broadcast Z Channels Bike Xie, Student Member, IEEE, Miguel Griot,

More information

Spreading Codes and Characteristics. Error Correction Codes

Spreading Codes and Characteristics. Error Correction Codes Spreading Codes and Characteristics and Error Correction Codes Global Navigational Satellite Systems (GNSS-6) Short course, NERTU Prasad Krishnan International Institute of Information Technology, Hyderabad

More information

Multicasting over Multiple-Access Networks

Multicasting over Multiple-Access Networks ing oding apacity onclusions ing Department of Electrical Engineering and omputer Sciences University of alifornia, Berkeley May 9, 2006 EE 228A Outline ing oding apacity onclusions 1 2 3 4 oding 5 apacity

More information

A Bit of network information theory

A Bit of network information theory Š#/,% 0/,94%#(.)15% A Bit of network information theory Suhas Diggavi 1 Email: suhas.diggavi@epfl.ch URL: http://licos.epfl.ch Parts of talk are joint work with S. Avestimehr 2, S. Mohajer 1, C. Tian 3,

More information

Computer Networks. Week 03 Founda(on Communica(on Concepts. College of Information Science and Engineering Ritsumeikan University

Computer Networks. Week 03 Founda(on Communica(on Concepts. College of Information Science and Engineering Ritsumeikan University Computer Networks Week 03 Founda(on Communica(on Concepts College of Information Science and Engineering Ritsumeikan University Agenda l Basic topics of electromagnetic signals: frequency, amplitude, degradation

More information

Introduction to Coding Theory

Introduction to Coding Theory Coding Theory Massoud Malek Introduction to Coding Theory Introduction. Coding theory originated with the advent of computers. Early computers were huge mechanical monsters whose reliability was low compared

More information

Joint Relaying and Network Coding in Wireless Networks

Joint Relaying and Network Coding in Wireless Networks Joint Relaying and Network Coding in Wireless Networks Sachin Katti Ivana Marić Andrea Goldsmith Dina Katabi Muriel Médard MIT Stanford Stanford MIT MIT Abstract Relaying is a fundamental building block

More information

SHANNON showed that feedback does not increase the capacity

SHANNON showed that feedback does not increase the capacity IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 5, MAY 2011 2667 Feedback Capacity of the Gaussian Interference Channel to Within 2 Bits Changho Suh, Student Member, IEEE, and David N. C. Tse, Fellow,

More information

A Brief Introduction to Information Theory and Lossless Coding

A Brief Introduction to Information Theory and Lossless Coding A Brief Introduction to Information Theory and Lossless Coding 1 INTRODUCTION This document is intended as a guide to students studying 4C8 who have had no prior exposure to information theory. All of

More information

2. TELECOMMUNICATIONS BASICS

2. TELECOMMUNICATIONS BASICS 2. TELECOMMUNICATIONS BASICS The purpose of any telecommunications system is to transfer information from the sender to the receiver by a means of a communication channel. The information is carried by

More information

Error Performance of Channel Coding in Random-Access Communication

Error Performance of Channel Coding in Random-Access Communication IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 58, NO. 6, JUNE 2012 3961 Error Performance of Channel Coding in Random-Access Communication Zheng Wang, Student Member, IEEE, andjieluo, Member, IEEE Abstract

More information

Outline. Communications Engineering 1

Outline. Communications Engineering 1 Outline Introduction Signal, random variable, random process and spectra Analog modulation Analog to digital conversion Digital transmission through baseband channels Signal space representation Optimal

More information

Lecture5: Lossless Compression Techniques

Lecture5: Lossless Compression Techniques Fixed to fixed mapping: we encoded source symbols of fixed length into fixed length code sequences Fixed to variable mapping: we encoded source symbols of fixed length into variable length code sequences

More information

Communication Theory II

Communication Theory II Communication Theory II Lecture 14: Information Theory (cont d) Ahmed Elnakib, PhD Assistant Professor, Mansoura University, Egypt March 25 th, 2015 1 Previous Lecture: Source Code Generation: Lossless

More information

photons photodetector t laser input current output current

photons photodetector t laser input current output current 6.962 Week 5 Summary: he Channel Presenter: Won S. Yoon March 8, 2 Introduction he channel was originally developed around 2 years ago as a model for an optical communication link. Since then, a rather

More information

Coding for Efficiency

Coding for Efficiency Let s suppose that, over some channel, we want to transmit text containing only 4 symbols, a, b, c, and d. Further, let s suppose they have a probability of occurrence in any block of text we send as follows

More information

Outline. EEC-484/584 Computer Networks. Homework #1. Homework #1. Lecture 8. Wenbing Zhao Homework #1 Review

Outline. EEC-484/584 Computer Networks. Homework #1. Homework #1. Lecture 8. Wenbing Zhao Homework #1 Review EEC-484/584 Computer Networks Lecture 8 wenbing@ieee.org (Lecture nodes are based on materials supplied by Dr. Louise Moser at UCSB and Prentice-Hall) Outline Homework #1 Review Protocol verification Example

More information

Broadcast Networks with Layered Decoding and Layered Secrecy: Theory and Applications

Broadcast Networks with Layered Decoding and Layered Secrecy: Theory and Applications 1 Broadcast Networks with Layered Decoding and Layered Secrecy: Theory and Applications Shaofeng Zou, Student Member, IEEE, Yingbin Liang, Member, IEEE, Lifeng Lai, Member, IEEE, H. Vincent Poor, Fellow,

More information

THE Shannon capacity of state-dependent discrete memoryless

THE Shannon capacity of state-dependent discrete memoryless 1828 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 5, MAY 2006 Opportunistic Orthogonal Writing on Dirty Paper Tie Liu, Student Member, IEEE, and Pramod Viswanath, Member, IEEE Abstract A simple

More information

Comm. 502: Communication Theory. Lecture 6. - Introduction to Source Coding

Comm. 502: Communication Theory. Lecture 6. - Introduction to Source Coding Comm. 50: Communication Theory Lecture 6 - Introduction to Source Coding Digital Communication Systems Source of Information User of Information Source Encoder Source Decoder Channel Encoder Channel Decoder

More information

Network Information Theory

Network Information Theory 1 / 191 Network Information Theory Young-Han Kim University of California, San Diego Joint work with Abbas El Gamal (Stanford) IEEE VTS San Diego 2009 2 / 191 Network Information Flow Consider a general

More information

Information Theory and Huffman Coding

Information Theory and Huffman Coding Information Theory and Huffman Coding Consider a typical Digital Communication System: A/D Conversion Sampling and Quantization D/A Conversion Source Encoder Source Decoder bit stream bit stream Channel

More information

Interference Mitigation Through Limited Transmitter Cooperation I-Hsiang Wang, Student Member, IEEE, and David N. C.

Interference Mitigation Through Limited Transmitter Cooperation I-Hsiang Wang, Student Member, IEEE, and David N. C. IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 57, NO 5, MAY 2011 2941 Interference Mitigation Through Limited Transmitter Cooperation I-Hsiang Wang, Student Member, IEEE, David N C Tse, Fellow, IEEE Abstract

More information

WIRELESS or wired link failures are of a nonergodic nature

WIRELESS or wired link failures are of a nonergodic nature IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 7, JULY 2011 4187 Robust Communication via Decentralized Processing With Unreliable Backhaul Links Osvaldo Simeone, Member, IEEE, Oren Somekh, Member,

More information

Information Theory: the Day after Yesterday

Information Theory: the Day after Yesterday : the Day after Yesterday Department of Electrical Engineering and Computer Science Chicago s Shannon Centennial Event September 23, 2016 : the Day after Yesterday IT today Outline The birth of information

More information

Multiplexing Concepts and Introduction to BISDN. Professor Richard Harris

Multiplexing Concepts and Introduction to BISDN. Professor Richard Harris Multiplexing Concepts and Introduction to BISDN Professor Richard Harris Objectives Define what is meant by multiplexing and demultiplexing Identify the main types of multiplexing Space Division Time Division

More information

Chapter 1 INTRODUCTION TO SOURCE CODING AND CHANNEL CODING. Whether a source is analog or digital, a digital communication

Chapter 1 INTRODUCTION TO SOURCE CODING AND CHANNEL CODING. Whether a source is analog or digital, a digital communication 1 Chapter 1 INTRODUCTION TO SOURCE CODING AND CHANNEL CODING 1.1 SOURCE CODING Whether a source is analog or digital, a digital communication system is designed to transmit information in digital form.

More information

On the Capacity Regions of Two-Way Diamond. Channels

On the Capacity Regions of Two-Way Diamond. Channels On the Capacity Regions of Two-Way Diamond 1 Channels Mehdi Ashraphijuo, Vaneet Aggarwal and Xiaodong Wang arxiv:1410.5085v1 [cs.it] 19 Oct 2014 Abstract In this paper, we study the capacity regions of

More information

On Optimum Communication Cost for Joint Compression and Dispersive Information Routing

On Optimum Communication Cost for Joint Compression and Dispersive Information Routing 2010 IEEE Information Theory Workshop - ITW 2010 Dublin On Optimum Communication Cost for Joint Compression and Dispersive Information Routing Kumar Viswanatha, Emrah Akyol and Kenneth Rose Department

More information

T325 Summary T305 T325 B BLOCK 3 4 PART III T325. Session 11 Block III Part 3 Access & Modulation. Dr. Saatchi, Seyed Mohsen.

T325 Summary T305 T325 B BLOCK 3 4 PART III T325. Session 11 Block III Part 3 Access & Modulation. Dr. Saatchi, Seyed Mohsen. T305 T325 B BLOCK 3 4 PART III T325 Summary Session 11 Block III Part 3 Access & Modulation [Type Dr. Saatchi, your address] Seyed Mohsen [Type your phone number] [Type your e-mail address] Prepared by:

More information

Protocol Coding for Two-Way Communications with Half-Duplex Constraints

Protocol Coding for Two-Way Communications with Half-Duplex Constraints Protocol Coding for Two-Way Communications with Half-Duplex Constraints Petar Popovski and Osvaldo Simeone Department of Electronic Systems, Aalborg University, Denmark CWCSPR, ECE Dept., NJIT, USA Email:

More information

Problem Sheet 1 Probability, random processes, and noise

Problem Sheet 1 Probability, random processes, and noise Problem Sheet 1 Probability, random processes, and noise 1. If F X (x) is the distribution function of a random variable X and x 1 x 2, show that F X (x 1 ) F X (x 2 ). 2. Use the definition of the cumulative

More information

A unified graphical approach to

A unified graphical approach to A unified graphical approach to 1 random coding for multi-terminal networks Stefano Rini and Andrea Goldsmith Department of Electrical Engineering, Stanford University, USA arxiv:1107.4705v3 [cs.it] 14

More information

Multiuser Information Theory and Wireless Communications. Professor in Charge: Toby Berger Principal Lecturer: Jun Chen

Multiuser Information Theory and Wireless Communications. Professor in Charge: Toby Berger Principal Lecturer: Jun Chen Multiuser Information Theory and Wireless Communications Professor in Charge: Toby Berger Principal Lecturer: Jun Chen Where and When? 1 Good News No homework. No exam. 2 Credits:1-2 One credit: submit

More information

Wireless Network Information Flow

Wireless Network Information Flow Š#/,% 0/,94%#(.)15% Wireless Network Information Flow Suhas iggavi School of Computer and Communication Sciences, Laboratory for Information and Communication Systems (LICOS), EPFL Email: suhas.diggavi@epfl.ch

More information

Capacity-Achieving Rateless Polar Codes

Capacity-Achieving Rateless Polar Codes Capacity-Achieving Rateless Polar Codes arxiv:1508.03112v1 [cs.it] 13 Aug 2015 Bin Li, David Tse, Kai Chen, and Hui Shen August 14, 2015 Abstract A rateless coding scheme transmits incrementally more and

More information

4118 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 51, NO. 12, DECEMBER Zhiyu Yang, Student Member, IEEE, and Lang Tong, Fellow, IEEE

4118 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 51, NO. 12, DECEMBER Zhiyu Yang, Student Member, IEEE, and Lang Tong, Fellow, IEEE 4118 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 51, NO. 12, DECEMBER 2005 Cooperative Sensor Networks With Misinformed Nodes Zhiyu Yang, Student Member, IEEE, and Lang Tong, Fellow, IEEE Abstract The

More information

1 This work was partially supported by NSF Grant No. CCR , and by the URI International Engineering Program.

1 This work was partially supported by NSF Grant No. CCR , and by the URI International Engineering Program. Combined Error Correcting and Compressing Codes Extended Summary Thomas Wenisch Peter F. Swaszek Augustus K. Uht 1 University of Rhode Island, Kingston RI Submitted to International Symposium on Information

More information

Spread Spectrum. Chapter 18. FHSS Frequency Hopping Spread Spectrum DSSS Direct Sequence Spread Spectrum DSSS using CDMA Code Division Multiple Access

Spread Spectrum. Chapter 18. FHSS Frequency Hopping Spread Spectrum DSSS Direct Sequence Spread Spectrum DSSS using CDMA Code Division Multiple Access Spread Spectrum Chapter 18 FHSS Frequency Hopping Spread Spectrum DSSS Direct Sequence Spread Spectrum DSSS using CDMA Code Division Multiple Access Single Carrier The traditional way Transmitted signal

More information

SOME PHYSICAL LAYER ISSUES. Lecture Notes 2A

SOME PHYSICAL LAYER ISSUES. Lecture Notes 2A SOME PHYSICAL LAYER ISSUES Lecture Notes 2A Delays in networks Propagation time or propagation delay, t prop Time required for a signal or waveform to propagate (or move) from one point to another point.

More information

Lecture 1 Introduction

Lecture 1 Introduction Lecture 1 Introduction I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw September 22, 2015 1 / 46 I-Hsiang Wang IT Lecture 1 Information Theory Information

More information

Digital Television Lecture 5

Digital Television Lecture 5 Digital Television Lecture 5 Forward Error Correction (FEC) Åbo Akademi University Domkyrkotorget 5 Åbo 8.4. Error Correction in Transmissions Need for error correction in transmissions Loss of data during

More information

Burst Error Correction Method Based on Arithmetic Weighted Checksums

Burst Error Correction Method Based on Arithmetic Weighted Checksums Engineering, 0, 4, 768-773 http://dxdoiorg/0436/eng04098 Published Online November 0 (http://wwwscirporg/journal/eng) Burst Error Correction Method Based on Arithmetic Weighted Checksums Saleh Al-Omar,

More information

Convolutional Coding Using Booth Algorithm For Application in Wireless Communication

Convolutional Coding Using Booth Algorithm For Application in Wireless Communication Available online at www.interscience.in Convolutional Coding Using Booth Algorithm For Application in Wireless Communication Sishir Kalita, Parismita Gogoi & Kandarpa Kumar Sarma Department of Electronics

More information

Introduction to Telecommunications and Computer Engineering Unit 3: Communications Systems & Signals

Introduction to Telecommunications and Computer Engineering Unit 3: Communications Systems & Signals Introduction to Telecommunications and Computer Engineering Unit 3: Communications Systems & Signals Syedur Rahman Lecturer, CSE Department North South University syedur.rahman@wolfson.oxon.org Acknowledgements

More information

Index Terms Deterministic channel model, Gaussian interference channel, successive decoding, sum-rate maximization.

Index Terms Deterministic channel model, Gaussian interference channel, successive decoding, sum-rate maximization. 3798 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 58, NO 6, JUNE 2012 On the Maximum Achievable Sum-Rate With Successive Decoding in Interference Channels Yue Zhao, Member, IEEE, Chee Wei Tan, Member,

More information

Optimal Power Allocation over Fading Channels with Stringent Delay Constraints

Optimal Power Allocation over Fading Channels with Stringent Delay Constraints 1 Optimal Power Allocation over Fading Channels with Stringent Delay Constraints Xiangheng Liu Andrea Goldsmith Dept. of Electrical Engineering, Stanford University Email: liuxh,andrea@wsl.stanford.edu

More information

Frequency-Domain Sharing and Fourier Series

Frequency-Domain Sharing and Fourier Series MIT 6.02 DRAFT Lecture Notes Fall 200 (Last update: November 9, 200) Comments, questions or bug reports? Please contact 6.02-staff@mit.edu LECTURE 4 Frequency-Domain Sharing and Fourier Series In earlier

More information

SOME EXAMPLES FROM INFORMATION THEORY (AFTER C. SHANNON).

SOME EXAMPLES FROM INFORMATION THEORY (AFTER C. SHANNON). SOME EXAMPLES FROM INFORMATION THEORY (AFTER C. SHANNON). 1. Some easy problems. 1.1. Guessing a number. Someone chose a number x between 1 and N. You are allowed to ask questions: Is this number larger

More information

ECEn 665: Antennas and Propagation for Wireless Communications 131. s(t) = A c [1 + αm(t)] cos (ω c t) (9.27)

ECEn 665: Antennas and Propagation for Wireless Communications 131. s(t) = A c [1 + αm(t)] cos (ω c t) (9.27) ECEn 665: Antennas and Propagation for Wireless Communications 131 9. Modulation Modulation is a way to vary the amplitude and phase of a sinusoidal carrier waveform in order to transmit information. When

More information

Channel Concepts CS 571 Fall Kenneth L. Calvert

Channel Concepts CS 571 Fall Kenneth L. Calvert Channel Concepts CS 571 Fall 2006 2006 Kenneth L. Calvert What is a Channel? Channel: a means of transmitting information A means of communication or expression Webster s NCD Aside: What is information...?

More information

Module 3 Greedy Strategy

Module 3 Greedy Strategy Module 3 Greedy Strategy Dr. Natarajan Meghanathan Professor of Computer Science Jackson State University Jackson, MS 39217 E-mail: natarajan.meghanathan@jsums.edu Introduction to Greedy Technique Main

More information

Capacity and Optimal Resource Allocation for Fading Broadcast Channels Part I: Ergodic Capacity

Capacity and Optimal Resource Allocation for Fading Broadcast Channels Part I: Ergodic Capacity IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 47, NO. 3, MARCH 2001 1083 Capacity Optimal Resource Allocation for Fading Broadcast Channels Part I: Ergodic Capacity Lang Li, Member, IEEE, Andrea J. Goldsmith,

More information

CONSIDER a sensor network of nodes taking

CONSIDER a sensor network of nodes taking 5660 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011 Wyner-Ziv Coding Over Broadcast Channels: Hybrid Digital/Analog Schemes Yang Gao, Student Member, IEEE, Ertem Tuncel, Member,

More information

TSIN01 Information Networks Lecture 9

TSIN01 Information Networks Lecture 9 TSIN01 Information Networks Lecture 9 Danyo Danev Division of Communication Systems Department of Electrical Engineering Linköping University, Sweden September 26 th, 2017 Danyo Danev TSIN01 Information

More information

BANDWIDTH-PERFORMANCE TRADEOFFS FOR A TRANSMISSION WITH CONCURRENT SIGNALS

BANDWIDTH-PERFORMANCE TRADEOFFS FOR A TRANSMISSION WITH CONCURRENT SIGNALS BANDWIDTH-PERFORMANCE TRADEOFFS FOR A TRANSMISSION WITH CONCURRENT SIGNALS Aminata A. Garba Dept. of Electrical and Computer Engineering, Carnegie Mellon University aminata@ece.cmu.edu ABSTRACT We consider

More information

Error-Correcting Codes

Error-Correcting Codes Error-Correcting Codes Information is stored and exchanged in the form of streams of characters from some alphabet. An alphabet is a finite set of symbols, such as the lower-case Roman alphabet {a,b,c,,z}.

More information

4. Which of the following channel matrices respresent a symmetric channel? [01M02] 5. The capacity of the channel with the channel Matrix

4. Which of the following channel matrices respresent a symmetric channel? [01M02] 5. The capacity of the channel with the channel Matrix Send SMS s : ONJntuSpeed To 9870807070 To Recieve Jntu Updates Daily On Your Mobile For Free www.strikingsoon.comjntu ONLINE EXMINTIONS [Mid 2 - dc] http://jntuk.strikingsoon.com 1. Two binary random

More information

COMM901 Source Coding and Compression Winter Semester 2013/2014. Midterm Exam

COMM901 Source Coding and Compression Winter Semester 2013/2014. Midterm Exam German University in Cairo - GUC Faculty of Information Engineering & Technology - IET Department of Communication Engineering Dr.-Ing. Heiko Schwarz COMM901 Source Coding and Compression Winter Semester

More information

Module 3 Greedy Strategy

Module 3 Greedy Strategy Module 3 Greedy Strategy Dr. Natarajan Meghanathan Professor of Computer Science Jackson State University Jackson, MS 39217 E-mail: natarajan.meghanathan@jsums.edu Introduction to Greedy Technique Main

More information

IN recent years, there has been great interest in the analysis

IN recent years, there has been great interest in the analysis 2890 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 7, JULY 2006 On the Power Efficiency of Sensory and Ad Hoc Wireless Networks Amir F. Dana, Student Member, IEEE, and Babak Hassibi Abstract We

More information

Coding for Noisy Networks

Coding for Noisy Networks Coding for Noisy Networks Abbas El Gamal Stanford University ISIT Plenary, June 2010 A. El Gamal (Stanford University) Coding for Noisy Networks ISIT Plenary, June 2010 1 / 46 Introduction Over past 40+

More information

ITM 1010 Computer and Communication Technologies

ITM 1010 Computer and Communication Technologies ITM 1010 Computer and Communication Technologies Lecture #14 Part II Introduction to Communication Technologies: Digital Signals: Digital modulation, channel sharing 2003 香港中文大學, 電子工程學系 (Prof. H.K.Tsang)

More information

Multiple Input Multiple Output (MIMO) Operation Principles

Multiple Input Multiple Output (MIMO) Operation Principles Afriyie Abraham Kwabena Multiple Input Multiple Output (MIMO) Operation Principles Helsinki Metropolia University of Applied Sciences Bachlor of Engineering Information Technology Thesis June 0 Abstract

More information

Error Control Coding. Aaron Gulliver Dept. of Electrical and Computer Engineering University of Victoria

Error Control Coding. Aaron Gulliver Dept. of Electrical and Computer Engineering University of Victoria Error Control Coding Aaron Gulliver Dept. of Electrical and Computer Engineering University of Victoria Topics Introduction The Channel Coding Problem Linear Block Codes Cyclic Codes BCH and Reed-Solomon

More information

CSCD 433 Network Programming Fall Lecture 5 Physical Layer Continued

CSCD 433 Network Programming Fall Lecture 5 Physical Layer Continued CSCD 433 Network Programming Fall 2016 Lecture 5 Physical Layer Continued 1 Topics Definitions Analog Transmission of Digital Data Digital Transmission of Analog Data Multiplexing 2 Different Types of

More information

Communications Overhead as the Cost of Constraints

Communications Overhead as the Cost of Constraints Communications Overhead as the Cost of Constraints J. Nicholas Laneman and Brian. Dunn Department of Electrical Engineering University of Notre Dame Email: {jnl,bdunn}@nd.edu Abstract This paper speculates

More information

Politecnico di Milano Scuola di Ingegneria Industriale e dell Informazione. Physical layer. Fundamentals of Communication Networks

Politecnico di Milano Scuola di Ingegneria Industriale e dell Informazione. Physical layer. Fundamentals of Communication Networks Politecnico di Milano Scuola di Ingegneria Industriale e dell Informazione Physical layer Fundamentals of Communication Networks 1 Disclaimer o The basics of signal characterization (in time and frequency

More information

Chapter 2 Direct-Sequence Systems

Chapter 2 Direct-Sequence Systems Chapter 2 Direct-Sequence Systems A spread-spectrum signal is one with an extra modulation that expands the signal bandwidth greatly beyond what is required by the underlying coded-data modulation. Spread-spectrum

More information

(Refer Slide Time: 2:23)

(Refer Slide Time: 2:23) Data Communications Prof. A. Pal Department of Computer Science & Engineering Indian Institute of Technology, Kharagpur Lecture-11B Multiplexing (Contd.) Hello and welcome to today s lecture on multiplexing

More information

THE mobile wireless environment provides several unique

THE mobile wireless environment provides several unique 2796 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 44, NO. 7, NOVEMBER 1998 Multiaccess Fading Channels Part I: Polymatroid Structure, Optimal Resource Allocation Throughput Capacities David N. C. Tse,

More information

COPYRIGHTED MATERIAL. Introduction. 1.1 Communication Systems

COPYRIGHTED MATERIAL. Introduction. 1.1 Communication Systems 1 Introduction The reliable transmission of information over noisy channels is one of the basic requirements of digital information and communication systems. Here, transmission is understood both as transmission

More information

Performance of Single-tone and Two-tone Frequency-shift Keying for Ultrawideband

Performance of Single-tone and Two-tone Frequency-shift Keying for Ultrawideband erformance of Single-tone and Two-tone Frequency-shift Keying for Ultrawideband Cheng Luo Muriel Médard Electrical Engineering Electrical Engineering and Computer Science, and Computer Science, Massachusetts

More information

Optimum Power Allocation in Cooperative Networks

Optimum Power Allocation in Cooperative Networks Optimum Power Allocation in Cooperative Networks Jaime Adeane, Miguel R.D. Rodrigues, and Ian J. Wassell Laboratory for Communication Engineering Department of Engineering University of Cambridge 5 JJ

More information

Antennas and Propagation. Chapter 6b: Path Models Rayleigh, Rician Fading, MIMO

Antennas and Propagation. Chapter 6b: Path Models Rayleigh, Rician Fading, MIMO Antennas and Propagation b: Path Models Rayleigh, Rician Fading, MIMO Introduction From last lecture How do we model H p? Discrete path model (physical, plane waves) Random matrix models (forget H p and

More information

Frequency-Hopped Spread-Spectrum

Frequency-Hopped Spread-Spectrum Chapter Frequency-Hopped Spread-Spectrum In this chapter we discuss frequency-hopped spread-spectrum. We first describe the antijam capability, then the multiple-access capability and finally the fading

More information