COURSE MATERIAL Subject Name: Communication Theory UNIT V


NH-67, TRICHY MAIN ROAD, PULIYUR, C.F, KARUR DT. DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING COURSE MATERIAL Subject Name: Communication Theory Subject Code: Class/Sem: BE (ECE)/VII Staff Name: Devilatha D

UNIT V INFORMATION THEORY

Syllabus: Discrete Messages and Information Content, Concept of Amount of Information, Average Information, Entropy, Information Rate, Source Coding to Increase Average Information per Bit, Shannon-Fano Coding, Huffman Coding, Lempel-Ziv (LZ) Coding, Shannon's Theorem, Channel Capacity, Bandwidth-S/N Trade-off, Mutual Information and Channel Capacity, Rate Distortion Theory, Lossy Source Coding.

Introduction: Around 1949, Claude E. Shannon (Bell Laboratories) and Robert M. Fano (MIT) developed a coding procedure that generates a binary code tree. The procedure evaluates each symbol's probability and assigns code words of a corresponding length. Compared to other methods, Shannon-Fano coding is easy to implement. In practice, however, it is of minor importance, mainly because of its lower code efficiency in comparison to Huffman coding, as demonstrated later. Shannon-Fano coding is primarily useful when a simple algorithm with good performance and minimal programming effort is required; an example is the compression method IMPLODE as specified, e.g., in the ZIP format. To create a code tree according to Shannon and Fano, an ordered table listing the frequency of every symbol is required. Each part of the table is divided into two segments, and the algorithm must ensure that the upper and the lower segment have nearly the same sum of frequencies. This procedure is repeated until only single symbols are left.

Example: a source uses the five symbols A to E with the frequencies given below (62 symbols in total).

Symbol  Frequency  Code  Code Length  Total Length
A       24         00    2            48
B       12         01    2            24
C       10         10    2            20
D       8          110   3            24
E       8          111   3            24
total: 62 symbols; SF coded: 140 bit; linear (3 bit/symbol): 186 bit

Figure i. Code tree

The original data can thus be coded with an average length of 140/62 = 2.26 bit per symbol, whereas linear coding of 5 symbols would require 3 bit per symbol. But before a Shannon-Fano code tree can be generated, the frequency table must be known or it must be derived from preceding data.

Step-by-Step Construction (sum of the current segment and partial code after each split):

Symbol  Frequency  Step 1 (Sum/Code)  Step 2 (Sum/Code)  Step 3 (Sum/Code)
A       24         36 / 0             24 / 00            -
B       12         36 / 0             12 / 01            -
C       10         26 / 1             10 / 10            -
D       8          26 / 1             16 / 11            8 / 110
E       8          26 / 1             16 / 11            8 / 111

Code trees according to the steps mentioned above: Figure ii.

Shannon-Fano versus Huffman: The question is whether another method would provide a better code efficiency. According to information theory, a perfect code for this distribution should offer an average code length of 2.176 bit per symbol, or 134.882 bit in total.
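The recursive split described above can be captured in a short sketch. The following Python code is not from the course text; it is a minimal illustration of the Shannon-Fano procedure (function and variable names are chosen here for illustration) and reproduces the code lengths of the example.

def shannon_fano(symbols):
    """symbols: list of (symbol, frequency), sorted by decreasing frequency.
    Returns a dict symbol -> binary code string."""
    codes = {s: "" for s, _ in symbols}

    def split(group):
        if len(group) <= 1:
            return
        total = sum(f for _, f in group)
        # find the split point where the upper segment is closest to half the total
        running, best_i, best_diff = 0, 1, float("inf")
        for i, (_, f) in enumerate(group[:-1], start=1):
            running += f
            diff = abs(2 * running - total)
            if diff < best_diff:
                best_i, best_diff = i, diff
        upper, lower = group[:best_i], group[best_i:]
        for s, _ in upper:
            codes[s] += "0"
        for s, _ in lower:
            codes[s] += "1"
        split(upper)
        split(lower)

    split(list(symbols))
    return codes

freqs = [("A", 24), ("B", 12), ("C", 10), ("D", 8), ("E", 8)]
codes = shannon_fano(freqs)
total_bits = sum(f * len(codes[s]) for s, f in freqs)
print(codes)        # {'A': '00', 'B': '01', 'C': '10', 'D': '110', 'E': '111'}
print(total_bits)   # 140 bits for the 62 symbols, i.e. about 2.26 bit/symbol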

For comparison purposes the same example is encoded with the Huffman algorithm:

                  Shannon-Fano              Huffman
Sym.  Freq.   code   len.   tot.       code   len.   tot.
A     24      00     2      48         0      1      24
B     12      01     2      24         100    3      36
C     10      10     2      20         101    3      30
D     8       110    3      24         110    3      24
E     8       111    3      24         111    3      24
total 62                    140                      138

The Shannon-Fano code does not offer the best code efficiency for this particular data set, although that is not necessarily so for every frequency distribution. At best, Shannon-Fano coding gives the same result as Huffman coding; it never beats it. The optimum of 134.882 bit is reached by neither method.

1. HUFFMAN CODE
The algorithm described by David Huffman assigns every symbol to a leaf node of a binary code tree. These nodes are weighted by the number of occurrences of the corresponding symbol, called its frequency or cost.

The tree structure results from combining the nodes step by step until all of them are embedded in a root tree. The algorithm always combines the two nodes with the lowest frequency in a bottom-up procedure. Each new interior node gets the sum of the frequencies of its two child nodes.

1.1 Code Tree according to Huffman
The branches of the tree represent the binary values 0 and 1 according to the rules for common prefix-free code trees. The path from the root to the corresponding leaf node defines the particular code word. The following example is based on a data source using a set of five different symbols. The symbol frequencies are:

Symbol  Frequency
A       24
B       12
C       10
D       8
E       8
total: 62 symbols -> 186 bit (with 3 bit per code word)

The two rarest symbols 'E' and 'D' are connected first, followed by 'C' and 'B'. The new parent nodes have the frequencies 16 and 22 respectively and are brought together in the next step. The resulting node and the remaining symbol 'A' are subordinated to the root node, which is created in a final step.

1.2 Code Tree according to Huffman

Symbol  Frequency  Code  Code Length  Total Length
A       24         0     1            24
B       12         100   3            36
C       10         101   3            30
D       8          110   3            24
E       8          111   3            24
total: 138 bit (linear 3-bit code: 186 bit)

1.3 Characteristics of Huffman Codes
Huffman codes are prefix-free binary code trees, so all considerations for such trees apply accordingly. Codes generated by the Huffman algorithm achieve the ideal code length up to the bit boundary; the maximum deviation is less than 1 bit.

Example:

Symbol  P(x)   I(x) [bit]  Code  Length  P(x)*I(x)
A       0.387  1.369       0     1       0.530
B       0.194  2.369       100   3       0.459
C       0.161  2.632       101   3       0.425
D       0.129  2.954       110   3       0.381
E       0.129  2.954       111   3       0.381

theoretical minimum: 2.176 bit; Huffman code length: 2.226 bit

The computation of the entropy gives an average code length of 2.176 bit per symbol for the distribution above, whereas the Huffman code attains an average of 2.226 bit per symbol. Huffman coding therefore reaches 97.74% of the optimum. An even better result can be achieved only with arithmetic coding, whose usage is restricted by patents.

1.4 Variants
The construction of a Huffman code tree is based on a certain probability distribution. Three variants result from the question of how this distribution is determined: static probability distribution, dynamic probability distribution, and adaptive probability distribution. If the probability distribution is unknown at encoding time, it can only be determined by analysing the entire data set. This requires, on the one hand, an additional pass over the complete data before the encoding process starts; on the other hand, information about this distribution must be provided to the decoder in addition to the contents. Moreover, the distribution is usually not constant over the entire data set, which reduces the compression efficiency.

1.5 Static Distribution
Coding procedures with static Huffman codes operate with a predefined code tree. It is defined in advance for a given type of data and is independent of the particular contents. Normally such trees are based on a standard analysis, e.g. of English texts, and use the symbol frequencies found there. Provided that the source data correspond to the adopted frequency distribution, an acceptable coding efficiency is achieved. It is not necessary to store the Huffman tree or the frequencies within the encoded data; it is sufficient to keep them available in the encoder and decoder software. Additionally, the coding tables do not need to be generated at run-time.
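The figures above (entropy 2.176 bit, Huffman average 2.226 bit) can be checked with a short sketch. The following Python code is not part of the original notes; it is a minimal illustration that builds a Huffman code with a priority queue and compares its average length with the entropy.

import heapq, math

freqs = {"A": 24, "B": 12, "C": 10, "D": 8, "E": 8}
total = sum(freqs.values())

# entropy in bit per symbol
H = -sum((f / total) * math.log2(f / total) for f in freqs.values())

# Huffman code lengths via a min-heap of (frequency, unique id, {symbol: code}) entries
heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
heapq.heapify(heap)
counter = len(heap)
while len(heap) > 1:
    f1, _, c1 = heapq.heappop(heap)            # two lowest-frequency nodes
    f2, _, c2 = heapq.heappop(heap)
    merged = {s: "0" + c for s, c in c1.items()}
    merged.update({s: "1" + c for s, c in c2.items()})
    heapq.heappush(heap, (f1 + f2, counter, merged))
    counter += 1
codes = heap[0][2]

avg_len = sum(freqs[s] * len(codes[s]) for s in freqs) / total
print(round(H, 3), round(avg_len, 3))   # ~2.176 and ~2.226 bit per symbol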

The primary problem of a static, predefined code tree arises if the real probability distribution differs strongly from the assumptions. In this case the compression rate decreases drastically.

1.6 Dynamic Distribution
Instead of a static tree that is identical for any type of data, a dynamic analysis of the probability distribution can take place. Codes generated from such trees match the real conditions clearly better than standard distributions. The major disadvantage of this procedure is that the information about the Huffman tree has to be embedded into the compressed file or data transmission: a code table or the symbol frequencies must be part of the header data. If one dynamic Huffman code is applied to the entire data set, internal variations are not taken into account. A practical solution is to subdivide the data into segments and update the code tree periodically, but the header size increases due to this additional data.

1.7 Adaptive Distribution
The adaptive coding procedure uses a code tree that is continuously adapted to the previously encoded or decoded data. Starting with an empty tree or a standard distribution, each encoded symbol is used to refine the code tree. In this way a continuous adaptation is achieved and local variations are compensated at runtime. Adaptive Huffman codes that start with an empty tree use a special control character to identify new symbols that are not yet part of the tree. This variant is characterized by its minimal header data requirements, but the attainable compression rate is unfavourable at the beginning of the coding or for small files.

1.8 Dynamic Huffman Code
This coding scheme presupposes a prior determination of the symbol distribution. The actual algorithm starts with this distribution, which is regarded as constant over the entire data. If the symbol distribution changes, then either losses in compression or a completely new construction of the code tree (including the header data this requires) must be accepted. In the following, an example of a simple code tree is presented together with some principal considerations.

1.9 Construction of the Tree
The Huffman algorithm generates the most efficient binary code tree for a given frequency distribution. The prerequisite is a table with all symbols and their frequencies; every symbol represents a leaf node of the tree. The following general procedure is applied: search for the two nodes with the lowest frequency that are not yet assigned to a parent node; couple these nodes together under a new interior node; add both frequencies and assign the sum to the new interior node. The procedure is repeated until all nodes are combined under a root node.

Example: "abracadabra"

Symbol  Frequency
a       5
b       2
r       2
c       1
d       1

According to the outlined scheme the symbols "d" and "c" are coupled together in a first step; the new interior node gets the frequency 2.

Lempel-Ziv-77 (LZ77)

1.10 Development: Jacob Ziv and Abraham Lempel introduced a simple and efficient compression method in their article "A Universal Algorithm for Sequential Data Compression". This algorithm is referred to as LZ77 in honour of the authors and the publication year 1977. LZ77 is a dictionary-based algorithm that addresses byte sequences in the previously coded contents instead of coding the original data directly. Only one coding scheme exists; all data are coded in the same form: address of the matching, already coded contents; sequence length; first deviating symbol.

If no identical byte sequence is available in the former contents, the address 0, the sequence length 0 and the new symbol are coded.

Example "abracadabra":

coded / remaining     Addr.  Length  Deviating symbol
- / abracadabra       0      0       'a'
a / bracadabra        0      0       'b'
ab / racadabra        0      0       'r'
abr / acadabra        3      1       'c'
abrac / adabra        2      1       'd'
abracad / abra        7      4       ''

Because each byte sequence is extended by the first symbol deviating from the former contents, the set of already used symbols grows continuously, and no additional coding scheme is necessary. This allows an easy implementation with minimal requirements on the encoder and decoder.

Restrictions: To keep runtime and buffer requirements in an acceptable range, the addressing must be limited to a certain maximum. Contents lying outside this range are not considered for coding and need not be covered by the size of the addressing pointer.

Compression Efficiency: The achievable compression rate depends only on repeating sequences. Other types of redundancy, such as an unequal probability distribution of the symbol set, cannot be reduced. For that reason the compression of a pure LZ77 implementation is relatively low. A significantly better compression rate can be obtained by combining LZ77 with an additional entropy coding algorithm such as Huffman or Shannon-Fano coding; the widespread Deflate compression method (used e.g. for GZIP and ZIP) uses Huffman codes, for instance.
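The triple coding in the example above can be sketched in a few lines. The following Python code is not from the notes; it is a minimal, unoptimized illustration of LZ77 encoding with an unbounded search window (a real implementation would limit the window as described under Restrictions, and would allow matches to overlap the lookahead).

def lz77_encode(data):
    """Return a list of (distance, length, next_symbol) triples.
    distance = 0, length = 0 means: no match found, emit the raw symbol."""
    i, out = 0, []
    while i < len(data):
        best_len, best_dist = 0, 0
        # search the already coded part data[:i] for the longest match;
        # on equal length the nearest (rightmost) match is kept, as in the example table
        for j in range(i):
            length = 0
            while (i + length < len(data)
                   and j + length < i          # keep the match inside the coded region (simplification)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > 0 and length >= best_len:
                best_len, best_dist = length, i - j
        next_sym = data[i + best_len] if i + best_len < len(data) else ""
        out.append((best_dist, best_len, next_sym))
        i += best_len + 1
    return out

print(lz77_encode("abracadabra"))
# [(0, 0, 'a'), (0, 0, 'b'), (0, 0, 'r'), (3, 1, 'c'), (2, 1, 'd'), (7, 4, '')]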

2. LEMPEL-ZIV-STORER-SZYMANSKI (LZSS)
The LZSS coding procedure was invented by James A. Storer and Thomas G. Szymanski in 1982; theoretical and practical considerations were published in their article "Data Compression via Textual Substitution". The compression method is based on the work of Abraham Lempel and Jacob Ziv known as LZ77.

LZSS encodes data as a string consisting of the original data and pointers into a dictionary. As in LZ77, the already coded data serve as the dictionary. The access to former contents has the form [address + sequence length].

Example: abracadabra -> abracad(1,4)

Only sequences exceeding a certain minimum length are coded, i.e. only if the substitution offers a compression effect. That is the main difference to LZ77, which always uses the form [address + sequence length + deviating symbol], even if this enlarges the compressed data. In practice LZSS requires a flag or special character to differentiate between uncompressed contents and pointers, which negatively influences the compression rate. The original publication describes different coding schemes varying in their implementation, but the fundamental mechanism is based on the same principles.

Lempel-Ziv-78 (LZ78)

One year after publishing LZ77, Jacob Ziv and Abraham Lempel introduced another compression method ("Compression of Individual Sequences via Variable-Rate Coding"). Accordingly, this procedure is called LZ78.

2.1 Fundamental algorithm: LZ78 is based on a dictionary that is created dynamically at runtime. Both the encoding and the decoding process use the same rules to ensure that an identical dictionary is available. This dictionary contains every sequence already used to build the former contents. The compressed data have the general form: index addressing an entry of the dictionary; first deviating symbol. In contrast to LZ77, no combination of address and sequence length is used; only the index into the dictionary is stored. The mechanism of adding the first deviating symbol is kept from LZ77.

Example "abracadabra":

coded / remaining     Index  Deviating symbol  New dictionary entry
- / abracadabra       0      'a'               1: "a"
a / bracadabra        0      'b'               2: "b"
ab / racadabra        0      'r'               3: "r"
abr / acadabra        1      'c'               4: "ac"
abrac / adabra        1      'd'               5: "ad"
abracad / abra        1      'b'               6: "ab"
abracadab / ra        3      'a'               7: "ra"

An LZ78 dictionary grows slowly, so a larger amount of data must be processed before relevant compression is achieved. The compression mainly depends on the size of the dictionary, but a larger dictionary requires more effort for addressing and administration at runtime. In practice the dictionary would be implemented as a tree to minimize the search effort: starting with the current symbol, the algorithm evaluates for every succeeding symbol whether the extended sequence is available in the tree; once a leaf node is reached, the corresponding index is written to the compressed data. The decoder can be realized with a simple table, because it does not need the search function. Since the dictionary grows during the coding process, the size needed for addressing the table increases continuously, and the requirements for storage and searching grow as well; a limitation of the dictionary and corresponding update mechanisms are therefore required. LZ78 is the basis for other compression methods such as the widespread LZW, used e.g. for GIF graphics.
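The dictionary build-up in the table above can be reproduced with a short sketch. The following Python code is not from the notes; it is a minimal illustration of LZ78 encoding using a plain dict instead of the tree structure mentioned above.

def lz78_encode(data):
    """Return a list of (index, deviating_symbol) pairs and the dictionary built up.
    Index 0 means 'no prefix'; indices >= 1 refer to dictionary entries."""
    dictionary = {}          # sequence -> index
    out, phrase = [], ""
    for sym in data:
        if phrase + sym in dictionary:
            phrase += sym    # keep extending the current phrase
        else:
            index = dictionary.get(phrase, 0)
            out.append((index, sym))
            dictionary[phrase + sym] = len(dictionary) + 1
            phrase = ""
    if phrase:               # input ended inside a known phrase
        out.append((dictionary[phrase], ""))
    return out, dictionary

pairs, dictionary = lz78_encode("abracadabra")
print(pairs)       # [(0, 'a'), (0, 'b'), (0, 'r'), (1, 'c'), (1, 'd'), (1, 'b'), (3, 'a')]
print(dictionary)  # {'a': 1, 'b': 2, 'r': 3, 'ac': 4, 'ad': 5, 'ab': 6, 'ra': 7}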

2.2 LEMPEL-ZIV-WELCH (LZW)
The LZW compression method is derived from LZ78 as introduced by Jacob Ziv and Abraham Lempel. It was invented by Terry A. Welch, who published his considerations in 1984 in the article "A Technique for High-Performance Data Compression". At that time Terry A. Welch was employed in a leading position at the Sperry Research Center. The LZW method is covered by patents valid in a number of countries, e.g. the USA, Europe and Japan. Unisys meanwhile holds the rights, but there are probably further patents from other companies regarding LZW; some of these patents expire in 2003 (USA) and 2004 (Europe, Japan). LZW is an important part of a variety of data formats: graphic formats such as GIF, TIFF (optionally) and PostScript (optionally) use LZW for entropy coding.

2.2.1 Fundamental algorithm: LZW develops a dictionary that contains every byte sequence already coded, and the compressed data consist exclusively of indices into this dictionary. Before starting, the dictionary is preset with entries for the 256 single-byte symbols; every following entry represents a sequence longer than one byte. The algorithm presented by Terry Welch defines mechanisms to create the dictionary and to ensure that it is identical for both the encoding and the decoding process.

2.3 Arithmetic Coding
Arithmetic coding is the most efficient method to code symbols according to the probability of their occurrence. The average code length corresponds exactly to the possible minimum given by information theory; deviations caused by the bit resolution of binary code trees do not exist. Compared with a binary Huffman code tree, arithmetic coding therefore offers a clearly better compression rate, but its implementation is more complex. Unfortunately its usage is restricted by patents; as far as known, arithmetic coding may not be used without acquiring licences. Arithmetic coding is part of the JPEG data format, where it can be used for the final entropy coding as an alternative to Huffman coding. In spite of its lower efficiency, Huffman coding remains the standard there due to the legal restrictions mentioned above.

In electrical engineering, computer science and information theory, channel capacity is the tightest upper bound on the amount of information that can be reliably transmitted over a communications channel. By the noisy-channel coding theorem, the channel capacity of a given channel is the limiting information rate (in units of information per unit time) that can be achieved with arbitrarily small error probability. Information theory, developed by Claude E. Shannon during World War II, defines the notion of channel capacity and provides a mathematical model by which one can compute it. The key result states that the capacity of the channel, as defined above, is given by the maximum of the mutual information between the input and output of the channel, where the maximization is with respect to the input distribution.
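The key result mentioned above can be written compactly. The following formula does not appear at this point in the notes; it is the standard capacity expression, which reappears in Section 6 below:

C = \max_{p_X(x)} I(X;Y)

where the maximum is taken over all possible input distributions p_X(x).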

2.4 RATE DISTORTION THEORY
Rate distortion theory is a major branch of information theory which provides the theoretical foundations for lossy data compression; it addresses the problem of determining the minimal amount of entropy (or information) R that must be communicated over a channel so that the source (input signal) can be approximately reconstructed at the receiver (output signal) without exceeding a given distortion D.

Introduction
Rate distortion theory gives theoretical bounds for how much compression can be achieved using lossy compression methods. Many of the existing audio, speech, image, and video compression techniques have transforms, quantization, and bit-rate allocation procedures that capitalize on the general shape of rate distortion functions. Rate distortion theory was created by Claude Shannon in his foundational work on information theory. In rate distortion theory, the rate is usually understood as the number of bits per data sample to be stored or transmitted. The notion of distortion is a subject of ongoing discussion. In the simplest case (which is actually used in most cases), the distortion is defined as the variance of the difference between input and output signal (i.e., the mean squared error of the difference). However, since most lossy compression techniques operate on data that will be perceived by human consumers (listening to music, watching pictures and video), the distortion measure should preferably be modeled on human perception and perhaps aesthetics: much like the use of probability in lossless compression, distortion measures can ultimately be identified with loss functions as used in Bayesian estimation and decision theory. In audio compression, perceptual models (and therefore perceptual distortion measures) are relatively well developed and routinely used in compression techniques such as MP3 or Vorbis, but they are often not easy to include in rate distortion theory. In image and video compression, the human perception models are less well developed, and their inclusion is mostly limited to the JPEG and MPEG weighting (quantization, normalization) matrices.

Rate distortion functions
The functions that relate the rate and the distortion are found as the solution of the following minimization problem:

R(D) = \min_{Q_{Y|X}(y|x): D_Q \le D^*} I_Q(Y;X)

Here Q_{Y|X}(y|x), sometimes called a test channel, is the conditional probability density function (PDF) of the communication channel output (compressed signal) Y for a given input (original signal) X, and I_Q(Y;X) is the mutual information between Y and X, defined as

I(Y;X) = H(Y) - H(Y|X)

where H(Y) and H(Y|X) are the entropy of the output signal Y and the conditional entropy of the output signal given the input signal, respectively:

H(Y) = -\int P_Y(y) \log_2 P_Y(y) \, dy
H(Y|X) = -\int \int Q_{Y|X}(y|x) P_X(x) \log_2 Q_{Y|X}(y|x) \, dx \, dy

The problem can also be formulated as a distortion rate function, where we find the infimum over achievable distortions for a given rate constraint. The relevant expression is:

D(R) = \inf_{Q_{Y|X}(y|x): I_Q(Y;X) \le R} D_Q

The two formulations lead to functions which are inverses of each other. The mutual information can be understood as a measure of the prior uncertainty the receiver has about the sender's signal (H(Y)), diminished by the uncertainty that is left after receiving information about the sender's signal (H(Y|X)). Of course the decrease in uncertainty is due to the communicated amount of information, which is I(Y;X). As an example, if there is no communication at all, then H(Y|X) = H(Y) and I(Y;X) = 0. Alternatively, if the communication channel is perfect and the received signal Y is identical to the signal X at the sender, then H(Y|X) = 0 and I(Y;X) = H(Y) = H(X). In the definition of the rate distortion function, D_Q and D* are the distortion between X and Y for a given Q_{Y|X}(y|x) and the prescribed maximum distortion, respectively. When we use the mean squared error as the distortion measure, we have (for amplitude-continuous signals):

D_Q = \int \int (x - y)^2 Q_{Y|X}(y|x) P_X(x) \, dx \, dy

As the above equations show, calculating a rate distortion function requires a stochastic description of the input X in terms of the PDF P_X(x), and then aims at finding the conditional PDF Q_{Y|X}(y|x) that minimizes the rate for a given distortion D*. These definitions can be formulated measure-theoretically to account for discrete and mixed random variables as well. An analytical solution to this minimization problem is often difficult to obtain, except in some instances, for which we next offer two of the best-known examples. The rate distortion function of any source is known to obey several fundamental properties, the most important ones being that it is a continuous, monotonically decreasing, convex (U-shaped) function; thus the shape of the function in the examples is typical (even measured rate distortion functions in real life tend to have very similar forms).
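The notes refer to "two of the best-known examples" without reproducing them. The expressions below are not taken from the notes; they are the standard closed-form results usually quoted here, stated for reference. For a memoryless Gaussian source with variance \sigma^2 under squared-error distortion, and for a Bernoulli(p) source under Hamming distortion:

R(D) = \tfrac{1}{2} \log_2\!\left(\sigma^2 / D\right) for 0 \le D \le \sigma^2, and R(D) = 0 for D > \sigma^2

R(D) = H_b(p) - H_b(D) for 0 \le D \le \min(p, 1-p), and R(D) = 0 otherwise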

Although analytical solutions to this problem are scarce, there are upper and lower bounds on these functions, including the famous Shannon lower bound (SLB), which, in the case of squared error and memoryless sources, states that for arbitrary sources with finite differential entropy

R(D) \ge h(X) - h(D)

where h(D) is the differential entropy of a Gaussian random variable with variance D. This lower bound is extensible to sources with memory and to other distortion measures. One important feature of the SLB is that it is asymptotically tight in the low-distortion regime for a wide class of sources, and on some occasions it actually coincides with the rate distortion function. Shannon lower bounds can generally be found if the distortion between any two values can be expressed as a function of the difference between these two values. The Blahut-Arimoto algorithm, co-invented by Richard Blahut, is an elegant iterative technique for numerically obtaining rate distortion functions of arbitrary finite input/output alphabet sources, and much work has been done to extend it to more general problem instances.

3. MUTUAL INFORMATION
In probability theory and information theory, the mutual information (sometimes known by the archaic term transinformation) of two random variables is a quantity that measures the mutual dependence of the two variables. The most common unit of measurement of mutual information is the bit, when logarithms to the base 2 are used.

3.1 Definition of mutual information
Formally, the mutual information of two discrete random variables X and Y can be defined as

I(X;Y) = \sum_{y} \sum_{x} p(x,y) \log \frac{p(x,y)}{p_1(x) \, p_2(y)}

where p(x,y) is the joint probability distribution function of X and Y, and p_1(x) and p_2(y) are the marginal probability distribution functions of X and Y respectively. In the continuous case, the summation is replaced by a definite double integral:

I(X;Y) = \int \int p(x,y) \log \frac{p(x,y)}{p_1(x) \, p_2(y)} \, dx \, dy

where p(x,y) is now the joint probability density function of X and Y, and p_1(x) and p_2(y) are the marginal probability density functions of X and Y respectively.
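A small numerical sketch can make the discrete definition concrete. The following Python code is not from the notes; it computes I(X;Y) in bits for an assumed 2x2 joint distribution.

import math

# assumed joint distribution p(x, y) of two binary random variables
p_xy = {(0, 0): 0.4, (0, 1): 0.1,
        (1, 0): 0.1, (1, 1): 0.4}

p_x = {x: sum(p for (xx, _), p in p_xy.items() if xx == x) for x in (0, 1)}
p_y = {y: sum(p for (_, yy), p in p_xy.items() if yy == y) for y in (0, 1)}

# I(X;Y) = sum over (x, y) of p(x,y) * log2( p(x,y) / (p(x) * p(y)) )
I = sum(p * math.log2(p / (p_x[x] * p_y[y]))
        for (x, y), p in p_xy.items() if p > 0)
print(round(I, 4))   # about 0.2781 bit; 0 would mean X and Y are independent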

These definitions are ambiguous because the base of the log function is not specified. To disambiguate, the function I could be parameterized as I(X,Y,b), where b is the base. Alternatively, since the most common unit of measurement of mutual information is the bit, a base of 2 can be specified. Intuitively, mutual information measures the information that X and Y share: it measures how much knowing one of these variables reduces our uncertainty about the other. For example, if X and Y are independent, then knowing X does not give any information about Y and vice versa, so their mutual information is zero. At the other extreme, if X and Y are identical then all information conveyed by X is shared with Y: knowing X determines the value of Y and vice versa. As a result, in the case of identity the mutual information is the same as the uncertainty contained in Y (or X) alone, namely the entropy of Y (or X; clearly, if X and Y are identical they have equal entropy). Mutual information quantifies the dependence between the joint distribution of X and Y and what the joint distribution would be if X and Y were independent. Mutual information is a measure of dependence in the following sense: I(X;Y) = 0 if and only if X and Y are independent random variables. This is easy to see in one direction: if X and Y are independent, then p(x,y) = p(x) p(y), and therefore

\log \frac{p(x,y)}{p(x)\,p(y)} = \log 1 = 0

so every term of the sum vanishes and I(X;Y) = 0. Moreover, mutual information is nonnegative (i.e. I(X;Y) >= 0) and symmetric (i.e. I(X;Y) = I(Y;X)).

4. ENTROPY
In information theory, entropy is a measure of the uncertainty associated with a random variable. In this context, the term usually refers to the Shannon entropy, which quantifies the expected value of the information contained in a message, usually in units such as bits. Equivalently, the Shannon entropy is a measure of the average information content one is missing when one does not know the value of the random variable. The concept was introduced by Claude E. Shannon in his 1948 paper "A Mathematical Theory of Communication". Shannon's entropy represents an absolute limit on the best possible lossless compression of any communication, under certain constraints: treating the messages to be encoded as a sequence of independent and identically distributed random variables, Shannon's source coding theorem shows that, in the limit, the average length of the shortest possible representation of the messages in a given alphabet is their entropy divided by the logarithm of the number of symbols in the target alphabet.
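The section above refers to the Shannon entropy without stating its formula. For reference (this expression is not printed at this point in the notes, but it is the standard definition), the entropy of a discrete random variable X with probabilities p(x), measured in bits, and the information content of a single outcome are:

H(X) = -\sum_{x} p(x) \log_2 p(x), \qquad I(x) = -\log_2 p(x)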

A fair coin has an entropy of one bit. However, if the coin is not fair, then the uncertainty is lower (if asked to bet on the next outcome, we would bet preferentially on the most frequent result), and thus the Shannon entropy is lower. Mathematically, a coin flip is an example of a Bernoulli trial, and its entropy is given by the binary entropy function. A long string of repeating characters has an entropy rate of 0, since every character is predictable. The entropy rate of English text is between 1.0 and 1.5 bits per letter, or as low as 0.6 to 1.3 bits per letter, according to estimates by Shannon based on human experiments.

Coding theory is one of the most important and direct applications of information theory. It can be subdivided into source coding theory and channel coding theory. Using a statistical description of the data, information theory quantifies the number of bits needed to describe the data, which is the information entropy of the source.

5. SOURCE CODING
Data compression (source coding): There are two formulations of the compression problem: 1. lossless data compression, where the data must be reconstructed exactly; 2. lossy data compression, which allocates the bits needed to reconstruct the data within a specified fidelity level measured by a distortion function; this subset of information theory is called rate distortion theory.

Error-correcting codes (channel coding): While data compression removes as much redundancy as possible, an error-correcting code adds just the right kind of redundancy (i.e., error correction) needed to transmit the data efficiently and faithfully across a noisy channel. This division of coding theory into compression and transmission is justified by the information transmission theorems, or source-channel separation theorems, which justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel), intermediary "helpers" (the relay channel), or more general networks, compression followed by transmission may no longer be optimal. Network information theory refers to these multi-agent communication models.
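The coin example above can be checked numerically. The following Python snippet is not from the notes; it evaluates the binary entropy function H_b(p) for a fair and a biased coin.

import math

def binary_entropy(p):
    """Entropy in bits of a Bernoulli(p) trial (0 * log 0 is taken as 0)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.5))            # 1.0 bit   (fair coin)
print(round(binary_entropy(0.9), 3))  # 0.469 bit (biased coin: less uncertainty)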

5.1 Source theory
Any process that generates successive messages can be considered a source of information. A memoryless source is one in which each message is an independent, identically distributed random variable, whereas the properties of ergodicity and stationarity impose more general constraints. All such sources are stochastic. These terms are well studied in their own right outside information theory.

5.2 Rate
The information rate is the average entropy per symbol. For memoryless sources this is merely the entropy of each symbol, while in the case of a stationary stochastic process it is

r = \lim_{n \to \infty} H(X_n | X_{n-1}, X_{n-2}, \ldots, X_1)

that is, the conditional entropy of a symbol given all the previously generated symbols. For the more general case of a process that is not necessarily stationary, the average rate is

r = \lim_{n \to \infty} \frac{1}{n} H(X_1, X_2, \ldots, X_n)

that is, the limit of the joint entropy per symbol. For stationary sources, these two expressions give the same result. It is common in information theory to speak of the "rate" or "entropy" of a language; this is appropriate, for example, when the source of information is English prose. The rate of a source of information is related to its redundancy and to how well it can be compressed, which is the subject of source coding.

6. CHANNEL CAPACITY
Communication over a channel, such as an Ethernet cable, is the primary motivation of information theory. As anyone who has ever used a telephone (mobile or landline) knows, however, such channels often fail to produce an exact reconstruction of the signal; noise, periods of silence, and other forms of signal corruption often degrade quality. How much information can one hope to communicate over a noisy (or otherwise imperfect) channel? Consider the communication process over a discrete channel; a simple model of the process is: transmitted message X -> channel p(y|x) -> received message Y. Here X represents the space of messages transmitted and Y the space of messages received during a unit time over our channel. Let p(y|x) be the conditional probability distribution function of Y given X.

We will consider p(y|x) to be an inherent, fixed property of our communications channel (representing the nature of the noise of the channel). Then the joint distribution of X and Y is completely determined by our channel and by our choice of f(x), the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the rate of information, or the signal, we can communicate over the channel. The appropriate measure for this is the mutual information, and this maximum mutual information is called the channel capacity and is given by

C = \max_{f(x)} I(X;Y)

This capacity has the following property, related to communicating at information rate R (where R is usually bits per symbol). For any information rate R < C and coding error ε > 0, there exists, for large enough N, a code of length N and rate R and a decoding algorithm such that the maximal probability of block error is at most ε; that is, it is always possible to transmit with arbitrarily small block error. In addition, for any rate R > C, it is impossible to transmit with arbitrarily small block error. Channel coding is concerned with finding such nearly optimal codes that can be used to transmit data over a noisy channel with a small coding error at a rate near the channel capacity.

6.1 Capacity of particular channel models
A continuous-time analog communications channel subject to Gaussian noise: see the Shannon-Hartley theorem, C = B log_2(1 + S/N) bits per second for bandwidth B and signal-to-noise ratio S/N; this is the bandwidth-S/N trade-off listed in the syllabus.

A binary symmetric channel (BSC) with crossover probability p is a binary-input, binary-output channel that flips the input bit with probability p. The BSC has a capacity of 1 - H_b(p) bits per channel use, where H_b is the binary entropy function to the base-2 logarithm:

H_b(p) = -p \log_2 p - (1 - p) \log_2 (1 - p)
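A quick numerical check of these capacity formulas (this snippet is not part of the notes; the bec_capacity helper anticipates the erasure channel described in the next paragraph):

import math

def binary_entropy(p):
    # H_b(p) in bits, with H_b(0) = H_b(1) = 0
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover probability p."""
    return 1.0 - binary_entropy(p)

def bec_capacity(p):
    """Capacity of a binary erasure channel with erasure probability p."""
    return 1.0 - p

print(round(bsc_capacity(0.1), 3))  # 0.531 bit per channel use
print(bec_capacity(0.1))            # 0.9 bit per channel use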

A binary erasure channel (BEC) with erasure probability p is a binary-input, ternary-output channel: the possible channel outputs are 0, 1, and a third symbol 'e' called an erasure. The erasure represents complete loss of information about an input bit. The capacity of the BEC is 1 - p bits per channel use.

6.2 Applications to other fields
Intelligence uses and secrecy applications: Information-theoretic concepts apply to cryptography and cryptanalysis. Turing's information unit, the ban, was used in the Ultra project, breaking the German Enigma machine code and hastening the end of WWII in Europe. Shannon himself defined an important concept now called the unicity distance; based on the redundancy of the plaintext, it attempts to give the minimum amount of ciphertext necessary to ensure unique decipherability. Information theory leads us to believe it is much more difficult to keep secrets than it might first appear. A brute-force attack can break systems based on asymmetric-key algorithms or on the most commonly used methods of symmetric-key algorithms (sometimes called secret-key algorithms), such as block ciphers. The security of all such methods currently comes from the assumption that no known attack can break them in a practical amount of time. Information-theoretic security refers to methods such as the one-time pad that are not vulnerable to such brute-force attacks. In such cases, the positive conditional mutual information between the plaintext and ciphertext (conditioned on the key) can ensure proper transmission, while the unconditional mutual information between the plaintext and ciphertext remains zero, resulting in absolutely secure communications. In other words, an eavesdropper would not be able to improve his or her guess of the plaintext by gaining knowledge of the ciphertext but not of the key. However, as in any other cryptographic system, care must be taken to correctly apply even information-theoretically secure methods; the Venona project was able to crack the one-time pads of the Soviet Union due to their improper reuse of key material.


More information

COMMUNICATION SYSTEMS

COMMUNICATION SYSTEMS COMMUNICATION SYSTEMS 4TH EDITION Simon Hayhin McMaster University JOHN WILEY & SONS, INC. Ш.! [ BACKGROUND AND PREVIEW 1. The Communication Process 1 2. Primary Communication Resources 3 3. Sources of

More information

6.004 Computation Structures Spring 2009

6.004 Computation Structures Spring 2009 MIT OpenCourseWare http://ocw.mit.edu 6.004 Computation Structures Spring 2009 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. Welcome to 6.004! Course

More information

6.450: Principles of Digital Communication 1

6.450: Principles of Digital Communication 1 6.450: Principles of Digital Communication 1 Digital Communication: Enormous and normally rapidly growing industry, roughly comparable in size to the computer industry. Objective: Study those aspects of

More information

# 12 ECE 253a Digital Image Processing Pamela Cosman 11/4/11. Introductory material for image compression

# 12 ECE 253a Digital Image Processing Pamela Cosman 11/4/11. Introductory material for image compression # 2 ECE 253a Digital Image Processing Pamela Cosman /4/ Introductory material for image compression Motivation: Low-resolution color image: 52 52 pixels/color, 24 bits/pixel 3/4 MB 3 2 pixels, 24 bits/pixel

More information

PROBABILITY AND STATISTICS Vol. II - Information Theory and Communication - Tibor Nemetz INFORMATION THEORY AND COMMUNICATION

PROBABILITY AND STATISTICS Vol. II - Information Theory and Communication - Tibor Nemetz INFORMATION THEORY AND COMMUNICATION INFORMATION THEORY AND COMMUNICATION Tibor Nemetz Rényi Mathematical Institute, Hungarian Academy of Sciences, Budapest, Hungary Keywords: Shannon theory, alphabet, capacity, (transmission) channel, channel

More information

UNIT 7C Data Representation: Images and Sound Principles of Computing, Carnegie Mellon University CORTINA/GUNA

UNIT 7C Data Representation: Images and Sound Principles of Computing, Carnegie Mellon University CORTINA/GUNA UNIT 7C Data Representation: Images and Sound Carnegie Mellon University CORTINA/GUNA 1 Announcements Pa6 is available now 2 Pixels An image is stored in a computer as a sequence of pixels, picture elements.

More information

The Need for Data Compression. Data Compression (for Images) -Compressing Graphical Data. Lossy vs Lossless compression

The Need for Data Compression. Data Compression (for Images) -Compressing Graphical Data. Lossy vs Lossless compression The Need for Data Compression Data Compression (for Images) -Compressing Graphical Data Graphical images in bitmap format take a lot of memory e.g. 1024 x 768 pixels x 24 bits-per-pixel = 2.4Mbyte =18,874,368

More information

Digital Image Processing Introduction

Digital Image Processing Introduction Digital Processing Introduction Dr. Hatem Elaydi Electrical Engineering Department Islamic University of Gaza Fall 2015 Sep. 7, 2015 Digital Processing manipulation data might experience none-ideal acquisition,

More information

Ch. 3: Image Compression Multimedia Systems

Ch. 3: Image Compression Multimedia Systems 4/24/213 Ch. 3: Image Compression Multimedia Systems Prof. Ben Lee (modified by Prof. Nguyen) Oregon State University School of Electrical Engineering and Computer Science Outline Introduction JPEG Standard

More information

6. FUNDAMENTALS OF CHANNEL CODER

6. FUNDAMENTALS OF CHANNEL CODER 82 6. FUNDAMENTALS OF CHANNEL CODER 6.1 INTRODUCTION The digital information can be transmitted over the channel using different signaling schemes. The type of the signal scheme chosen mainly depends on

More information

Image Processing Computer Graphics I Lecture 20. Display Color Models Filters Dithering Image Compression

Image Processing Computer Graphics I Lecture 20. Display Color Models Filters Dithering Image Compression 15-462 Computer Graphics I Lecture 2 Image Processing April 18, 22 Frank Pfenning Carnegie Mellon University http://www.cs.cmu.edu/~fp/courses/graphics/ Display Color Models Filters Dithering Image Compression

More information

Huffman Coding For Digital Photography

Huffman Coding For Digital Photography Huffman Coding For Digital Photography Raydhitya Yoseph 13509092 Program Studi Teknik Informatika Sekolah Teknik Elektro dan Informatika Institut Teknologi Bandung, Jl. Ganesha 10 Bandung 40132, Indonesia

More information

Game Theory and Randomized Algorithms

Game Theory and Randomized Algorithms Game Theory and Randomized Algorithms Guy Aridor Game theory is a set of tools that allow us to understand how decisionmakers interact with each other. It has practical applications in economics, international

More information

On the efficiency of luminance-based palette reordering of color-quantized images

On the efficiency of luminance-based palette reordering of color-quantized images On the efficiency of luminance-based palette reordering of color-quantized images Armando J. Pinho 1 and António J. R. Neves 2 1 Dep. Electrónica e Telecomunicações / IEETA, University of Aveiro, 3810

More information

Chapter IV THEORY OF CELP CODING

Chapter IV THEORY OF CELP CODING Chapter IV THEORY OF CELP CODING CHAPTER IV THEORY OF CELP CODING 4.1 Introduction Wavefonn coders fail to produce high quality speech at bit rate lower than 16 kbps. Source coders, such as LPC vocoders,

More information

Topic 1: defining games and strategies. SF2972: Game theory. Not allowed: Extensive form game: formal definition

Topic 1: defining games and strategies. SF2972: Game theory. Not allowed: Extensive form game: formal definition SF2972: Game theory Mark Voorneveld, mark.voorneveld@hhs.se Topic 1: defining games and strategies Drawing a game tree is usually the most informative way to represent an extensive form game. Here is one

More information

TCET3202 Analog and digital Communications II

TCET3202 Analog and digital Communications II NEW YORK CITY COLLEGE OF TECHNOLOGY The City University of New York DEPARTMENT: SUBJECT CODE AND TITLE: COURSE DESCRIPTION: REQUIRED COURSE Electrical and Telecommunications Engineering Technology TCET3202

More information

Lab/Project Error Control Coding using LDPC Codes and HARQ

Lab/Project Error Control Coding using LDPC Codes and HARQ Linköping University Campus Norrköping Department of Science and Technology Erik Bergfeldt TNE066 Telecommunications Lab/Project Error Control Coding using LDPC Codes and HARQ Error control coding is an

More information

Pulse Code Modulation

Pulse Code Modulation Pulse Code Modulation Modulation is the process of varying one or more parameters of a carrier signal in accordance with the instantaneous values of the message signal. The message signal is the signal

More information

Communications Overhead as the Cost of Constraints

Communications Overhead as the Cost of Constraints Communications Overhead as the Cost of Constraints J. Nicholas Laneman and Brian. Dunn Department of Electrical Engineering University of Notre Dame Email: {jnl,bdunn}@nd.edu Abstract This paper speculates

More information

Yale University Department of Computer Science

Yale University Department of Computer Science LUX ETVERITAS Yale University Department of Computer Science Secret Bit Transmission Using a Random Deal of Cards Michael J. Fischer Michael S. Paterson Charles Rackoff YALEU/DCS/TR-792 May 1990 This work

More information

MAS.160 / MAS.510 / MAS.511 Signals, Systems and Information for Media Technology Fall 2007

MAS.160 / MAS.510 / MAS.511 Signals, Systems and Information for Media Technology Fall 2007 MIT OpenCourseWare http://ocw.mit.edu MAS.160 / MAS.510 / MAS.511 Signals, Systems and Information for Media Technology Fall 2007 For information about citing these materials or our Terms of Use, visit:

More information

3. Image Formats. Figure1:Example of bitmap and Vector representation images

3. Image Formats. Figure1:Example of bitmap and Vector representation images 3. Image Formats. Introduction With the growth in computer graphics and image applications the ability to store images for later manipulation became increasingly important. With no standards for image

More information

ECE Advanced Communication Theory, Spring 2007 Midterm Exam Monday, April 23rd, 6:00-9:00pm, ELAB 325

ECE Advanced Communication Theory, Spring 2007 Midterm Exam Monday, April 23rd, 6:00-9:00pm, ELAB 325 C 745 - Advanced Communication Theory, Spring 2007 Midterm xam Monday, April 23rd, 600-900pm, LAB 325 Overview The exam consists of five problems for 150 points. The points for each part of each problem

More information

Optimal Coded Information Network Design and Management via Improved Characterizations of the Binary Entropy Function

Optimal Coded Information Network Design and Management via Improved Characterizations of the Binary Entropy Function Optimal Coded Information Network Design and Management via Improved Characterizations of the Binary Entropy Function John MacLaren Walsh & Steven Weber Department of Electrical and Computer Engineering

More information

Chapter 8. Representing Multimedia Digitally

Chapter 8. Representing Multimedia Digitally Chapter 8 Representing Multimedia Digitally Learning Objectives Explain how RGB color is represented in bytes Explain the difference between bits and binary numbers Change an RGB color by binary addition

More information

Channel Concepts CS 571 Fall Kenneth L. Calvert

Channel Concepts CS 571 Fall Kenneth L. Calvert Channel Concepts CS 571 Fall 2006 2006 Kenneth L. Calvert What is a Channel? Channel: a means of transmitting information A means of communication or expression Webster s NCD Aside: What is information...?

More information

15.Calculate the local oscillator frequency if incoming frequency is F1 and translated carrier frequency

15.Calculate the local oscillator frequency if incoming frequency is F1 and translated carrier frequency DEPARTMENT OF ELECTRONICS & COMMUNICATION ENGINEERING SUBJECT NAME:COMMUNICATION THEORY YEAR/SEM: II/IV SUBJECT CODE: EC 6402 UNIT I:l (AMPLITUDE MODULATION) PART A 1. Compute the bandwidth of the AMP

More information

OFDM Transmission Corrupted by Impulsive Noise

OFDM Transmission Corrupted by Impulsive Noise OFDM Transmission Corrupted by Impulsive Noise Jiirgen Haring, Han Vinck University of Essen Institute for Experimental Mathematics Ellernstr. 29 45326 Essen, Germany,. e-mail: haering@exp-math.uni-essen.de

More information

DEVELOPMENT OF LOSSY COMMPRESSION TECHNIQUE FOR IMAGE

DEVELOPMENT OF LOSSY COMMPRESSION TECHNIQUE FOR IMAGE DEVELOPMENT OF LOSSY COMMPRESSION TECHNIQUE FOR IMAGE Asst.Prof.Deepti Mahadeshwar,*Prof. V.M.Misra Department of Instrumentation Engineering, Vidyavardhini s College of Engg. And Tech., Vasai Road, *Prof

More information