A Brief Introduction to Information Theory and Lossless Coding


1 INTRODUCTION

This document is intended as a guide for students studying 4C8 who have had no prior exposure to information theory. All of the following material is covered in 3C5/4BIO2.

Information theory is a subject first described in the seminal paper "A Mathematical Theory of Communication" by Claude Shannon. Although information theory is concerned with communications in general, here we are interested only in information theory as it relates to compression.

2 DATA, INFORMATION AND REDUNDANCY

The first key concept is the difference between these three terms, which is best explained by an example. Consider the following sentence:

THE ENGLISH LANGUAGE IS ABOUT ONE HALF REDUNDANT

This sentence is said to convey a message (or information) about the redundancy of English. However, it is possible to convey the same message using fewer words (or data):

ENGLISH IS HALF REDUNDANT

So the original sentence uses more words than are necessary to convey the message. Loosely, we can say that

DATA = INFORMATION + REDUNDANCY.

By removing redundancy, we can reduce the amount of data used to convey a message without reducing the information content. We can therefore think of the information content as the minimum amount of data that we can use to represent the information contained within a file.

3 ASSIGNING CODEWORDS TO SYMBOLS

In general, any file, whether image, video, audio or text, is said to consist of a string of symbols. The set of possible symbols in a file is called the alphabet. To represent each symbol on a computer we need to assign it a string of bits.

3.1 A BASIC EXAMPLE

Consider a file that consists of only 4 symbols: A, B, C and D. The file consists of these symbols chosen at random, for example ABACDDCABBCA. One way to assign a codeword to each symbol is to use a binary number of fixed length for each symbol. For example:

A 00
B 01
C 10
D 11

The file in binary is then 000100101111100001011000. In general, the number of bits needed for each symbol is log₂(#symbols). So a 4-symbol alphabet requires 2 bits per symbol, a file with an 8-symbol alphabet would require 3 bits per symbol, and so on.

3.2 AN EXAMPLE WITH ONLY 3 SYMBOLS

Consider the following file, where log₂(#symbols) is not an integer: ABABCCBCACBCABB

1. #symbols = 3, which is greater than 2, so more than 1 bit is needed per symbol. We could use 2 bits per symbol:

A 00
B 01
C 10

As the file is 15 symbols long, we require 30 bits of data to encode the entire file. The encoded file is then

000100011010011000100110000101

Notice that the codeword 11 is unused. Perhaps we can use fewer bits for one of the symbols.

2. For example:

A 00
B 01
C 1
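
To make this concrete, here is a short Python sketch (an illustration, not part of the original notes; the names are our own) that encodes the 15-symbol file with each of the two code tables above and counts the resulting bits:

    # Encode a symbol string with a given code table and compare bit counts.
    def encode(symbols, table):
        """Concatenate the codeword for each symbol into one bit string."""
        return "".join(table[s] for s in symbols)

    file = "ABABCCBCACBCABB"  # the 15-symbol example file

    fixed = {"A": "00", "B": "01", "C": "10"}    # 2 bits per symbol
    variable = {"A": "00", "B": "01", "C": "1"}  # shorter code for C

    print(len(encode(file, fixed)))     # 30 bits
    print(len(encode(file, variable)))  # 25 bits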

So the file is

0001000111011001011000101

That is 25 bits in total, so this code obviously takes fewer bits. However, is it possible to decode the original string of symbols using only the above string of bits and the codetable?

4 DECODING AND INSTANTANEOUS CODES

Decoding files is easy when the length of each codeword is the same and the codetable is known. We just divide the file into chunks according to the codeword length and translate each chunk using the codetable. (Example No. 1 in Section 3.2)

00,01,00,01,10,10,01,10,00,10,01,10,00,01,01

This becomes trickier when the codeword lengths are different. However, in Example No. 2 in Section 3.2 the file can be decoded easily by reading ahead in the bit stream until we see a codeword that exists in the codetable:

A 00
B 01
C 1

Starting at the beginning of the stream, the first bit 0 is not a codeword, but the first 2 bits together (00) are the codeword for symbol A. Proceeding from the 3rd bit, the 3rd and 4th bits (01) are the codeword for B. Repeating this process, the next two decoded symbols are A and B. The remaining portion of the file left to be decoded is now

11011001011000101

The next bit is 1, which is the codeword for symbol C. We decode the symbol and proceed from the next bit in the stream. Repeating this process until the end gives back the original unencoded file. Codes that can be decoded this way are called INSTANTANEOUS CODES. When choosing codewords for symbols, these types of codes are always used.
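
The decoding procedure just described is easy to express in code. The following Python sketch (illustrative; the function name is our own) reads bits one at a time and emits a symbol as soon as the accumulated bits match a codeword:

    # Decode an instantaneous code: a codeword is recognised as soon as it is
    # complete, with no need to look past its last bit.
    def decode(bits, table):
        inverse = {code: sym for sym, code in table.items()}
        out, current = [], ""
        for bit in bits:
            current += bit
            if current in inverse:
                out.append(inverse[current])
                current = ""
        assert current == "", "bit stream ended mid-codeword"
        return "".join(out)

    table = {"A": "00", "B": "01", "C": "1"}
    print(decode("0001000111011001011000101", table))  # ABABCCBCACBCABB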

4.1 DEFINITION

A set of codewords is instantaneous if and only if no codeword is a prefix of another codeword in the set. A prefix of a codeword is any string formed from its leading bits; for example, the prefixes of the codeword 110 are 1 and 11. If one codeword were a prefix of another, the decoder could not tell whether the bits seen so far are a complete codeword or the start of a longer one. Looking at all of the codetables used in the previous examples, it can be seen that they are all instantaneous codes.

5 TOWARDS MINIMIZING FILE SIZES - ENTROPY

Q. Revisiting Example No. 2 in Section 3.2, would we get a smaller filesize if we used the 1-bit code for a symbol other than C?

A. Yes, we should give it to the symbol that occurs most often, which is the symbol B. In terms of probability, we give it to the symbol that occurs with the highest probability.

Symbol   Number of Occurrences   Probability
A        4                       4/15
B        6                       6/15
C        5                       5/15

So if we give the shorter code to the symbol B, we will save one more bit overall (24 bits instead of 25). This leads to a more general question: if we have a file that contains many symbols (e.g. 256 for an image) that each occur with a different probability, what codeword length should we choose for each symbol? To answer this question, we need to look at the concept of entropy.
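
The prefix condition is easy to test mechanically. A minimal Python sketch (illustrative):

    def is_instantaneous(codewords):
        """True iff no codeword is a prefix of another codeword."""
        for a in codewords:
            for b in codewords:
                if a != b and b.startswith(a):
                    return False  # a is a prefix of b: not instantaneous
        return True

    print(is_instantaneous(["00", "01", "1"]))  # True
    print(is_instantaneous(["0", "01", "11"]))  # False: 0 is a prefix of 01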

5.1 ENTROPY

In "A Mathematical Theory of Communication", Claude Shannon developed the concept of entropy as a means to quantify the average information content per symbol for a file with a set of symbols whose probabilities are known. Under Shannon's reasoning, the amount of information gained, I_k, upon observation of a symbol k of probability p_k in the file is given by

I_k = −log₂(p_k) = log₂(1/p_k).

Information content is also measured in units of bits. The entropy of an information source is defined as the average amount of information gained upon observation of a symbol generated by the source. It is simply the average of the information content of each symbol in the alphabet, weighted in proportion to the probability of each symbol occurring:

H(X) = −Σ_k p_k log₂(p_k)

Recalling Example No. 2 in Section 3.2, the entropy of the information source that generates the file is

H(X) = −(4/15) log₂(4/15) − (6/15) log₂(6/15) − (5/15) log₂(5/15) = 1.57 bits/symbol.

5.2 SOURCE CODING THEOREM

The Source Coding Theorem (aka Shannon's Source Coding Theorem) establishes that the entropy is the lower bound on the average codeword length that will preserve the information content of the file. We cannot go lower than this value without losing information. It also says that we can always achieve an average codeword length within 1 bit per symbol of this minimum. In practice it is possible to get much closer to the entropy than that.

Recall that we needed 24 bits to encode the 15 symbols in the earlier example. Therefore the average codeword length is 1.6 bits/symbol, which is only slightly greater than the entropy (1.57 bits/symbol). Another way to calculate the average codeword length is as a weighted average of the codeword lengths, l_k, of each symbol:

L̄ = Σ_k p_k l_k.
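
Both quantities are straightforward to compute. A minimal Python sketch for the example above (illustrative):

    from math import log2

    probs = {"A": 4/15, "B": 6/15, "C": 5/15}
    lengths = {"A": 2, "B": 1, "C": 2}  # the 1-bit codeword given to B

    H = -sum(p * log2(p) for p in probs.values())  # entropy
    L = sum(probs[k] * lengths[k] for k in probs)  # average codeword length

    print(f"H(X) = {H:.2f} bits/symbol")  # 1.57
    print(f"L    = {L:.2f} bits/symbol")  # 1.60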

Hence, in our example,

L̄ = (4/15)(2) + (6/15)(1) + (5/15)(2) = 1.6 bits/symbol.

6 PRACTICAL CODING ALGORITHMS - THE HUFFMAN ALGORITHM

Unfortunately, the coding theorem tells us nothing about how we might select the optimal code length for each symbol. This was easy for our example with only 3 symbols, but in general it is much more difficult.

Example 3. Consider an information source with four possible symbols A, B, C and D, where

p_A = 0.6, p_B = 0.25, p_C = 0.1, p_D = 0.05

The entropy of the source works out to be H(X) = 1.49 bits/symbol. Let's consider 2 possible ways of encoding the source.

1. Assign a fixed codeword length of 2 bits to each symbol. For example:

A 00
B 01
C 10
D 11

The average codeword length (L̄) here is 2 bits/symbol, which is within 1 bit per symbol of the entropy in accordance with the coding theorem, but is not very good. The coding efficiency, defined as η = H(X)/L̄, is only 75%. We should be able to do better.

2. Assign a codeword length close to the information content of each symbol. For example, we could round up the information content to the nearest integer.

Symbol, k   Probability, p_k   Information Content, I_k   Codeword Length, l_k
A           0.6                0.74                       1
B           0.25               2.00                       2
C           0.1                3.32                       4
D           0.05               4.32                       5

We could then assign an instantaneous code with the above code lengths as follows:

A 0
B 10
C 1100
D 11010

This would give an average codeword length of

L̄ = 0.6(1) + 0.25(2) + 0.1(4) + 0.05(5) = 1.75 bits/symbol

So this is obviously better than before. However, we could have used 4 bits for symbol D and the code would still have been instantaneous, so this code is not optimal.

6.1 THE HUFFMAN CODING ALGORITHM

In 1951 David Huffman proposed a method of assigning optimal codewords to symbols, and more specifically of assigning optimal code lengths. No other method of mapping codewords to individual symbols will give a lower average codeword length. The algorithm builds a binary tree in which the symbols form the leaves, working from the leaves up to the root. The tree is built by performing the following step over and over:

1. Replace the two symbols with the lowest probabilities/frequencies with a new symbol whose probability/frequency is the sum of the probabilities/frequencies of the two symbols, until there is only one symbol left.

The Huffman tree for the above example is shown in Figure 1. The length of the code for each symbol is given by the number of branches on the path between that symbol and the root of the tree. Therefore the optimal codeword lengths for this example are 1 bit for A, 2 bits for B and 3 bits for C and D.

Figure 1: The Huffman coding tree for Example 3. Symbols C and D are grouped together first (red), then symbol B and the replacement symbol CD are grouped (green), and finally A with the replacement symbol BCD (purple).

You can choose any bits you want for each codeword as long as the correct lengths are used and the set of codewords is instantaneous. This is generally done by giving each branch of the binary tree a bit value, where the two branches at each node are given opposite values. This can be done in any order. The table below summarises the results for this example.

Symbol, k   Probability, p_k   Codeword Length, l_k   Code
A           0.6                1                      0
B           0.25               2                      10
C           0.1                3                      110
D           0.05               3                      111

The average codeword length obtained using the Huffman code is

L̄ = 0.6(1) + 0.25(2) + 0.1(3) + 0.05(3) = 1.55 bits/symbol

and the efficiency is now

η = 1.49/1.55 ≈ 96%.
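
The merging step described above maps naturally onto a priority queue. The following Python sketch (an illustration, not the implementation used in any particular standard) builds the Huffman tree for Example 3 with heapq and reads the codewords off the tree:

    import heapq
    from itertools import count

    def huffman(probs):
        tiebreak = count()  # heapq needs a total order when probabilities tie
        heap = [(p, next(tiebreak), sym) for sym, p in probs.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            p1, _, left = heapq.heappop(heap)   # the two least probable
            p2, _, right = heapq.heappop(heap)  # nodes are merged...
            heapq.heappush(heap, (p1 + p2, next(tiebreak), (left, right)))
        codes = {}
        def walk(node, prefix):
            if isinstance(node, tuple):  # internal node: label branches 0, 1
                walk(node[0], prefix + "0")
                walk(node[1], prefix + "1")
            else:                        # leaf: a symbol
                codes[node] = prefix
        walk(heap[0][2], "")
        return codes

    print(huffman({"A": 0.6, "B": 0.25, "C": 0.1, "D": 0.05}))
    # codeword lengths come out as A: 1, B: 2, C: 3, D: 3;
    # the actual bit patterns depend on how the branches are labelled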

6.1.1 Practical Considerations

The Huffman algorithm is widely used in the compression of all types of data and appears in the majority of compression algorithms and standards (Zip, PNG, JPEG, MPEG, MP3, AAC, etc.). This is due to the simplicity of the encoding procedure and to the fact that it works reliably. To compress a file, the file is first scanned to generate a histogram of the symbols it contains, and the Huffman tree and codetable are then calculated. In order to decompress the file, the codetable must be included along with the compressed data. This obviously leads to a data overhead; however, it is less significant for larger files.

7 A PROBLEM - SOURCES WITH MEMORY, JOINT ENTROPY AND ENTROPY RATE

In the previous sections we have been assuming that an information source simply selects symbols at random from a set, independently, to form a file. Such sources are sometimes referred to as memoryless. However, this is obviously not a practical model for real-life files. Consider a text file that starts with the following characters:

The sky is blu

Obviously, we would expect the next letter to be the character "e" with high probability, much higher than the marginal probability of the character "e" occurring in English. If we use the Huffman coding procedure as described previously (i.e. based on the marginal probabilities of the characters), we will be using a codeword that is too long, and hence the code will be inefficient. Practical information sources like this are referred to as sources with memory.

More fundamentally, the concept of entropy has to be adapted to deal with sources with memory. Rather than thinking of a file as being generated by one information source, we can consider each observed symbol as being generated by a different information source in which the probabilities of the symbols vary. Although in theory it would be possible to calculate a different Huffman codetable for each symbol, this is obviously out of the question given the overhead of including the codetables in the compressed file. So we have to do something else. There are many different types of entropy that can be used to describe sources with memory; here we will look at only two: joint entropy and the entropy rate.
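
One way to see the effect of memory numerically is to compare the marginal entropy of a text with the conditional entropy of a symbol given its predecessor, estimated from symbol and symbol-pair frequencies via the chain rule H(X₂|X₁) = H(X₁, X₂) − H(X₁). A minimal Python sketch (illustrative; the sample text is made up, and estimates from such a short string are rough):

    from collections import Counter
    from math import log2

    text = "the sky is blue and the sea is blue too"

    def entropy(counts):
        total = sum(counts.values())
        return -sum(c / total * log2(c / total) for c in counts.values())

    H_marginal = entropy(Counter(text))

    # chain rule: H(X2 | X1) = H(X1, X2) - H(X1)
    pairs = Counter(zip(text, text[1:]))
    H_conditional = entropy(pairs) - entropy(Counter(text[:-1]))

    print(f"H(X)     = {H_marginal:.2f} bits/symbol")
    print(f"H(X2|X1) = {H_conditional:.2f} bits/symbol")  # lower: memory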

7.1 JOINT ENTROPY

Consider a file of N symbols, each generated by separate information sources X₁, X₂, …, X_N. Then the joint entropy is defined as

H(X₁, X₂, …, X_N) = −Σ p_{k₁,k₂,…,k_N} log₂(p_{k₁,k₂,…,k_N}),

where the sum runs over all possible combinations of symbols and p_{k₁,k₂,…,k_N} is the joint probability distribution over the N symbols. The joint entropy is essentially the information content of the entire file. It is the theoretical minimum number of bits that we need to encode the file.

7.2 ENTROPY RATE

If the joint entropy represents the information content of a file, then the entropy rate is the average information content of a given symbol. The entropy rate of an alphabet X is defined as

H(X) = lim_{N→∞} H(X₁, X₂, …, X_N) / N

In practice, it is almost impossible to calculate. For example, it is not feasible to capture entirely in probabilistic terms the spatial dependency of the pixel intensities in an image.

The entropy rate generalizes the concept of entropy to account for sources with memory. It is the lower bound on the average number of bits needed to encode a symbol of any file, whether the information source has memory or is memoryless. Hence, the entropy rate is notionally the best lossless compression we can achieve; however, we almost never know what this optimum is. If a source is memoryless, then the entropy rate is the same as the entropy of the source calculated on the marginal probability distribution of the symbols (i.e. using the equation in Section 5.1). If the source has memory, then the entropy rate is always less.

As the entropy rate is normally less than the entropy, more compression is possible than is predicted by the entropy value. In fact, the entropy of an 8-bit greyscale image is commonly greater than 7 bits per symbol, but as we have seen already in 4C8, we can achieve lossless compression at a much lower bit rate. The lossless compression algorithms used in file formats like ZIP, GZIP and PNG use dictionary coding techniques in conjunction with the Huffman algorithm to achieve better efficiency. These algorithms achieve average symbol code lengths that tend to the entropy rate as the file length tends to infinity; however, the efficiency is lower for small files. DEFLATE is an example of such a coding technique. The most popular dictionary coding algorithms are LZ77, LZ78 and LZW. The initials refer to the names of the authors of the papers in which the algorithms were proposed (Abraham Lempel, Jacob Ziv, and Terry Welch) and the numbers to the years of publication of the papers that describe the algorithms. You will most likely look at dictionary coding techniques in your telecoms courses. As they are not commonly used in lossy compression formats for audio and video, we will not revisit them here.
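
As a quick illustration of dictionary coding combined with Huffman coding in practice, Python's standard zlib module implements DEFLATE. The following sketch (illustrative) compresses a highly redundant byte string far below one byte per symbol:

    import zlib

    # DEFLATE = LZ77 dictionary coding followed by Huffman coding
    data = b"THE SKY IS BLUE. " * 1000  # a highly redundant "file"
    compressed = zlib.compress(data, 9)  # 9 = maximum compression level

    print(len(data), "bytes ->", len(compressed), "bytes")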
