DEPARTMENT OF INFORMATION TECHNOLOGY QUESTION BANK
Subject Name: Information Coding Techniques
UNIT I - INFORMATION ENTROPY FUNDAMENTALS


Year / Sem: II / IV

PART A (2 MARKS)

1. What is uncertainty?
2. What is prefix coding?
3. State the channel coding theorem for a discrete memoryless channel.
4. Define a discrete memoryless channel.
5. State the channel capacity theorem.
6. A source has symbol probabilities 0.25, 0.20, 0.15, 0.15, 0.10, 0.05. Calculate the entropy and encode the source using the Shannon-Fano technique.
7. Compare Huffman coding and Shannon-Fano coding.
8. What are the advantages of the Lempel-Ziv encoding algorithm over Huffman coding?
9. Find the entropy of a source emitting symbols X, Y, Z with probabilities 1/5, 1/2, 1/3 respectively.
10. State the source coding theorem.
11. Define channel capacity.
12. Define information.
13. Define entropy.
14. List the properties of information.
15. Define information rate.
16. Give the upper bound and lower bound for entropy.
17. Define a discrete memoryless source.
18. What is the entropy of an extended discrete memoryless source?
19. Calculate the primary source entropy and the entropy of the third extension of a binary source with probabilities P0 = 1/4, P1 = 3/4.
20. What are the two functional requirements in the development of an efficient source encoder?
21. What is Lmin? How is it determined?
22. Find the entropy of the second-order extension of a source whose alphabet is x = {x0, x1, x2} with given probabilities.
23. What are uniquely decipherable codes?

24. What is data compaction?
25. State the Lempel-Ziv coding principle.
26. If the probability of getting a head when tossing a coin is 1/2, find the information associated with the event.

PART B (16 MARKS)

1. (i) How will you calculate channel capacity? (2)
   (ii) Write the channel coding theorem and the channel capacity theorem. (5)
   (iii) Calculate the entropy for the given sample data AAABBBCCD. (3)
   (iv) Prove the Shannon information capacity theorem. (6)
2. (i) Use differential entropy to compare the randomness of random variables. (4)
   (ii) A four-symbol alphabet has probabilities Pr(a0) = 1/2, Pr(a1) = 1/4, Pr(a2) = 1/8, Pr(a3) = 1/8 and an entropy of 1.75 bits. Find a codebook for this four-letter alphabet that satisfies the source coding theorem. (4)
   (iii) Write the entropy for a binary symmetric source. (4)
   (iv) Write down the channel capacity for a binary channel. (4)
3. (a) A discrete memoryless source has an alphabet of five symbols whose probabilities of occurrence are:
       Symbols:     X1   X2   X3   X4   X5
       Probability: 0.2  0.2  0.1  0.1  0.4
   Compute the Huffman code for this source. Also calculate the efficiency of the source encoder. (8)
   (b) A voice-grade channel of the telephone network has a bandwidth of 3.4 kHz. Calculate:
       (i) the information capacity of the telephone channel for a signal-to-noise ratio of 30 dB, and
       (ii) the minimum signal-to-noise ratio required to support information transmission through the telephone channel at the rate of 9.6 kb/s. (8)
4. A discrete memoryless source has an alphabet of seven symbols whose probabilities of occurrence are:
       Symbol: s0    s1    s2      s3      s4     s5     s6
       Prob:   0.25  0.25  0.0625  0.0625  0.125  0.125  0.125
   (i) Compute the Huffman code for this source, moving a combined symbol as high as possible. (10)
   (ii) Calculate the coding efficiency. (4)
   (iii) Why does the computed code have an efficiency of 100%? (2)
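The entropy calculation in Part B question 1(iii) (sample data AAABBBCCD) can be checked with a short script. This is an illustrative sketch, not part of the original question bank; it computes the first-order entropy from the symbol frequencies.

```python
from collections import Counter
from math import log2

def entropy(data: str) -> float:
    """First-order entropy H = -sum(p * log2(p)) in bits per symbol."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Sample data from Unit I, Part B, question 1(iii):
# A occurs 3/9 of the time, B 3/9, C 2/9, D 1/9.
print(round(entropy("AAABBBCCD"), 4))  # 1.8911
```

The same function answers any of the Part A entropy questions once the probabilities are expressed as symbol counts.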

5. (i) Consider the binary sequence 111010011000101110100. Use the Lempel-Ziv algorithm to encode this sequence. Assume that the binary symbols 1 and 0 are already in the codebook. (12)
   (ii) What are the advantages of the Lempel-Ziv encoding algorithm over Huffman coding? (4)
6. A discrete memoryless source has an alphabet of five symbols with output probabilities
       [X] = [x1 x2 x3 x4 x5]
       P[X] = [0.45 0.15 0.15 0.10 0.15]
   Compute two different Huffman codes for this source. For these two codes, find:
   (i) the average code word length, and
   (ii) the variance of the average code word length over the ensemble of source symbols. (16)
7. A discrete memoryless source X has five symbols x1, x2, x3, x4 and x5 with probabilities p(x1) = 0.4, p(x2) = 0.19, p(x3) = 0.16, p(x4) = 0.15 and p(x5) = 0.1.
   (i) Construct a Shannon-Fano code for X and calculate the efficiency of the code. (7)
   (ii) Repeat for the Huffman code and compare the results. (9)
8. Two sources S1 and S2 emit messages x1, x2, x3 and y1, y2, y3 with joint probability P(X, Y) as shown in matrix form:
                [3/40  1/40  1/40]
       P(X,Y) = [1/20  3/20  1/20]
                [1/8   1/8   3/8 ]
   Calculate the entropies H(X), H(Y), H(X/Y) and H(Y/X). (16)
9. Apply the Huffman coding procedure to the following message ensemble and determine the average length of the encoded message and the coding efficiency. Use the coding alphabet D = 4. There are 10 symbols:
       X = [x1, x2, x3, ..., x10]
       P[X] = [0.18, 0.17, 0.16, 0.15, 0.1, 0.08, 0.05, 0.05, 0.04, 0.02] (16)
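Several Part B questions (3, 4, 6, 7 and 9) ask for Huffman codes, average code word lengths and efficiency. A minimal binary Huffman construction can be sketched as below; the probabilities are taken from question 7, and the code is an illustrative check, not the tabular procedure the exam expects.

```python
import heapq
from math import log2

def huffman_lengths(probs):
    """Return code word lengths of a binary Huffman code.

    probs: dict mapping symbol -> probability.
    """
    # Heap entries: (probability, tie-breaker, symbols in this subtree).
    heap = [(p, i, [s]) for i, (s, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    lengths = {s: 0 for s in probs}
    tie = len(heap)
    while len(heap) > 1:
        p1, _, syms1 = heapq.heappop(heap)
        p2, _, syms2 = heapq.heappop(heap)
        # Every symbol under a merged node gains one bit of code length.
        for s in syms1 + syms2:
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, tie, syms1 + syms2))
        tie += 1
    return lengths

# Probabilities from Unit I, Part B, question 7:
probs = {"x1": 0.4, "x2": 0.19, "x3": 0.16, "x4": 0.15, "x5": 0.1}
lengths = huffman_lengths(probs)
avg_len = sum(probs[s] * lengths[s] for s in probs)
H = -sum(p * log2(p) for p in probs.values())
print(round(avg_len, 2), round(H / avg_len, 4))  # 2.2 0.9772
```

Here the average length is 2.2 bits/symbol against an entropy of about 2.15 bits, giving an efficiency of roughly 97.7%.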

UNIT II - DATA AND VOICE CODING

PART A (2 MARKS)

1. Define pulse code modulation (PCM).
2. Give the basic operations performed in the transmitter and receiver of a PCM system.
3. List the basic elements of a PCM system.
4. Define sampling.
5. List the two different types of quantization.
6. List the different types of line codes.
7. Give the block diagram of a regenerative repeater.
8. List the three basic functions performed by a regenerative repeater.
9. A television signal with a bandwidth of 4.2 MHz is transmitted using binary PCM. The number of quantization levels is 512. Calculate the code word length.
10. Briefly explain slope overloading.
11. What are the advantages of coding speech at low bit rates?
12. Briefly explain sub-band coding for speech signals.
13. List the differences between delta modulation and adaptive delta modulation.
14. Draw the block diagram of a differential pulse code modulator.
15. Draw the block diagram of a DPCM signal encoder.
16. Give the various pulse modulation and pulse code modulation techniques available. How do they differ from each other?
17. Define delta modulation.
18. List the disadvantages of delta modulation.
19. Define quantization.
20. Define quantization error.

PART B (16 MARKS)

1. (i) Compare and contrast DPCM and ADPCM. (6)
   (ii) Define pitch, period and loudness. (6)
   (iii) What is a decibel? (2)
   (iv) What is the purpose of the DFT? (2)
2. (i) Explain delta modulation with examples. (6)
   (ii) Explain sub-band adaptive differential pulse code modulation. (6)
   (iii) What will happen if speech is coded at low bit rates? (4)
3. With a block diagram, explain the DPCM system. Compare DPCM with PCM and DM systems. (16)
4. (i) Explain DM systems with a block diagram. (8)
   (ii) Consider a sine wave of frequency fm and amplitude Am applied to a delta modulator of step size Δ. Show that slope overload distortion will occur if Am > Δ / (2π fm Ts), where Ts is the sampling period. What is the maximum power that may be transmitted without slope overload distortion? (8)
5. Explain adaptive quantization and prediction with backward estimation in an ADPCM system with a block diagram. (16)
6. (i) Explain delta modulation systems with block diagrams. (8)
   (ii) What are slope overload distortion and granular noise, and how are they overcome in adaptive delta modulation? (8)
7. What is modulation? Explain how the adaptive delta modulator works with different algorithms. Compare delta modulation with adaptive delta modulation. (16)
8. Explain pulse code modulation and differential pulse code modulation. (16)

UNIT III - ERROR CONTROL CODING

PART A (2 MARKS)

1. What is a generator polynomial? Give some standard generator polynomials.
2. What is Hamming distance in error control coding?
3. Why are cyclic codes extremely well suited for error detection?
4. What is a syndrome?
5. Define dual code.
6. What is a Hamming code?
7. List the properties of the generator polynomial of cyclic codes.
8. Write the syndrome properties of linear block codes.
9. Give the steps in encoding (n, k) cyclic codes.
10. What are cyclic codes? Why are they called a subclass of block codes?
11. List the conditions satisfied by Hamming codes.
12. Define convolutional codes.
13. Define linear block codes.
14. Define the constraint length of a convolutional code.
15. What is the difference between systematic and non-systematic codes?
16. What is the use of syndromes? Explain syndrome decoding.
17. Draw the block diagram of a syndrome calculator.
18. Draw the block diagram of an encoder for the (7,4) Hamming code.
19. Define the minimum distance between code vectors.
20. Define Hamming weight.
21. Define code rate.
22. List the types of errors.
23. List the types of codes.
24. What is error control coding? Which functional blocks of a communication system accomplish this?

25. List the properties of syndromes.

PART B (16 MARKS)

1. Consider a Hamming code C determined by the parity check matrix
           [1 1 0 1 1 0 0]
       H = [1 0 1 1 0 1 0]
           [0 1 1 1 0 0 1]
   (i) Show that the two vectors C1 = (0010011) and C2 = (0001111) are code words of C, and calculate the Hamming distance between them. (4)
   (ii) Assume that a code word c was transmitted and that a vector r = c + e is received. Show that the syndrome S = r · H^T depends only on the error vector e. (4)
   (iii) Calculate the syndromes for all possible error vectors e with Hamming weight <= 1 and list them in a table. How can this be used to correct a single-bit error in an arbitrary position? (4)
   (iv) What are the length n and the dimension k of the code? Why can the minimum Hamming distance d_min not be larger than three? (4)
2. (i) Define linear block codes. (2)
   (ii) How do you find the parity check matrix? (4)
   (iii) Give the syndrome decoding algorithm. (4)
   (iv) Design a linear block code with d_min = 3 for some block length n = 2^m - 1. (6)
3. (a) Consider the (7,4) cyclic code generated by the polynomial g(x) = 1 + x + x^3. Calculate the code word for the message sequence 1001 and construct the systematic generator matrix G. (8)
   (b) Draw the diagram of the encoder and syndrome calculator generated by the polynomial g(x). (8)
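Part B question 1 can be checked numerically: the syndrome s = r · H^T (mod 2) is the all-zero vector exactly when r is a code word, and a single-bit error in position i produces the i-th column of H. The sketch below uses the parity check matrix from the question; it is a numerical check, not the hand derivation the question expects.

```python
# Parity check matrix H from Unit III, Part B, question 1.
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def syndrome(r):
    """Syndrome s = r . H^T over GF(2); all-zero iff r is a code word."""
    return tuple(sum(h * x for h, x in zip(row, r)) % 2 for row in H)

def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))

c1 = (0, 0, 1, 0, 0, 1, 1)
c2 = (0, 0, 0, 1, 1, 1, 1)
print(syndrome(c1), syndrome(c2))  # (0, 0, 0) (0, 0, 0): both are code words
print(hamming_distance(c1, c2))    # 3

# A single-bit error in position i yields the i-th column of H as syndrome,
# so a weight-1 error can be located and corrected.
e = (0, 0, 0, 0, 1, 0, 0)          # error in position 5
r = tuple((c + x) % 2 for c, x in zip(c1, e))
print(syndrome(r))                 # (1, 0, 0) = column 5 of H
```

Tabulating the syndromes of all seven weight-1 error vectors, as question 1(iii) asks, reproduces the seven distinct columns of H.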

4. For the convolutional encoder shown below, encode the message sequence (10011). Also prepare the code tree for this encoder. (16)
   [Encoder diagram: the message bits feed a two-stage shift register (two flip-flops); two modulo-2 adders, path 1 and path 2, form the two output streams.]
5. (i) Find a (7,4) cyclic code to encode the message sequence (10111) using the generator polynomial g(x) = 1 + x + x^3. (8)
   (ii) Calculate the systematic generator matrix for the polynomial g(x) = 1 + x + x^3. Also draw the encoder diagram. (8)
6. Verify whether g(x) = 1 + x + x^2 + x^3 + x^4 is a valid generator polynomial for generating a cyclic code for the message [111]. (16)
7. A convolutional encoder is defined by the following generator polynomials:
       g0(x) = 1 + x + x^2 + x^3 + x^4
       g1(x) = 1 + x + x^3 + x^4
       g2(x) = 1 + x^2 + x^4
   (i) What is the constraint length of this code? (4)
   (ii) How many states are in the trellis diagram of this code? (8)
   (iii) What is the code rate of this code? (4)
8. Construct a convolutional encoder for the following specifications: rate efficiency = 1/2, constraint length = 4. The connections from the shift register to the modulo-2 adders are described by the equations
       g1(x) = 1 + x
       g2(x) = x
   Determine the output code word for the input message [1110]. (16)
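Question 8 can be checked with a short simulation. In this sketch each generator is a tuple of tap coefficients with the constant term first, so g1(x) = 1 + x becomes (1, 1) and g2(x) = x becomes (0, 1); the encoder is not flushed with tail zeros, and note that the given degree-1 generators imply a memory of one stage even though the question states constraint length 4. This is an illustrative check, not the required encoder diagram.

```python
def conv_encode(message, generators):
    """Rate 1/len(generators) convolutional encoder over GF(2).

    Each generator is a tuple (g0, g1, ...) of tap coefficients;
    output bit for that generator is sum(g_k * m[n-k]) mod 2.
    """
    mem = max(len(g) for g in generators) - 1
    state = [0] * mem                      # shift register, most recent bit first
    out = []
    for bit in message:
        window = [bit] + state             # m[n], m[n-1], ...
        for g in generators:
            out.append(sum(gk * window[k] for k, gk in enumerate(g)) % 2)
        state = ([bit] + state)[:mem]      # shift the register
    return out

# Unit III, Part B, question 8: g1(x) = 1 + x, g2(x) = x, message [1110].
print(conv_encode([1, 1, 1, 0], [(1, 1), (0, 1)]))
# -> [1, 0, 0, 1, 0, 1, 1, 1]
```

The same function, given five-tap generators, also answers question 7: three generators give rate 1/3, and four memory stages give 2^4 = 16 trellis states.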

UNIT IV - COMPRESSION TECHNIQUES

PART A (2 MARKS)

1. Calculate the bit rate for 16-bit-per-sample stereophonic music whose sampling rate is 44.1 ksamples/s.
2. Draw the Huffman code tree and find the codes for the given data AAABBCDAB.
3. What type of encoding technique is applied to the AC and DC coefficients in JPEG?
4. State the main applications of the Graphics Interchange Format (GIF).
5. Explain run-length encoding.
6. Explain the Source Intermediate Format (SIF).
7. Why is differential encoding carried out only for the DC coefficient in JPEG?
8. What do you understand by frequency masking?
9. What are make-up codes and termination codes in the digitization of documents?
10. What do you understand by the GIF interlaced mode?
11. Explain in brief spatial frequency with the aid of a diagram.
12. How is arithmetic coding advantageous over Huffman coding for text compression?
13. What are make-up codes and termination codes in the digitization of documents?
14. Define compression.
15. What is the need for text and image compression?
16. What are the principles and types of compression?
17. Differentiate lossless and lossy compression.
18. What is arithmetic coding?
19. What are the JPEG standards?

PART B (16 MARKS)

1. (i) Discuss the various stages in the JPEG standard. (9)
   (ii) Differentiate lossless and lossy compression techniques and give one example of each. (4)
   (iii) State the prefix property of the Huffman code. (3)
2. Given the following symbols and probabilities of occurrence, encode the message "went#" using the arithmetic coding algorithm. Compare arithmetic coding with Huffman coding principles. (16)
       Symbols: e    n    t    w    #
       Prob:    0.3  0.3  0.2  0.1  0.1
3. (a) Draw the JPEG encoder schematic and explain it. (10)

   (b) Assuming a quantization threshold value of 16, derive the resulting quantization error for each of the following DCT coefficients: 127, 72, 64, 56, -56, -64, -72, -128. (6)
4. (i) Explain arithmetic coding with a suitable example. (12)
   (ii) Compare the arithmetic coding algorithm with Huffman coding. (4)
5. (i) Draw the JPEG encoder block diagram and explain each block. (14)
   (ii) Why are DC and AC coefficients encoded separately in JPEG? (2)
6. (a) Discuss in brief the principles of compression. (12)
   (b) In the context of compression for text, image, audio and video, which of the compression techniques discussed above are suitable, and why? (4)
7. (i) Investigate the block preparation and quantization phases of the JPEG compression process, with diagrams wherever necessary. (8)
   (ii) Elucidate the GIF and TIFF image compression formats. (8)

UNIT V - AUDIO AND VIDEO CODING

PART A (2 MARKS)

1. Explain the CELP principles.
2. What is the significance of D-frames in video coding?
3. Find the average compression ratio of a GOP (group of pictures) with the frame sequence IBBPBBPBBPBB, where the individual compression ratios of I, P and B frames are 10:1, 20:1 and 50:1 respectively.
4. What is the need for the MIDI standard? What are its applications?
5. What is Dolby AC-1?
6. What is the need for the MIDI standard?
7. What is Dolby AC-1?
8. Define the terms GOP and prediction span with reference to video compression.
9. Define the terms processing delay and algorithmic delay with respect to speech coders.
10. What do you understand by frequency masking?
11. What is perceptual coding?
12. What are the applications of the MPEG standards?
13. What is code-excited LPC? Compare it with LPC.
14. List the features of perception.
15. Define pitch and period.
16. List the applications of LPC.
17. List the four international standards based on CELP.
18. What is meant by temporal masking?
19. What is MPEG?
20. Draw the frame format of the MPEG audio encoder.
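The GOP question in Part A (question 3) can be checked with a small calculation: each frame's compressed size is its original size divided by its compression ratio, and the average GOP ratio is total original size over total compressed size. This sketch assumes all frames have the same uncompressed size, which the question implies but does not state.

```python
def gop_average_ratio(sequence, ratios):
    """Average compression ratio of a group of pictures.

    sequence: frame-type string, e.g. "IBBPBBPBBPBB".
    ratios: dict mapping frame type -> compression ratio (10 means 10:1).
    Assumes every frame has the same uncompressed size.
    """
    # Compressed size in units of one uncompressed frame.
    compressed = sum(1 / ratios[f] for f in sequence)
    return len(sequence) / compressed

# Unit V, Part A, question 3: I = 10:1, P = 20:1, B = 50:1.
ratio = gop_average_ratio("IBBPBBPBBPBB", {"I": 10, "P": 20, "B": 50})
print(round(ratio, 2))  # 29.27, i.e. roughly 29:1 overall
```

With one I frame (0.1), three P frames (0.15) and eight B frames (0.16), the compressed GOP is 0.41 frame-sizes, giving 12 / 0.41 ≈ 29.3:1.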

PART B (16 MARKS)

1. (i) Explain the principles of perceptual coding. (14)
   (ii) Why is LPC not suitable for encoding music signals? (2)
2. (i) Explain the encoding procedure for I, P and B frames in video encoding with suitable diagrams. (14)
   (ii) What are the special features of the MPEG-4 standard? (2)
3. Explain the linear predictive coding (LPC) model of analysis and synthesis of speech signals. State the advantages of coding speech signals at low bit rates. (16)
4. Explain the encoding procedure for I, P and B frames in video compression techniques. State the intended applications of the following video coding standards: MPEG-1, MPEG-2, MPEG-3, MPEG-4. (16)
5. (i) What are macroblocks and GOBs? (4)
   (ii) On what factors does the quantization threshold depend in the H.261 standard? (3)
   (iii) Discuss the MPEG compression techniques. (9)
6. (i) Discuss the various Dolby audio coders. (8)
   (ii) Discuss any two audio coding techniques used in MPEG. (8)
7. Discuss in brief the following audio coders:
   (i) MPEG audio coders (8)
   (ii) Dolby audio coders (8)
8. (i) Explain the motion estimation and motion compensation phases of the P and B frame encoding process, with diagrams wherever necessary. (12)
   (ii) Write a short note on the macroblock format of the H.261 compression standard. (4)

*************************