
Volume-, Issue-, Feb.-7

A COMPARATIVE STUDY OF LOSSLESS COMPRESSION TECHNIQUES

J P SATI, M J NIGAM
Indian Institute of Technology, Roorkee, India
E-mail: jypsati@gmail.com, mkndnfec@gmail.com

Abstract- As we deal with more and more digital data, several compression techniques have been developed to meet the growing need to store more data in less memory. Compression can save storage capacity, speed up file transfer, and decrease costs for storage hardware and network bandwidth. This paper provides a performance analysis of lossless compression techniques with respect to various parameters such as compression factor, saving percentage, and compression and de-compression time. It presents the relevant data about the variations in these parameters and describes their possible causes. The simulation results for image compression are obtained in MATLAB R2009a. The paper focuses on the de-compression time and the reasons for the differences observed in the comparison.

Keywords- Run Length Encoding (RLE), Huffman, Arithmetic, Lempel-Ziv-Welch (LZW), Compression Ratio.

I. INTRODUCTION

Data compression is a technique that transforms data from one representation to a new, compressed representation (in bits) which contains the same information but with the smallest possible size []. The size of the data is reduced by removing excessive or redundant information; the data can then be stored or transmitted at reduced storage and/or communication cost. Compressing a file to half of its original size is equivalent to doubling the capacity of the storage medium. It may then become feasible to store the data at a higher level of the storage hierarchy and to reduce the load on the input/output channels of the system.

Fig 1: Compression and de-compression process

There are two classes of compression techniques, known as lossless and lossy compression. In a lossless compression scheme the reconstructed image is identical to the input image. Lossless image compression techniques first convert the image into its pixels.
Then each pixel is processed individually. The first step is the prediction of the next pixel value from the neighbouring pixels. In the second stage, the difference between the predicted value and the actual intensity of the next pixel is coded using different encoding methods.

A lossy compression technique provides a higher compression ratio than lossless compression. In this method the compressed image is not the same as the original image; there is some loss of information. In lossy compression, much information can simply be discarded from image, audio or video data, and when the data are uncompressed they will still be of acceptable quality.

II. LOSSLESS COMPRESSION METHODS

Commonly used lossless compression techniques are Run Length Encoding (RLE), Huffman coding, Arithmetic coding and Lempel-Ziv-Welch (LZW).

2.1. Run length coding
Run-length encoding (RLE) is a data compression algorithm that is supported by most bitmap file formats, such as TIFF, BMP, and PCX. RLE is suited to compressing any type of data regardless of its information content, but the content of the data affects the compression ratio achieved by RLE. RLE is both easy to implement and quick to execute, making it a good alternative to either using a complex compression algorithm or leaving the image data uncompressed.

RLE works by reducing the physical size of a repeating string of characters. This repeating string, called a run, is typically encoded into two bytes. The first byte represents the number of characters in the run and is called the run count. In practice, an encoded run may contain 1 to 128 or 256 characters; the run count usually stores the number of characters minus one (a value in the range 0 to 127 or 255). The second byte is the value of the character in the run, which is in the range 0 to 255, and is called the run value. A run of 15 A's would normally require 15 bytes to store:

AAAAAAAAAAAAAAA
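This two-byte run-count/run-value packing can be sketched as follows (an illustrative Python sketch, not the paper's MATLAB code; `rle_encode` and `rle_decode` are hypothetical names):

```python
# Illustrative byte-oriented RLE: each run becomes two bytes, a run count
# (stored as run length minus one, so one byte covers runs of 1..256)
# followed by the run value.

def rle_encode(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 256:
            run += 1
        out += bytes([run - 1, data[i]])  # run count (minus one), run value
        i += run
    return bytes(out)

def rle_decode(packed: bytes) -> bytes:
    out = bytearray()
    for k in range(0, len(packed), 2):
        out += bytes([packed[k + 1]]) * (packed[k] + 1)
    return bytes(out)

run = b"A" * 15            # 15 bytes uncompressed
packed = rle_encode(run)   # 2 bytes: count 14 (i.e. 15 - 1) and the value b"A"
assert rle_decode(packed) == run and len(packed) == 2
```

Storing the count as the run length minus one lets a single count byte cover runs of 1 to 256 characters.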

The same string after RLE would require only two bytes: 15A. This compression technique is useful for monochrome images or images having the same background pixels.

Implementation of run-length encoding was carried out in MATLAB R2009a. The steps for executing the code are as follows:
- Read the grey-scale image and rearrange the image data as a single row vector.
- Convert all intensity values to binary and obtain a binary stream representation of the image.
- Count the consecutive 1s and 0s appearing in the sequence and store the counts as the run-length encoded sequence.
- Reconstruct the original image.
- Calculate the compression ratio as the ratio of the size of the original image to the size of the run-length encoded sequence.

2.2. Huffman coding
Huffman coding is a variable-length coding technique in which codes are assigned to symbols based on their probabilities. Symbols are generated from the pixels of an image. Bits are assigned on the basis of the frequency of occurrence of the symbols: fewer bits are assigned to symbols that occur more frequently, while more bits are assigned to symbols that occur less frequently. In Huffman coding, the generated binary code of any symbol is not the prefix of the code of any other symbol [] [5].

Implementation of Huffman coding was carried out in MATLAB R2009a. The steps for executing the code are as follows:
- Read the grey-scale image and convert the array into a single row vector.
- From the grey-scale image, form a Huffman encoding tree using the probabilities of the symbols in the image.
- Encode each symbol independently using the Huffman encoding tree.
- Reconstruct the original image by decompressing it using Huffman decoding.
- Calculate the compression ratio as the ratio of the size of the original image to the size of the Huffman coded sequence.

2.3. Arithmetic encoding
Arithmetic coding is also a variable-length coding technique. In this technique, the entire sequence of symbols generated from the pixels is converted into a single floating-point number, also termed a binary fraction.
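The Huffman construction described in Section 2.2 can be sketched as follows (an illustrative Python sketch rather than the authors' MATLAB implementation; `huffman_code` is a hypothetical helper returning the symbol-to-bits table):

```python
# Illustrative Huffman table construction: repeatedly merge the two
# least-frequent nodes; the counter `n` breaks ties so the heap never
# has to compare the dict payloads.
import heapq
from collections import Counter

def huffman_code(symbols):
    freq = Counter(symbols)
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # least frequent subtree
        f2, _, c2 = heapq.heappop(heap)   # second least frequent
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, n, merged))
        n += 1
    return heap[0][2]  # symbol -> bit-string code table

pixels = [0, 0, 0, 0, 1, 1, 2, 3]     # toy "image" intensities
table = huffman_code(pixels)
encoded = "".join(table[p] for p in pixels)
# More frequent symbols get shorter codes, and no code is the prefix
# of another, so the encoded stream decodes unambiguously.
codes = list(table.values())
assert all(not a.startswith(b) for a in codes for b in codes if a != b)
```

The final assertion checks the prefix property that the text attributes to Huffman codes.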
In the arithmetic coding technique, a tag is generated for the sequence to be encoded. This tag signifies the given binary fraction and becomes the unique binary code for the sequence. The unique binary code generated for a given sequence of a certain length does not depend on the entire length of the sequence [] [] [].

Implementation of arithmetic coding was carried out in MATLAB R2009a []. The steps for executing the code are as follows:
- Read the grey-scale image and store all the intensity values as a single row vector.
- Convert the matrix into binary form and arrange all the bits into a binary stream representing the same image.
- Encode the entire stream using the arithmetic encoding algorithm.
- Calculate the compression ratio as the ratio of the size of the original image to the size of the arithmetic coded sequence.

2.4. Lempel-Ziv-Welch (LZW) coding
The LZW compression algorithm is a dictionary-based algorithm. This means that instead of tabulating character counts and building trees (as in Huffman encoding), LZW encodes data by referencing a dictionary. It represents variable-length symbols with fixed-length codes. The original version of this method was created by Lempel and Ziv in 1978 (LZ78) and was further refined by Welch in 1984, hence the LZW acronym. Dictionary-based coding schemes are of two types, static and adaptive. In static dictionary-based coding, the dictionary size is fixed during the encoding and decoding processes; in adaptive dictionary-based coding, the dictionary is updated and reset when it is completely filled. Since images are used as data, static coding suits the compression job with minimum delay [] [6] [7].

Implementation of LZW encoding was carried out in MATLAB R2009a [6] [7]. The steps for executing the code are as follows:
- Read an image and arrange all the intensity values in a single row vector.
- Convert all the values into binary form to obtain a single-row binary representation.
- Initialize the dictionary with the basic symbols 0 and 1.
- Start encoding and decoding based on a search-and-find method. Add any new word found to the dictionary and encode the sequence. If the dictionary is completely filled, continue using the same dictionary.
- Calculate the compression ratio as the ratio of the size of the original image to the size of the encoded sequence.

III. EVALUATION AND COMPARISON

3.1. Performance Parameters
Depending on the nature of the application, there are various criteria to measure the performance of a compression algorithm. The following measurement parameters are used to evaluate the performance of the lossless algorithms.

Compression Ratio is the ratio between the size of the compressed file and the size of the source file:

    compression ratio = size after compression / size before compression

Compression Factor is the inverse of the compression ratio, that is, the ratio between the size of the source file and the size of the compressed file:

    compression factor = size before compression / size after compression

Saving Percentage calculates the shrinkage of the source file as a percentage:

    saving percentage = (size before compression - size after compression) / size before compression x 100%

Bits per pixel is the number of bits per pixel used in the compressed representation of the image; for an 8-bit grey-scale source,

    bits per pixel = 8 x (size after compression / size before compression)

Along with the above parameters, compression and de-compression time are also used to measure effectiveness.

Compression and De-Compression Time
The time taken for compression and de-compression should be considered separately. For some applications, like transferring compressed video data, the de-compression time is more important, while for some other applications both compression and de-compression time are equally important. If the compression and de-compression times of an algorithm are low, or at an acceptable level, the algorithm is acceptable with respect to the time factor.

Table 1: Compression ratio
Table 2: Compression factor
Table 3: Saving percentage
Table 4: Bits per pixel

3.2. Results with real images
The following images, with sizes given in the tables, are used for comparison.

Fig 2: Test images

As seen in Tables 1, 2, 3 and 4, the relative compression ratios, compression factors, saving percentages and bits per pixel are displayed respectively for each compression technique. Among all, run-length encoding shows the maximum compression ratio, but the run-length algorithm simply works to reduce inter-pixel redundancy, which exists only when extreme shades are dominant. Since most real-world images lack such dominance of shades, RLE is now rarely used for lossless data compression.
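The four parameters defined in Section 3.1 can be computed directly; a minimal Python sketch (illustrative only, with sizes in a common unit and an 8-bit grey-scale source assumed; `compression_metrics` is a hypothetical name):

```python
# Performance parameters of Section 3.1: compression ratio, compression
# factor, saving percentage, and bits per pixel for an 8-bit source image.

def compression_metrics(size_before, size_after):
    ratio = size_after / size_before            # compression ratio
    return {
        "compression_ratio": ratio,
        "compression_factor": size_before / size_after,  # inverse of ratio
        "saving_percentage": (size_before - size_after) / size_before * 100.0,
        "bits_per_pixel": 8.0 * ratio,          # 8 bits/pixel in the source
    }

m = compression_metrics(size_before=1000, size_after=250)
assert m["compression_factor"] == 4.0 and m["saving_percentage"] == 75.0
```

With the paper's definition, a smaller compression ratio (and a larger compression factor) means better compression.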
Considering the available data about compression ratio, the Huffman encoding scheme is found to be optimum, since it solely works on reducing

redundancy in the input data. Though arithmetic encoding generates results closest to those of Huffman encoding, it also considers inter-pixel redundancy, which reduces the compression factor. Lempel-Ziv-Welch encoding depends entirely on the dictionary size as the key factor for achieving greater compression ratios; thus, with smaller dictionary sizes, its compression results are inferior to those of the other techniques.

3.3. Comparison w.r.t. compression ratio
The variations of compression ratio with respect to the probability of zero are obtained by generating random images of the same size while changing the probability of symbol zero in each by 0.1. The images shown in Fig 3 are used for this purpose.

Fig 3: Test images to compare the compression ratios
Fig 4: CR against probability of 0s for Run length coding
Fig 5: CR against probability of 0s for Huffman coding
Fig 6: CR against probability of 0s for Arithmetic coding
Fig 7: CR against probability of 0s for LZW coding

The graphs in Fig 4, 5, 6 and 7 show the variation of compression ratio with respect to the probability of zero in the image to be compressed. In all the graphs, irrespective of the technique used, the compression ratio improves as the probability of zero approaches either extreme, and the minimum occurs when the probability of zero is 0.5, irrespective of the method used. This can be understood from the standard entropy of binary data as a function of the symbol probabilities. When the probabilities of zero and one are equal (each 0.5), the information content is maximum. As we move towards the extreme probabilities, the redundancy in the information becomes more and more significant. Thus, for any lossless technique, the compression results are best when the symbol probabilities lie at either extreme. Huffman coding shows an almost linear increase and decrease in compression ratio as we move away from the centre probability.
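The entropy argument can be made concrete with the binary entropy function H(p) = -p log2 p - (1-p) log2(1-p), the minimum average number of bits per symbol an ideal lossless coder needs; a short Python check (illustrative, not from the paper):

```python
# Binary entropy H(p) in bits per symbol for a stream whose
# zero-probability is p.
import math

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# H(p) peaks at p = 0.5 (maximum information content, least compressible)
# and falls towards the extreme probabilities, matching the trends in
# the compression-ratio graphs.
probs = [i / 10 for i in range(1, 10)]
assert max(probs, key=binary_entropy) == 0.5
assert binary_entropy(0.1) < binary_entropy(0.3) < binary_entropy(0.5)
```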
The other methods show nonlinearity in the same trend. This is because Huffman coding is based solely on modifying the information by assigning bits to the respective symbols, whereas the other techniques modify the data by counting repeated symbols, splitting a probability range, or building a dictionary, all of which are nonlinear operations. Therefore the variations in compression ratio are nonlinear for the other techniques.

3.4. Comparison of compression time
To compare the compression time, random images of different sizes but with the same probability of zero are taken. Three different data sets are generated, with the probability of symbol zero for images a), b) and c) in Fig 8 set to 0.25, 0.5 and 0.75 respectively. Samples of each image are used, with varying image sizes.

Fig 8: Test images to compare the compression and de-compression time

It is obvious that as the size of the image increases, the compression time also increases. But the compression time profile changes if there is any variation in the probability of zero while the size variations are kept the same.

Fig 9: Compression time variations against size of image for Run length encoding

Fig 9, which shows the compression time profile for the run-length technique, indicates that the processing delay is independent of the probability of zero. The principle of run-length encoding is simply to count the number of identical symbols (both 0s and 1s) in sequence. Thus, whatever the probability of zero may be, the encoding process is unaffected by it, and hence no delay variations are observed with variations in the probability of zero for the same image size.

Fig 10: Compression time variations against size of image for Huffman encoding

Fig 10, which shows the compression time profile for Huffman coding, indicates that the compression time reduces as the probability of zero in the image is increased. For the Huffman technique, compression is basically a rearrangement of bits according to the information content. As per the Huffman encoding tree structure in MATLAB, coding is first done for one symbol assignment and then for the other; thus the more zeros there are, the more assignments are made in the coding table and the greater the delay for encoding the entire data.

Fig 11: Compression time variations against size of image for Arithmetic encoding

In arithmetic encoding (Fig 11), first the entire probability range is segmented according to the probability of the symbol to be fetched. This step is repeated until the end of the sequence, and the final value at the centre of the segment is treated as the encoded value. For binary data, the arithmetic encoding process changes the current segment whenever there is a transition from 0 to 1 or from 1 to 0, which is a delaying process. If the probability of either symbol is low, the transitions between the symbols (0 and 1) are also few; the segment changes are then less frequent and the delay is smaller. Hence the delay observed is maximum when the probability of zero is 0.5, as the transitions are maximum, and the delay reduces as we move away from the equiprobable point.

Fig 12: Compression time variations against size of image for LZW encoding

The Lempel-Ziv-Welch (LZW) compression technique is fully based on the formation of the dictionary rather than on symbol probabilities. Thus in Fig 12 it can be seen that the compression time is almost the same for any given image size, irrespective of the probability of the symbols.

3.5. Comparison of de-compression time
De-compression time is calculated for the same images that are used for the compression time.
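A hypothetical harness in the spirit of these timing comparisons (illustrative Python, not the paper's MATLAB setup): random binary "images" with a chosen probability of zero, and a toy binary RLE codec timed with `time.perf_counter`:

```python
# Generate random binary streams with a given zero-probability and time
# a toy binary run-length codec on them.
import random
import time

def random_bits(n, p_zero):
    return "".join("0" if random.random() < p_zero else "1" for _ in range(n))

def bit_rle_encode(bits):
    runs, count = [], 1
    for a, b in zip(bits, bits[1:]):
        if a == b:
            count += 1
        else:
            runs.append((a, count))
            count = 1
    runs.append((bits[-1], count))     # close the final run
    return runs

def bit_rle_decode(runs):
    return "".join(sym * count for sym, count in runs)

for p in (0.25, 0.5, 0.75):
    bits = random_bits(50_000, p)
    t0 = time.perf_counter()
    runs = bit_rle_encode(bits)
    t1 = time.perf_counter()
    assert bit_rle_decode(runs) == bits
    t2 = time.perf_counter()
    print(f"p(0)={p}: encode {t1 - t0:.4f} s, decode {t2 - t1:.4f} s")
```

For a codec like this, encode and decode walk the whole stream once regardless of p, which is consistent with the paper's observation that RLE timing is insensitive to the probability of zero.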

Fig 13: De-compression time variations against size of image for Run length encoding

Fig 13, which shows the de-compression time profile for the run-length technique, indicates that the processing delay is independent of the probability of zero. In de-compression, the encoded data is simply expanded back into runs; therefore no delay variations are observed with variations in the probability of zero for the same image size.

Fig 14: De-compression time variations against size of image for Huffman method

Fig 14, which shows the de-compression time profile for Huffman coding, indicates that as the number of zeros in an image increases, more assignments are made in the coding table and hence the delay for decoding the entire data increases.

Fig 15: De-compression time variations against size of image for Arithmetic encoding

In the case of arithmetic coding, as shown in Fig 15, the de-compression time is much smaller than the compression time. When the probability of zero is 0.5 the delay observed is maximum, as the transitions are maximum, and the delay reduces as we move away from the equiprobable point.

Fig 16: De-compression time variations against size of image for LZW encoding

The de-compression time for the Lempel-Ziv-Welch (LZW) technique, as shown in Fig 16, does not vary for any given image size, irrespective of the probability of the symbols.

CONCLUSION

An experimental comparison of different lossless compression algorithms is carried out, and several existing lossless compression methods are compared for their effectiveness. Considering the compression ratios, compression times and de-compression times of all the algorithms, the following conclusions are made:
- When the probability of zero is 0.5, the Huffman method is found better than the other techniques, since it follows an optimal method to remove redundancy from the given data.
- The achieved compression ratio would be maximum if one of the symbols (either 0 or 1) has a much greater probability than the other in the data.
- The relative comparison of compression time shows that the RLE and LZW methods do not show any significant change in delay with a change in the probability of a symbol, while for the Huffman and arithmetic methods the symbol probabilities do affect the compression time.
- Similar to the compression time analysis, the de-compression time for the Huffman and arithmetic methods varies with the symbol probabilities, whereas it shows no significant change for the RLE and LZW methods.

REFERENCES

[]. Dhananjay Patel, Vinayak Bhogan & Alan Janson, "Simulation and Comparison of Various Lossless Data Compression Techniques based on Compression Ratio and Processing Delay", International Journal of Computer Applications (0975-8887), Volume 8, No., November.
[]. Mohammed Al-laham & Ibrahiem M. M. El Emary, "Comparative Study Between Various Algorithms of Data

Compression Techniques", Proceedings of the World Congress on Engineering and Computer Science 2007 (WCECS 2007), October 24-26, 2007, San Francisco, USA.
[]. Sonal, Dinesh Kumar, "A Study of Various Image Compression Techniques", Proceedings of COIT, RIMT Institute of Engineering and Technology, Pacific, pp. 799-803.
[]. Dr. T. Bhaskara Reddy, Miss. Hema Suresh Yaragunti, Dr. S. Kiran, Mrs. T. Anuradha, "A Novel Approach of Lossless Image Compression using Hashing and Huffman Coding", International Journal of Engineering Research & Technology (IJERT), ISSN: 2278-0181, March.
[]. Paul G. Howard and Jeffrey Scott Vitter, "Arithmetic Coding for Data Compression", Proceedings of the IEEE, Vol. 82, No. 6, June 1994.
[]. Amit Jain, Kamaljit I. Lakhtaria, Prateek Srivastava, "A Comparative Study of Lossless Compression Algorithm on Text Data", Proc. of Int. Conf. on Advances in Computer Science, AETACS, Elsevier.
[]. S. R. Kodituwakku, U. S. Amarasinghe, "Comparison of Lossless Data Compression Algorithms for Text Data", Indian Journal of Computer Science and Engineering, pp. 416-425.
[9]. David Jeff Jackson & Sidney Joel Hannah, "Comparative Analysis of Image Compression Techniques", System Theory 1993, Proceedings SSST '93, 25th Southeastern Symposium, pp. 513-517, 7-9 March 1993.
[]. Khalid Sayood, Introduction to Data Compression, 2nd Edition, San Francisco, CA, Morgan Kaufmann.
[]. Amir Said, "Introduction to Arithmetic Coding - Theory and Practice", Imaging Systems Laboratory, HP Laboratories Palo Alto, HPL-2004-76, April 2004.
[5]. Huffman D. A., "A Method for the Construction of Minimum-Redundancy Codes", Proceedings of the Institute of Radio Engineers, 40(9), pp. 1098-1101, September 1952.
[6]. Ziv J. and Lempel A., "A Universal Algorithm for Sequential Data Compression", IEEE Transactions on Information Theory, 23(3), pp. 337-343, May 1977.
[7]. Ziv J. and Lempel A., "Compression of Individual Sequences via Variable-Rate Coding", IEEE Transactions on Information Theory, 24(5), pp. 530-536, September 1978.
[8]. Subramanya A., "Image Compression Technique", IEEE Potentials, pp. 19-23, Feb-March.