Lossy Image Compression

Robert Jessop
Department of Electronics and Computer Science, University of Southampton

December 13, 2002

Abstract

Representing image files as simple arrays of pixels is generally very inefficient in terms of memory. Storage space and network bandwidth are limited, so it is necessary to compress image files. There are many methods that achieve very high compression ratios by allowing the data to be corrupted in ways that are not easily perceivable to the human eye. A new concept is described, combining fractal compression with sampling and interpolation. This could allow greater fidelity at higher compression ratios than current methods.

Contents

1 Introduction
1.1 Background
1.2 Overview of compression methods
1.3 Patent Issues
2 Image Compression Methods
2.1 JPEG
2.2 Wavelet Methods and JPEG2000
2.3 Binary Tree Predictive Coding and Non-Uniform Sampling and Interpolation
2.4 Fractal Image Compression
3 Comparison of Decoded Colour Images
4 Combining Fractal Compression with Sampling and Interpolation
5 Conclusions

1 Introduction

1.1 Background

The algorithms described in this report are all for real-world or natural images such as photographs. These images contain colour gradients and complex edges, not flat areas of one colour or perfectly straight edges. All these algorithms are lossy; the compressed image is not exactly the same as the original. For most uses some loss is acceptable. In photographs the image is already an imperfect representation due to the limitations of the camera: the data is already quantized and sampled at a finite resolution, and there is normally a little noise all over the image. When lossless compression is needed, or when images are non-natural, such as diagrams, then another format, such as TIFF, GIF or PNG, should be used. Though I will not discuss video, it is a related topic, as advances in video compression are usually based on research in still image compression.

Image compression algorithms take advantage of the properties of human perception. Low-intensity, high-frequency noise is not important, and we are much less sensitive to differences in colour than in light intensity. These aspects can be reproduced less accurately when an image is decompressed.
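As a minimal sketch of this last point, assuming the common ITU-R BT.601 luma weights and a simple 2x2 averaging scheme for the colour channels (the helper names are illustrative only), the following fragment keeps full-resolution brightness but halves the resolution of the colour information, as JPEG-style codecs do:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Split an RGB image (H x W x 3, values 0-255) into luma and two chroma channels."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # brightness (ITU-R BT.601 weights)
    cb = 0.564 * (b - y)                     # blue-difference chroma
    cr = 0.713 * (r - y)                     # red-difference chroma
    return y, cb, cr

def halve(channel):
    """Halve resolution in each axis by averaging 2x2 pixel blocks (chroma subsampling)."""
    h, w = channel.shape
    return channel.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rgb = np.random.randint(0, 256, (64, 64, 3)).astype(float)
y, cb, cr = rgb_to_ycbcr(rgb)
cb_small, cr_small = halve(cb), halve(cr)
# Full-resolution luma plus quarter-size chroma: half the samples of the original,
# with little visible difference because the eye is less sensitive to colour detail.
print(y.size + cb_small.size + cr_small.size, "samples vs", rgb.size)
```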

1.2 Overview of compression methods

In their simplest form, images are represented as a two-dimensional array (bitmap) with each element containing the colour of one pixel. Colour is usually described as three values: the intensities of red, green and blue light (RGB).

The most widespread algorithm is the one from the Joint Photographic Experts Group, more commonly known as just JPEG (Joint Photographic Experts Group). It is based on DCTs (Discrete Cosine Transforms). Its dominance is a result of wide application support, in particular web browsers. The vast majority of photographs on the Internet are JPEG files. It is described in section 2.1. The common JPEG file format is correctly called JFIF (JPEG File Interchange Format).

JPEG2000 is a relatively new standard intended to replace JPEG (Joint Photographic Experts Group). It is based on wavelets. It offers better results than JPEG, especially at higher compression ratios, and better features, including better progressive display. It is described in section 2.2.

Binary Tree Predictive Coding (Robinson, 1994) and Non-Uniform Sampling and Interpolation (Rosenberg) are similar techniques. They are based on encoding the values of only some of the pixels in the image; the rest of the image is predicted from those values. They are described in section 2.3.

There is no standard for fractal image compression, but the nearest thing to one is Iterated Systems' FIF format (Iterated Systems Inc). Fractal compression is based on PIFS (Partitioned Iterated Function Systems). Two good books on fractal image compression are (Barnsley and Hurd, 1992) and (Fisher, 1994). One advantage of fractal methods is super resolution: decoding the image at a higher resolution gives much better results than stretching the original image. It is described in section 2.4.

Image compression usually involves a lossy transformation on the data followed by a lossless compression of the transformed data. Entropy encoding such as Huffman or arithmetic encoding (Barnsley and Hurd, 1992) is usually used. There is information and links for most types of compression in (loup Gailly).
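As a minimal sketch of the entropy-encoding stage just mentioned, the fragment below builds a Huffman code from symbol frequencies; the sample data and function names are illustrative assumptions, not taken from any particular codec:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bit string) from a list of symbols."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate case: one distinct symbol
        return {next(iter(freq)): "0"}
    # Each heap entry: (subtree frequency, tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # merge the two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

# Hypothetical example: data dominated by one symbol compresses well.
data = [0] * 40 + [1] * 10 + [2] * 5 + [5]
code = huffman_code(data)
bits = sum(len(code[s]) for s in data)
print(code, bits, "bits vs", len(data) * 8, "bits uncompressed")
```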

1.3 Patent Issues

Parts of the techniques described here are covered by patents. Iterated Systems' control of a patent for fractal compression has probably prevented it becoming widespread. Iterated Systems has licensed their technology for use in other applications, such as Microsoft Encarta. There are many patents covering wavelets, but the JPEG committee has managed to ensure that all patent-holding companies have agreed to allow free use of JPEG2000. JPEG was used freely for a long time, but recently Forgent Networks has claimed royalties for its use (Joint Photographic Experts Group).

2 Image Compression Methods

This section describes the main algorithms for lossy image compression in non-mathematical terms. They are described in the simplest way; actual implementations may work slightly differently to maximize speed. Each method has references which contain the relevant equations and detail in full.

2.1 JPEG

JPEG is an effective way of compressing photographs at high quality levels and is widely used on the Internet. The basic algorithm is as follows:

1. The colours in the image are converted from RGB (red, green and blue) to YUV: brightness, hue (colour) and saturation (how colourful or grey it is). The resolution of the hue and saturation components is halved in each axis because the human eye is less sensitive to colour than to brightness. The three channels are then encoded separately.

2. Each channel is split into 8x8 blocks of pixels. Each block is then encoded separately.

3. A DCT (Discrete Cosine Transform) is applied to each block (Figure 1), and the 64 resulting coefficients are each divided by a number and rounded to integers. The higher-frequency coefficients have bigger divisors than the lower frequencies. The actual set of divisors used depends on the quality desired. Since it is integer division this is a lossy process; it can only be reversed approximately by multiplying.

4. The first coefficient is the average value for the whole block and is encoded as the difference from the average value in the previous block.

5. The 64 coefficients are ordered in a zig-zag pattern from top left to bottom right so the lower frequencies are read first. They are run-length encoded. Due to the dividing stage, most blocks should end in a long run of zeros in the higher frequencies and will compress well.

6. Huffman encoding is applied to the run-length encoded data.

Figure 1: The 64 coefficients from the discrete cosine transform are weights for 64 basis patterns which, when combined, give the 8x8 block of pixels. Image from (Salomon).

To decode, the process is reversed. Progressive display and lossless compression are supported as well but rarely used. The format also specifies an option to use arithmetic encoding instead of Huffman encoding, but due to patent royalties it is rarely used. There is a detailed description in (Barnsley and Hurd, 1992) and the full specification is available from (Joint Photographic Experts Group). JPEG compression can cause ripples to appear next to sharp edges.
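As a minimal sketch of steps 3 to 5 for a single block, assuming an illustrative (non-standard) divisor table and SciPy's DCT routines; the names and table values are assumptions, not those of the JPEG specification:

```python
import numpy as np
from scipy.fftpack import dct, idct

# Illustrative quantization table: larger divisors at higher frequencies.
i, j = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
QUANT = 8 + 4 * (i + j)          # hypothetical divisor table, not the standard one

def dct2(block):
    """2-D type-II DCT of an 8x8 block (applied along rows, then columns)."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

def zigzag(coeffs):
    """Read an 8x8 coefficient array in zig-zag order, lowest frequencies first."""
    order = sorted(((r, c) for r in range(8) for c in range(8)),
                   key=lambda rc: (rc[0] + rc[1],
                                   rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))
    return [coeffs[r, c] for r, c in order]

# Encode one block: DCT, divide by the quantization table, round to integers.
block = np.random.randint(0, 256, (8, 8)).astype(float) - 128
q = np.round(dct2(block) / QUANT).astype(int)

# The zig-zag sequence typically ends in a long run of zeros, which run-length
# and Huffman coding then compress very well.
print(zigzag(q))

# Approximate decode: multiply back by the divisors and apply the inverse DCT.
approx = idct2(q * QUANT) + 128
```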

2.2 Wavelet Methods and JPEG2000

Wavelet methods give very good performance at both high and low image fidelities and have the advantage of describing the image starting at a low resolution and increasing the resolution until all the detail is there. This is useful for progressive viewing while transferring over slow Internet connections. The general algorithm is as follows:

1. Like JPEG, colours are converted to YUV, and the U and V components are encoded with less fidelity.

2. The image is expressed at a low resolution as a regular array of wavelets. For example, a 512x512 pixel image might be scanned with a 64x64 array of wavelets. Each wavelet is given an amplitude such that they best describe the image locally. The wavelets overlap with their neighbours, so each wavelet would be 16x16 pixels in this example. The height of the wavelet at each given pixel determines the weight of that pixel in determining the value of the wavelet's amplitude. There are many different wavelets, but only one is used for a given compression scheme. The wavelets are in two dimensions, but you can see some one-dimensional examples in Figure 2.

3. Now there is an approximation to the image as a low-resolution array of wavelets. This is subtracted from the original image to get the difference. The difference image is then encoded the same way using an array of wavelets twice the resolution in both axes. This is repeated until the final difference image has been encoded losslessly using wavelets with a width of one pixel.

4. The data is now compressed. Because the low-resolution approximations are near the original image, most of the wavelet amplitudes at higher resolutions are zero or very low. This means they can be compressed extremely well using entropy (Huffman or arithmetic) encoding. Also, for any zero-valued wavelet where the higher-resolution wavelets beneath it are zero, a special zero-tree symbol is output and the higher-resolution wavelets underneath are not output at all.

5. As described, the algorithm is lossless. Very good lossy compression is achieved by quantizing the wavelet amplitudes (like the dividing of the coefficients in the JPEG algorithm). The highest resolutions can be quantized the most coarsely. This makes more of the amplitudes zero, allowing more zero-trees and improving the effectiveness of the entropy encoding stage.

Figure 2: One-dimensional wavelets. From (Dartmouth College).

JPEG2000 is an attempt to standardise wavelet image compression (Joint Photographic Experts Group).
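As a minimal sketch of the multi-resolution idea in steps 2 to 5, assuming simple block-average basis functions in place of real wavelets and an illustrative quantization step size (JPEG2000 itself uses a true discrete wavelet transform and arithmetic coding; all names here are assumptions):

```python
import numpy as np

STEP = 8.0  # hypothetical quantization step size

def downsample(img, factor):
    """Coarse approximation: average over factor x factor pixel blocks."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(coarse, factor):
    """Expand back to full size by repeating each coarse value."""
    return np.kron(coarse, np.ones((factor, factor)))

def encode_pyramid(img, levels=4):
    """Encode as quantized residual levels, coarsest first (cf. steps 2-5)."""
    residual = img.astype(float)
    encoded = []
    for level in range(levels):
        factor = 2 ** (levels - 1 - level)          # e.g. 8, 4, 2, 1 pixels per sample
        coarse = downsample(residual, factor)
        q = np.round(coarse / STEP).astype(int)     # quantize; finer levels become mostly 0
        encoded.append((factor, q))
        residual -= upsample(q * STEP, factor)      # only the leftover detail remains
    return encoded

def decode_pyramid(encoded, shape):
    """Sum the de-quantized levels back into an approximation of the image."""
    out = np.zeros(shape)
    for factor, q in encoded:
        out += upsample(q * STEP, factor)
    return out

# A smooth image: the finer levels are mostly zeros, which entropy coding compresses well.
img = np.add.outer(np.arange(64), np.arange(64)).astype(float) * 2
levels = encode_pyramid(img)
print("percent zero per level:", [int((q == 0).mean() * 100) for _, q in levels])
recon = decode_pyramid(levels, img.shape)
```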

2.3 Binary Tree Predictive Coding and Non-Uniform Sampling and Interpolation

These two methods are very similar. They both work by storing only the values of some of the pixels in an image and allowing the rest to be predicted by the decoder.

Non-Uniform Sampling and Interpolation (NSI) encodes the values of the most important pixels; when decoding, the rest of the pixels are interpolated (predicted) from nearby known pixels (Rosenberg). Figure 3 shows the sample points used in an encoding of the Lena image. The method encodes large flattish areas very well but uses a lot of points for edges and textures. The method is not as good as BTPC because the positions of the sampled pixels are more complicated to encode.

Figure 3: The sample points used to encode Lena with NSI. From (Rosenberg).

Binary Tree Predictive Coding (BTPC) works in a similar way to NSI except that, instead of choosing the most important pixels, the pixels are sampled in a binary tree structure, with pixels nearer the root used to predict those below them. A zero-tree symbol is output when all pixels at lower levels can be predicted accurately enough. For details of the algorithm see (Robinson, 1994).
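As a minimal sketch of the sample-and-interpolate idea, assuming random sample positions rather than the importance-based selection NSI uses, and SciPy's generic scattered-data interpolation; the names and parameters are illustrative only:

```python
import numpy as np
from scipy.interpolate import griddata

def sample_and_interpolate(img, keep_fraction=0.05, seed=0):
    """Keep a small subset of pixel values and predict the rest by interpolation."""
    h, w = img.shape
    rng = np.random.default_rng(seed)
    n_keep = int(h * w * keep_fraction)
    ys = rng.integers(0, h, n_keep)
    xs = rng.integers(0, w, n_keep)
    known_points = np.column_stack([ys, xs])
    known_values = img[ys, xs]
    # Decoder side: fill every pixel from the nearby known samples.
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    return griddata(known_points, known_values, (grid_y, grid_x),
                    method="linear", fill_value=float(img.mean()))

# Smooth regions are reconstructed well from few samples; edges would need more.
img = np.add.outer(np.linspace(0, 255, 128), np.linspace(0, 255, 128)) / 2
approx = sample_and_interpolate(img)
print("mean absolute error:", float(np.abs(approx - img).mean()))
```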

2.4 Fractal Image Compression

The word fractal is hard to define but is usually used to describe anything with infinite detail and self-similarity on different scales. Fractal image compression is based on partitioned IFS (Iterated Function Systems) (Barnsley and Hurd, 1992). Compressed images are described entirely in terms of themselves. Here is the simplest encoding algorithm:

1. The image is partitioned into blocks called ranges.

2. For each range, find a contractive transformation that describes it as closely as possible. Each range is defined as looking like a larger area (called a domain) anywhere in the image, with a brightness and a contrast adjustment. To guarantee that decoding will work, the contrast adjustment must be less than 1 (the contrast is reduced by the transformation), but it has been found that some contrast adjustments greater than 1 can be safely allowed.

3. The transformation data is quantized. In practice the quantization is taken into account when judging how well a domain matches a range.

A very good and detailed account of fractal image compression is in (Fisher, 1994).

It may seem useless to define the image in terms of itself, but if you start with any image and iteratively apply the transformations, it will converge on the encoded image. This decoding process is fast and only needs a few iterations. This convergence has been mathematically proved and is due to the transformations being contractive. Since it is unlikely that every range will have a domain that matches perfectly, fractal compression is by nature lossy, and there is a limit to the fidelity of the reconstruction possible for any given image and range size.

Figure 4: Range blocks used in the quadtree compression of Figure 6(e).

In the simplest algorithm the ranges are all the same size, but much better results are obtained by adapting the partitions to suit the image. Sometimes large areas can be described by one transformation, while other areas need many ranges and many transformations. A simple method is quadtree partitioning (Figure 4). Better is HV partitioning, which starts with one range and recursively divides ranges unevenly into two (Fisher, 1994). Starting with small ranges and merging adjacent ranges gives good results as well (Ruhl et al., 1997).

One problem with fractal image compression is that encoding takes a long time, because to find each transformation a large number of possible domains must be compared to the range. Classifying ranges and domains and only comparing those in the same class reduces the time taken (Fisher, 1994). Even faster encoding can be achieved by building a multi-dimensional search tree (Saupe, 1995) (Cardinal, 2001).

With the basic algorithm you often get visible artifacts at the edges of ranges (as in Figure 6(e)). It helps to blur the edges of ranges into each other, but this does not eliminate the problem entirely. The quality possible depends on the image encoded. Fractal image compression can be combined with wavelet encoding, which gives better image quality without range-edge artifacts. Due to the hierarchical nature of wavelet compression, this allows decoding in one step instead of iterating (Levy and Wilson, 1995) (van de Walle, 1995).

Since the image is encoded by transformations rather than pixels, you can decode the image at any resolution. When decoded at a higher resolution, extra detail is generated by the transformations, and this often gives better quality enlargements than stretching the original image (Figure 5). This property is called super resolution.

Figure 5: Super resolution. (a) Lena's eyes enlarged 400% by pixel resize; (b) Lena's eyes decoded at 400% size from a 20:1 FIF encoding.
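As a minimal sketch of the basic fixed-size-range algorithm above, assuming 8x8 ranges, 16x16 domains taken from a coarse grid, and a least-squares fit for the contrast and brightness adjustments; real coders add adaptive partitioning, domain classification and parameter quantization, so this is only an illustration:

```python
import numpy as np
from itertools import product

R = 8   # range block size; domains are 2R x 2R, shrunk by 2x averaging

def shrink(block):
    """Average 2x2 pixel groups so a 2R x 2R domain matches an R x R range."""
    return block.reshape(R, 2, R, 2).mean(axis=(1, 3))

def fit(domain, target):
    """Least-squares contrast s and brightness o so that s*domain + o ~ target."""
    d, r = domain.ravel(), target.ravel()
    var = ((d - d.mean()) ** 2).sum()
    s = ((d - d.mean()) * (r - r.mean())).sum() / var if var > 0 else 0.0
    s = float(np.clip(s, -1.0, 1.0))        # keep the transformation contractive
    o = float(r.mean() - s * d.mean())
    err = float(((s * d + o - r) ** 2).sum())
    return s, o, err

def encode(img, step=8):
    """For each range block, pick the domain (on a coarse grid) that fits best."""
    h, w = img.shape
    domains = list(product(range(0, h - 2 * R + 1, step),
                           range(0, w - 2 * R + 1, step)))
    code = []
    for ry, rx in product(range(0, h, R), range(0, w, R)):
        target = img[ry:ry + R, rx:rx + R]
        best = None
        for dy, dx in domains:
            dom = shrink(img[dy:dy + 2 * R, dx:dx + 2 * R])
            s, o, err = fit(dom, target)
            if best is None or err < best[0]:
                best = (err, dy, dx, s, o)
        code.append((ry, rx, best[1], best[2], best[3], best[4]))
    return code

def decode(code, shape, iterations=8):
    """Start from any image and repeatedly apply the transformations."""
    img = np.full(shape, 128.0)
    for _ in range(iterations):
        out = np.empty(shape)
        for ry, rx, dy, dx, s, o in code:
            out[ry:ry + R, rx:rx + R] = s * shrink(img[dy:dy + 2 * R, dx:dx + 2 * R]) + o
        img = out
    return img

# Hypothetical usage on a small synthetic image (dimensions a multiple of 2R):
img = np.add.outer(np.linspace(0, 255, 64), np.linspace(0, 255, 64)) / 2
recon = decode(encode(img), img.shape)
```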

3 Comparison of Decoded Colour Images

In Figures 6 and 7 you can compare decoded images from five different algorithms. There are mathematical metrics for judging the difference between two images, but for most applications the human eye is the best judge. The image used is the Lena image because it is an unofficial standard for image processing research and is used in most research papers on image compression. Using only one image makes this comparison less than thorough, but you can see the sort of artifacts and information lost when using the different algorithms. I have used colour images rather than greyscale images because colour images are more relevant to real-world uses. The images are 512x512 pixels and 24-bit colour. The Lena image and other test images are available from (Kominek).

The programs used to encode these images were:

Jasc Paint Shop Pro 5.0 for the DCT-based JPEG.
LuraWave SmartCompress (AlgoVision-LuraTech) for the wavelet-based JPEG2000.
Iterated Systems Fractal Imager 1.6 (Iterated Systems Inc) for FIF, which appears to be based on fractal transformations and wavelets.
FRACOMP 1.0 (Kassler, 1996) for QIF, which is fractal-based and uses quadtrees.
BTPC 4.1 (Robinson, 1994) for Binary Tree Predictive Coding.
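One such mathematical metric is PSNR (Peak Signal to Noise Ratio), discussed further in the conclusions. A minimal sketch, assuming 8-bit greyscale images (the example data is hypothetical):

```python
import numpy as np

def psnr(original, decoded, max_value=255.0):
    """Peak Signal to Noise Ratio, in decibels, between two images."""
    mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
    if mse == 0:
        return float("inf")      # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)

# Hypothetical example: a decoded image differing by small random errors.
original = np.random.randint(0, 256, (512, 512)).astype(float)
decoded = original + np.random.normal(0, 2.0, original.shape)
print(round(psnr(original, decoded), 1), "dB")
```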

Figure 6: Comparison of the Lena image compressed at around 38:1 using different methods. (a) The original uncompressed Lena image (768KB); (b) JPEG (20KB); (c) LuraWave JPEG2000 (20KB); (d) Iterated Systems FIF (19KB); (e) FRACOMP QIF (23KB); (f) Binary Tree Predictive Coding (19KB).

Figure 7: Comparison of the Lena image compressed at around 76:1 using different methods. (a) The original uncompressed Lena image (768KB); (b) JPEG (10KB); (c) LuraWave JPEG2000 (10KB); (d) Iterated Systems FIF (10KB); (e) FRACOMP QIF (10KB); (f) Binary Tree Predictive Coding (10KB).

4 Combining Fractal Compression with Sampling and Interpolation

The biggest problem with standard fractal compression is the artifacts at the edges of ranges. Post-processing can help but not eliminate the problem. One successful way of improving it is by combining fractal compression with wavelets, as in (Levy and Wilson, 1995) and (van de Walle, 1995). However, this method restricts the size of the domain pool because the domains must be aligned with wavelets on the earlier levels. Generally in fractal compression, the larger the domain pool, the better the quality or compression ratio that can be achieved (Fisher, 1994).

Another way to improve image quality might be to use the principles of binary tree predictive coding and non-uniform sampling. This is an area I am currently researching. My idea is to remove the brightness offset from the transformations and encode the actual colour values of the four corner pixels of each range. In applying the transformation, the domain would be tilted so that the colours of its four corners map to the four corners of the range. Each corner in the transformation would effectively have its own brightness offset calculated. The brightness offset for each pixel in the range would be calculated by interpolating between the offsets of the four corners.

The corner pixel values would be shared between adjacent ranges. For an even grid of ranges of the same size, every corner not on the image edge is shared by four ranges, so storing them would take no more space than storing brightness offsets does. With HV partitioning there are the four corners of the image to begin with, and each time a range is split two new corners are added, so there are roughly twice as many corners to encode as there are ranges.

By defining exactly the colour of each corner of each range, there would be no artifacts on the corners of ranges, and artifacts across edges would be greatly reduced. Since the corners are shared by ranges, the line of pixels on the edge of two ranges could be calculated as the average of the results of the two ranges' transformations. As well as removing the edge artifacts, the automatic tilting of the domain to fit the range would allow better domain-range matches. For example, a textured range with a constant gradient could be matched with a domain with no gradient and the same texture.

Another area where fractal image compression could be improved is the size of domains. Most current methods use domains four times the size of ranges because taking a simple mean of four pixels is simple. Better matches might be found with domains only slightly bigger than the ranges. It would also be possible to use domains the same size as some ranges, as long as there are no circular references leading to a range being mapped to itself. This could be prevented by insisting on a contrast adjustment strictly less than 1 for transformations that are not spatially contractive. The pillar on the left of Figure 4 could have been encoded with fewer ranges if one part of it had been encoded with a few contractive transformations and the rest of it encoded as looking like that part.

I am currently working on implementing these ideas and will report the results in my 3rd year project final report in May 2003.
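As a minimal sketch of the corner-offset idea described above, assuming bilinear interpolation between the four corner offsets and a single already-shrunken domain block; this only illustrates the proposal, it is not an implementation of it, and all names are hypothetical:

```python
import numpy as np

def bilinear_field(tl, tr, bl, br, size):
    """Bilinearly interpolate four corner values over a size x size block."""
    t = np.linspace(0.0, 1.0, size)
    top = (1 - t) * tl + t * tr          # interpolate along the top edge
    bottom = (1 - t) * bl + t * br       # and along the bottom edge
    v = np.linspace(0.0, 1.0, size)[:, None]
    return (1 - v) * top[None, :] + v * bottom[None, :]   # blend vertically

def tilted_transform(domain_shrunk, range_corners, contrast):
    """Map a shrunken domain onto a range so its corners hit the range's corner colours.

    Each corner gets its own brightness offset (range corner minus scaled domain
    corner); the offset for every other pixel is bilinearly interpolated.
    """
    n = domain_shrunk.shape[0]
    d_corners = (domain_shrunk[0, 0], domain_shrunk[0, -1],
                 domain_shrunk[-1, 0], domain_shrunk[-1, -1])
    offsets = bilinear_field(*(rc - contrast * dc
                               for rc, dc in zip(range_corners, d_corners)), size=n)
    return contrast * domain_shrunk + offsets

# A flat but textured domain can now match a range with a constant gradient:
domain = 128 + np.random.normal(0, 5, (8, 8))
approx_range = tilted_transform(domain, range_corners=(40, 80, 60, 100), contrast=0.6)
```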
5 Conclusions

There are many good algorithms to compete with the widespread JPEG standard, but judging from Figure 6 none of them appears to provide better quality at the 38:1 compression ratio, which is suitable for web images. This is a little surprising considering the number of claims of compression better than JPEG in research papers and books.

These claims are usually based on higher compression ratios, as in Figure 7, or on image quality metrics such as PSNR (Peak Signal to Noise Ratio), which does not model the properties of human vision well. It looks likely that JPEG will remain the most popular format for the foreseeable future.

Each algorithm has its own advantages and drawbacks in different areas. For example, in Figure 6 the FIF and BTPC images have the best-looking sharp edges, but in the FIF image the hat looks very bad and the BTPC image has speckles. JPEG gives the best hat texture but has minor artifacts at some edges. JPEG2000 is a good compromise between edges and textures, performing nearly as well as the best in both areas.

JPEG2000's wavelet-based format is overall probably the best available. It performs extremely well over a wide range of compression ratios. The format is also the most flexible; it supports transparency, gamma correction, progressive decoding and regions of interest (increased quality for part of an image).

There are still possibilities for further research. Fractal image compression still has a lot of potential, and sampling pixels at range block corners could lead to a very significant improvement in image quality for any given compression ratio.

References

AlgoVision-LuraTech. LuraWave SmartCompress 3.0. http://www.algovision-luratech.com/.

Iterated Systems Inc. Fractal Imager Plus 1.6. The website www.iterated.com is often referenced, but it has little information and only a Photoshop plug-in to download. However, with a Google search you can find Iterated Systems' free stand-alone FIF compressor: fi16 mmx.exe.

Joint Photographic Experts Group. JPEG specifications and news. http://www.jpeg.org/.

Michael F. Barnsley and Lyman P. Hurd. Fractal Image Compression. AK Peters, 1992. ISBN 1-56881-000-8.

Jean Cardinal. Fast fractal compression of greyscale images. IEEE Transactions on Image Processing, 10(1), 2001.

Dartmouth College. Wavelets diagram. http://eamusic.dartmouth.edu/~book/matcpages/chap.3/3.6.alts_fft.html.

Yuval Fisher. Fractal Image Compression: Theory and Application. Springer-Verlag, 1994. ISBN 0-387-94211-4.

Andreas Kassler. FRACOMP 1.0 Fraktale Bildkompression (fractal image compression), 1996. http://www-vs.informatik.uni-ulm.de/mitarbeiter/kassler/fractals.htm.

John Kominek. Waterloo BragZone.

I. Levy and R. G. Wilson. A hybrid fractal-wavelet transform image data compression algorithm. Technical Report CS-RR-289, Coventry, UK, 1995. http://citeseer.nj.nec.com/329032.html.

Jean-loup Gailly. comp.compression frequently asked questions. http://www.faqs.org/faqs/compression-faq/.

John A. Robinson. Binary tree predictive coding, 1994. Web pages based on a paper submitted to IEEE Transactions on Image Processing. http://www.elec.york.ac.uk/visual/jar11/btpc/btpc.html.

Chuck Rosenberg. Non-uniform sampling and interpolation for lossy image compression. http://www-2.cs.cmu.edu/~chuck/nsipg/nsi.html#nsi.

Matthias Ruhl, Hannes Hartenstein, and Dietmar Saupe. Adaptive partitionings for fractal image compression. In Proceedings ICIP-97 (IEEE International Conference on Image Processing), volume II, pages 310-313, Santa Barbara, CA, USA, 1997. http://citeseer.nj.nec.com/ruhl97adaptive.html.

David Salomon. DCT diagram. http://www.ecs.csun.edu/~dxs/dc2advertis/dcomp2ad.html.

Dietmar Saupe. Accelerating fractal image compression by multi-dimensional nearest neighbor search. In J. A. Storer and M. Cohn, editors, Proceedings DCC '95 (IEEE Data Compression Conference), pages 222-231, Snowbird, UT, USA, 1995. http://citeseer.nj.nec.com/458915.html.

Axel van de Walle. Relating fractal image compression to transform methods. Master's thesis, Waterloo, Canada, 1995. http://citeseer.nj.nec.com/walle95relating.html.