Split field coding: low complexity, error-resilient entropy coding for image compression
James J. Meany*a, Christopher J. Martensa
aBoeing, P.O. Box 516, MC S, St. Louis, MO, USA

ABSTRACT

In this paper, we describe split field coding, an approach for low complexity, error-resilient entropy coding which splits code words into two fields: a variable length prefix and a fixed length suffix. Once a prefix has been decoded correctly, the associated fixed length suffix is error-resilient, with bit errors causing no loss of code word synchronization and only a limited amount of distortion on the decoded value. When the fixed length suffixes are segregated to a separate block, this approach becomes suitable for use with a variety of methods which provide varying protection to different portions of the bitstream, such as unequal error protection or progressive ordering schemes. Split field coding is demonstrated in the context of a wavelet-based image codec, with examples of various error resilience properties, and comparisons to the rate-distortion and computational performance of JPEG 2000.

Keywords: entropy coding, Golomb coding, Rice coding, start-step coding, error resilience, image compression, wavelet compression, JPEG 2000

1. INTRODUCTION

While modern compression methods can substantially reduce the size required to represent imagery and video, the variable length coding employed in compression makes the compressed data highly susceptible to bit errors, so that even a single bit error may totally corrupt the reconstructed content. A variety of methods may be employed to protect compressed data against such errors, including resynchronization to contain error effects, error concealment strategies to mask the effects of errors in the reconstructed imagery or video, error correction coding strategies to detect and correct errors through controlled use of redundancy, and error-resilient coding strategies to provide robust reconstruction in the presence of errors.
This paper describes split field coding, an approach for low complexity, error-resilient entropy coding which combines parametric coding methods (such as Rice coding [1]) with the separation of code words into two fields: a variable length prefix and an error-resilient fixed length suffix. Split field coding divides a source distribution into groups of consecutive bins, called superbins, each comprised of source values whose code words all share the same length. The variable length prefix of a code word designates the superbin for the encoded source value, while the fixed length suffix designates the encoded value within the superbin. This approach offers computational efficiency, efficient coding of transform coefficients (resulting from transform coding or predictive coding), and error resilience properties due to the separation of the prefix and suffix fields. Once a prefix has been decoded correctly, errors on the associated fixed length suffix field cause no loss of code word synchronization, while the decoded value is constrained to the range of the associated superbin.

Split field coding segregates the error-resilient suffix fields to a separate suffix block. Due to the error resilience properties of the suffixes, the suffix block may be afforded a lower level of protection from errors than the remainder of the bitstream. The compressed bitstream is then suitable for use with a variety of methods which provide varying protection to different portions of the bitstream, such as unequal error protection or progressive ordering schemes.

The remainder of this paper is organized as follows: In Section 2, we provide background on parametric entropy coding methods and their benefits in terms of coding efficiency and low computational complexity. In Section 3, we introduce the split field coding strategy for providing error resilience to portions of the compressed bitstream, and describe its implementation in the context of a wavelet-based image codec.
In Section 4, we report the results of simulations for split field coding, showing rate-distortion results, computational performance, and error resilience properties in the presence of truncation and channel errors. Finally, conclusions are given in the last section.

*james.j.meany@boeing.com; phone +1 (314)

Applications of Digital Image Processing XXXI, edited by Andrew G. Tescher, Proc. of SPIE Vol. 7073 (2008)
Figure 1. Notional depiction of symbol distribution partitioned into superbins for Golomb and Rice coding. For this example, each superbin consists of four bins (i.e., the Golomb parameter m = 4). The dotted lines represent partitions between the superbins. The unary codes above the distribution represent the code word prefixes for symbols within the respective superbins.

2. PARAMETRIC APPROACHES TO ENTROPY CODING

Traditional approaches to entropy coding estimate a codebook based on the raw empirical statistics of the source (typically represented in a histogram). These include such well-known methods as Huffman coding [2] and arithmetic coding [3,4]. Parametric entropy coders provide an alternative approach, using an approximation of the source statistics based on an appropriate mathematical or statistical distribution function. With the parametric coding approach, the codebook is characterized by one to several parameters, which may be either parameters of the distribution model or else parameters of the entropy coder itself, which in turn should impute an appropriate distribution function.

2.1 Golomb and Rice coding

The most well-known parametric coding method is Golomb coding [5], which can be viewed as a special case of Huffman coding for which the source is modeled using a discrete geometric or exponential distribution, given by:

P(n) = (1 - e^(-λ)) e^(-λn) = (1 - p) p^n = q p^n    (1)

where p = e^(-λ) and q = 1 - p. Golomb coding was first developed as a method for encoding run lengths of a binary source with probabilities p and q = 1 - p (with p ≥ q). Because an exponential distribution is unbounded, the traditional Huffman codebook estimation procedure, which starts from the least probable symbol value, may not be applied.
Instead, the Golomb coding approach recognizes that the probabilities in an exponential distribution are halved at a constant interval, whose value is given by:

m = -1 / log2(p)    (2)

Considering two symbols from the distribution, n1 and n2 = n1 + m, the probabilities are related by P(n2) = 0.5 P(n1). Efficient coding dictates that the code word for n2 should be one bit longer than the code word for n1. Golomb coding partitions the distribution into superbins, where each superbin is a set of adjacent bins whose code words all share the same code word length. The codebook can be characterized by the value m, the so-called Golomb parameter, which is equal to the nominal width of the superbins. Figure 1 depicts an example of a distribution partitioned into superbins, with m = 4 (with each superbin comprising four individual bins). Each code word for the Golomb code consists of two fields: a unary prefix, which designates the superbin, followed by a binary suffix, which designates the individual bin within the superbin. Thus, in the example of Figure 1, the symbol value 14 would be encoded as 111010, with the unary prefix of 1110 designating the superbin containing the values from 12 to 15, and the binary suffix of 10 designating the third symbol value (14) within the superbin.
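For the power-of-2 case, the code word construction just described can be sketched in a few lines of Python (a minimal illustration under the conventions of Figure 1, not the paper's implementation; the function name is ours):

```python
def rice_encode(n, K):
    """Rice code (Golomb code with m = 2**K) for a non-negative integer n.

    The unary prefix (q ones, then a terminating zero) selects the superbin;
    the K-bit binary suffix selects the individual bin within the superbin.
    """
    q, r = divmod(n, 1 << K)
    prefix = "1" * q + "0"
    suffix = format(r, "0{}b".format(K)) if K > 0 else ""
    return prefix + suffix
```

For the example above (n = 14, m = 4, i.e., K = 2), this yields the prefix 1110 and the suffix 10, giving the code word 111010.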
Table 1 shows the first 14 entries of the Golomb codebook for four different values of the Golomb parameter. From these examples, it may be observed that each unary prefix is associated with a fixed length binary suffix only when the Golomb parameter is equal to a power of 2. When the Golomb parameter is not equal to a power of 2, then the suffix field has a variable length and consequently, the prefixes do not align directly with the superbins. Also, in such cases, the first superbin comprises fewer than the nominal number of bins. Specifically, it may be shown that efficient coding dictates that the width of the first superbin is given by:

m0 = 2^(floor(log2(m)) + 1) - m    (3)

Rice coding [1] is a special case of Golomb coding, with the Golomb parameter constrained to be a power of 2:

m = 2^K    (4)

The binary suffix for Rice coding has a fixed length (designated by the parameter K), thus providing for simpler coding implementations at a typically slight cost in coding efficiency. Golomb coding has been shown to be the minimum redundancy code for exponentially distributed sources. [6] However, source distribution characteristics will not in general indicate an integer value for the Golomb parameter (nor a power of 2 value in the case of Rice coding). In those situations, the minimum redundancy Golomb codebook will have superbins which oscillate between widths of floor(-1/log2(p)) and floor(-1/log2(p)) + 1. Similarly, the minimum redundancy Rice codebook will have superbins which oscillate between widths of 2^K and 2^(K+1), where K = floor(log2(-1/log2(p))). Many implementations will not support oscillation of the superbin width, but instead require that the same superbin width apply for all superbins (except possibly for the first superbin, as noted above).
In this case, minimum redundancy coding is achieved not by choosing the integer (or power of 2) value for the Golomb parameter m which is nearest to -1/log2(p), but the integer (or power of 2) value for m which results in the smallest expected average code word length for the Golomb or Rice code. For the Golomb code, the optimal integer setting for the Golomb parameter is given by: [6]

m = ceil( -log(1 + p) / log(p) )    (5)

Due to a simple relationship between the probability p and the mean µ of the exponential distribution (i.e., µ = p / (1 - p)), it is possible to estimate the appropriate setting for the Golomb parameter directly from the estimated mean µ̂ of the source distribution, for example via the approximation:

m̂ ≈ µ̂ ln(2)    (6)

Table 1. Sample Codebooks for Golomb and Rice Codes

 n   m = 2      m = 3      m = 4      m = 5
 0   00         00         000        000
 1   01         010        001        001
 2   100        011        010        010
 3   101        100        011        0110
 4   1100       1010       1000       0111
 5   1101       1011       1001       1000
 6   11100      1100       1010       1001
 7   11101      11010      1011       1010
 8   111100     11011      11000      10110
 9   111101     11100      11001      10111
10   1111100    111010     11010      11000
11   1111101    111011     11011      11001
12   11111100   111100     111000     11010
13   11111101   1111010    111001     110110
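The general (non-power-of-2) construction and the optimal parameter choice can be sketched as follows; this is an illustrative rendering under the conventions of Table 1 (function names are ours), using the Gallager and Van Voorhis rule for the optimal integer parameter:

```python
import math

def golomb_encode(n, m):
    """Golomb code word for n >= 0: unary prefix plus truncated-binary suffix."""
    q, r = divmod(n, m)
    b = m.bit_length() - 1                # floor(log2(m))
    cutoff = (1 << (b + 1)) - m           # number of short (b-bit) suffixes
    if r < cutoff:
        suffix = format(r, "0{}b".format(b)) if b > 0 else ""
    else:
        suffix = format(r + cutoff, "0{}b".format(b + 1))
    return "1" * q + "0" + suffix

def golomb_param(p):
    """Optimal integer Golomb parameter for P(n) = (1 - p) * p**n."""
    return max(1, math.ceil(-math.log(1.0 + p) / math.log(p)))
```

For m = 5, the code words for n = 0, 1, 2 are one bit shorter than those for n = 3, 4, so the first superbin holds only 2^(floor(log2 5) + 1) - 5 = 3 bins, consistent with equation (3).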
For the Rice code, the appropriate power of 2 setting for the Golomb parameter m can be estimated very accurately from the estimated mean, for example as:

m̂ = 2^K̂, where K̂ = ceil(log2(µ̂))    (7)

Golomb and Rice coding are easily extended to handle signed data modeled by a Laplacian (two-sided exponential) distribution. Most commonly, the sign is signaled as part of the prefix, with zero-valued symbols encoded separately (by methods such as run length coding). Another method for handling signed data is to fold the distribution, producing a one-sided distribution with interleaved positive and negative values.

The asymptotic growth of the code word length for Golomb and Rice coding is a linearly increasing function of the symbol value, which can lead to very long code words when unexpectedly large symbol values occur. Such large symbol values may arise due to nonstationary sources, or when the source statistics depart from the Laplacian / exponential model. Due to the nonstationarity of image and signal data, a mixture of statistics that may be locally exponential or Laplacian can result in heavy-tailed distributions. To deal with such issues, variations of Golomb and Rice coding may allow for increases in the superbin width for large symbol values. Increases in the superbin width may be intrinsic to a particular coding method, as with the start-step coding method described below, or may be implemented by an escape mechanism which departs from the normal Golomb-Rice coding procedure when the symbol value or the superbin number exceeds a specified threshold. Such strategies can prevent excessively long code words by reducing the asymptotic growth of the code word length to a logarithmically increasing function of the symbol value.

2.2 Start-step coding

Start-step coding generalizes Rice coding by defining systematic increases to the superbin width.
Thus, the width of superbin i is given by [7]:

m_i = 2^(K_i), where K_i = START + i · STEP    (8)

where START, STEP, i, and K_i are all non-negative integers. Thus, Rice coding corresponds to start-step coding with K = START and STEP = 0. Figure 2 depicts an example of a distribution of signed symbol values, partitioned in accordance with start-step coding, with parameters START = 1 and STEP = 1. To handle the signed values, the prefix is augmented by a leading bit to indicate sign. The zero value in this distribution would be coded separately, by methods such as run lengths of significant (non-zero) and insignificant (zero) values. In the example of Figure 2, the symbol value 8 would fall into the superbin associated with the prefix 1110 and would require a 3 bit suffix field of 001 to index bin number 1 from among eight individual bins with indices 0 to 7. Thus, the resulting code word would be 1110001.

The asymptotic growth of the code word length for start-step coding with STEP = 1 is a logarithmically increasing function of the symbol value. For larger values of the STEP parameter, the code word length grows even more slowly. The start-step code also sometimes includes a STOP parameter, which specifies a maximum number of bits for the suffix field.

Figure 2. Notional depiction of a distribution of signed symbol values. This distribution is partitioned into superbins in accordance with start-step coding, with parameter settings of START = 1 and STEP = 1.
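The start-step construction for signed, nonzero values can be sketched as below (our own illustration; the sign-bit convention, 1 for positive, is an assumption chosen to match the worked example for the value 8):

```python
def start_step_encode(value, start, step):
    """Start-step code word for a signed, nonzero value.

    Superbin i holds 2**(start + i*step) consecutive magnitudes. The prefix is
    a sign bit (assumed: 1 = positive) followed by a unary superbin index; the
    suffix is the fixed-length bin index within the superbin.
    """
    assert value != 0, "zero is coded separately (e.g., by run lengths)"
    sign = "1" if value > 0 else "0"
    n, i, base = abs(value), 0, 1         # coded magnitudes start at 1
    while True:
        K = start + i * step
        if n < base + (1 << K):           # value falls in superbin i
            suffix = format(n - base, "0{}b".format(K)) if K > 0 else ""
            return sign + "1" * i + "0" + suffix
        base += 1 << K
        i += 1
```

With START = 1 and STEP = 1, the value +8 lands in the third superbin (magnitudes 7 to 14), giving prefix 1110 and suffix 001, i.e., the code word 1110001 from the text.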
2.3 Application benefits of parametric entropy coding

Parametric entropy coding offers several distinct benefits. These include computational efficiency both for codebook estimation and for code word generation and decoding, efficient entropy coding, and fast, local adaptation to nonstationary statistics. Codebook estimation for parametric coding is simplified in that only one to a few parameters must be estimated, and these may typically be estimated by a simple relationship to scalar measures which are easily computed from the source statistics, as described above. By contrast, traditional codebook formation processes (as for Huffman or arithmetic coding) commonly require maintenance of complete source statistics in the form of a histogram, with nontrivial incremental algorithms for estimation of the codebook. [2, 4] Furthermore, the structured nature of parametric entropy coding, such as the prefix/suffix structure of Golomb, Rice, or start-step coding, leads to efficient implementations for encoding/decoding of the code words.

The coding efficiency of parametric coding approaches is due primarily to two factors: (1) the statistical distributions associated with parametric coding methods such as Golomb and Rice coding are well-matched to the statistics encountered in transform coding of imagery, and (2) the parametric models can be quickly and efficiently adapted to local statistics in nonstationary data. The use of parametric entropy coding approaches is supported by efforts to characterize the coefficient statistics for wavelet transforms and discrete cosine transforms (DCTs) of natural imagery, as reported in [8-10]. These papers show that wavelet and DCT coefficients may be effectively modeled using either Laplacian or generalized Gaussian distributions. Generalized Gaussian distributions are often used to model data sets that contain a mixture of several simpler independent distributions (such as Laplacian or Gaussian distributions).
Joshi and Fischer [10] observed that when the coefficients of the DCT are classified into blocks based on AC energy, then the coefficient data is more readily modeled by simpler Laplacian and Gaussian distributions than by generalized Gaussian distributions. These results support approaches which are locally adapted based on Laplacian and related distributions. Also, the positions for significant (nonzero) and insignificant (zero) coefficients may be effectively represented by run length codes, which may be modeled by an exponential distribution, and effectively coded using Golomb, Rice, or start-step coding.

The simplified source models of parametric coding are well-suited for local adaptation to nonstationary statistics, using either forward or backward adaptation. Forward adaptation defines a codebook for a block of data based on a first pass through the block prior to coding. The codebook must then be transmitted as side information prior to the encoded data. With traditional methods such as Huffman and arithmetic coding, the amount of side information can be large, due to the reliance on empirical modeling of the source. With parametric coding, the side information consists only of the codebook parameter(s), which typically can be transmitted in a few bits. This allows the adaptation to be performed over relatively small blocks of the source data, with good adaptation to local statistics. With backward adaptation, the codebook is estimated on the fly, based on the statistics of previously coded symbols or previous code words. While backward-adaptive coding can be performed in a single pass and requires no explicit side information, the coding is inefficient until the codebook converges to a good approximation of the source statistics. The resulting coding inefficiencies constitute a form of implicit side information.
Traditional Huffman and arithmetic coding approaches are slow to converge, due to the brute force modeling of source statistics (by histogram), and do not adapt quickly to nonstationary statistics. The simplified source models of parametric coding enable rapid local adaptation to changing source statistics, so that the parameter(s) may adapt to a significant change within only a few source symbols.

Due to these benefits, parametric coding approaches have been frequently adopted for use in image and video coding standards. The JPEG-LS lossless image coding standard [11, 12] employs a modified version of Golomb coding to achieve efficient coding with low complexity. [13] The H.264/AVC video coding standard [14-16] uses Context-Adaptive Variable Length Coding (CAVLC) [17], which is a state-machine based coder similar to Golomb coding, and exponential Golomb coding [15], which is equivalent to start-step coding with STEP = 1. The proposed JPEG XR standard (based on Microsoft's HD Photo format) uses the flexbits method, with code words consisting of a variable length prefix (generated using a state machine) and a fixed length suffix. [18, 19]
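As a concrete illustration of how cheaply a parametric codebook can be adapted backward, the Rice parameter can be tracked from two running scalars, in the style used by JPEG-LS (a sketch; the counter names A and N follow that convention and are not taken from this paper):

```python
def rice_k_from_stats(A, N):
    """Pick the smallest K with N * 2**K >= A, where A accumulates the coded
    magnitudes and N counts the symbols coded so far (JPEG-LS-style rule)."""
    K = 0
    while (N << K) < A:
        K += 1
    return K
```

Because only two scalars are maintained (rather than a histogram), the estimate can track a significant change in local statistics within a few symbols.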
3. SPLIT FIELD CODING: ERROR RESILIENCE BY SEPARATION OF FIXED LENGTH SUFFIXES

3.1 Effects of bit errors on compressed imagery

Unfortunately, compressed imagery is very susceptible to catastrophic error effects due to the incidence of bit errors on the compressed bitstream. This sensitivity to bit errors is largely due to the use of variable length codes, as well as positional codes such as run lengths and significance codes which indicate the placement of coefficients or other compressed content. Bit errors on these codes typically disrupt the synchronization between the encoder and decoder, often resulting in severe distortion. Even a single bit error can totally corrupt a decoded image.

Figure 3 provides examples of effects that a single bit error may have on a wavelet compressed image. The wavelet codec in these examples uses the 5-3 reversible wavelet transform [20, 21] (which is also employed in JPEG 2000 [22]), scalar quantization, and forward-adaptive start-step coding. The bit errors are applied to a lossless wavelet compression of a 256x256 section cropped from the café image in the JPEG 2000 test set, which is shown in Figure 3(a).

Figure 3. Examples of the effects of a single bit error on wavelet compressed imagery: (a) original image; (b) catastrophic distortion due to a single bit error; (c) mild distortion due to a single bit error; (d) difference image between (a) and (c). In both examples, the bit error occurred on the code word for an encoded wavelet coefficient.

Figure 3(b) shows catastrophic distortion resulting from a
single bit error on a code word for an intermediate scale wavelet coefficient. This bit error caused a loss of code word synchronization, which in turn led to complete loss of synchronization for the remainder of the encoded data. For this example, normal resynchronization mechanisms were purposely disabled to allow the loss of synchronization to propagate throughout the image. If resynchronization had been active, then the effects of this bit error would have been contained to a spatial subset within a single scale of the wavelet transform. Figure 3(c) and Figure 3(d) respectively show a reconstructed image and difference image with mild error effects resulting from a single bit error on a code word for a coarse scale wavelet coefficient. The difference image is included because the distortion is masked by the image content, and is barely visible in the reconstructed image. While this bit error also caused a loss of code word synchronization, the synchronization was spontaneously recovered after several misdecoded coefficients. (Spontaneous recovery of synchronization is not unusual for forward-adaptive coding, but extremely rare for backward-adaptive coding due to divergence of codebooks maintained by the encoder and decoder.) Some bit errors will not result in a loss of code word synchronization and may affect the decoding of only a single coefficient. Bit errors on run lengths or other positional codes typically result in a catastrophic loss of synchronization.

Common strategies for dealing with errors in compressed imagery include resynchronization mechanisms to contain the effects of an error to a small portion of the data [23] (typically at a cost of about 0.5% to 2% bitstream overhead), error concealment strategies to mask error effects in the reconstructed imagery [24], error-resilient coding strategies [25], and error correction coding (ECC) strategies which detect and correct channel errors through controlled use of redundancy [26].
3.2 Error resilience of fixed length suffixes

Certain parametric entropy coding approaches, such as Rice coding and start-step coding, use code words with a fixed length suffix. While these variable length coding methods are in general highly susceptible to bit error effects, the fixed length suffix portions of the code words exhibit two forms of resilience to bit errors. First, any errors resulting in partial or total loss of the bits within a suffix field do not result in a loss of code word synchronization. Second, the misdecoded coefficient value resulting from partial or total loss of the bits within a suffix field is constrained to the contiguous range of coefficient values indicated by the associated prefix field (which for a fixed length suffix is the superbin), thus limiting the distortion due to the error(s). This limited distortion also typically benefits from a form of visual masking. This masking occurs because the distortion introduced due to the error(s) on the suffix field of an encoded wavelet or DCT coefficient takes the form of a noise basis function which is directly aligned with a signal basis function, which is typically larger. This masking is explained by the Weber-Fechner law [27-28] of human perception, which says that the smallest noticeable change in a stimulus (visual or auditory) is proportionate to the overall magnitude of the stimulus, and furthermore that the relationship between a perceived change in stimulus and the overall magnitude of the stimulus is logarithmic. Thus, a larger wavelet or DCT basis function will to some degree mask a relatively smaller change in that basis function. A necessary condition for these two error resilience properties of the suffix ((a) preservation of synchronization, and (b) distortion constrained to bounds of the superbin) is that the prefix must be decoded free of errors.
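Both properties are easy to demonstrate with a small sketch: using a Rice code (m = 2**K), flipping bits inside a suffix field changes only that one coefficient, and only within its superbin (the decoder below is our own illustration, not the paper's implementation):

```python
def rice_decode_stream(bits, K):
    """Decode a concatenation of Rice code words (Golomb with m = 2**K)."""
    values, pos = [], 0
    while pos < len(bits):
        q = 0
        while bits[pos] == "1":       # unary prefix: count the ones
            q += 1
            pos += 1
        pos += 1                      # skip the terminating zero
        r = int(bits[pos:pos + K], 2) if K > 0 else 0
        pos += K
        values.append(q * (1 << K) + r)
    return values
```

Encoding the values 14 and 3 with K = 2 gives the stream 111010011. Flipping the last suffix bit of the first code word yields 111011011, which still decodes to two values (synchronization preserved), with the first value misdecoded only from 14 to 15, inside the superbin 12 to 15.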
In order to effectively exploit these properties, the prefix fields must be afforded greater protection from bit errors than the suffix fields.

3.3 Split field coding

Split field coding is an error resilience strategy which separates the prefix fields from the fixed length suffix fields in order to afford greater protection for the prefixes and to exploit the resilience properties of the suffixes. Without such separation of the prefixes and suffixes and some form of greater protection for the prefixes, this approach would not be differentiated in any essential manner from conventional parametric coding methods, such as Golomb-Rice coding or start-step coding. The split field coding strategy may be employed together with any parametric coding method for which the code words include fixed length suffixes. Such entropy codes include Rice coding and start-step coding, as well as the flexbits method included in the proposed JPEG XR standard, where the suffix fields (called in-bin addresses) are isolated to a separate block to afford resilience to errors due to truncation of the compressed bitstream. [19, 29]

Split field coding does not provide an error resilience benefit for the coding of positional information such as run length codes. Any errors in the decoding of positional information will result in a loss of synchronization between the encoder and decoder, even if those errors do not lead directly to a loss of code word synchronization. Thus, split field coding is applied to code words for coefficient values, such as wavelet and DCT coefficients or residual coefficients of predictive coding, but is not typically applied to positional code words.

All or selected portions of the suffix fields of coefficient code words are placed in a separate block, called the suffix block, with the remainder of the bitstream allocated to a primary block. Some of the suffix bits may be allocated to the primary block due to considerations such as how various bitplanes contribute to distortion measures. Alternatively, the codebook estimation may be constrained to produce a minimum suffix field width at various scales, in order to increase the relative size of the noise resilient suffix block, at some cost in coding efficiency. Figure 4 depicts a codec utilizing such a strategy.

Figure 4. Block diagram of a wavelet codec combining scalar quantization and parametric coding, with split field coding of coefficients.
This codec combines the wavelet transform, scalar quantization of wavelet coefficients, and parametric coding of coefficient values and run lengths, with split field coding applied to the encoded coefficient values. This represents one configuration of Boeing's EagleEye low complexity wavelet codec.

The split field coding strategy is compatible with both forward-adaptive and backward-adaptive codebook estimation. In the case of forward adaptation, the codebook parameter(s) are transmitted as side information, so that the codebook is not affected by bit errors on the suffix fields. In the case of backward adaptation, errors on the suffix fields can affect the decoder codebook, leading to divergence between the encoder and decoder codebooks. To prevent such divergence, backward-adaptive codebook estimation may rely only on those bits from previous code words which are not in the suffix block, but may not make use of bits of previous code words placed within the suffix block (and thus treated as error resilient data exposed to a higher probability of errors). Note that some suffix bits may be placed in the primary block and that these may be included in backward-adaptive codebook estimation.

There are many possible approaches for codebook estimation in the absence of all or some of the suffix field information. The estimation may infer the characteristics of the distribution model. For example, when using codebook estimation based on estimates of the mean (absolute) value of the distribution, the updates to the estimated mean value must be made based on only approximate symbol values (due to the excluded suffix information), but these approximate values may be refined in accordance with the current distribution model (such as an exponential model).
Other codebook estimation methods can be defined which adjust the codebook in accordance with coding performance, for example, ensuring that about half of the samples fall into the first superbin with the remaining half falling into outer superbins. Such schemes are not dependent upon the knowledge of the suffix field information.
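The basic separation and rejoining of suffix fields described in this section can be sketched as follows (an illustrative layout using a Rice code with m = 2**K, assuming all suffix bits go to the suffix block; as noted above, a real codec may keep some suffix bits in the primary block):

```python
def split_field_encode(values, K):
    """Rice-encode values, segregating fixed-length suffixes into a suffix block."""
    prefixes, suffixes = [], []
    for n in values:
        q, r = divmod(n, 1 << K)
        prefixes.append("1" * q + "0")
        suffixes.append(format(r, "0{}b".format(K)))
    return "".join(prefixes), "".join(suffixes)

def split_field_decode(primary, suffix_block, K):
    """Rejoin each decoded prefix with its fixed-length suffix."""
    values, spos, q = [], 0, 0
    for bit in primary:
        if bit == "1":
            q += 1
        else:                          # prefix terminator: consume one suffix
            r = int(suffix_block[spos:spos + K], 2)
            spos += K
            values.append(q * (1 << K) + r)
            q = 0
    return values
```

Because every suffix has a known, fixed length, the decoder can pair prefixes with suffixes without any markers in the suffix block.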
Figure 5. Rate-distortion comparison of JPEG 2000 and JPEG with wavelet codecs using parametric coding: (a) peak signal to noise ratio vs. bits per pixel; (b) RMS error vs. compression ratio. (Infinite PSNR for lossless points is plotted as 100 dB to fit the graph.)

Greater protection of the primary block (relative to the suffix block) may be accomplished in a variety of ways. The simplest and likely most common method is to place the primary block earlier within the bitstream than the suffix block. Because later positions in the bitstream are subject to greater exposure to a wide variety of error effects, earlier placement within the bitstream offers a very simple mechanism for providing more protection to higher priority content. Some examples of effects which expose later portions of the bitstream to a higher incidence of errors include termination of progressive transmission, truncation of a bitstream to meet bit rate targets, the use of drop-tail buffering policies in QoS protocols, strategies for progressive coding on noisy channels [30], and various other limitations on storage and transmission channel bandwidth. In the event of bitstream truncation or termination, the missing suffix bits are typically replaced with zeros, thus minimizing the decoded value within the range of the applicable superbin. This strategy has the benefit of improving the visual masking of the distortion effects, especially when the corrupted coefficient value is small. Alternative estimation strategies include using a decoded value set to either the midpoint or the expected value over the range of the applicable superbin.
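The zero-fill strategy for a truncated suffix block can be illustrated with a short, self-contained sketch (our own code; assumes a Rice code with m = 2**K and all suffix bits segregated to the suffix block):

```python
def decode_with_truncation(primary, suffix_block, K):
    """Decode split-field Rice code words when the suffix block may be truncated:
    missing suffix bits are replaced with zeros, so an affected value falls to
    the bottom of its superbin instead of desynchronizing the stream."""
    values, spos, q = [], 0, 0
    for bit in primary:
        if bit == "1":
            q += 1
        else:
            field = suffix_block[spos:spos + K].ljust(K, "0")   # zero-fill losses
            spos += K
            values.append(q * (1 << K) + (int(field, 2) if K > 0 else 0))
            q = 0
    return values
```

With K = 2, the values 14, 3, 7 produce primary block 1110010 and suffix block 101111; truncating the suffix block to just 10 decodes to 14, 0, 4, each value remaining within the superbin indicated by its intact prefix.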
In some applications, more elaborate protection schemes may be used to provide even higher levels of protection for the primary block. One approach is to use varying levels of error correction coding (ECC) for unequal error protection (UEP), providing a higher level of protection to the primary block. Another UEP approach is to transmit different blocks using different channels of a quadrature amplitude modulation (QAM) constellation, with the different channels inherently subject to differing signal to noise ratios. Still another approach is to apply different strategies for packet retransmission in an Automatic Repeat-reQuest (ARQ) error control protocol, so that higher priority packets are retransmitted upon errors, while lower priority packets are not. For example, the primary block could be sent using Transmission Control Protocol (TCP), which uses ARQ methods to provide a measure of reliability in delivery, while the suffix block could be sent using User Datagram Protocol (UDP), which provides no assurance of reliable delivery.

Split field coding may be applied to data sets not characterized by a well-behaved distribution. This application can be accomplished by initially sorting the data set to produce a re-ordered monotonically descending distribution. This approach will result in error resilience in the sense that a bit error in the suffix field will not result in a loss of code word synchronization. However, the resulting error in the decoded value will not be constrained to a particular range, since the sorting of the data set will destroy the contiguity of the superbins associated with specific prefix field values.

4. EXPERIMENTAL RESULTS

4.1 Coding and computational efficiency of parametric coding

Parametric coding methods provide rate-distortion performance competitive with more elaborate methods such as the embedded binary arithmetic coder employed in JPEG 2000 [31], while offering significant advantages in computational complexity.
Figure 5 shows the rate-distortion performance of both JPEG and JPEG 2000 in comparison with wavelet codecs using parametric coding. For JPEG 2000, results are shown for both the 5-3 and 9-7 wavelet filters. Two parametric wavelet codecs are shown, one using forward-adaptive start-step coding and the other using backward-adaptive Rice coding, but both using the 5-3 wavelet filter. The results are plotted in two ways: peak signal to noise ratio vs. bits per pixel (which emphasizes differences at high bit rates) and root mean squared error vs. compression ratio (which emphasizes differences at low bit rates). JPEG 2000 results were obtained using the Kakadu version 6 codec. The results shown here were obtained from the 512x512 monochrome Lena image, but are typical of results obtained on any natural imagery. The performance of all four wavelet codecs is very similar and distinctly better than the JPEG results. The 9-7 filter does not support lossless compression, but provides slightly better results for lossy compression (as it was designed to do), except at very low bit rates where the parametric codecs provide a slight advantage.

Proc. of SPIE Vol.

Figure 6. Compression and decompression timing for JPEG 2000 and wavelet codecs using parametric coding: (a) compression timing; (b) decompression timing. Curves shown: JPEG 2000 (5-3 filter), JPEG 2000 (9-7 filter), 5-3 wavelet with start-step coding, and 5-3 wavelet with Rice coding, plotted against bits per pixel.

Figure 6 shows timing results for JPEG 2000 and the parametric wavelet codecs. It is difficult to measure or compare computational complexity for different algorithms. To focus the comparison on the core wavelet algorithms, we used a single plane image (the 512x512 Lena image). We used optimized code bases (Kakadu 6 for JPEG 2000 and Boeing's EagleEye codec for the parametric wavelet coding), compiled with Visual C++ 6.0, using standard optimizations, but with multi-threading and SIMD extensions disabled. Simulations were performed on a Dell Latitude D620 with an 1828 MHz Core 2 Duo processor, with all other applications disabled.
Timing was measured using the QueryPerformanceFrequency() and QueryPerformanceCounter() Win32 functions, which provide about 2 microsecond resolution (compared to 1 msec for the clock() function). Timing was performed end-to-end (including image file I/O, but not compressed file I/O). Reported compression times have 150 microseconds subtracted and reported decompression times have 1200 microseconds subtracted from the measured end-to-end times to account for the image file I/O. We performed multiple simulations, taking the median as the best time measurement. These results show a several-fold reduction in complexity for parametric coding compared to the embedded block coding with optimal truncation (EBCOT) approach of JPEG 2000 [31]. This is not surprising, as the embedded coding requires repeated passes through the coefficients. The context adaptivity of the EBCOT approach also adds complexity. The JPEG 2000 results also show increased complexity for the 9-7 filters, which use floating point computations.

4.2 Proportion of suffix bits in split field coding

The utility of the split field coding approach depends significantly on the proportion of suffix bits in the compressed bitstream. To investigate this, we used the EagleEye wavelet codec with a split field coding implementation based on forward-adaptive start-step coding. The use of the split field coding strategy has a negligible effect on the bit rate, because the code words generated by the start-step coding are not changed, but are simply rearranged, with all or some of the coefficient code word suffix bits segregated to a suffix block. Table 2 shows proportions of suffix bits for lossless compression of eleven monochrome sample images from the JPEG 2000 test set. The cafe256 image is a subset of the cafe image, cropped to 256x256 to fit at full resolution within this paper. Several trends are apparent from these results.
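The median-of-runs timing methodology can be sketched in a platform-neutral way. This is a hedged illustration (the paper used the Win32 high-resolution counters, not Python, and `time_median` is a hypothetical helper); it shows the same pattern of repeated runs, a median estimate, and subtraction of a fixed measured overhead such as image file I/O.

```python
import time
from statistics import median

def time_median(fn, runs=11, overhead_s=0.0):
    """Time fn over several runs, report the median minus a fixed
    measured overhead. perf_counter() is the high-resolution counter.
    """
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - t0)
    return max(median(samples) - overhead_s, 0.0)

# Example: timing a trivial workload.
elapsed = time_median(lambda: sum(range(100000)), runs=5)
assert elapsed >= 0.0
```

The median is preferred over the mean here because occasional OS scheduling interruptions inflate a few samples without shifting the central tendency.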
First, the proportion of suffix bits increases with the complexity of the image (as measured by the lossless compression bit rate). This is not surprising, as the image complexity takes the form of either fine details (sharp edges, point features, texture) or noise within the image. Such features tend to increase the variance of the coefficient distributions, leading to larger superbins and a higher proportion of suffix bits.

Table 2. Proportion of suffix bits for split field coding with lossless compression of sample monochrome images (columns: image, width, height, pixel depth, BPP, total bytes, suffix bytes, suffix percent; images: seismic, mat, goldhill, aerial, cafe, cafe256, txtur, elev, ct, x_ray, sar).

Second, the proportion of suffix bits increases with the pixel depth of the image. This is also not surprising. Increased pixel depth is commonly utilized to provide a large dynamic range to accommodate the large variation in illumination conditions and contrast in visible imagery. The increased pixel depths have two effects which increase the image complexity: first, well-illuminated features in the imagery are represented with increased dynamic range, and second, the dynamic range commonly extends below the noise floor of the data. Both of these effects tend to increase the variance of the coefficient distributions, and consequently the proportion of suffix bits. This is important, as many applications, including digital cameras, are seeing increased use of high dynamic range imagery. For the image results displayed in the simulations that follow, we chose the cafe256 image, which provides a high proportion of suffix bits (more typical of high dynamic range imagery), as well as 8 bit grayscale and small size (suitable for use in printed results). Table 3 shows proportions of suffix bits for lossy compression of the cafe256 image. Not surprisingly, the proportion of suffix bits decreases rapidly as the bit rate is reduced. At high bit rates, much of the bitstream is expended on the representation of fine details and noise within the image. As the bit rate is reduced, this content is removed by quantization, reducing the variance of the quantized coefficient distributions and the proportion of suffix bits.
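The trend that higher coefficient variance yields a higher suffix proportion can be sketched numerically. This is a hedged illustration using a simple Rice model rather than the paper's start-step codec (`suffix_fraction` is a hypothetical helper): with Rice parameter k, each code word spends k bits on the suffix out of (v >> k) + 1 + k total bits, so wider distributions coded with larger k devote proportionally more bits to suffixes.

```python
def suffix_fraction(values, k):
    """Fraction of Rice code bits that are fixed length suffix bits.
    Each value v costs (v >> k) + 1 prefix bits and k suffix bits.
    """
    prefix = sum((v >> k) + 1 for v in values)
    suffix = k * len(values)
    return suffix / (prefix + suffix)

# Low-variance data with a small k spends few bits on suffixes;
# high-variance data coded with a larger k spends proportionally more.
low = [0, 1, 0, 2, 1, 0]           # k = 0 is a plausible choice here
high = [40, 7, 93, 12, 55, 28]     # k = 4 is a plausible choice here
assert suffix_fraction(low, 0) < suffix_fraction(high, 4)
```

This mirrors the tabulated behavior: complex or deep-pixel images have higher-variance coefficients, larger superbins (larger k), and therefore a larger suffix percentage.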
These results, together with the results of Table 2, imply that split field coding is primarily useful for lossless or high bit rate compression, or for compression of high dynamic range imagery.

Table 3. Proportion of suffix bits for split field coding with lossy compression of the cafe256 image (columns: BPP, PSNR, total bytes, suffix bytes, suffix percent; the first row is the lossless case, with infinite PSNR).

Table 4 shows the distribution of the suffix bits across the various scales and bitplanes of the wavelet transform. These results are reported in units of bytes, because data is written to the bitstream as eight bit bytes. Thus, Table 4 shows the distribution of the suffix bytes for lossless compression of the cafe256 image, with the count in each bitplane of each scale rounded to the nearest byte.

Table 4. Distribution of suffix bits by scale and bitplane for lossless compression of the cafe256 image (rows: scales, including the approximation band; columns: bitplanes).

An examination of this distribution shows that for this case 98.8% of the suffix bits fall into the five least significant bitplanes of the four least significant scales. Such distributions provide guidelines for which suffix bits should be placed within the suffix block. The more significant bits in the coarser scales carry higher weight under L2 norms and can contribute considerably to distortion measures such as RMS error and PSNR. Thus, it is advantageous to keep these bits out of the suffix block so as not to expose them to a higher incidence of errors. The distribution above also shows that the vast majority of suffix bits are located in the less significant bitplanes of the finer scales, so that most of the benefits of split field coding may be preserved even while allocating the higher weight suffix bits to the primary block. For the use of split field coding in the simulations reported in the rest of this paper, only the suffix bits from the five least significant bitplanes of the four least significant scales are allocated to the suffix block; as noted above, this comprises 98.8% of the suffix bits.

4.3 Effects of truncation on split field coding

Figure 7. Examples of truncation of a lossless bitstream, with and without split field coding: (a) truncation of suffix block (all scales and bitplanes); (b) truncation of suffix block (up to scale 4 and bitplane 5); (c) difference between original image and (b); (d) truncation of 50% of bitstream with resolution progression.

Figure 7 depicts several images showing the results of truncating a lossless bitstream, compressed with and without split field coding. Truncation in this case was applied to a lossless compressed bitstream for the cafe256 image (using
Boeing's EagleEye codec). Figure 7(a) shows the results of truncating the entire suffix block for the case where all suffix bits (up to scale 7 and bitplane 8) are included in the suffix block (referred to as full suffix block allocation below). Significant distortion is apparent due to the elimination of some high significance bits at the coarser scales (including the approximation band). Figure 7(b) shows the results of truncating the suffix block with the allocation limited to the four finest scales and five least significant bitplanes (referred to as limited suffix block allocation below). Here the distortion is much reduced, being limited to the lower significance bits from the finer scales. The distortion that is present is primarily in the form of mild blotchiness in smooth areas; even so, this distortion is not pronounced. Figure 7(c) shows the difference between the original image and Figure 7(b). The loss is significantly more apparent in this difference image, which indicates that much of the distortion is masked by image content. This masking effect is not surprising, since errors on the less significant bits of the wavelet coefficients introduce distortion elements in the form of low amplitude noise wavelets, which are largely masked by the corresponding signal wavelets. Finally, Figure 7(d) shows the results of truncating 50% of the bitstream with a resolution progression. This results in distinct aliasing, especially along diagonal edges in the image.

Figure 8 shows the rate-distortion effects of bitstream truncation with and without split field coding. The lossless compression was performed under three conditions: (a) using start-step coding without the split field coding strategy (resulting in a bitstream with a resolution progression), (b) using split field coding with full suffix block allocation, and (c) using split field coding with suffix block allocation limited to the four finest scales and five least significant bitplanes.
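The suffix-block truncation behavior examined in these experiments, in which missing suffix bits are zero-filled so that each affected coefficient falls to the low end of its superbin, can be sketched with a toy Rice-style decoder. This is a hedged illustration with hypothetical function names, not the paper's wavelet codec.

```python
def decode_with_truncated_suffixes(prefix_q_list, suffix_bits, k, kept_bits):
    """Decode Rice-style (prefix, suffix) data when the suffix block has
    been truncated after kept_bits bits; missing suffix bits are replaced
    with zeros, so each affected value falls to its superbin minimum.
    """
    values, j = [], 0
    for q in prefix_q_list:
        r = 0
        for _ in range(k):
            bit = suffix_bits[j] if j < kept_bits else 0   # zero-fill
            r = (r << 1) | bit
            j += 1
        values.append((q << k) + r)
    return values

# Values 21 and 13 with k = 3: prefix quotients 2 and 1, suffixes 101 and 101.
suffix = [1, 0, 1, 1, 0, 1]
full = decode_with_truncated_suffixes([2, 1], suffix, 3, kept_bits=6)
cut = decode_with_truncated_suffixes([2, 1], suffix, 3, kept_bits=3)
assert full == [21, 13]
assert cut == [21, 8]          # second value drops to its superbin minimum
assert all(0 <= a - b < 8 for a, b in zip(full, cut))
```

Zero-filling only ever reduces coefficient magnitudes (within the superbin), which is consistent with the visual masking behavior described for Figure 7(b).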
With split field coding, the lossless compression produced a bitstream of bytes, of which bytes (51.5%) were suffix bytes. With the limited suffix block allocation, there were still bytes in the suffix block, or 98.8% of the suffix bytes. These results are plotted against the lossy rate-distortion results obtained using split field coding (with forward-adaptive start-step coding) without truncation. Several things may be noted from these results. First, the lossy compression results are equivalent to what would be expected for truncation of an embedded bitstream using a quality progression (such as JPEG 2000). (We have already shown that the parametric coding approach produces rate-distortion performance very similar to JPEG 2000.) Second, truncation of the bitstream with a resolution progression (compressed by start-step coding without split field coding) results in substantially poorer rate-distortion performance than standard lossy compression. Third, the split field coding case with full suffix block allocation suffers a severe drop in quality (as measured by PSNR) as soon as the truncation eliminates the full suffix block. At the point where the entire suffix block (28794 bytes) has been truncated, the PSNR drops to dB. This occurs because the suffix block in this case included the more significant bitplanes in the coarser scales, which introduce considerable distortion when they are truncated. Fourth, the split field coding case with the limited suffix block allocation provides R-D performance which is significantly better than a resolution progression, but not as good as a quality progression. In the limited allocation case, with the truncation of the entire suffix block (in this case bytes), the PSNR drops to dB, significantly better than for the case of the full suffix block allocation.

4.4 Effects of channel errors on split field coding

Figure 9 provides measures of the distortion effects of channel errors on split field coding.
For these results, the cafe256 image was losslessly compressed using split field coding with limited suffix block allocation (the four finest scales and the five least significant bitplanes). Since the error resilience properties of the suffix bits depend upon correct decoding of the corresponding prefix bits, the primary block must be transmitted with an extremely low probability of errors, while the error-resilient suffix block may be exposed to a relatively higher probability of errors. This may be accomplished using an unequal error protection (UEP) scheme based either on error correction coding (ECC) or on channels with differing SNRs (such as provided by QAM constellations). When an ECC-based UEP scheme is used, the primary block will be transmitted with greater redundancy and a consequently higher channel rate than the suffix block. In that case, it is possible to modify the codebook estimation for split field coding so that the differing channel rates for the primary block and the suffix block are taken into account. This will typically result in a higher proportion of bits for the suffix block, which will be transmitted at a lower channel rate, thus improving throughput. An investigation of these effects is not included within the scope of this paper. For the simulation results reported in Figure 9, a specified bit error rate (BER) is applied only to the suffix block, with rates ranging from 10^-4 (for which about 23 randomly selected suffix block bits out of 230,252 bits were corrupted) up to 1.0 (for which every suffix block bit was flipped). Each BER was applied to the compressed image ten separate times, with the mean, maximum, and minimum PSNRs computed at each BER.

Figure 8. Rate-distortion effects of truncation with split field coding (two cases: suffix block to scale 7, plane 8; suffix block to scale 4, plane 5), resolution progression, and lossy compression (similar to a quality progression); PSNR (dB) vs. bits per pixel.

What is notable about these results is that the channel errors on the suffix bits do not result in catastrophic distortion, but rather a gradual increase in distortion as the bit error rate increases. It is also interesting to note that the PSNR of dB for the case of a BER of 1.0 is close to the PSNR of dB for the case where the suffix block was completely truncated. Figure 10 depicts the effects of channel errors on this same split field coded image. Figure 10(a) shows the effects of a BER of 0.5 on the suffix block (half of the suffix block bits are flipped), with the corresponding difference image in Figure 10(b). This image shows significantly greater blotchiness than the image in Figure 7(b), in which the entire suffix block of the exact same image is truncated. Interestingly enough, Figure 10(a) has a higher PSNR than Figure 7(b) (26.50 dB vs. dB), but looks worse. The reason for this is that the truncation of the bitstream, which results in zeroing of the missing suffix bits, reduces the magnitudes of the affected wavelet coefficients, resulting in better visual masking than when the suffix bits are randomly flipped. Inspecting the difference images for these two cases (in Figure 10(b) and Figure 7(c), respectively) shows that the differences due to truncation manifest significant image structure, whereas the differences due to channel errors appear mostly random. Yet the structured distortion in the truncation case is masked because it is composed of noise wavelets aligned with and masked by corresponding signal wavelets. Figure 10(c) shows the effects of a BER of 10^-1 on the suffix block (one out of every ten suffix block bits is flipped).
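The bounded effect of suffix-bit errors described in this simulation can be sketched as follows. This is a hedged illustration using a simple Rice-style model rather than the paper's wavelet codec (all function names are hypothetical): flipping suffix bits at any BER, even 1.0, never desynchronizes decoding, and each value is perturbed only within its superbin of width 2^k.

```python
import random

def flip_bits(bits, ber, rng):
    """Flip each bit independently with probability ber."""
    return [b ^ 1 if rng.random() < ber else b for b in bits]

def decode(prefixes, suffix_bits, k):
    """Decode Rice-style values from prefix quotients and a suffix stream."""
    values, j = [], 0
    for q in prefixes:
        r = 0
        for _ in range(k):
            r = (r << 1) | suffix_bits[j]
            j += 1
        values.append((q << k) + r)
    return values

rng = random.Random(0)
k = 4
vals = [rng.randrange(256) for _ in range(1000)]
prefixes = [v >> k for v in vals]
suffix = [(v >> i) & 1 for v in vals for i in reversed(range(k))]

for ber in (0.001, 0.1, 1.0):
    noisy = decode(prefixes, flip_bits(suffix, ber, rng), k)
    # No loss of synchronization, and every error stays within its superbin.
    assert len(noisy) == len(vals)
    assert all(abs(a - b) < (1 << k) for a, b in zip(noisy, vals))
```

This bounded, graceful degradation is what produces the gradual PSNR decline seen in Figure 9, in contrast to the catastrophic failure typical of bit errors in ordinary variable length codes.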
Here the distortion (again in the form of blotchiness) is much less apparent, and is revealed more clearly in the corresponding difference image in Figure 10(d). For BERs of 10^-2 and lower (which are not shown), distortion is not visually apparent, and can usually be detected only by computing a difference image or by flickering between the original and the noisy image.

Figure 9. Distortion effects of channel errors applied to the suffix block (maximum, mean, and minimum PSNR vs. suffix block bit error rate).

5. CONCLUSIONS

In this paper, a low complexity, error-resilient entropy coding method is presented, based on the use of parametric coding methods and the separation of the code words into two fields, including an error-resilient fixed length suffix. Experimental results show that this approach provides rate-distortion performance equivalent to more elaborate methods such as embedded arithmetic coding, while achieving a several-fold reduction in complexity. It was also shown that when the suffix fields are isolated to a separate block, the resulting suffix block is highly resilient to various errors such as truncation errors and channel errors. The proposed method is suitable for use with a variety of methods which provide varying protection to different portions of the bitstream, such as unequal error protection or progressive ordering schemes.
Figure 10. Examples of channel errors applied to the suffix block for an image compressed losslessly with split field coding: (a) BER of 0.5 on the suffix block; (b) difference between original image and (a); (c) BER of 10^-1 on the suffix block; (d) difference between original image and (c).

REFERENCES

1. R. F. Rice, "Some Practical Universal Noiseless Coding Techniques," JPL Publication 79-22, Jet Propulsion Laboratory, Pasadena, California, March.
2. D. A. Huffman, "A Method for the Construction of Minimum Redundancy Codes," Proceedings of the IRE, vol. 40, no. 10.
3. J. Rissanen, "Generalized Kraft Inequality and Arithmetic Coding," IBM Journal of Research and Development, vol. 20.
4. I. H. Witten, R. M. Neal, and J. G. Cleary, "Arithmetic Coding for Data Compression," Communications of the ACM, vol. 30, no. 6, June.
5. S. W. Golomb, "Run-Length Encodings," IEEE Transactions on Information Theory, vol. 12, no. 3, July.
6. R. Gallager and D. V. Voorhis, "Optimal source codes for geometrically distributed integer alphabets," IEEE Transactions on Information Theory, vol. 21, no. 2, March.
More information1 Introduction. Abstract
Abstract We extend the work of Sherwood and Zeger [1, 2] to progressive video coding for noisy channels. By utilizing a three-dimensional (3-D) extension of the set partitioning in hierarchical trees (SPIHT)
More informationCompression. Encryption. Decryption. Decompression. Presentation of Information to client site
DOCUMENT Anup Basu Audio Image Video Data Graphics Objectives Compression Encryption Network Communications Decryption Decompression Client site Presentation of Information to client site Multimedia -
More informationTSTE17 System Design, CDIO. General project hints. Behavioral Model. General project hints, cont. Lecture 5. Required documents Modulation, cont.
TSTE17 System Design, CDIO Lecture 5 1 General project hints 2 Project hints and deadline suggestions Required documents Modulation, cont. Requirement specification Channel coding Design specification
More informationFundamentals of Digital Communication
Fundamentals of Digital Communication Network Infrastructures A.A. 2017/18 Digital communication system Analog Digital Input Signal Analog/ Digital Low Pass Filter Sampler Quantizer Source Encoder Channel
More informationA Spatial Mean and Median Filter For Noise Removal in Digital Images
A Spatial Mean and Median Filter For Noise Removal in Digital Images N.Rajesh Kumar 1, J.Uday Kumar 2 Associate Professor, Dept. of ECE, Jaya Prakash Narayan College of Engineering, Mahabubnagar, Telangana,
More informationA Novel Approach of Compressing Images and Assessment on Quality with Scaling Factor
A Novel Approach of Compressing Images and Assessment on Quality with Scaling Factor Umesh 1,Mr. Suraj Rana 2 1 M.Tech Student, 2 Associate Professor (ECE) Department of Electronic and Communication Engineering
More informationAudio Compression using the MLT and SPIHT
Audio Compression using the MLT and SPIHT Mohammed Raad, Alfred Mertins and Ian Burnett School of Electrical, Computer and Telecommunications Engineering University Of Wollongong Northfields Ave Wollongong
More informationTemplates and Image Pyramids
Templates and Image Pyramids 09/06/11 Computational Photography Derek Hoiem, University of Illinois Project 1 Due Monday at 11:59pm Options for displaying results Web interface or redirect (http://www.pa.msu.edu/services/computing/faq/autoredirect.html)
More informationMultimedia Systems Entropy Coding Mahdi Amiri February 2011 Sharif University of Technology
Course Presentation Multimedia Systems Entropy Coding Mahdi Amiri February 2011 Sharif University of Technology Data Compression Motivation Data storage and transmission cost money Use fewest number of
More informationBalancing Bandwidth and Bytes: Managing storage and transmission across a datacast network
Balancing Bandwidth and Bytes: Managing storage and transmission across a datacast network Pete Ludé iblast, Inc. Dan Radke HD+ Associates 1. Introduction The conversion of the nation s broadcast television
More informationLossless Huffman coding image compression implementation in spatial domain by using advanced enhancement techniques
Lossless Huffman coding image compression implementation in spatial domain by using advanced enhancement techniques Ali Tariq Bhatti 1, Dr. Jung H. Kim 2 1,2 Department of Electrical & Computer engineering
More informationPractical Content-Adaptive Subsampling for Image and Video Compression
Practical Content-Adaptive Subsampling for Image and Video Compression Alexander Wong Department of Electrical and Computer Eng. University of Waterloo Waterloo, Ontario, Canada, N2L 3G1 a28wong@engmail.uwaterloo.ca
More informationEfficient Hardware Architecture for EBCOT in JPEG 2000 Using a Feedback Loop from the Rate Controller to the Bit-Plane Coder
Efficient Hardware Architecture for EBCOT in JPEG 2000 Using a Feedback Loop from the Rate Controller to the Bit-Plane Coder Grzegorz Pastuszak Warsaw University of Technology, Institute of Radioelectronics,
More informationSECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS
RADT 3463 - COMPUTERIZED IMAGING Section I: Chapter 2 RADT 3463 Computerized Imaging 1 SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS RADT 3463 COMPUTERIZED IMAGING Section I: Chapter 2 RADT
More informationImage Compression Using Huffman Coding Based On Histogram Information And Image Segmentation
Image Compression Using Huffman Coding Based On Histogram Information And Image Segmentation [1] Dr. Monisha Sharma (Professor) [2] Mr. Chandrashekhar K. (Associate Professor) [3] Lalak Chauhan(M.E. student)
More informationComparative Analysis of WDR-ROI and ASWDR-ROI Image Compression Algorithm for a Grayscale Image
Comparative Analysis of WDR- and ASWDR- Image Compression Algorithm for a Grayscale Image Priyanka Singh #1, Dr. Priti Singh #2, 1 Research Scholar, ECE Department, Amity University, Gurgaon, Haryana,
More informationObjective Evaluation of Edge Blur and Ringing Artefacts: Application to JPEG and JPEG 2000 Image Codecs
Objective Evaluation of Edge Blur and Artefacts: Application to JPEG and JPEG 2 Image Codecs G. A. D. Punchihewa, D. G. Bailey, and R. M. Hodgson Institute of Information Sciences and Technology, Massey
More informationIMAGE PROCESSING: POINT PROCESSES
IMAGE PROCESSING: POINT PROCESSES N. C. State University CSC557 Multimedia Computing and Networking Fall 2001 Lecture # 11 IMAGE PROCESSING: POINT PROCESSES N. C. State University CSC557 Multimedia Computing
More informationProf. Feng Liu. Fall /02/2018
Prof. Feng Liu Fall 2018 http://www.cs.pdx.edu/~fliu/courses/cs447/ 10/02/2018 1 Announcements Free Textbook: Linear Algebra By Jim Hefferon http://joshua.smcvt.edu/linalg.html/ Homework 1 due in class
More informationECE/OPTI533 Digital Image Processing class notes 288 Dr. Robert A. Schowengerdt 2003
Motivation Large amount of data in images Color video: 200Mb/sec Landsat TM multispectral satellite image: 200MB High potential for compression Redundancy (aka correlation) in images spatial, temporal,
More information1 This work was partially supported by NSF Grant No. CCR , and by the URI International Engineering Program.
Combined Error Correcting and Compressing Codes Extended Summary Thomas Wenisch Peter F. Swaszek Augustus K. Uht 1 University of Rhode Island, Kingston RI Submitted to International Symposium on Information
More information88 IEEE TRANSACTIONS ON MULTIMEDIA, VOL. 1, NO. 1, MARCH 1999
88 IEEE TRANSACTIONS ON MULTIMEDIA, VOL. 1, NO. 1, MARCH 1999 Robust Image and Video Transmission Over Spectrally Shaped Channels Using Multicarrier Modulation Haitao Zheng and K. J. Ray Liu, Senior Member,
More informationCooperative Source and Channel Coding for Wireless Multimedia Communications
IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, VOL. 1, NO. 1, MONTH, YEAR 1 Cooperative Source and Channel Coding for Wireless Multimedia Communications Hoi Yin Shutoy, Deniz Gündüz, Elza Erkip,
More informationAn Inherently Calibrated Exposure Control Method for Digital Cameras
An Inherently Calibrated Exposure Control Method for Digital Cameras Cynthia S. Bell Digital Imaging and Video Division, Intel Corporation Chandler, Arizona e-mail: cynthia.bell@intel.com Abstract Digital
More informationComparative Analysis of Lossless Image Compression techniques SPHIT, JPEG-LS and Data Folding
Comparative Analysis of Lossless Compression techniques SPHIT, JPEG-LS and Data Folding Mohd imran, Tasleem Jamal, Misbahul Haque, Mohd Shoaib,,, Department of Computer Engineering, Aligarh Muslim University,
More informationComputer Vision. Howie Choset Introduction to Robotics
Computer Vision Howie Choset http://www.cs.cmu.edu.edu/~choset Introduction to Robotics http://generalrobotics.org What is vision? What is computer vision? Edge Detection Edge Detection Interest points
More informationPERFORMANCE EVALUATION OFADVANCED LOSSLESS IMAGE COMPRESSION TECHNIQUES
PERFORMANCE EVALUATION OFADVANCED LOSSLESS IMAGE COMPRESSION TECHNIQUES M.Amarnath T.IlamParithi Dr.R.Balasubramanian M.E Scholar Research Scholar Professor & Head Department of Computer Science & Engineering
More informationINSTITUTE OF AERONAUTICAL ENGINEERING Dundigal, Hyderabad
INSTITUTE OF AERONAUTICAL ENGINEERING Dundigal, Hyderabad - 500 043 ELECTRONICS AND COMMUNICATION ENGINEERING QUESTION BANK Course Title Course Code Class Branch DIGITAL IMAGE PROCESSING A70436 IV B. Tech.
More informationChapter 2 Distributed Consensus Estimation of Wireless Sensor Networks
Chapter 2 Distributed Consensus Estimation of Wireless Sensor Networks Recently, consensus based distributed estimation has attracted considerable attention from various fields to estimate deterministic
More informationEC 6501 DIGITAL COMMUNICATION UNIT - II PART A
EC 6501 DIGITAL COMMUNICATION 1.What is the need of prediction filtering? UNIT - II PART A [N/D-16] Prediction filtering is used mostly in audio signal processing and speech processing for representing
More informationDigital Watermarking Using Homogeneity in Image
Digital Watermarking Using Homogeneity in Image S. K. Mitra, M. K. Kundu, C. A. Murthy, B. B. Bhattacharya and T. Acharya Dhirubhai Ambani Institute of Information and Communication Technology Gandhinagar
More informationCHAPTER. delta-sigma modulators 1.0
CHAPTER 1 CHAPTER Conventional delta-sigma modulators 1.0 This Chapter presents the traditional first- and second-order DSM. The main sources for non-ideal operation are described together with some commonly
More informationH.264 Video with Hierarchical QAM
Prioritized Transmission of Data Partitioned H.264 Video with Hierarchical QAM B. Barmada, M. M. Ghandi, E.V. Jones and M. Ghanbari Abstract In this Letter hierarchical quadrature amplitude modulation
More informationA Robust Nonlinear Filtering Approach to Inverse Halftoning
Journal of Visual Communication and Image Representation 12, 84 95 (2001) doi:10.1006/jvci.2000.0464, available online at http://www.idealibrary.com on A Robust Nonlinear Filtering Approach to Inverse
More informationEEE 309 Communication Theory
EEE 309 Communication Theory Semester: January 2016 Dr. Md. Farhad Hossain Associate Professor Department of EEE, BUET Email: mfarhadhossain@eee.buet.ac.bd Office: ECE 331, ECE Building Part 05 Pulse Code
More informationPooja Rani(M.tech) *, Sonal ** * M.Tech Student, ** Assistant Professor
A Study of Image Compression Techniques Pooja Rani(M.tech) *, Sonal ** * M.Tech Student, ** Assistant Professor Department of Computer Science & Engineering, BPS Mahila Vishvavidyalya, Sonipat kulriapooja@gmail.com,
More informationPerformance Evaluation of H.264 AVC Using CABAC Entropy Coding For Image Compression
Conference on Advances in Communication and Control Systems 2013 (CAC2S 2013) Performance Evaluation of H.264 AVC Using CABAC Entropy Coding For Image Compression Mr.P.S.Jagadeesh Kumar Associate Professor,
More informationFigure 1 HDR image fusion example
TN-0903 Date: 10/06/09 Using image fusion to capture high-dynamic range (hdr) scenes High dynamic range (HDR) refers to the ability to distinguish details in scenes containing both very bright and relatively
More informationalgorithm with WDR-based algorithms
Comparison of the JPEG2000 lossy image compression algorithm with WDR-based algorithms James S. Walker walkerjs@uwec.edu Ying-Jui Chen yrchen@mit.edu Tarek M. Elgindi elgindtm@uwec.edu Department of Mathematics;
More informationISSN: Seema G Bhateja et al, International Journal of Computer Science & Communication Networks,Vol 1(3),
A Similar Structure Block Prediction for Lossless Image Compression C.S.Rawat, Seema G.Bhateja, Dr. Sukadev Meher Ph.D Scholar NIT Rourkela, M.E. Scholar VESIT Chembur, Prof and Head of ECE Dept NIT Rourkela
More informationAN ERROR LIMITED AREA EFFICIENT TRUNCATED MULTIPLIER FOR IMAGE COMPRESSION
AN ERROR LIMITED AREA EFFICIENT TRUNCATED MULTIPLIER FOR IMAGE COMPRESSION K.Mahesh #1, M.Pushpalatha *2 #1 M.Phil.,(Scholar), Padmavani Arts and Science College. *2 Assistant Professor, Padmavani Arts
More informationRun-Length Based Huffman Coding
Chapter 5 Run-Length Based Huffman Coding This chapter presents a multistage encoding technique to reduce the test data volume and test power in scan-based test applications. We have proposed a statistical
More informationHYBRID MEDICAL IMAGE COMPRESSION USING SPIHT AND DB WAVELET
HYBRID MEDICAL IMAGE COMPRESSION USING SPIHT AND DB WAVELET Rahul Sharma, Chandrashekhar Kamargaonkar and Dr. Monisha Sharma Abstract Medical imaging produces digital form of human body pictures. There
More informationLecture5: Lossless Compression Techniques
Fixed to fixed mapping: we encoded source symbols of fixed length into fixed length code sequences Fixed to variable mapping: we encoded source symbols of fixed length into variable length code sequences
More informationExploiting "Approximate Communication" for Mobile Media Applications
Exploiting "Approximate Communication" for Mobile Media Applications Sayandeep Sen, Stephen Schmitt, Mason Donahue, Suman Banerjee University of Wisconsin, Madison, WI 53706, USA ABSTRACT Errors are integral
More informationIterative Joint Source/Channel Decoding for JPEG2000
Iterative Joint Source/Channel Decoding for JPEG Lingling Pu, Zhenyu Wu, Ali Bilgin, Michael W. Marcellin, and Bane Vasic Dept. of Electrical and Computer Engineering The University of Arizona, Tucson,
More informationLab/Project Error Control Coding using LDPC Codes and HARQ
Linköping University Campus Norrköping Department of Science and Technology Erik Bergfeldt TNE066 Telecommunications Lab/Project Error Control Coding using LDPC Codes and HARQ Error control coding is an
More informationDigital Image Processing 3/e
Laboratory Projects for Digital Image Processing 3/e by Gonzalez and Woods 2008 Prentice Hall Upper Saddle River, NJ 07458 USA www.imageprocessingplace.com The following sample laboratory projects are
More information