Satellite Image Fusion Algorithm using Gaussian Distribution model on Spectrum Range


Satellite Image Fusion Algorithm using Gaussian Distribution Model on Spectrum Range

Younggun Lee 1 and Namik Cho 2
1 Department of Electrical Engineering and Computer Science, Korea Air Force Academy, Korea
2 Department of Electrical and Computer Engineering, Seoul National University, Korea

Abstract - This paper proposes a new wavelet-domain satellite image fusion algorithm. The traditional fusion algorithm using the wavelet transform produces a high-resolution satellite image by extracting the high-spatial-resolution component from the panchromatic image and adding it to the 3-channel images in the wavelet domain. In this process, the same component is added to every channel, and thus channel properties are not considered. Hence, in this paper we present a new method that considers the intensity and spectral range of each image, and their relative spectral responses. For this purpose, we represent the spectral response of each channel by a sum of Gaussian functions and then obtain a high spatial and spectral resolution image by adjusting the modeled Gaussian functions. The experimental results on IKONOS satellite images show that the proposed method provides better performance in PSNR, RMSE, and correlation coefficients compared to the conventional methods.

Keywords: satellite image, wavelet domain, Gaussian function, relative spectral response

1 Introduction

Most imaging systems in satellites have some limitations compared to common digital camera systems [1]: the incoming energy to the sensor is very low, and the bandwidth for data transmission is not very high. Thus, not every component of a satellite image is obtained in full resolution. In general, only the panchromatic (P) image is captured in full resolution, and the other multispectral images, which correspond to the color bands (R, G, B) and the near-infrared range (NIR), are obtained in reduced resolution. Receiving these images, the full-resolution color image is fused from the P, R, G, B, and NIR images [2].
There have been many image fusion methods, which can be roughly categorized into three classes: intensity-hue-saturation (IHS) methods [3]–[5], principal component analysis (PCA) methods [6], [7], and wavelet-domain methods [8]–[12]. Among these approaches, the IHS method is the most popular because it requires the least amount of computation [3]. In this method, R, G, and B are enlarged to the resolution of P and then transformed into intensity (I), hue (H), and saturation (S) components. The I component is simply replaced by P, and the inverse transform is then taken as the full-resolution color (RGB) image [4]. In PCA approaches, the multispectral images are first decomposed by eigenvalue analysis. The component that corresponds to the largest eigenvalue is considered the intensity component, and this is replaced by P [13], [15]. However, since neither the I component nor the first PCA component is actually the same as P, these approaches introduce some color distortion [7]. When color distortion is a concern, the wavelet-based methods are preferred because they show less distortion than the IHS or PCA approaches [10], [11]. Their disadvantage is that they require more computation than the IHS method, but this is not a big concern given the power of today's computers. In the wavelet approaches, P is decomposed down to the resolution of the multispectral components. For example, if the resolution of a multispectral image is a quarter of that of P, the dyadic wavelet decomposition is applied to P twice, or the multispectral images are interpolated to the size of P and an undecimated wavelet decomposition is applied twice to every component. When the lowest band (scale coefficients) of P is replaced with R, its inverse wavelet transform results in the high-resolution R image. The other high-resolution images (G, B, NIR) are obtained in the same way, and thus we get a higher-resolution color image.
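The fast IHS substitution above can be sketched in a few lines. This is a minimal NumPy sketch assuming the linear mean-intensity IHS model, in which replacing I by PAN and inverting the transform reduces to adding the same offset to every band; the function name is illustrative.

```python
import numpy as np

def ihs_fuse(pan, r, g, b):
    # Fast IHS substitution: with the linear model I = (R + G + B) / 3,
    # replacing I by PAN and inverting the transform is equivalent to
    # adding the same offset (PAN - I) to every band.
    i = (r + g + b) / 3.0
    delta = pan - i
    return r + delta, g + delta, b + delta
```

All inputs are same-size float arrays, i.e., the multispectral bands are assumed to be already interpolated to the resolution of PAN.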
This can also be interpreted as adding the higher-frequency components extracted from P to each multispectral component. However, every multispectral component is given the same amount of high-frequency content in the conventional schemes [10], [11], which does not match the fact that each color component has a different intensity portion at each pixel and a different spectral relationship with P. More specifically, Fig. 1 shows the relative spectral response of each component, where it can be seen that R and NIR are inside the range that P covers

whereas B and G are not. The assumption in satellite image fusion is that the panchromatic image is composed of the multispectral bands; specifically, it is represented as

    P = R + G + B + N.    (1)

Using this assumption, high-frequency components are obtained from P and complement the low-resolution multispectral images. The proposed algorithm describes every component by a sum of Gaussian distribution functions and then represents P according to the basis functions modeled from the spectral responses of R, G, B, and NIR. The experiments on IKONOS satellite images show that the proposed algorithm gives higher PSNR and correlation coefficients compared to the conventional methods.

The rest of this paper is organized as follows. In Section 2, the notations that will be used are first summarized, and the conventional wavelet-domain image fusion method is explained. The proposed algorithm is given in Section 3, and the experimental results are given in Section 4. Finally, Section 5 concludes the paper.

2 Wavelet-domain satellite image fusion method

Before introducing the conventional wavelet-domain image fusion algorithm, the notations and abbreviations that will be used are summarized here.

2.1 Notations

P denotes the panchromatic image, {R, G, B, N} represent the low-resolution red, green, blue, and NIR components, and {R~, G~, B~, N~} are the enlargements (interpolations) of {R, G, B, N} to the size of P. {R^, G^, B^, N^} denote the high-resolution spectral images that are estimated from P considering the relative intensity portion, and {R_F, G_F, B_F, N_F} are the final compositions of the high-resolution images. Each of these notations is a 2-dimensional matrix whose elements are pixel intensities. The size of P is the full resolution, and the matrices {R, G, B, N} are half or quarter sized. For the interpolation of R to R~ (and likewise for the other components), a simple bilinear approach is sufficient [19]. For better performance, more sophisticated interpolation or super-resolution methods can also be used. Let us define a few more notations for convenience.
First, the average pixel intensities of {R~, G~, B~, N~} are defined as {m_R, m_G, m_B, m_N}. The average of all these components is denoted as m, and the relative intensities are denoted as {r_R, r_G, r_B, r_N}, which are defined as follows:

    m = (m_R + m_G + m_B + m_N) / 4,    (2)
    r_X = m_X / m,  X in {R, G, B, N}.    (3)

Finally, w_k^P denotes the wavelet coefficient matrix of P in the k-th band. Note that w_k^P is also a matrix with the size of P, because the undecimated wavelet transform is used in the image fusion.

2.2 Conventional wavelet-domain fusion algorithm

Among the many wavelet decomposition methods, we consider the "à trous" method [11] because it is the most frequently used in the conventional wavelet-domain fusion algorithms [10], [16]. In this method, the B3 (3rd-order spline) function is used as the scaling function for the wavelet decomposition [17]. Specifically, the convolution of the given image with the scaling function yields the first scale coefficients, and the subtraction of these scale coefficients from the given image gives the wavelet coefficients in the first band. This process is repeated on the scale coefficients to generate higher scale and wavelet coefficients. The wavelet decomposition of the panchromatic image according to this process can be represented as

    P = c_n + sum_{k=1..n} w_k^P,    (4)

where c_n is the n-th scale coefficients, and the rest of the notations are given in the previous subsection. In the conventional AWRGB wavelet-domain fusion algorithm [11], all the multispectral components are first interpolated to the size of P. As stated previously, simple bilinear interpolation is sufficient, and more complicated methods can be considered for better but minor performance improvement [19], [20]. Each of these interpolated multispectral images is considered the approximation coefficients of a (actually non-existing) high-resolution multispectral image. Hence, the high-resolution images are estimated as follows:

    R_F = R~ + sum_{k=1..n} w_k^P.    (5)

From this equation, it can be seen that the same amount of high-frequency content (the wavelet coefficients extracted from P) is added to all the spectral components in the AWRGB algorithm.
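The à trous decomposition of eq. (4) and the additive fusion of eq. (5) can be sketched as follows. This is a minimal NumPy sketch: the B3 spline kernel [1, 4, 6, 4, 1]/16 is applied separably with holes (dilation 2^j at level j), using periodic boundaries via np.roll for brevity; function names are illustrative.

```python
import numpy as np

B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # B3 cubic-spline scaling filter

def _smooth(img, step):
    # Separable "a trous" convolution: the 5-tap kernel is dilated by
    # `step` (holes), with periodic boundary handling for brevity.
    out = img.astype(float)
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for offset, coeff in zip((-2, -1, 0, 1, 2), B3):
            acc += coeff * np.roll(out, offset * step, axis=axis)
        out = acc
    return out

def atrous_planes(img, levels):
    # Returns (wavelet planes w_1..w_n, residual c_n) with
    # img == w_1 + ... + w_n + c_n, mirroring eq. (4).
    planes, c = [], img.astype(float)
    for j in range(levels):
        c_next = _smooth(c, 2 ** j)
        planes.append(c - c_next)   # wavelet plane at this scale
        c = c_next
    return planes, c

def awrgb_fuse(pan, band_up, levels=2):
    # Additive wavelet fusion: add PAN's detail planes to a band
    # already interpolated to PAN's size, as in eq. (5).
    planes, _ = atrous_planes(pan, levels)
    return band_up + sum(planes)
```

Because the decomposition telescopes, adding the detail planes back to the residual reconstructs PAN exactly, and adding them to an upsampled band performs the same additive injection as eq. (5).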

3 Proposed algorithm

In general, each multispectral image has a different average intensity and a different amount of spectrum overlap with P, as shown in Fig. 1. Hence, the proposed algorithm is based on the idea that adding a different amount of high frequency to each channel brings better results than adding the same amount. For this purpose, we first estimate the high-frequency component of each spectral image {R^, G^, B^, N^} defined in Subsection 2.1 from P, considering the relative pixel intensities. A naive method would be (for the example of R)

    R^(i, j) = [R~(i, j) / (R~(i, j) + G~(i, j) + B~(i, j) + N~(i, j))] P(i, j),    (6)

where A(i, j) means the (i, j)-th element of the matrix A. However, since each component has a different energy, and the sum of these energies differs from that of P, we normalize each component by the average intensity. This is why we defined the normalization factors m and r_X in Subsection 2.1. As an example, when m_R (the average pixel intensity of R~) is larger than m (the average pixel intensity over all the components), the red component should be amplified to be as effective as the other components. This is achieved by multiplying r_R to each pixel of R^, because r_R = m_R / m is larger than 1 in this case. Based on these normalization factors, the high-frequency components are estimated by scaling the naive estimates of eq. (6) by r_X. The wavelet coefficients of these images, which are the estimates of the high-resolution images considering the relative color intensity, are appended to the conventional wavelet addition:

    R_F = R~ + sum_{k=1..n} w_k^P + T_R sum_{k=1..n} w_k^{R^}.    (7)

In appending the coefficients, the amount of spectrum overlap in Fig. 1 is considered. In the case of R, it is inside the spectrum of P, and thus the amount of high-frequency addition needs no special consideration. But in the case of G and B, much of their energy is not inside P. This means that the addition of high-frequency components to G and B needs to be boosted inversely proportionally to the amount of overlap. For this purpose, we define a measure of overlap using a Gaussian distribution model. The proposed algorithm assumes that the bases of P and the multispectral components are the same.

Fig. 1. IKONOS relative spectral response

In more detail, the proposed algorithm consists of three steps. The first step is to represent the 4-channel spectral responses (R, G, B, NIR) by 2 or 3 Gaussian distribution functions each. The second step is to represent the PAN spectral response by using the basis functions of the 4-channel images obtained in the previous step. The final step is to find the parameters T for fusing the results. Each step is explained as follows.

3.1 Modeling spectral responses by Gaussian functions

First of all, the spectral responsivity of each channel is described by a sum of 2 or 3 Gaussian distribution functions, where each Gaussian function is

    g(lambda) = A exp( -(lambda - mu)^2 / (2 sigma^2) ).    (8)

The number of Gaussian functions depends on the number of peaks. For example, the R channel has 2 peaks, near 655 and 680 nm, as shown in Fig. 2. Since the R and B channels have 2 peaks, they are represented as sums of 2 Gaussian functions. The G and NIR channels have 3 peaks and are thus represented by sums of 3 Gaussian functions. The values of A, mu, and sigma for each function are summarized in Table 1, where the fitting error ratio is the ratio of the difference between the original response and the modeled function:

    fitting error ratio = sum_lambda | S_X(lambda) - sum_i g_i(lambda) | / sum_lambda S_X(lambda),    (9)

where g_i is the Gaussian function of the i-th peak and S_X is the spectral response of the X channel.
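The sum-of-Gaussians modeling above can be sketched as follows. The (A, mu, sigma) triples in the test are illustrative values, not those of Table 1, and the L1 form of the fitting error ratio is an assumption about eq. (9).

```python
import numpy as np

def gaussian(lam, A, mu, sigma):
    # Single basis Gaussian A * exp(-(lam - mu)^2 / (2 sigma^2)), as in eq. (8).
    return A * np.exp(-(lam - mu) ** 2 / (2.0 * sigma ** 2))

def gaussian_sum(lam, params):
    # Sum-of-Gaussians model of a channel's relative spectral response;
    # `params` holds one (A, mu, sigma) triple per peak of the channel.
    return sum(gaussian(lam, *p) for p in params)

def fitting_error_ratio(response, model):
    # Relative L1 deviation between the measured response and the model
    # (the exact norm used by eq. (9) is an assumption).
    return np.sum(np.abs(response - model)) / np.sum(np.abs(response))
```

In practice the triples would be fitted to the sensor's published response curves (e.g., with a least-squares fitter), with one Gaussian per visible peak as described in the text.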

Fig. 2. Representation of each channel by the sum of Gaussian distribution functions: (a) R channel, (b) G channel, (c) B channel, (d) NIR channel.

Table 1. Basis Gaussian functions of the 4 channels: the scale factor A, center, and width of each Gaussian per channel, with fitting error ratios of 7.4% (R), 4.86% (G), 5.8% (B), and .5% (NIR).

3.2 Basis functions for modeling the PAN spectrum

The basis functions for the representation of the P image are acquired from the Gaussian functions used for each channel. The area of a Gaussian function is

    integral g(lambda) d lambda = A sigma sqrt(2 pi),    (10)

and the basis function is also a Gaussian distribution. In other words, a basis function is induced by dividing a Gaussian distribution by its whole area:

    b(lambda) = g(lambda) / (A sigma sqrt(2 pi)),    (11)

where the range of lambda is the whole range shown in Fig. 1, and b_1 means the basis function of the first peak in the X channel.

3.3 Parameters for the fusion

The fusion parameters T are obtained by using the areas of the Gaussian distributions from steps 1) and 2). In eq. (7), the first term is the low-frequency part of the R channel, and the second and third terms are its high-frequency part, the third being the part enhanced by the proposed algorithm. The important factor is how much of the detail,

which is made from PAN, is added to each channel. In our experiments, the low frequencies of R, G, and B and the spectral responsivities are considered. The parameter is obtained as

    T_X = Area( b_{X,1} overlap b_{X,2} ) / Area( (b_{X,1} * S_P) overlap (b_{X,2} * S_P) ),    (12)

where * denotes convolution and S_P is the spectral responsivity of PAN. For example, the R channel has 2 peaks, as can be seen in Fig. 1, and hence is decomposed into a basis function at each peak. The numerator in eq. (12) is the size of the overlap area between the two basis functions, and the denominator is the overlap area between the two convolution results, where the convolution is executed between each basis function and the spectral responsivity of PAN; in eq. (12) the basis functions are b_{X,1} and b_{X,2}. As a result, T_X depends on the overlap area between the spectral responses of PAN and the X channel. Specifically, if the overlap gets small, then T_X gets large, and vice versa. Hence, the effect of the overlap and its size on T is well reflected in eq. (12). In summary, the parameters for each channel are obtained as .57, .64, and .3836, and the experimental results on the image sets using these costs are shown in Table 2.

Table 2. Comparison of IHS [4], the conventional wavelet-based method [11], and the proposed algorithm: PSNR, RMSE, and CC for six image sets.

4 Experimental results

The proposed algorithm is applied to a set of IKONOS images, and cropped and magnified results are compared in Figs. 3 and 4. Compared with the original image, it can be seen that the proposed method's color distortion is less than that of IHS, but the difference from the conventional wavelet method is not noticeable at this resolution.
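The overlap-based weighting described in Section 3 can be illustrated numerically. This is a simplified sketch, not the paper's exact parameter formula: the overlap is measured as the common area under two sampled response curves, and a band that falls largely outside the PAN response receives a larger injection weight; the reciprocal form and all names are assumptions.

```python
import numpy as np

def overlap_fraction(band_resp, pan_resp, lam):
    # Fraction of the band's spectral area lying under the PAN response,
    # approximated by a Riemann sum on a uniform wavelength grid `lam`.
    dlam = lam[1] - lam[0]
    common = np.minimum(band_resp, pan_resp).sum() * dlam
    return common / (band_resp.sum() * dlam)

def fusion_weight(band_resp, pan_resp, lam):
    # Small overlap -> large weight, mirroring the boosting of the
    # G and B channels described in the text; the reciprocal form
    # is an assumption, not the paper's exact expression.
    return 1.0 / max(overlap_fraction(band_resp, pan_resp, lam), 1e-6)
```

A band whose response lies entirely inside the PAN passband gets a weight near 1, while a band centered outside it gets a much larger weight.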
So we also compare the results objectively, following the method in the literature [2], [4], [21]. That is, we assume that the current images are the originals, P is reduced to the size of the multispectral images, and all the spectral components are reduced to quarter resolution. Then the fusion algorithms are applied to this reduced set of satellite images, and the fused color components are compared with the originals. The objective measures considered here are the peak signal-to-noise ratio (PSNR) for measuring the color distortion and the cross correlation (CC) for measuring the spatial distortion [18]. The PSNR between two image matrices X and Y is defined as

    PSNR = 10 log_10 ( N I_max^2 / || X - Y ||^2 ),    (13)

where I_max is the maximum pixel intensity, N is the number of pixels, and ||X - Y||^2 is the sum of squared differences over all the pixels in X and Y. In this experiment, the PSNR of each color component is calculated according to the above equation, and then the average PSNRs are compared. The CC between two image matrices X and Y is defined as

    CC(X, Y) = (x - m_X)^T (y - m_Y) / ( ||x - m_X|| ||y - m_Y|| ),    (14)

where x and y are the vectorized matrices, m_X is the mean value of matrix X, and (.)^T denotes the transpose. Table 2 shows the PSNR and CC for the conventional IHS [4] and wavelet [11] algorithms, and also for the proposed algorithm, on several sets of images. It can be seen that the proposed method improves both PSNR and CC, which means that both the color and spatial distortions are reduced.

5 Conclusions

We have proposed a new wavelet-based satellite image fusion algorithm. While the conventional wavelet-based methods add the same amount of high-frequency content to all the multispectral components, the proposed method adds the content considering the relative intensity and the spectrum overlap with P. The fusion parameters for the addition are controlled based on the amount of energy overlap between the sum-of-Gaussians models of the spectral components, and also on the convolution of P and the bases of the spectral components in the frequency domain. When the overlap with P is small, a larger cost is given, and vice versa. Also, the average intensity of a

spectral image compared to the other components is considered in the form of normalization. The experimental results show that the proposed method provides higher PSNR and CC, which means that both the color and spatial qualities are improved.

(3.1.1) IHS (3.1.2) Original (3.1.3) Proposed
(3.2.1) IHS (3.2.2) Original (3.2.3) Proposed
Fig. 3. Comparison of IHS and the proposed algorithm

(4.1.1) Conventional wavelet (4.1.2) Original (4.1.3) Proposed
(4.2.1) Conventional wavelet (4.2.2) Original (4.2.3) Proposed
Fig. 4. Comparison of the conventional wavelet method and the proposed method

6 References

[1] Y. Zhang, "Understanding image fusion," Photogramm. Eng. Remote Sens., vol. 70, no. 6, pp. 657-661, Jun. 2004.
[2] Z. Wang, D. Ziou, C. Armenakis, D. Li, and Q. Li, "A comparative analysis of image fusion methods," IEEE Trans. Geosci. Remote Sens., vol. 43, no. 6, pp. 1391-1402, Jun. 2005.
[3] R. Haydn, G. W. Dalke, J. Henkel, and J. E. Bare, "Applications of the IHS color transform to the processing of multisensor data and image enhancement," Proc. Int. Symp. Remote Sens. Arid Semi-Arid Lands, pp. 595-616, Jan. 1982.
[4] T. M. Tu, P. S. Huang, C. L. Hung, and C. P. Chang, "A fast intensity-hue-saturation fusion technique with spectral adjustment for IKONOS imagery," IEEE Geosci. Remote Sens. Lett., vol. 1, no. 4, pp. 309-312, Oct. 2004.
[5] M. J. Choi, "A new intensity-hue-saturation fusion approach to image fusion with a tradeoff parameter," IEEE Trans. Geosci. Remote Sens., vol. 44, no. 6, pp. 1672-1682, Jun. 2006.
[6] M. González-Audícana, J. L. Saleta, R. G. Catalán, and R. García, "Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition," IEEE Trans. Geosci. Remote Sens., vol. 42, no. 6, pp. 1291-1299, Jun. 2004.
[7] P. S. Chavez, S. C. Sides, and J. A. Anderson, "Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic," Photogramm. Eng. Remote Sens., vol. 57, no. 3, pp. 295-303, Mar. 1991.
[8] D. A. Yocky, "Image merging and data fusion by means of the discrete two-dimensional wavelet transform," J. Opt. Soc. Amer. A, vol. 12, no. 9, pp. 1834-1841, 1995.
[9] S. G. Mallat, "A theory for multiresolution signal decomposition: The wavelet representation," IEEE Trans. Pattern Anal. Machine Intell., vol. 11, no. 7, pp. 674-693, Jul. 1989.
[10] M. González-Audícana, X. Otazu, O. Fors, and A. Seco, "Comparison between Mallat's and the 'à trous' discrete wavelet transform based algorithms for the fusion of multispectral and panchromatic images," Int. J. Remote Sens., vol. 26, no. 3, pp. 595-614, Feb. 2005.
[11] J. Núñez, X. Otazu, O. Fors, A. Prades, V. Palà, and R. Arbiol, "Multiresolution-based image fusion with additive wavelet decomposition," IEEE Trans. Geosci. Remote Sens., vol. 37, no. 3, pp. 1204-1211, May 1999.
[12] J. Zhou, D. L. Civco, and J. A. Silander, "A wavelet transform method to merge Landsat TM and SPOT panchromatic data," Int. J. Remote Sens., vol. 19, no. 4, pp. 743-757, Mar. 1998.
[13] P. S. Chavez and J. A. Bowell, "Comparison of the spectral information content of Landsat thematic mapper and SPOT for three different sites in the Phoenix, Arizona region," Photogramm. Eng. Remote Sens., vol. 9, no. 5, pp. 699-708, Dec. 1988.
[14] W. J. Carper, T. M. Lillesand, and R. W. Kiefer, "The use of intensity-hue-saturation transformations for merging SPOT panchromatic and multispectral image data," Photogramm. Eng. Remote Sens., vol. 56, no. 4, pp. 459-467, Apr. 1990.
[15] V. K. Shettigara, "A generalized component substitution technique for spatial enhancement of multispectral images using a higher resolution data set," Photogramm. Eng. Remote Sens., vol. 58, no. 5, pp. 561-567, May 1992.
[16] M. Holschneider and P. Tchamitchian, "Régularité locale de la fonction 'non-différentiable' de Riemann," in Les Ondelettes en 1989, P. G. Lemarié, Ed. Paris, France: Springer-Verlag, 1990.
[17] J. L. Starck and F. Murtagh, "Image restoration with noise suppression using the wavelet transform," Astron. Astrophys., vol. 288, no. 1, pp. 342-348, Jan. 1994.
[18] R. S. Blum and Z. Liu, Multi-Sensor Image Fusion and Its Applications. Taylor and Francis, 2006.
[19] S. E. Umbaugh, Computer Imaging: Digital Image Analysis and Processing. Taylor and Francis, ch. 9, p. 455, 2005.
[20] J. H. Kim, S. H. Lee, and N. I. Cho, "Bayesian image interpolation based on the learning and estimation of higher band wavelet coefficients," Proc. IEEE Int. Conf. Image Processing, pp. 685-688, Oct. 2006.
[21] H. C. Kim, J. G. Kuk, H. S. Song, S. H. Lee, M. J. Choi, and N. I. Cho, "IKONOS image fusion by minimisation of spectral distortion using MAP estimator," Electronics Letters, pp. 970-971, Aug. 2007.