DATA FUSION AND TEXTURE-DIRECTION ANALYSES FOR URBAN STUDIES IN VIETNAM

DATA FUSION AND TEXTURE-DIRECTION ANALYSES FOR URBAN STUDIES IN VIETNAM

Tran Dong Binh 1, Weber Christiane 1, Serradj Aziz 1, Badariotti Dominique 2, Pham Van Cu 3

1. University of Louis Pasteur, Department of Geography and Management, 3 rue de l'Argonne, 67000 Strasbourg, France. Tel: 0033.390240959; Fax: 0033.390240950, tran@lorraine.u-strasbg.fr, christiane.weber@lorraine.u-strasbg.fr, aziz.serradj@lorraine.u-strasbg.fr
2. University of Pau, Department of Geography, SET Laboratory, Pau, France, dominique.badariotti@univ-pau.fr
3. Remote Sensing and Geomatic Center (VTGEO), Institute of Geology, Ha Noi, Vietnam, phamvancu@hn.vnn.vn

ABSTRACT

Urban form features are generally described by both spectral and spatial characteristics, but in Asian cities they are delicate to extract from high spatial resolution images because of the small parcels, compact structures and narrow street patterns (xiv). Vietnamese cities are characterized by a dense pattern composed of pencil-like buildings arranged next to each other, and urban networks are seldom distinctive. This leads us to extract urban zones rather than individual objects from Landsat ETM7 images. One of the difficulties lies in characterizing urban structures whose basic elements, buildings or building blocks, are heterogeneous and very small, and consequently difficult to identify. In this case, image fusion between Landsat multispectral images and a SPOT panchromatic image is investigated to benefit from the high spatial resolution of the panchromatic data. Grey-level co-occurrence texture indices and direction measures (iv), computed with different pixel windows, contribute to highlighting the urban forms of Da Nang city. We observed that the best results are obtained with a 5 x 5 pixel window using the homogeneity, contrast, entropy and angular second moment measures over four directions and their average.
The detection of urban forms is significantly improved by fusing multispectral data with texture and direction information derived from panchromatic data.

INTRODUCTION

Remotely sensed data has proven its contribution to urban monitoring and planning, despite the different urban contexts of developed and developing countries, thanks to its fine spatial resolution (ii, vii). However, potential problems remain for urban research in terms of its capacity for routine, periodic description, classification, measurement of critical physical properties, etc. (vii). Spatial resolution is an essential element for urban research in these countries. In fact, remotely sensed data for urban research in developing countries is not only important for country management but also poses challenges for the scientific community. In the urban context of Asian cities, Welch (xiv) proposed different spatial resolutions adapted to the man-made objects of cities worldwide, requiring at least 5 m spatial resolution for urban Asian cities. Vietnamese cities are characterized by a dense pattern composed of small pencil-like buildings arranged next to each other, and urban networks are seldom distinctive. This urban form results mainly from a self-construction process. Landsat 7 multispectral images (MIs), with 30 m spatial resolution, allow us to study urban evolution but not to extract urban objects. This is a compelling point because, in this context, the heterogeneous land cover types and the characteristics of urban objects require the extraction of spatial as well as spectral information. It is therefore necessary to use an effective fusion technique to benefit from the high spatial resolution of the panchromatic image (PI) and the high spectral resolution of the MIs. Data fusion between the SPOT PI and the Landsat ETM7 MIs has therefore been performed to enhance the spatial and spectral resolution to 10 m.
In this study, five fusion methods available in the ENVI 4.1 (Environment for Visualizing Images) software have been examined: Hue-Saturation-Value (HSV), principal component transformation (PCT), Brovey transformation (BT), colour normalised spectral sharpening (CN) and Gram-Schmidt transformation (GST). Spectral distortion was first assessed through visual inspection as a quality check. We applied Wald's protocol (xiii) and the Universal Image Quality Index (UIQI) proposed by Wang (xvi) for the quantitative validation of the synthesized images. A graphical comparison of spectral curves for each class, using average values over 3 x 3 pixel windows in relatively homogeneous areas, showed close agreement between the visual comparison and the statistical tests. Texture-direction analyses were then applied in order to extract urban objects. Several supervised classification tests using the maximum

likelihood algorithm have been performed to define which characteristics could be considered of major importance in urban object detection. The global aim of this article is to determine which fusion method is the most appropriate with regard to our objectives and the data collected. Various commonly used methods have been tested and adapted to our data sets and to the urban objects of the particular urban context of Da Nang city (ix).

DATA SET AND STUDY AREA

A Landsat ETM7 sub-scene acquired on 23 March 2001 and a SPOT PI sub-scene acquired on 12 May 1998 (path 124, row 49) cover the entire urban area of Da Nang city, Vietnam. The PI and MIs have sizes of 900 x 700 and 318 x 247 pixels respectively. They are georeferenced in UTM projection, zone 49 North, with the WGS-84 datum. Bands 1, 2, 3 and 4 were used in the fusion, while the classification uses all bands (1 to 7, except ETM7 band 6) to benefit from their spectral properties. Land use in Da Nang is characterized by very fragmented and dense real estate. Most houses, whether in the urban centre or in the periphery, are small and compact, associated with narrow streets.

Figure 1: SPOT PI and Landsat ETM7 MIs (RGB) resized by cubic interpolation to 10 m

FUSION METHODS

Several definitions of fusion can be found in the literature. Image fusion is the combination of two or more different images to form a new image by using a certain algorithm (viii). Wald (xii) referred to the applied meaning of data fusion: "data fusion is a formal framework in which are expressed means and tools for the alliance of data originating from different sources. It aims at obtaining information of greater quality; the exact definition of greater quality will depend upon the application". In this study, we have tested the five fusion methods as follows: HSV, which is close to the IHS method (iii), PCT (xv), BT (x, viii), CN (xi) and GST (i). The images used must be georeferenced or have the same image dimensions.
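As an illustration of the ratio-based family of methods (BT and CN), the Brovey transformation can be sketched in a few lines of NumPy. This is a minimal sketch, not the exact ENVI 4.1 implementation used in the study; the function name and interface are ours:

```python
import numpy as np

def brovey_fusion(ms_bands, pan):
    """Brovey transformation (BT) pan-sharpening sketch.

    ms_bands: sequence of multispectral bands already resampled to the
    panchromatic pixel grid; pan: the panchromatic band (same shape).
    Each fused band is the MS band scaled by pan / sum(MS bands), which
    injects the pan detail while keeping the band ratios (colour).
    """
    ms = np.asarray(ms_bands, dtype=float)
    pan = np.asarray(pan, dtype=float)
    total = ms.sum(axis=0)
    total[total == 0] = 1e-9          # guard against division by zero
    return ms * (pan / total)         # broadcasts the ratio over all bands
```

Because each output band is the input band times a common ratio, the band ratios are preserved but the absolute radiometry follows the pan image, which is consistent with the spectral distortion reported for BT below.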
Comparison of the spectral and spatial quality of the fused images

The criteria for assessing a fusion result depend on the application (xii). In this paper, we try to improve the spatial resolution while maintaining the spectral characteristics, so the fusion process is evaluated in terms of both spatial and spectral quality. Spatial quality is examined by comparing the PI, the MIs and the fused image (FI) to see whether the structures of the objects in the PI are incorporated into the MIs. Spectral quality is checked by comparing the fused image with the original MIs to see whether the radiometry of the two data sets is as identical as possible. Visual inspection is still the most common way to evaluate a fusion result qualitatively. In addition, two quantitative indices, the correlation coefficient (CC) and the UIQI (xvi), can be used to assess the similarity between the original images and the fusion result.

Figure 2: The original MIs and the fused resulting images (left to right, row by row): original MIs, HSV, PCT, CN, BT and GST

Spatial quality: Visual inspection of the PI, MIs and FI was used to test the spatial quality. Urban objects are in general rendered identically by all methods. More particularly, settlements in the city centre are well distinguished, whereas they form one compact urban area at the 30 m pixel size of Landsat ETM. We can observe settlements more precisely and recognize the checkerboard structure with narrow streets and small parcels. Moreover, the airport can be identified in great detail, especially its equipment zones. In addition, the colour of the water, residential areas and sand bands is almost identical in all the methods. However, the green areas in the PCT and GST results have almost the same colour and brightness as those in the original ETM images, which is not the case for the other methods. The colour effects of HSV, BT and CN are more distorted than those of PCT and GST; in particular, the green is very weak with HSV, BT and CN. Contrast behaves like the colour effects: it is unchanged in PCT and GST.

Table 1: CCs between the SPOT PI (resampled at 30 m) and the original Landsat ETM7 MIs

Band                       Pan SPOT   Blue       Green      Red        NIR
Pan SPOT (0.51-0.73 µm)    1.000000
Blue (0.45-0.52 µm)        0.548720   1.000000
Green (0.53-0.61 µm)       0.657123   0.925509   1.000000
Red (0.63-0.69 µm)         0.702891   0.889175   0.973392   1.000000
NIR (0.78-0.90 µm)         0.665296   0.392612   0.613036   0.670624   1.000000

The CCs (Table 1) show that the Red band is the most correlated with the panchromatic image; the Red band of Landsat ETM7 is therefore the most important for the SPOT Pan image and for the fusion process. NIR is less correlated than Red but more correlated than Green and Blue. Among the multispectral bands, the spectrally closer two bands are, the more closely they correlate. The relatively low correlation between the SPOT panchromatic and ETM7 multispectral bands can be explained by the different acquisition times and the different satellite geometries.
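A correlation matrix such as Table 1 can be reproduced with a short sketch (the function name and interface are ours): each co-registered band is flattened and the pairwise Pearson coefficients are computed.

```python
import numpy as np

def band_correlation_matrix(bands):
    """Pearson correlation matrix between co-registered image bands.

    bands: sequence of equally shaped 2-D arrays, e.g. the SPOT pan
    band resampled to 30 m followed by the Landsat ETM7 bands.
    Entry [i, j] is the CC between bands i and j (diagonal = 1).
    """
    flat = np.stack([np.asarray(b, dtype=float).ravel() for b in bands])
    return np.corrcoef(flat)
```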
Spectral quality: The performance of each fusion method relies on maintaining the radiometric fidelity of the data set while increasing its spatial resolution. The radiometric fidelity of the synthesized images depends first on the correlation between the MIs and the PI and on the transformation algorithm (xvii).

Universal Image Quality Index (xvi): The UIQI is designed to measure the similarity of two images by combining three factors: correlation, radiometric (mean) distortion and contrast distortion. It is defined as

UIQI = [σ_AB / (σ_A σ_B)] · [2 µ_A µ_B / (µ_A² + µ_B²)] · [2 σ_A σ_B / (σ_A² + σ_B²)],

where σ_A and σ_B are the standard deviations of images A and B respectively, σ_AB is the covariance between A and B, and µ_A and µ_B are the means of A and B respectively.

This index is effective because it describes all the statistical characteristics of the images. The first component is the correlation coefficient between image A and image B. The second component measures how close the mean grey levels of A and B are, while the third measures the similarity between the contrasts of A and B. The dynamic range is [-1, 1]; if the two images are identical, the similarity is maximal and equals 1. Wang (xvii) used this index to measure the similarity between original and synthesized images, and applied it jointly with Wald's protocol to evaluate both the spatial and the spectral quality of fused IKONOS images. We use the same idea here to assess our methods for the urban characteristics and urban context of Vietnam.

Wald's protocol (xiii): Wald's protocol is applied in this article to verify radiometric fidelity with respect to the original MIs. The protocol evaluates quality by checking three properties. Property 1 states that any fused image (FI), once degraded to its original resolution, should be as identical as possible to the original image. Property 2 states that any FI should be as identical as possible to the image that the corresponding sensor would observe at the highest spatial resolution, and Property 3 states that the multispectral set of FIs should be as identical as possible to the multispectral set of images that the corresponding sensor would observe at the highest spatial resolution. To test the first property, we resampled the FI to the spatial resolution of the Landsat images (30 m) by cubic interpolation and computed the similarity between the resampled FI and the ETM7 image with the UIQI.
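The three UIQI factors multiply out to the compact form 4 σ_AB µ_A µ_B / ((σ_A² + σ_B²)(µ_A² + µ_B²)). A minimal global (single-window) sketch follows; note that Wang's original index is usually computed over sliding windows and then averaged, so this is a simplification and the function name is ours:

```python
import numpy as np

def uiqi(a, b):
    """Universal Image Quality Index, computed globally over two images.

    Combines correlation, closeness of the means (radiometric distortion)
    and similarity of the contrasts; returns a value in [-1, 1], with 1
    meaning the two images are identical.
    """
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov_ab = ((a - mu_a) * (b - mu_b)).mean()
    # 4*cov*mu_a*mu_b / ((var_a+var_b)*(mu_a^2+mu_b^2)) == product of the
    # correlation, mean-closeness and contrast-similarity factors.
    return (4.0 * cov_ab * mu_a * mu_b) / ((var_a + var_b) * (mu_a**2 + mu_b**2))
```

For identical inputs the covariance equals the variance and the expression collapses to 1, matching the maximal-similarity case described above.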
Table 2 shows the CCs and the similarity between the FI and the original MIs for the different methods:

Table 2: CCs and similarity between the FI (resampled at 30 m) and the original MIs

                 B                 G                 R                 NIR
N  Method    CC      UIQI      CC      UIQI      CC      UIQI      CC      UIQI
1  HSV       0.5291  0.1768    0.6801  0.3540    0.7729  0.5745    0.8233  0.7264
2  PCT       0.7432  0.7419    0.7270  0.7275    0.7340  0.7336    0.8226  0.8225
3  CN        0.4550  0.2806    0.6506  0.5059    0.7034  0.6330    0.8617  0.7810
4  BT        0.4324  0.2746    0.6513  0.3935    0.8025  0.4124    0.8976  0.4895
5  GST       0.7293  0.7274    0.7177  0.7166    0.7282  0.7421    0.8316  0.8305

To assess the second and third properties, Wald (xiii) and Wang (xvii) resampled the MIs and the PI at a lower spatial resolution, respecting the ratio of spatial resolutions between the images before fusion. Once properties 2 and 3 are satisfied at the lower resolution, they are assumed to be satisfied at the original resolution as well. In this case, the SPOT PI and the Landsat ETM7 MIs are degraded to resolutions of 30 m and 90 m respectively, giving an FI at 30 m fused from the degraded 30 m SPOT PI and the 90 m Landsat MIs. This data set was then processed with each fusion method and the UIQI calculated for all of them (Table 3); the FI is finally compared with the original MIs.

Table 3: CCs and similarity between the FI and the original MIs at 30 m spatial resolution (fusion at lower resolution)

                 B                 G                 R                 NIR
N  Method    CC      UIQI      CC      UIQI      CC      UIQI      CC      UIQI
1  HSV       0.5139  0.1739    0.6514  0.3421    0.7240  0.5415    0.8007  0.7131
2  PCT       0.7003  0.6974    0.7077  0.7053    0.7226  0.7216    0.8178  0.8177
3  CN        0.4408  0.2705    0.6448  0.5000    0.6896  0.6209    0.8441  0.7665
4  BT        0.4434  0.2784    0.6336  0.3640    0.7502  0.3769    0.8697  0.4608
5  GST       0.6954  0.6932    0.7052  0.7044    0.7211  0.7210    0.8204  0.8191

The CCs can express the level of radiometric distortion between the original MIs and the FI, but they are not sufficient to indicate exactly how a method performs. The UIQI effectively provides more insight into the fusion results, since it takes into account all the statistical parameters: mean, correlation, covariance and standard deviation.
In agreement with the previous spectral inspection, the statistical parameters highlight the PCT and GST methods as the best in this case.

BT, HSV and CN distort the spectral characteristics of the data used. The reason is that the colour is normalised (in the case of BT and CN) by a simple ratio: multiplication by the PI and division by the sum of the MIs. Likewise, the replacement of the Value band by the PI in the case of HSV depends on the ETM band combination used (the Value band differs between the subgroups of ETM bands 1, 2 and 3 or 1, 2 and 4). In particular, HSV and BT have the disadvantage that only three bands can be fused at a time. PCT and GST also slightly distort the spectral characteristics, but in general they preserve the spectral trends. These two methods perform better because their algorithms compute the first principal component (PC1, in the case of PCT) or a simulated panchromatic band (SP, in the case of GST) from the multispectral bands; PC1 and SP are therefore closer to the panchromatic band than the Value band (HSV) or the normalized bands (BT and CN) (see the PCA methods in (iii)).

Graphical comparison: Our aim in this section is to select the best FI for integration into the supervised classification process. The results show that PCT and GST give similar spectral quality, both qualitatively and quantitatively. Our results are consistent with those of Ali (i), in which PCT and GST gave similar spectral fidelity when fusing EO1 Hyperion with SPOT Pan and QuickBird MS. Therefore, average values over 3 x 3 pixels in homogeneous windows have been computed for both methods, and urban object identification has been assessed by comparing spectral curves (five samples per class) for the FI (by GST and PCT) and the original MIs resampled at 10 m. The spectral curves show very similar values, so either of the two methods could be used for a supervised classification. The PCT method has been selected for the classification, but the extraction of spatial elements still needs to be refined.
TEXTURE-DIRECTION ANALYSES

Urban objects are difficult to identify and are better observed through their spatial characteristics than through their spectral reflection properties (vi, xviii). Textural information, derived from the intrinsic spatial relations within a group of pixels, provides complementary information for spectral classification and may eliminate confusion between urban classes in the classification process (v). Texture can be defined as the spatial variation of the grey level in an image. One of the most widely used ways to measure texture is the grey-level co-occurrence matrix (GLCM). The GLCM is a two-dimensional histogram of the frequency with which two neighbouring pixels, defined by a distance d and an angle a, occur in a given spatial relationship (iv). Four texture indices, contrast, entropy, angular second moment and homogeneity, are computed on the PI, which carries rich textural information for our data set. These four textures have been obtained (after several tests) with a 5 x 5 pixel window. The self-construction process induces a very complex urban structure and combinations of urban objects that are very difficult to extract. This leads us to take urban object orientation into account to enhance identification, so in this study we apply a direction filter to the PI and integrate it into the classification process.

SUPERVISED CLASSIFICATION

This stage consists in selecting the training regions for the maximum likelihood supervised classification algorithm. The FI produced by the PCT method is used in the supervised classification. In order to examine the contribution of textural information, we ran three separate supervised classifications (spectral, textural-directional and spectral-textural-directional).
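The four GLCM measures described above can be sketched minimally for a single window and a single (distance, angle) offset. The function name and interface are ours; in the study the measures were computed per pixel over 5 x 5 windows, for four directions plus their average:

```python
import numpy as np

def glcm_features(img, dx, dy, levels):
    """Grey-level co-occurrence features for one pixel offset.

    img: 2-D integer array already quantised to `levels` grey levels
    (e.g. one 5 x 5 window of the panchromatic image).
    (dx, dy): pixel offset encoding the distance d and the angle a.
    Returns the four measures used in the paper: homogeneity, contrast,
    entropy and angular second moment (ASM).
    """
    img = np.asarray(img)
    h, w = img.shape
    glcm = np.zeros((levels, levels), dtype=float)
    # Count co-occurrences of grey levels at the given offset.
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    glcm /= glcm.sum()                 # normalise counts to probabilities
    i, j = np.indices(glcm.shape)
    nz = glcm[glcm > 0]                # entropy is summed over non-zero cells
    return {
        "homogeneity": (glcm / (1.0 + (i - j) ** 2)).sum(),
        "contrast": (glcm * (i - j) ** 2).sum(),
        "entropy": -(nz * np.log(nz)).sum(),
        "asm": (glcm ** 2).sum(),
    }
```

On a perfectly uniform window this yields homogeneity 1, contrast 0, entropy 0 and ASM 1, while a highly heterogeneous window drives contrast and entropy up, which is what makes these measures useful for separating urban classes.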
Seven classes (water, vegetation, urban, industry-under-construction, agriculture, bare soil and periphery), previously identified over the same region (ix), have been chosen to define land use/cover. When the multispectral or textural-directional bands are used alone, the statistical separability between urban classes is low, even very low in the case of the texture-direction classification, because the cover types share common elements with similar spectral or textural properties. For example, the industry-under-construction class is mixed with the bare soil-sand class in the spectral classification;

1st EARSeL Workshop of the SIG Urban Remote Sensing

on the other hand, the confusion between the industry-under-construction class and the periphery class is very strong. When texture features are incorporated, the class separability increases because a unique texture pattern characterizes each class.

Table 4: Thematic accuracy of the different classification methods (Prod. = producer's accuracy)

                             Spectral                  Textural-directional      Spectral-textural-directional
Class                        Prod.  User's  Kappa     Prod.  User's  Kappa     Prod.  User's  Kappa
Water                        99.29  91.82   99.29     95.02  80.91   95.02     97.14  96.30   96.30
Vegetation                   -      94.79   -         27.64  40.00   27.64     -      -       -
Urban                        99.02  98.67   99.02     43.14  52.38   43.14     -      -       -
Industry-under-construction  89.22  90.65   89.22     32.35  62.26   32.35     -      -       -
Agriculture                  91.36  -       91.36     40.74  56.90   40.74     -      -       -
Bare soil                    96.04  -       96.04     33.66  87.18   33.66     -      -       -
Periphery                    -      -       -         89.36  22.34   89.36     -      -       -

Supervised classification requires an assessment of thematic accuracy to validate the final classification results. Several approaches exist to assess a resulting classification; the confusion matrix, computed from ground-truth training zones, is adopted here because of its robustness: it provides the overall accuracy, kappa coefficient, errors of commission, errors of omission, producer's accuracy and user's accuracy. The classification accuracy obtained for each cover type is low, even very low, when the spectral and textural properties are classified separately. The results are considerably improved when the texture features are added to the spectral classification. The most significant improvements are shown in the industry-under-construction class (from 89 to %), bare soil (from 96 to %) and agriculture (from 91 to 96%), etc.
Figure 3: Classification results (from left to right): spectral, textural-directional and spectral-textural-directional classifications. Classes: water, vegetation, urban, industry-under-construction, periphery, agriculture, bare soil.

CONCLUSIONS

This study underlines the importance of the transformation algorithms used for fusing data from different sensors, and of texture-direction analyses, in the detection of urban objects. The particular urban characteristics of Vietnamese cities in general, and of Da Nang city in particular, strongly constrain the extraction of urban elements, and data fusion provides an effective tool to perform this extraction. PCT and GST give interesting results through automatic computing procedures and enhance both the spectral and spatial quality of the synthesized images. The urban areas of Da Nang city have a complex structure mixing fragmented and compact forms that are very difficult to identify from space. In this case, texture-direction analyses based on the co-occurrence matrix provide effective inputs and contribute substantially, as complementary spatial information, to the discrimination of man-made objects. The separability study confirms this approach.

REFERENCES

i. Ali D.B., Kappas M. & Erasmi S., 2005. Hyper-spectral/high resolution data fusion: Assessing the quality of EO1-Hyperion/Spot-pan and Quickbird-MS fused images in the spectral domain. http://www.ipi.uni-hannover.de/html/publikationen/2005/workshop/073-darvishi.pdf
ii. Aplin P., Atkinson P.N. & Curran P.J., 1997. Fine spatial resolution satellite sensors for the next decade. International Journal of Remote Sensing, 18: 3873-3882.
iii. Chavez P.S. et al., 1991. Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic. Photogrammetric Engineering & Remote Sensing, 55, 3: 339-348.
iv. Haralick R.M., Shanmugam K. & Dinstein I., 1973. Textural features for image classification. IEEE Transactions on Systems, Man and Cybernetics, 3, 6: 610-621.
v. He D.C., Wang L., et al., 1994. Classification spectrale et texturale des données d'images SPOT en milieu urbain. International Journal of Remote Sensing, 15, 10: 2145-2152.
vi. Kiema J.B.K., 2002. Texture analysis and data fusion in the extraction of topographic objects from satellite imagery. International Journal of Remote Sensing, 23, 4: 767-776.
vii. Miller R.S. & Small C., 2003. Cities from space: potential applications of remote sensing in urban environmental research and policy. Environmental Science and Policy, 6: 129-137.
viii. Pohl C. & Van Genderen J.L., 1998. Multisensor image fusion in remote sensing: concepts, methods and applications. International Journal of Remote Sensing, 19, 5: 823-854.
ix. Tran D.B., Badariotti D., Weber C. & Pham V.C., 2005. Fractal analysis application: The diachronic evolution of the urban morphology in Da Nang city, Vietnam. Proceedings of the 14th European Colloquium on Theoretical and Quantitative Geography, 12 p.
x. Vrabel J., 1996. Multispectral imagery band sharpening study. Photogrammetric Engineering & Remote Sensing, 62, 9: 1075-1083.
xi. Vrabel J., 2002. Demonstration of the accuracy of improved resolution hyperspectral imagery. Proceedings of SPIE, 4725: 556-567.
xii. Wald L., 1999. Some terms of reference in data fusion. IEEE Transactions on Geoscience and Remote Sensing, 37, 3: 1190-1193.
xiii. Wald L., Ranchin T. & Mangolini M., 1997. Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images. Photogrammetric Engineering & Remote Sensing, 63, 6: 691-699.
xiv. Welch R., 1982. Spatial resolution requirements for urban studies. International Journal of Remote Sensing, 3, 2: 139-146.
xv. Welch R. & Ehlers M., 1987. Merging multiresolution SPOT HRV and Landsat TM data. Photogrammetric Engineering & Remote Sensing, 53, 3: 301-303.
xvi. Wang Z., 2002. A universal image quality index. IEEE Signal Processing Letters, 9, 3: 81-84.
xvii. Wang Z. et al., 2005. A comparative analysis of image fusion methods. IEEE Transactions on Geoscience and Remote Sensing, 43, 6: 1391-1402.
xviii. Zhang Y., 1999. A new merging method and its spectral and spatial effects. International Journal of Remote Sensing, 28, 10: 2003-2014.