Proceedings of the 8th U.S. National Conference on Earthquake Engineering
April 18-22, 2006, San Francisco, California, USA
Paper No. 827

OBJECT-BASED IMAGE ANALYSIS FOR MAPPING TSUNAMI-AFFECTED AREAS

T. T. Vu (1), M. Matsuoka (2) and F. Yamazaki (3)

(1) Researcher, Earthquake Disaster Mitigation Research Center (EDM), NIED, Kobe 651-0073, Japan.
(2) Team Leader, EDM, NIED, Kobe 651-0073, Japan.
(3) Professor, Dept. of Urban Environment Systems, Faculty of Engineering, Chiba University, Chiba 263-8522, Japan.

ABSTRACT

The recent experience of the 2004 Sumatra earthquake and tsunami showed the efficiency of remote sensing techniques in quick damage mapping and recovery efforts. A variety of satellite images at different resolutions provided different aspects of information about the affected zones. Once remotely sensed images are acquired, choosing a suitable image analysis method becomes a critical requirement. This study develops an object-based image analysis method for mapping tsunami-affected areas. Its basic idea is the combined analysis of spectral and morphological information in a multi-scale space. An object class of interest, such as vegetation within a specific size range, can be extracted by observing its behavior across the multi-scale space. Tsunami-affected zones of Thailand were chosen to demonstrate the performance of this new method. Both medium-resolution satellite images (ASTER) and high-resolution satellite images (QuickBird) were used. Affected areas, identified by stripped-away vegetation, are clearly delineated as objects rather than the fragmentary zones produced by conventional methods. This output is ready to use for further damage assessment in a GIS environment. Speeding up the processing, testing on other types of disasters and areas, and sensitivity analysis should be carried out in further studies.

Introduction

Remote sensing techniques have played an important role in hazard monitoring and mapping. Because they provide geo-spatial data over large areas, they are the most capable technique for post-disaster response, especially for hard-hit and difficult-to-access areas. Thanks to the development of remote sensing techniques, more details and more aspects of the areas affected by recent catastrophes have been captured. A few examples are flood mapping (Barber et al. 1996), displacement mapping due to earthquakes (Massonnet et al. 1993), earthquake damage detection (Matsuoka and Yamazaki 1999), landslide mapping (Singhroy and Mattar 2000, Vohora and Donoghue 2004), and volcano observation (Mouginis-Mark et al. 1991, Andres and Rose 1995).

Focusing on post-earthquake response, numerous studies and implementations have been carried out for events such as the 1995 Kobe, Japan earthquake (Matsuoka and Yamazaki 1999), the 1999 Kocaeli, Turkey earthquake (Eguchi et al. 2000, Estrada et al. 2000), the 2001 Gujarat, India earthquake (Mitomi et al. 2001, Saito et al. 2004), and the 2003 Bam, Iran earthquake (Vu et al. 2005a, Yamazaki et al. 2005).

Post-disaster response and recovery efforts require the information derived from a remotely sensed image rather than the image itself. Besides choosing the right data sources, deciding on a suitable image processing method is therefore critical. At the lowest level, image processing deals with pixels. Each pixel carries a grey-scale value representing the spectral reflectance at its location, and a vast number of pixel-based processing algorithms have been developed on this basis. Texture-based algorithms operate at a higher level, analyzing different kinds of relationships among the neighbors of each pixel. At the next level, feature-based or object-based processing recognizes objects together with their attributes. It prepares for the highest level, context-based processing, which describes the image context (Fig. 1). Object-based processing has recently attracted attention as higher-spatial-resolution satellite images such as QuickBird and IKONOS have become available (Willhauck 2000, Repaka et al. 2004). The lower processing levels cannot exploit all of the information contained in such high-resolution satellite images. Currently, visual interpretation, which can be classified as context-based processing, is preferred for such images in order to obtain a reliable result (Yamazaki et al. 2005).

Figure 1. Illustration of image processing levels.

We have proposed a new object-based method for the detection of buildings damaged by earthquakes (Vu et al. 2005b). In this paper, its implementation for mapping the areas of Thailand affected by the Indian Ocean tsunami, which was triggered by the magnitude 9.0 earthquake (US Geological Survey) that occurred off the coast of Sumatra, Indonesia on December 26, 2004, is presented. Washed-away vegetation due to the tsunami attack is more clearly observed in a top-view image than damaged buildings.

The implementation for mapping tsunami-affected areas therefore focuses on vegetation and soil objects, unlike the detection of damaged buildings. Section 2 describes the methodology for mapping tsunami-affected areas by extraction of vegetation and/or soil objects. Medium-resolution images (ASTER) and high-resolution images (QuickBird) are used to demonstrate the method in Section 3. Conclusions and recommendations are given in the last section.

Object-based Image Analysis

The existence of an object depends on the scale of observation; objects presented in an image therefore possess a scale property. Exploiting scale in image processing mimics human perception, which ignores details and groups pixels into an object at a specific scale of observation. Generally, a non-linear scale-space partitions an image into isolevel sets at each scale and links each set with the closest one at the next scale, while preserving the main properties of a scale-space such as luminance conservation, geometry, or morphology (Petrovic et al. 2004).

The method presented in this paper employs area morphology (Vincent 1992). For a detailed description of area morphology theory the reader is referred to Vincent (1992); a brief description was also given in our previous work (Vu et al. 2005b). Applying area opening followed by area closing with a parameter s, named the AOC operator, flattens the image by the parameter s. This operation segments an image into flat zones of similar intensity, in other words isolevel sets. A scale-space can therefore be generated by iteratively applying AOC with increasing s. The desirable properties of a scale-space, such as fidelity, causality, and Euclidean invariance, are held by the AOC scale-space (Acton and Mukherjee 2000).

Across the scale-space, from a coarse scale to finer ones, an object is created and split. To extract objects from the scale-space of an image, linking trees of objects must be formed across the scale-space. At each scale, an object may have a parent at the previous coarser scale and children at the next finer scale. Similarity of object intensity is the criterion used to determine the hierarchical relationship: if an object's intensity is very different from that of the larger object it falls into, the scale at which it is created is the root of its linking tree. An object can then be extracted at its root scale along with its associated linking tree. The attributes of each object are: SCALE, the current scale at which it exists; ID, the identification number at the current scale; AREA, the object's area; X0, Y0, the image coordinates of the object's starting point; X, Y, the image coordinates of the centroid (shifted to an arbitrary location inside the object if it is concave); SHAPE, indicating whether the object is convex or concave; SPECTRAL, the spectral class; and SUPERID, the identification number of the object's parent at the next coarser scale, which equals 0 if this scale is the object's root.
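
As a concrete illustration of the AOC scale-space described above, the following minimal Python sketch (not the authors' implementation) builds a stack of progressively flattened images using scikit-image's area_opening and area_closing operators; the synthetic band, the scale parameters, and the default connectivity are assumptions chosen only for illustration.

# Minimal sketch of an AOC (area-opening-then-closing) scale-space on one band.
import numpy as np
from skimage import morphology

def aoc(band, s):
    # Flatten the band with area opening followed by area closing, parameter s.
    opened = morphology.area_opening(band, area_threshold=s)
    return morphology.area_closing(opened, area_threshold=s)

def build_scale_space(band, scales):
    # Iteratively apply AOC with increasing s; each level is a flatter image whose
    # connected flat zones are the candidate objects at that scale.
    levels = []
    current = band
    for s in scales:
        current = aoc(current, s)
        levels.append(current)
    return levels

if __name__ == "__main__":
    band = np.random.randint(0, 255, (256, 256), dtype=np.uint8)  # stand-in for one image band
    scales = [16, 64, 256, 1024]  # assumed area parameters (pixels), coarse choice for illustration
    scale_space = build_scale_space(band, scales)
    # Rough proxy for the flattening: the number of distinct grey levels drops as s grows.
    print([int(np.unique(level).size) for level in scale_space])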

In practice, multi-spectral images are used rather than grayscale ones. The scale-space scheme presented above for grayscale images is applied to each band. The spectral signatures are then grouped and assigned class indexes by K-means clustering. A relational database is generated to link the class indexes across the scale-space to linguistic variables representing land-cover types such as water body, vegetation, shadow, building roof, etc. This database is used in the process of linking and segmenting the objects across the scale-space.

Fig. 2 illustrates satellite images of an area in Phuket, Thailand before and after the tsunami attack in December 2004. The illustration uses QuickBird images acquired on March 23, 2002 and January 2, 2005 for the pre-event (Figs. 2a and 2c) and post-event (Figs. 2b and 2d) images, respectively, shown in true color composite. Obviously, vegetation was washed away by the tsunami. The areas along the seashore were flooded, which resulted in mud/sand being deposited on the ground and the resort's swimming pool being destroyed. The tsunami also hit buildings and destroyed their lower floors; most buildings therefore retained their pre-event appearance in the top-view image. Several non-engineered buildings may have totally collapsed, like the small buildings at the top-left corner of Figs. 2c and 2d. The three-year gap between the pre-event and post-event images also shows many man-made changes, so commission errors are expected in change detection.

Figure 2. QuickBird images of a tsunami-affected area in Thailand: a) pre-event and b) post-event images.

From the above observations, to detect the changes due to the tsunami it is easier to focus on changes of vegetation objects and/or soil objects. The processing for mapping tsunami-affected areas therefore proceeds step by step as follows (a simplified code sketch is given after the list):

1. Collect the corresponding images acquired before and after the tsunami.
2. Pan-sharpen both images to provide multi-spectral information at a higher spatial resolution. For instance, 0.6-meter multi-spectral images can be obtained from the original 2.4-meter multi-spectral QuickBird images.
3. Co-register the pre-event and post-event images.
4. Focus on the areas along the coastline. Depending on topography, the construction in the area, and the power of the tsunami, the affected distance from the coastline can range from several hundred meters to several kilometers.
5. Generate the scale-space for both images.
6. Extract vegetation and/or soil objects from both images. Building objects may or may not be extracted, depending on the area of concern.
7. Compare the extracted results to analyze the changes of each object and to detect the boundary of the affected area.
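
The sketch below ties these steps together in a highly simplified form, assuming the pre- and post-event bands are already pan-sharpened and co-registered. It is only a stand-in for the full object-based extraction: here vegetation "objects" are connected components of an NDVI mask rather than flat zones linked across scales, and the 0.3 NDVI threshold and 50-pixel minimum size are assumed values.

# Simplified end-to-end sketch of the workflow listed above (assumed thresholds).
import numpy as np
from skimage import measure

def vegetation_objects(red, nir, ndvi_threshold=0.3, min_area=50):
    # Label candidate vegetation objects from an NDVI mask and drop very small ones.
    ndvi = (nir - red) / np.maximum(nir + red, 1e-6)
    labels = measure.label(ndvi > ndvi_threshold, connectivity=2)
    for region in measure.regionprops(labels):
        if region.area < min_area:
            labels[labels == region.label] = 0
    return labels

def washed_away(pre_labels, post_labels):
    # Pixels that belonged to a vegetation object before the event but not after.
    return (pre_labels > 0) & (post_labels == 0)

if __name__ == "__main__":
    shape = (400, 400)
    rng = np.random.default_rng(0)
    pre_red, pre_nir = rng.random(shape), rng.random(shape) + 0.5    # synthetic co-registered bands
    post_red, post_nir = rng.random(shape) + 0.3, rng.random(shape)
    change = washed_away(vegetation_objects(pre_red, pre_nir),
                         vegetation_objects(post_red, post_nir))
    print("washed-away vegetation pixels:", int(change.sum()))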

The following section demonstrates the performance of the object-based image analysis using satellite images acquired over the 2004 tsunami-affected areas in Thailand. Both a medium-resolution image (ASTER, 15 m) and a high-resolution image (QuickBird, 0.6 m) were used.

Mapping Affected Areas in Thailand due to the 2004 Tsunami

The ASTER and QuickBird images used for mapping the tsunami-affected areas in Thailand are listed in Table 1. As mentioned earlier, pixel-based processing can successfully detect the changes or damage due to a disaster from low- and medium-resolution satellite images. To see whether object-based processing can obtain further information compared with pixel-based processing, object-based analysis was carried out for both medium- and high-resolution satellite images in this study. Note that satellite images with spatial resolutions of hundreds of meters, such as NOAA and MODIS, are categorized as coarse resolution, while images such as Landsat, SPOT, and ASTER are categorized as medium resolution. Sub-meter resolution satellite images such as QuickBird and IKONOS are categorized as high resolution.

Table 1. Satellite images used in the demonstration.

             ASTER / Khao Lak, Phangnga   QuickBird / Patong, Phuket
Pre-event    November 15, 2002            March 23, 2002
Post-event   December 31, 2004            January 2, 2005

Medium-resolution image by ASTER

Prior to employing the object-based image analysis, a comparison of Normalized Difference Vegetation Index (NDVI) values between the pre-event and post-event images was carried out. This step was intended to narrow down the working space of the object-based image analysis and hence speed up the computation, since the objective of this paper is only to demonstrate the method. Generally speaking, for mapping an area affected by a catastrophe, the use of multi-scale images is recommended as a cost-effective approach: a pixel-based method processes coarse/medium-resolution satellite images to quickly point out affected zones, and an object-based method then processes areas of specific concern extracted from the detected affected zones.
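
A minimal sketch of this NDVI pre-screening step is given below, assuming the red and near-infrared bands (ASTER bands 2 and 3N at 15 m) are available as co-registered arrays; the NDVI-drop threshold of 0.3 is an assumed value, not one given in the paper.

# Sketch of the pixel-based NDVI pre-screening: flag pixels whose NDVI dropped sharply.
import numpy as np

def ndvi(red, nir):
    return (nir - red) / np.maximum(nir + red, 1e-6)

def candidate_affected(pre_red, pre_nir, post_red, post_nir, drop=0.3):
    # Boolean mask of pixels whose NDVI decreased by more than `drop` between the scenes;
    # the object-based analysis is then restricted to (a buffer around) this mask.
    return (ndvi(pre_red, pre_nir) - ndvi(post_red, post_nir)) > drop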

Fig. 3 presents the pre-event and post-event ASTER images in false color composite (FCC) together with their computed NDVI values. Obviously, a long strip of vegetation along the coastline was washed away. To demonstrate the object-based method, we focused on a small area in the lower part of the images. The vegetation objects extracted from the pre-event and post-event images are shown in Figs. 4a and 4b, with black as the background and a color code given by each object's ID and SCALE. The NDVI values computed from the pre-event and post-event images are shown in Figs. 4c and 4d, respectively. It seems easier to determine the extent of the tsunami-affected areas by observing the changes in NDVI values. However, for extracting detailed information, such as where vegetation was retained, the object-based method provided clearer information. The results of the object-based method are also ready to use in a GIS database, since each object was extracted with all of its associated attributes. Quantitatively, approximately 4.63 km² of washed-away vegetation was detected by the object-based method. The performance of the object-based method is further investigated with high-resolution satellite images in the next sub-section.

Figure 3. Affected area in Khao Lak, Phangnga as shown in the ASTER images: a) FCC pre-event, b) FCC post-event, c) NDVI pre-event and d) NDVI post-event.

Figure 4. Object-based detected vegetation from the a) pre-event and b) post-event images, and computed NDVI values from the c) pre-event and d) post-event images.
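
The area figure quoted above follows directly from the count of changed 15 m ASTER pixels; the one-line helper below (hypothetical, for illustration only) makes that arithmetic explicit, with a pixel count chosen so the result matches the reported order of magnitude.

# Converting a count of changed 15 m ASTER pixels into area in km².
PIXEL_SIZE_M = 15.0  # ASTER VNIR ground resolution

def changed_area_km2(n_changed_pixels, pixel_size_m=PIXEL_SIZE_M):
    return n_changed_pixels * pixel_size_m ** 2 / 1e6

print(changed_area_km2(20_578))  # about 4.63 km²; pixel count assumed to reproduce the figure above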

High-resolution image by QuickBird

Since it is difficult to demonstrate the capability of the object-based method using a large image, a small area along the coastline of Patong Beach, Phuket, Thailand, which was heavily damaged by the tsunami, was used. False color composite pre-event and post-event images of the demonstration area are shown in Figs. 5a and 5b, in which washed-away vegetation can be clearly observed. In addition, several small houses were seen to have disappeared. Extraction started with vegetation objects (Figs. 6a and 6b). Washed-away vegetation was obtained by logical comparison between Figs. 6a and 6b, with the result shown in Fig. 6c. Vegetation was successfully extracted from the pre-event image, and most of it could also be extracted from the post-event image. An advantage of object-based extraction is that individual stand-alone trees or bushes could be extracted, as shown in Fig. 6b. However, there were several omission errors, such as trees and grassland at the bottom of the image missed because of cloud shadow.

Figure 5. The study area presented in FCC: a) pre-event and b) post-event.

Figure 6. Extracted vegetation from the a) pre-event and b) post-event images, and c) location of washed-away vegetation in white.
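
An object-level version of the logical comparison behind Fig. 6c can be sketched as follows (an illustration, not the authors' code): each labelled pre-event vegetation object is flagged as washed away when only a small fraction of its footprint remains vegetated after the event. The 20% survival threshold is an assumption.

# Sketch of per-object change analysis for the pre-event vegetation objects.
import numpy as np
from skimage import measure

def washed_away_objects(pre_labels, post_veg_mask, survival_threshold=0.2):
    # Return the IDs of pre-event vegetation objects that are mostly absent after the event.
    gone = []
    for region in measure.regionprops(pre_labels):
        rows, cols = region.coords[:, 0], region.coords[:, 1]
        survived = post_veg_mask[rows, cols].mean()  # fraction of the object still vegetated
        if survived < survival_threshold:
            gone.append(region.label)
    return gone

if __name__ == "__main__":
    pre = np.zeros((50, 50), dtype=int)
    pre[5:15, 5:15] = 1                      # one pre-event vegetation object
    post = np.zeros((50, 50), dtype=bool)    # nothing vegetated after the event
    print(washed_away_objects(pre, post))    # -> [1]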

Soil objects were considered next. No soil objects were extracted from the pre-event image, as the area was completely covered by grass, trees, streets, and houses. In contrast, the post-event image presented many areas of soil and sand. Note that sand has a very high reflectance and was therefore classified as bright objects. The bright objects and soil objects extracted from the post-event image are shown in Fig. 7. These two object classes might well belong to the same class but were classified differently because of their intensity; with only four spectral classes it is not possible to further distinguish different types of soil. Some commission errors were found, as several buildings had a strong reflectance similar to that of sand. The extracted results show good agreement with those from visual interpretation.

Besides enabling object-based comparison, the method makes it easy to quickly compute the area of the affected zones. We found that 13,190 m² of the 24,490 m² of vegetation, including trees and grassland, was washed away. In addition, 26,390 m² of land is now covered by sand or mud, which implies that other objects, such as buildings, were totally collapsed or buried. By further extracting small buildings from the pre-event image and overlaying them with the areas covered by mud/sand in the post-event image, we found 50 destroyed building blocks with a total area of approximately 9,500 m².

Figure 7. Extracted results from the post-event image: a) bare soil and b) bright objects.
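
The building overlay described above can be sketched in the same spirit (an assumption-laden stand-in, not the authors' procedure): pre-event building objects whose footprints are largely covered by the post-event mud/sand mask are counted as destroyed, and their footprint areas are summed using the 0.6 m pan-sharpened pixel size. The 50% coverage threshold is assumed.

# Sketch of counting destroyed building blocks by overlay with the mud/sand mask.
from skimage import measure

PIXEL_AREA_M2 = 0.6 * 0.6  # pan-sharpened QuickBird pixel footprint

def destroyed_buildings(building_labels, mud_sand_mask, coverage_threshold=0.5):
    # Return (number of destroyed building blocks, their total footprint area in m²).
    count, total_px = 0, 0
    for region in measure.regionprops(building_labels):
        rows, cols = region.coords[:, 0], region.coords[:, 1]
        if mud_sand_mask[rows, cols].mean() >= coverage_threshold:
            count += 1
            total_px += region.area
    return count, total_px * PIXEL_AREA_M2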

Conclusions

A new object-based image analysis method was implemented for mapping the areas of Thailand affected by the 2004 Indian Ocean tsunami. By analyzing objects in a morphological scale-space, it was able to separately extract different types of land-cover objects of every size. The extracted results are ready for use in a GIS database and for further GIS analysis. Depending on the type of catastrophe and the study area, different object types can be used as the cue for mapping changes; in the case of a tsunami disaster, vegetation and bare soil were extracted. The test results on ASTER and QuickBird images acquired over the tsunami-affected areas of Thailand show good agreement with visual inspection results. Moreover, the areas and the total number of affected objects could be easily obtained. Computational time remains a problem to be improved. In addition, finding the best scale-parameter set and carrying out a sensitivity analysis will be addressed in further work.

Acknowledgments

The QuickBird scenes used in this study are owned by DigitalGlobe Co., Ltd.

References

Acton, S. T., and D. P. Mukherjee, 2000. Scale space classification using area morphology, IEEE Transactions on Image Processing, 9 (4), 623-635.

Andres, R. J., and W. I. Rose, 1995. Detection of thermal anomalies at Guatemalan volcanoes using Landsat TM images, Photogrammetric Engineering and Remote Sensing, 61, 775-782.

Barber, D. G., K. P. Hochheim, R. Dixon, D. R. Mosscrop, and M. J. Mcmullan, 1996. The Role of Earth Observation Technologies in Flood Mapping: A Manitoba Case Study, Canadian Journal of Remote Sensing, 22 (1), 137-143.

Eguchi, R. T., C. K. Huyck, B. Houshmand, B. Mansouri, M. Shinozuka, F. Yamazaki, and M. Matsuoka, 2000. The Marmara Earthquake: A View from Space, in The Marmara, Turkey Earthquake of August 17, 1999: Reconnaissance Report, Technical Report MCEER-00-0001, MCEER, 151-169.

Estrada, M., F. Yamazaki, and M. Matsuoka, 2000. Use of Landsat Images for the Identification of Damage due to the 1999 Kocaeli, Turkey Earthquake, Proceedings of the 21st Asian Conference on Remote Sensing, November 2001, Singapore, 1185-1190.

Massonnet, D., M. Rossi, C. Carmona, F. Adragna, G. Peltzer, K. Feigl, and T. Rabaute, 1993. The displacement field of the Landers earthquake mapped by radar interferometry, Nature, 364, 138-142.

Matsuoka, M., and F. Yamazaki, 1999. Characteristics of Satellite Images of Damaged Areas due to the 1995 Kobe Earthquake, Proceedings of the 2nd Conference on the Applications of Remote Sensing and GIS for Disaster Management, The George Washington University, CD-ROM.

Mitomi, H., J. Saita, M. Matsuoka, and F. Yamazaki, 2001. Automated Damage Detection of Buildings from Aerial Television Images of the 2001 Gujarat, India Earthquake, Proceedings of the IEEE 2001 International Geoscience and Remote Sensing Symposium, IEEE, CD-ROM.

Mouginis-Mark, P., S. Rowland, P. Francis, T. Friedman, J. Gradie, S. Self, L. Wilson, J. Crisp, L. Glaze, K. Jones, A. Kahle, D. Pieri, H. A. Zebker, A. Kreuger, L. Walter, C. Wood, W. Rose, J. Adams, and R. Wolff, 1991. Analysis of active volcanoes from the Earth Observing System, Remote Sensing of Environment, 36, 1-12.

Petrovic, A., O. Divorra Escoda, and P. Vandergheynst, 2004. Multiresolution segmentation of natural images: From linear to non-linear scale-space representations, IEEE Transactions on Image Processing, 13 (8), 1104-1114.

Repaka, S. R., D. D. Truax, E. Kolstad, and C. G. O'Hara, 2004. Comparing Spectral and Object Based Approaches for Classification and Transportation Feature Extraction from High Resolution Multispectral Imagery, Proceedings of the American Society for Photogrammetry and Remote Sensing, Denver, May 2004.

Saito, K., R. J. S. Spence, C. Going, and M. Markus, 2004. Using high-resolution satellite images for post-earthquake building damage assessment: a study following the 26 January 2001 Gujarat earthquake, Earthquake Spectra, 20 (1), 145-169.

Singhroy, V., and K. E. Mattar, 2000. SAR Image Techniques for Mapping Areas of Landslides, Proceedings of ISPRS Congress, Amsterdam, 2000, 1395-1402.

Vincent, L., 1992. Morphological Area Openings and Closings for Greyscale Images, Proceedings of the NATO Shape in Picture Workshop, Driebergen, The Netherlands, Springer-Verlag, 197-208.

Vohora, V. K., and S. L. Donoghue, 2004. Application of Remote Sensing Data to Landslide Mapping in Hong Kong, Proceedings of ISPRS Congress, July 2004, Istanbul, Turkey, DVD-ROM.

Vu, T. T., M. Matsuoka, and F. Yamazaki, 2005a. Detection and Animation of Damage in Bam City Using Very High-resolution Satellite Data, Earthquake Spectra Special Issue, 2003 Bam, Iran, Earthquake Reconnaissance Report, EERI (in press).

Vu, T. T., M. Matsuoka, and F. Yamazaki, 2005b. Towards object-based damage detection, Proceedings of the ISPRS Workshop DIMAI'2005, November 4-6, Bangkok, Thailand.

Willhauck, G., 2000. Comparison of object oriented classification techniques and standard image analysis for the use of change detection between SPOT multispectral satellite images and aerial photos, Proceedings of ISPRS Congress, Amsterdam, 2000.

Yamazaki, F., Y. Yano, and M. Matsuoka, 2005. Visual Damage Interpretation of Buildings in Bam City Using QuickBird Images, Earthquake Spectra Special Issue, 2003 Bam, Iran, Earthquake Reconnaissance Report, EERI (in press).