Land-Cover Mapping in Stockholm Using Fusion of ALOS PALSAR and SPOT Data


Land-Cover Mapping in Stockholm Using Fusion of ALOS PALSAR and SPOT Data

Johan Wallin

Master of Science Thesis in Geoinformatics
TRITA-GIT EX
School of Architecture and the Built Environment
Royal Institute of Technology (KTH)
Stockholm, Sweden
October 2008

TRITA-GIT EX ISSN ISRN KTH/GIT/EX--08/012-SE

Land-Cover Mapping in Stockholm Using Fusion of ALOS PALSAR and SPOT Data

Johan Wallin

October 2008


Acknowledgement

I would like to express my gratitude to my supervisor, Professor Yifang Ban, for her support throughout this work. This research was supported by a grant from the Swedish Research Council for Environment, Agricultural Sciences and Spatial Planning (FORMAS) awarded to Professor Ban.


Abstract

The objective of this research was to investigate the capabilities of land-cover classification using fusion of SAR data from the PALSAR sensor and optical data from the SPOT sensor, with a hierarchical approach. The study area was Stockholm, Sweden. Two dual-polarization PALSAR images and one multispectral SPOT HRG image, acquired in the summer of 2007, were used. The data was classified in two levels. First, the images were separated into four classes (Water, Forest, Urban and Open Area) with an artificial neural network (ANN) classifier. In the second step, these classes were refined by a hybrid classifier into Water, Forest, Low Density Built-up, High Density Built-up, Road, Park and Open Field. As some areas in the optical image were covered by clouds, a hierarchical classification using only PALSAR was also made. This classification was used to fill information gaps in the joint classification of SPOT and PALSAR. The result from the hierarchical classifier shows an overall accuracy increase of more than 10 percentage points compared to an ordinary ANN classifier (from 75.4% to 87.6%). The accuracy of all land-cover classes increased except for Low Density Built-up, where the two classifiers had approximately the same result. To test the capabilities of PALSAR for land-cover classification, two reference classifications using only ANN were created. The comparison of these two land-cover maps shows that the overall accuracy increases when PALSAR data is included compared to only using optical data. In particular, the accuracy of the classes Forest and Open Field increased: Forest from 87.6% to 94.0% and Open Field from 34.1% to 72.3%. The research shows that PALSAR data can to some degree be used to improve land-cover classification in urban areas, and that the hierarchical approach increases the classification accuracy compared to pixel-based classification.


Contents

Acknowledgement
Abstract
1 Introduction
1.1 Rationale for the Research
1.2 Research Objectives
2 Literature Review
2.1 Effects of SAR System Parameters on Urban Land-Cover Classification
2.1.1 Frequency
2.1.2 Polarization
2.1.3 Incidence Angle
2.1.4 Summary
2.2 Analysis Methods
2.2.1 Texture
2.2.2 Speckle Filtering
2.3 Optical data for urban land-cover mapping
2.4 Fusion of SAR and Optical Data
2.5 Image Classification
2.5.1 ANN Classifier
2.5.2 Object-Based Classification
2.5.3 Hierarchical Approach
3 Study Area and Data Description
3.1 Study Area
3.2 Data Description
3.2.1 PALSAR
3.2.2 SPOT
3.2.3 Vector Data
3.2.4 DEM

4 Methodology
Geometric Correction
Classification Scheme
Backscatter Profiles
Image Processing
Speckle Filter
Texture Filter
PCA analysis
Cloud Masking
ANN classification
Rule-based / Object-Based approach
Segmentation
Rules for Water
Rules for Forest
Rules for Low Density Built-up
Rules for High Density Built-up
Rules for Roads
Rules for Recreational Areas
Rules for Open Fields
Accuracy Assessment
5 Results and Discussion
Geometric Correction
Backscatter Profiles
Image processing
Texture
Speckle
Reference Classifications
ANN Classification
Segmentation Results
Rule-Based Classification
Fusion of SPOT and PALSAR
Summary
6 Conclusions
A Confusion matrices

List of Figures

3.1 Study area
3.2 PALSAR data
3.3 SPOT data
Flowchart
Backscatter profiles from PALSAR
Texture filters
Speckle filters
ANN classification
Segmentation
Classification results using SPOT and PALSAR
Classification results using PALSAR
Comparison of classification with and without PALSAR
Final Classification


List of Tables

2.1 Radar bands
3.1 Configuration of PALSAR sensor
Configuration of SPOT-5 HRG sensor
Classification scheme
Results of the geometric correction
Standard deviation of backscatter profiles
Accuracies for ANN classification using SPOT (used as a reference to evaluate the performance of the first broad classifier)
Accuracies for reference classification using SPOT and PALSAR
Accuracies for reference classification using SPOT
Accuracies for reference classification using PALSAR
ANN classification using SPOT and PALSAR
ANN classification using PALSAR
Accuracy of the hierarchical classification using both SPOT and PALSAR
Accuracy of the hierarchical classification using only PALSAR
Overall accuracies of the different classifiers
A.1 Confusion matrix for classification of SPOT and PALSAR
A.2 Confusion matrix for reference classification using PALSAR
A.3 Confusion matrix for classification of PALSAR
A.4 Confusion matrix for reference classification using SPOT
A.5 Confusion matrix for reference classification using SPOT and PALSAR


Chapter 1
Introduction

1.1 Rationale for the Research

The production of land-cover maps from satellite sensors is an important task. To name some applications, the maps produced can be used to extract information on change and growth of the urban area (and thereby also on decrease in green areas), they can be incorporated into state or local government GIS databases (Shackelford & Davis, 2003b), or they can be used for environmental planning. As many land-cover types have similar spectral signatures in the optical part of the spectrum, a combination of different types of sensors can be used to improve the classification. As new SAR sensors with higher resolution have been developed, fusion of SAR with optical data has become a growing field of interest (Orsomando & Lombardo, 2007; Ban, 2003; Ban & Hu, 2007). Data from SAR sensors and optical sensors can complement each other: because of the all-weather capability of SAR, the temporal frequency of SAR data is high, but its spatial resolution is often lower; optical data has lower temporal frequency but often better spatial resolution (Orsomando & Lombardo, 2007). Another reason for fusion is, as reported by Ban and Hu, that data from different parts of the spectrum provide complementary information and thereby often increase the classification accuracy (Ban et al., 2007). In recent years, work has been done trying to include SAR images in the classification of urban areas. The all-weather capability of SAR is one of the main reasons to use it for classification. Another is that SAR conveys the greatest amount of structural information for urban areas (Dell'Acqua & Gamba, 2003). SAR data can also be useful for filling information gaps in regions covered by clouds in optical images, and as Scandinavia is often covered by clouds, this is a valuable property of SAR.
Many different approaches for the joint classification of SAR and optical data have been investigated, and these methods have been proven to

generate good results for their specific land-use type, but as a method improves the classification accuracy for one land-use type, it might decrease the accuracy for another. In other words, one single method alone will have a problem classifying the whole area with high accuracy (Ban, 2003; Shackelford & Davis, 2003b). Shackelford and Davis have proposed a method to get around this problem: they use a normal maximum likelihood classifier to separate four well-defined groups (Grass-Tree, Road-Buildings, Water-Shadow, Bare Soil), then classifiers specialised for the specific land-use type are used to further separate the four classes into subclasses (Shackelford & Davis, 2003b). This method, called the hierarchical approach, is similar to the method described by Ban, called sequential masking classification, where easily distinguishable classes are masked out to let you focus on separating classes with similar signatures (Ban, 2003). Using a hierarchical approach allows you to use different classification methods for different land-use types.

Images acquired by the Advanced Land Observing Satellite (ALOS) and SPOT-5 are used. The ALOS platform carries three sensors, of which the Phased Array type L-band Synthetic Aperture Radar (PALSAR) is used for this study. From the SPOT satellite, a scene acquired in August 2007 by the multispectral HRG sensor is used, and from the PALSAR sensor, two scenes acquired in June and July 2007 respectively are used. There are two main reasons for choosing the PALSAR data. First, not much work has been done on the PALSAR sensor yet, and second, and most important, the data suits the objectives. The SPOT HRG sensor was chosen partly because its 10x10 meter resolution matches the 12.5 meter PALSAR resolution, but also because the sensor has bands in the near-infrared, mid-infrared and visible parts of the spectrum.
The PALSAR image together with the HRG image forms the base for the first broad classification in the hierarchy, where built-up, vegetation, bare soil and water are separated. The PALSAR data adds textural information needed for this classification, while the SPOT data adds spectral information. The hierarchical approach proposed by Shackelford and Davis is used: the data is separated into sets using a broad classifier, here an artificial neural network (ANN) classifier. The sets are then refined using different methods for the different subsets. For the built-up areas, a classifier combining object-based classification, rule-based classification and brightness values is used. The object-based classifier together with the rule-based classifier is used to separate buildings by the properties of their surrounding features. The brightness values from the optical multispectral data and SAR texture are used as further parameters in the classification. For the vegetation set, texture measures from the PALSAR data are combined with brightness values from the multispectral SPOT data for further classification. Some rule-based/object-based classifiers are also tested.
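In outline, the two-level scheme just described can be sketched as below. This is a hypothetical stand-in, not the thesis implementation: the feature names (`ndvi`, `texture`, `nir`, `brightness`) and all thresholds are invented for illustration, the broad stage in the thesis is a trained ANN, and the refinement is rule/object-based.

```python
def broad_classify(pixel):
    # Stage 1 stand-in for the broad ANN classifier: separate the four
    # broad classes (Water, Forest, Urban, Open Area) with toy thresholds.
    if pixel["ndvi"] > 0.5:
        return "Forest" if pixel["texture"] > 0.3 else "Open Area"
    return "Water" if pixel["nir"] < 0.1 else "Urban"

def refine_urban(pixel):
    # Stage 2 stand-in for the rule/object-based refinement of built-up areas.
    if pixel["brightness"] > 0.6:
        return "High Density Built-up"
    return "Low Density Built-up"

# One specialised refiner per broad class; classes without one are kept as-is.
REFINERS = {"Urban": refine_urban}

def hierarchical_classify(pixel):
    broad = broad_classify(pixel)
    refiner = REFINERS.get(broad)
    return refiner(pixel) if refiner else broad
```

The point of the structure is that each entry in `REFINERS` can use a completely different method (rules, object features, another network) without affecting the others.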

1.2 Research Objectives

The objectives of this research are to improve land-cover classification of urban areas using a hierarchical approach, and to investigate the capabilities of SAR data, and especially PALSAR data together with SPOT data, for land-cover classification of urban areas.


Chapter 2
Literature Review

In this chapter, the state of research for land-cover classification with SAR and optical data is presented, together with background information about the different techniques used for this research.

2.1 Effects of SAR System Parameters on Urban Land-Cover Classification

The important SAR system parameters are described below.

2.1.1 Frequency

The wavelengths in SAR systems are classified into different bands (table 2.1). Although some sensors can vary between different wavelengths, normally a sensor is built for one specific band. Wavelength determines whether the transmitted signal will be attenuated or dispersed by the atmosphere. Shorter wavelengths are more affected by the atmosphere, and for wavelengths below 4 cm the atmospheric effect can be serious. Wavelengths below 2 cm may also be affected by rain and clouds, causing shadows in the image. For longer wavelengths (P-band) at high altitude, the ionosphere will seriously affect the transmission (Lillesand et al., 2004, chap. 2). Wavelength also determines which surfaces look rough or smooth. A measure of this is the Rayleigh criterion, stating that a surface can be described as rough if the root mean square of the height differences of the surface is bigger than the wavelength divided by eight times the cosine of the local incidence angle; otherwise it is described as smooth. A smooth surface reflects most of the signal away, while a rough surface scatters the signal, reflecting more back to the sensor (Lillesand et al., 2004, chap. 8). The wavelength not only affects the apparent roughness of a surface, but also the resolution; all other parameters being the same, azimuth resolution will

Table 2.1: Radar bands (Mikhail et al., 2001)

Band   Wavelength (cm)   Frequency (GHz)
K      0.83-2.75         36.1-10.9
X      2.75-5.21         10.9-5.75
C      5.21-7.69         5.75-3.9
S      7.69-19.4         3.9-1.55
L      19.4-76.9         1.55-0.39
P      76.9-133          0.39-0.225

increase with decreasing wavelength. In a study on the differences in interpretability between X-band and L-band images of urban areas, it was concluded that the best result is achieved when combining both wavelengths, but if only one wavelength were to be used, X-band is preferable (Leonard Bryan, 1975). Haack compared like- and cross-polarized L-band and X-band images over Los Angeles, and found that X-band like-polarization was the best, and L-band cross-polarization the worst, for interpretability (Henderson & Lewis, 1998, chap. 15). For vegetation identification, a general rule is that when the wavelength approaches the mean size of plant components, the volume scattering increases. If the canopy is dense, this will lead to stronger backscatter. In general, shorter wavelengths (2-6 cm) are better for identification of crops and leaves, while longer wavelengths are better for tree trunks and limbs (Lillesand et al., 2004, chap. 8).

2.1.2 Polarization

Because SAR uses coherent waves, the sensor controls the orientation of the waves. The signals can be transmitted in either horizontal or vertical orientation, and when the signals interact with the earth surface, they are sometimes modified to include both polarizations (Mikhail et al., 2001, chap. 11). Some sensors can receive and transmit signals in both orientations, and combined this makes four different combinations of signals - HH, HV, VV and VH, where the first character marks the transmitted signal and the second the received signal. HH and VV are referred to as like-polarized, as the transmitted and received signals are oriented the same way, and HV and VH are referred to as cross-polarized. The cross-polarized return is 8-25 dB weaker than the like-polarized return (Simonette & Ulaby, 1983, chap. 4).
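The 8-25 dB figure is easier to interpret as a linear power ratio. A minimal conversion sketch (standard decibel arithmetic, not specific to any sensor in this thesis):

```python
import math  # kept for symmetry with the inverse, 10 * log10(ratio)

def db_to_power_ratio(db):
    """Convert a decibel difference to a linear power ratio (10 dB = 10x)."""
    return 10 ** (db / 10)

# An 8-25 dB gap means the cross-polarized return carries roughly
# 6x to 300x less power than the like-polarized return.
low = db_to_power_ratio(8)
high = db_to_power_ratio(25)
```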
Like-polarized reflection is mostly caused by surface and volume scattering, while the cross-polarized return is mostly caused by either multiple scattering due to surface roughness or multiple volume scattering caused by inhomogeneities (Simonette & Ulaby, 1983, chap. 4). The orientation of the object strongly influences the amount of backscatter in the image. The images produced by the different combinations of polarization will

not be identical, but instead help us to differentiate geographic features. Cross-polarized images are reported to be the best choice for detecting shopping centres, institutional complexes and industrial areas - all lacking natural vegetation - while like-polarized images are reported to be better for discriminating vegetated areas within an urban complex. Cross-polarized images are also better for detecting linear features that are not parallel to the flight direction; in general, the effect of orientation is bigger in HH than in HV (Henderson & Lewis, 1998, chap. 15). In a study on differences between HH and HV in X-band images covering Los Angeles, it was concluded that when comparing 7 urban land-cover classes, although they did have differences in response, the only one with a statistically different signal response in HH compared to HV was the commercial category. The land-cover classes with the most similar response in both polarizations were transportation, residential and industrial. It was also concluded that no single polarization is preferred for all land-cover categories within the urban area (Henderson & Mogilski, 1987).

2.1.3 Incidence Angle

Incidence angle is defined as the angle between the radar line of sight and the local vertical (the normal to the geoid) at the target location. For a horizontal imaging plane, the incidence angle is the complement of the depression angle, defined as the angle between the local horizontal at the antenna and the radar line of sight (Mikhail et al., 2001, chap. 11). For urban remote sensing, the incidence angle affects the detectability of settlements, as a lower incidence angle results in coarser range resolution. A low incidence angle also makes the effect of radar shadow bigger; settlements on back slopes become invisible.
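As a rough illustration of why a steep (small) incidence angle degrades ground-range resolution, the slant-to-ground projection can be sketched as follows. The pulse length used here is an arbitrary assumed value, not a parameter of any sensor discussed in this thesis:

```python
import math

C = 3.0e8  # speed of light, m/s

def ground_range_resolution(pulse_s, incidence_deg):
    """Ground-range resolution c*tau / (2 sin theta) for a pulse of length tau:
    the smaller the incidence angle, the coarser the resolution on the ground."""
    return C * pulse_s / (2 * math.sin(math.radians(incidence_deg)))

# For a hypothetical compressed pulse of 0.1 microseconds:
steep = ground_range_resolution(1e-7, 20)    # ~44 m at 20 degrees
shallow = ground_range_resolution(1e-7, 45)  # ~21 m at 45 degrees
```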
In a study on the effect of radar system parameters on settlement detection using the SIR-B sensor, it was concluded that images with a steep incidence angle (below 20°) were of minimum utility. These images had the poorest resolution, thereby limiting the ability to locate small features. The study also showed that the overall detectability and detection accuracy increased with incidence angle until the angle reached 40.9°. After this point, the usability decreased. The study concluded that this might be the best angle for settlement detection, but that more work needs to be done in the area (Henderson, 1995). In a multisensor analysis for land-use mapping in Tunisia, it was concluded that SIR-A images were better than SEASAT images for settlement detection. The two sensors had the same wavelength, polarization and spatial resolution. The only major difference was the depression angle: SIR-A has a depression angle of 43°, while the angle of SEASAT is 67° (Henderson & Lewis, 1998, chap. 15).

2.1.4 Summary

This section has described how the system parameters of SAR affect urban land-cover classification. To summarize: the best choice of frequency, if both X- and L-band are available, is a combination of both. If only one is to be used, X-band is preferable. When it comes to polarization, no single polarization is best for all features, although cross-polarized returns have been reported to be the best choice for detecting vegetated areas inside the urban complex, and for mapping industrial complexes. Finally, the image used should not be acquired with too small an incidence angle (below 20°).

2.2 Analysis Methods

2.2.1 Texture

When humans interpret a remotely sensed scene, they take into account things like context, edges, texture and tonal variation. This can be compared to computer processing, where often only tonal information is used. To make the computer interpretation of images more like that made by humans, and thereby improve the classification accuracy, texture filters are often included in the process of image classification (Jensen, 2005, chap. 8), (Ban & Wu, 2005). Texture can be defined as differences in discrete tonal values over a specific area. A useful way to estimate this when classifying SAR images is spatial autocorrelation (Henderson & Lewis, 1998, chap. 2). Autocorrelation measures how changes are related to distance. In SAR images, typically two different types of texture are present: scene texture and speckle. Scene texture can be compared to texture in optical images; it is caused by the nature of the surface you are looking at. Speckle, on the other hand, is caused by the radar or the processing system (Henderson & Lewis, 1998, chap. 2). Speckle, and how to deal with it in image classification, is described in the next section. In one study, Dell'Acqua and Gamba compared the results of different texture filters for classification of an ERS-1 image over the urban and suburban areas of Pavia, northern Italy.
Eight different second-order texture filters, all based on the co-occurrence matrix, were used: contrast, correlation, dissimilarity, entropy, mean, homogeneity, second moment and variance. The study concludes that mean, entropy, second moment and variance, with a block area of 21 pixels, were among the best choices. Some of the filters that were best when used alone were strongly correlated, and as a result, the best combination of four filters was dissimilarity, entropy, mean and variance (Dell'Acqua & Gamba, 2003).
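A minimal sketch of how such co-occurrence-based measures are computed: a grey-level co-occurrence matrix (GLCM) for a single pixel offset over a tiny integer image, followed by three of the measures named above. Real implementations slide a window over the image, quantize grey levels and average several offsets; none of that is shown here.

```python
import math
from collections import Counter

def glcm(img, dy=0, dx=1):
    """Normalised grey-level co-occurrence matrix for one pixel offset."""
    pairs = Counter()
    rows, cols = len(img), len(img[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                pairs[(img[r][c], img[r2][c2])] += 1
    total = sum(pairs.values())
    return {k: v / total for k, v in pairs.items()}

def texture_measures(p):
    """Entropy, angular second moment and contrast from a normalised GLCM."""
    return {
        "entropy": -sum(v * math.log(v) for v in p.values()),
        "second_moment": sum(v * v for v in p.values()),
        "contrast": sum(v * (i - j) ** 2 for (i, j), v in p.items()),
    }
```

A perfectly uniform patch gives zero entropy and zero contrast; a checkerboard gives the maximum contrast for two grey levels, which is the kind of difference the texture channels feed into the classifier.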

2.2.2 Speckle Filtering

All radar images contain some degree of speckle. It is caused by reflected signals being in or out of phase, which makes pixels in the image brighter or darker, seemingly at random (Lillesand et al., 2004, chap. 8). Speckle can be reduced by multi-look processing, where several looks of the same area are averaged together, but this degrades the spatial resolution of the image (Nyoungui et al., 2002). Another way of reducing speckle is to use a speckle filter. Many different filters have been developed for this purpose, but when using a speckle filter there is always a trade-off between suppressing speckle and preserving edges; the two goals are hard to combine. No single filter can be said to be the best. It all depends on the data used, what land cover is to be classified, what resolution the image has, and so on. Some research has however been done comparing the different filters; for example, Nyoungui et al. concluded that for land-cover classification the Lee LS filter gave superior results for visual and computer interpretation (Nyoungui et al., 2002).

2.3 Optical data for urban land-cover mapping

With the development of high-resolution remote sensors, and better image segmentation and classification techniques, the possibilities for detailed urban land-cover mapping are boundless. But there will still be a need for the broader regional coverage that only medium-resolution imagery (like Landsat TM, SPOT HRG etc.) can provide. The main difficulty when using medium-resolution data in urban areas is the mixed pixels (Lee & Lathrop, 2005). In a study on urban classification over Houston, Texas using a Landsat ETM+ scene, a fuzzy spectral mixture analysis (SMA) was developed to minimize the problem of mixed pixels. SMA as a method aims to map the fractions of landscape classes inside a mixed pixel.
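The mixed-pixel idea can be sketched for the simplest linear case of two endmembers; this is a generic least-squares unmixing illustration, not the fuzzy SMA algorithm of the cited study:

```python
def unmix_two_endmembers(pixel, e1, e2):
    """Least-squares fraction f of endmember e1 in a mixed pixel, assuming
    pixel ~ f*e1 + (1-f)*e2 across bands; clamped to the physical range [0, 1]."""
    # Minimising ||(pixel - e2) - f*(e1 - e2)||^2 gives a closed-form f.
    num = sum((p - b) * (a - b) for p, a, b in zip(pixel, e1, e2))
    den = sum((a - b) ** 2 for a, b in zip(e1, e2))
    f = num / den
    return max(0.0, min(1.0, f))
```

With more endmembers this becomes a small constrained least-squares problem per pixel, which is what linear mixture modeling solves.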
The fuzzy SMA approach was shown to reduce the mean absolute (classification) error compared to maximum likelihood from 0.58 to 0.18 (Tang et al., 2007). Another approach for extracting more information from mixed pixels was tested for classification of Landsat ETM+ imagery covering urban areas of New Jersey. Linear mixture modeling (LMM) was used to unmix the pixels, and the final classification was compared to a maximum likelihood classification of an IKONOS image. The study concluded that the method, together with Landsat ETM+ imagery, could provide a reasonably accurate means of estimating urban land-cover proportions (Lee & Lathrop, 2005). SPOT data has also been used for urban land-cover classification. One example is presented in a study where SPOT data is used to capture the spatial patterns of Beijing. The study reports that the SPOT data, when

also including GLCM texture measures, clearly shows the spatial patterns of the city (classification accuracy 79%) (Zhang et al., 2003). Another type of sensor often used for urban land-cover classification is high-resolution sensors like QuickBird and IKONOS. One example is presented in a study where wavelet decomposition is used for urban feature extraction. The results are reported to be promising, and usage of IKONOS images is also suggested for this method (Ouma et al., 2006).

2.4 Fusion of SAR and Optical Data

Traditionally, urban land-cover classification has been done using optical images. In recent years, some research has been done trying to include SAR images in this process. When land-cover classification is performed, it is always good to have information from many parts of the spectrum, and as SAR data represents a different part of the spectrum than optical imagery does, it can add information to the classification process, especially as many classes in the urban complex have similar signatures (Kuplich et al., 2000; Ban, 2003; Ban & Hu, 2007). The all-weather capability of SAR data has also proven useful: it can fill in gaps of cloud cover in the optical images (Kuplich et al., 2000; Ban & Hu, 2007), and it also makes data available at more times; the SAR data can compensate for its lack of detail with high temporal frequency, while the optical image can compensate for its lower temporal frequency with more detail (Orsomando & Lombardo, 2007). In a study on fusion of RadarSAT and QuickBird data for classification of the urban areas of Toronto, it was concluded that the RadarSAT data was able to increase the classification accuracy compared to only using the QuickBird data. The accuracy of the soybean class increased from 71% to 90%, the accuracy of rapeseed increased from 78% to 88% and the accuracy of commercial-industrial areas increased from 63% to 76% when including RadarSAT data compared to only using QuickBird.
The SAR data was also used to fill in areas of clouds in the optical data (Ban et al., 2007). In another study, on synergy between Landsat TM and ERS-1, the radar data was reported to help the discrimination of built-up areas due to strong returns from corner reflectors, while not helping to discriminate between different crops. The authors concluded that the radar data could be seen as an important tool for the discrimination of certain land-cover classes (Kuplich et al., 2000). This was also reported by Ban: the classification of crops using only ERS-1 SAR did not give satisfying results, but the combination of Landsat TM and ERS-1 data was reported to increase the classification accuracy by 8.3% compared to only using Landsat TM (Ban & Howarth, 1999).

2.5 Image Classification

2.5.1 ANN Classifier

Conventional classifiers make assumptions about the data being classified; for example, many classifiers assume that the data is normally distributed. For SAR data, this assumption does not hold: due to speckle, radar data does not follow a Gaussian distribution. An alternative to the classical classifiers is the artificial neural network (ANN) classifier. The ANN presents a distribution-free, nonparametric approach (Ban, 2003). Neural networks simulate the thinking of the human mind, where neurons are used to process incoming information. In general, an ANN can be considered to comprise a large number of simple interconnected units that work in parallel to categorize the incoming data into output classes. An ANN reaches a solution not in a step-by-step manner or through a complex logical algorithm, but in a non-algorithmic way, based on adjustments of the weights of connected neurons. Because an ANN does not assume a normal distribution, and because of its ability to adaptively simulate complex and nonlinear patterns, the ANN classifier has shown superior results compared to statistical classifiers (Foody et al., 1994), (Jensen, 2005, chap. 10).

2.5.2 Object-Based Classification

In a pixel-based classifier every pixel is classified separately, without considering the formation the pixel is a part of, or what features surround it. Traditionally, only pixel-based classifiers have been used in remote sensing. An alternative to the pixel-based approach is the object-based one. By segmenting the image, in a meaningful way, into objects consisting of many pixels, object-based classifiers typically incorporate both spatial and spectral information. Compared to a pixel-based approach, where only spectral and textural information is considered, the object-based approach can also incorporate shape characteristics and neighbourhood relations (context) into the classification (Shackelford & Davis, 2003a; Ban & Hu, 2007), (Jensen, 2005, chap. 9). Several studies have indicated that, especially when working with high-resolution images, object-based rather than pixel-based classification should be used, to get the full potential of the image. Object-based classification will also result in a more homogeneous and more accurate mapping product, with higher detail in the class definitions (Ban & Hu, 2007). Object-based classification starts with segmentation, where the image is divided into objects. The aim of segmentation is to create objects with a minimum of interior heterogeneity, where heterogeneity is defined regarding both spectral and spatial variance. In the popular classification program eCognition, homogeneity in the spatial domain is defined

26 by compactness and smoothness. Compactness is measured as a ratio between the objects border length to the objects total number of pixels, while smoothness is measured as a ratio between the objects border length and length of the objects bounding box (Ban & Hu, 2007), but there are of course many other ways of defining the heterogeneity (Jensen, 2005, chap. 9). The actual classification of the image is done in a different manner than that in a pixel-based approach. The analyst is not constrained to just using the spectral information, but can use both the mean spectral information of the segment, or various shape measures. This introduces flexibility and robustness (Jensen, 2005, chap. 9) Hierarchical Approach Often, not one single classification method, or one single set of data are appropriate for the classification of all features in an image. The hierarchical approach allows for the use of specific classification methods for different classes. In the hierarchical approach, you first classify the image into broad categories, with similar signatures. These broad classes are then further separated into finer classes (Shackelford & Davis, 2003b). An approach where the same classifier is not used for all land cover types is referred to as a hybrid approach (Lo & Choi, 2004). An example of both hybrid and Hierarchical approach has been presented by Ban. The method described is called Sequential-masking approach. In this method the most distinct feature is classified first, and then masked out before the next land-cover type is classified. Not all features are classified with the same image; instead images from different dates are used for different features (Ban & Howarth, 1999). Another example of this approach is presented by Shackleford and Davis. In their attempt to classify IKANOS images over urban areas, they use both techniques. 
First, the image is classified using a normal maximum likelihood classifier to separate four well-defined groups (Grass-Tree, Road-Buildings, Water-Shadow, Bare Soil). They then use methods specially designed for each group to further refine them. For example, they show that the entropy texture measure was good for separating grass from trees (Shackelford & Davis, 2003b).

Chapter 3
Study Area and Data Description

3.1 Study Area

The study area for this research is Stockholm. Stockholm is the political and economical centre of Sweden. The city is located in the eastern part of Sweden, at the boundary between Lake Mälaren and the Baltic Sea. In 2006, 1.9 million people were living in the Stockholm region (stockholm.se). The city is growing at a rate of approximately 1% per year, giving an expected size of the region of between 2.2 and 2.4 million inhabitants. The settlement structure of Stockholm includes a clear inner structure, with a strong regional core, radial settlement bands, green wedges and a large archipelago. The central parts of Stockholm are built on some of these islands. The green wedges, of which there are ten, give the city green corridors all the way into the central parts of the city. The archipelago consists of a large number of islands, of which many are populated, utilized and preserved (RUFS, 2002). The Stockholm region contains a large diversity of land-cover types: residential areas, dense urban centres, roads, agricultural areas, parks, and recreational areas. The average temperature of Stockholm is 16°C during summer and -3°C to -5°C during winter. The annual amount of rain is between 450 and 650 mm. The fact that most of the rain falls during summer and fall makes it difficult to find good optical remotely sensed images of Stockholm. In the last ten years, a dramatic change in climate has appeared: statistical data shows that the temperature has increased by almost one degree compared to data from 1961 to 1990, and over the same period, the amount of rain has increased by about 10% (SMHIa, 2005; SMHIb, 2005).

Figure 3.1: Study area. The left part of the image shows Sweden. The pink box marks the position on the map of the zoomed-in area to the right, where the SPOT image marks the actual study area. Vector data property of Lantmäteriet.

3.2 Data Description

3.2.1 PALSAR

The Advanced Land Observing Satellite (ALOS) was launched in January 2006 by the Japan Aerospace Exploration Agency (JAXA). The satellite is placed in a sun-synchronous orbit at 691 km, with a temporal pass structure of 17 or 29 days. Onboard, it carries three instruments: a panchromatic sensor for stereo mapping (PRISM), a multispectral sensor (AVNIR-2) and the Phased Array L-band Synthetic Aperture Radar (PALSAR). PALSAR is an enhanced version of the SAR carried on JERS-1. The sensor is a fully polarimetric instrument operating in the L-band, with a centre frequency of 1270 MHz (a wavelength of 23.6 cm). There are five different modes for PALSAR: Fine Beam Single polarization, Fine Beam Dual polarization, Polarimetric mode, ScanSAR mode and Direct transmission mode (Rosenqvist & Shimada, 2007). In this research, two PALSAR images are used, both acquired in the Fine Beam Dual Polarization mode during the summer of 2007 (figure 3.2). The first

Figure 3.2: Two PALSAR images acquired in the Fine Beam Dual Polarization mode during the summer of 2007. The left image was acquired June 18, and the right July 17. Both are used for the land-cover classification in this thesis.

Table 3.1: Configuration of the PALSAR sensor, where FBS means Fine Beam Single polarization, FBD means Fine Beam Dual polarization, DSN means Direct Downlink, PLR means Polarimetry and WB1 means ScanSAR. Out of the possible 132 modes, this table shows the standard configurations (ESA). For this thesis, an image acquired in FBD mode was used.

Mode  Pixel Spacing  Polarisation
FBS   12.5 m         HH
FBD   12.5 m         HH/HV
DSN   12.5 m         HH
PLR   12.5 m         HH/HV + VV/VH
WB1   100 m          HH

image was taken June 18, and the second July 17. The ground resolution of the images is 12.5 x 12.5 meters and the off-nadir angle 34.3 degrees. The polarizations in both images are HH and HV.

3.2.2 SPOT

The first SPOT satellite was launched in February 1986 by the French government in partnership with Sweden and Belgium. The satellite began a new era in remote sensing by using a linear array sensor and employing pushbroom scanning techniques. Shortly before the first satellite retired in 1990, SPOT-2, with a similar configuration to SPOT-1, was launched. The same configuration was also used for SPOT-3, launched in . SPOT-4, an improved version of SPOT 1-3, was launched in . On May , the SPOT program entered a new era with SPOT-5, carrying two High Resolution Geometric (HRG) sensors. These sensors can be used to acquire either panchromatic images, with a resolution of 2.5 or 5 meters, or multispectral images. The resolution of the multispectral images is 10 meters in green, red and near infrared, and 20 meters in the mid-infrared. The HRG sensors are also pointable within ±31°, to shorten the revisit time (Lillesand et al., 2004, chap. 6). For this research, a multispectral HRG scene acquired by SPOT-5 in July 2007 was used (figure 3.3).

3.2.3 Vector Data

As reference data for the geometric correction of the images, vector data from the Swedish Land Survey was used. This dataset, called Terrängkartan, has a resolution of 5 x 5 meters and is projected in the Swedish system RT gon W (Swedish Land Survey). The dataset includes all features normally used in a map, but for the geometric correction only the road data was used. For orientation purposes, the contour lines were included in the process.

3.2.4 DEM

An elevation model with a resolution of 50 x 50 meters from the Swedish Land Survey was used for the ortho-rectification of the SPOT and PALSAR data. This DEM is also projected in RT gon W.

Figure 3.3: A multispectral HRG scene acquired by SPOT-5 July 27, 2007. Here shown in false colour: R: NIR, G: Red, B: Green.

Table 3.2: Configuration of the SPOT-5 HRG sensor (Lillesand et al., 2004).

Spectral bands (resolution):  Panchromatic P (5/2.5 m); Multispectral B1-B3 (10 m); Short-wave infrared B4 (20 m)
Swath:                        60 km to 80 km
Incidence angle:              ±31.06°
Revisit interval:             1 to 4 days


Chapter 4

Methodology

The methodology for this study is briefly described by the flowchart in figure 4.1. As some of the study area is covered by clouds in the SPOT image, two different classifications are made: one using both SPOT and PALSAR (the left flow in the flowchart) and one using only PALSAR (the right flow). The classification using only PALSAR is used only for the areas covered by clouds in the SPOT image; for all other areas, the joint classification of SPOT and PALSAR is used. For both classifications, the hierarchical approach proposed by Shackelford and Davis is used: the data is separated into sets using a broad classifier, here an artificial neural network (ANN), and the sets are then refined using different methods for the different subsets.

The first step in the flow was to geometrically correct and orthorectify both images. The SPOT image was corrected using a rational model with GCPs, while the PALSAR image was first corrected using orbital models. When this was done, backscatter profiles were extracted from all PALSAR layers. The SAR data also went through a process of speckle and texture filtering.

The ANN classifier was used to separate the images into four broad, easily distinguished classes: Urban, Water, Open Area and Forest. This was done once using only PALSAR and once using a combination of PALSAR and SPOT. The four classes were then refined to their final classes (Water, Forest, Low Density Built-up (LD), High Density Built-up (HD), Road, Recreational Area and Open Field) by different methods for every class, using the object-based program eCognition. To verify whether the hierarchical approach managed to improve the land-cover separation for the seven classes, reference classifications were created using an ANN classifier. The accuracy of the two resulting land-cover maps was then compared with the accuracy of the reference maps.
Finally, the two classifications were merged: the cloud-masked gaps in the joint classification were filled with the PALSAR-only classification.

Figure 4.1: Flowchart of the methodology described in chapter 4, from geometric correction and filtering of the SPOT and PALSAR data, through the first ANN classification, to the final classification and accuracy assessment of the cloud-free (joint) and clouded (PALSAR-only) areas.

4.1 Geometric Correction

When combining different datasets in one project, the images need to be corrected for geometric errors. Satellite images contain two kinds of geometric errors: systematic and non-systematic. The systematic errors can be caused by scan skew, panoramic distortion, platform velocity nonlinearities, perspective geometry and Earth rotation; these can be corrected using data from the platform. The non-systematic errors are mainly caused by variations over time in the position and attitude angles of the satellite platform, and can only be corrected using Ground Control Points (GCPs) (Sertel et al., 2007). GCPs are points with known coordinates both in the image and in a terrestrial coordinate system. These points are used in a mathematical model, usually a polynomial, affine or rational model, to correct the image by a least-squares adjustment (Sertel et al., 2007).

In this research, the PALSAR images were first imported using the SAR orthorectification tool in Geomatica's OrthoEngine. This tool uses orbit parameters from the satellite platform to correct the image. After this initial processing, both the PALSAR images and the SPOT image were further corrected using the rational model. The rational model is a simple mathematical model that builds a correlation between the pixels and their ground locations, using two polynomial functions for row and two for column. This model is more accurate than the polynomial model, as it includes elevation in the correction. When geometrically correcting an image, the number of coefficients needs to be decided. Adding more coefficients will make the fit better close to the GCPs, but will introduce new significant errors away from the control points (PCI Geomatica).

4.2 Classification Scheme

To successfully extract land cover/land use maps from remotely sensed data, classes must be carefully selected.
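The least-squares adjustment over GCPs mentioned in section 4.1 can be sketched for the simplest (affine) model. This is an illustrative sketch, not the rational model used in the thesis, and the GCP coordinates and 12.5 m pixel size below are invented:

```python
import numpy as np

def fit_affine(img_xy, ground_xy):
    """Fit E = a0 + a1*x + a2*y (and likewise for N) by least squares."""
    A = np.column_stack([np.ones(len(img_xy)), img_xy])  # design matrix [1, x, y]
    coeffs, *_ = np.linalg.lstsq(A, ground_xy, rcond=None)
    return coeffs  # shape (3, 2): column 0 -> easting, column 1 -> northing

# Hypothetical GCPs: image (col, row) against ground (E, N) in metres.
img = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 50]], float)
ground = img * 12.5 + np.array([650000.0, 6570000.0])  # 12.5 m pixels
coeffs = fit_affine(img, ground)

# Residuals at the GCPs give RMS errors of the kind reported in table 5.1.
residuals = img @ coeffs[1:] + coeffs[0] - ground
rms = np.sqrt((residuals ** 2).mean(axis=0))
```

With exact synthetic GCPs the residuals are essentially zero; with real GCPs the same residual computation yields the X/Y RMS values used to judge the correction.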
The classes must be mutually exclusive (no overlap between classes), exhaustive (all land cover/land use types present in the image must be included), and hierarchical. This requires the use of classification schemes, with correct definitions of the classes and a logical structure (Jensen, 2005, chap. 9). One of the most commonly used classification schemes was created by the U.S. Geological Survey. It is a multilevel classification scheme, including all types of classes. As some of the classes in this scheme require very high resolution images to be extracted, many organizations have tried to modify it to better suit the classification of remotely sensed images. One of the most used modifications is the one developed for the National Land Cover

Table 4.1: Classification scheme. Left, the classification scheme based on the U.S. Geological Survey; right, the classes developed for this thesis.

Original Classes                                     Developed Classes
1  Water                                             1  Water
11 Open Water                                        11 Water
12 Perennial Ice/Snow
2  Developed                                         2  Developed
21 Low Intensity Residential                         21 Low Density Built-up
22 High Intensity Residential                        22 High Density Built-up
23 Commercial/Industrial/Transportation              23 Roads
3  Barren
31 Bare Rock/Sand/Clay
32 Quarries/Strip Mines/Gravel Pits
33 Transitional
4  Forested Upland                                   4  Forested Upland
41 Deciduous Forest                                  41 Mixed Forest
42 Evergreen Forest
43 Mixed Forest
5  Shrubland
51 Shrubland
6  Non-Natural Woody
61 Orchards/Vineyards
7  Herbaceous Upland Natural/Semi-natural Vegetation
71 Grasslands/Herbaceous
8  Herbaceous Planted/Cultivated                     8  Herbaceous Planted/Cultivated
81 Pasture/Hay                                       81 Agricultural Fields
82 Row Crops                                         82 Urban/Recreation
83 Small Grains
84 Fallow
85 Urban/Recreation
86 Grasses
9  Wetland
91 Woody Wetlands
92 Emergent Herbaceous Wetlands

Dataset and Coastal Change Analysis Program (NOAA) (table 4.1) (Jensen, 2005, chap. 9). This scheme forms the basis of the classification scheme used in this research.

No classification scheme is ideal for all situations, and they therefore have to be modified (Estes & Thorley, 1983, chap. 30). It is not appropriate to try to extract classes that, because of spatial resolution and interpretability, are extremely difficult to obtain. Classes from the scheme not present in the scene used must also be excluded. A modified classification scheme based on the one used for the National Land Cover Dataset and NOAA has been developed for this research (shown in table 4.1). First, the classes Perennial Ice/Snow, Barren, Shrubland, Non-Natural Woody, Herbaceous Upland Natural/Semi-natural Vegetation and Wetland were removed, as they are not present in the scenes used. As no field data, nor enough multidate images to classify by phenological cycles (Ban & Howarth, 1999), were available, different crops could not be separated. The Herbaceous Planted/Cultivated classes were therefore merged into only two categories: Open Field and Recreational Area. Regarding the class Commercial/Industrial/Transportation: as Commercial and Industrial could not be separated from High Density Built-up by either pixel- or object-based classification, they were included in the class High Density Built-up. Transportation formed the class Road.

4.3 Backscatter Profiles

In the processing of the radar images, a scaling operation has been performed. Each digital number (DN) in the pixels represents a magnitude value of the cross-product component in the data. To extract calibrated data, this process has to be reversed. From the DN, a unitless Beta Nought value (β) is calculated with the equation

    β_j = 10 log10( (DN_j² + A3) / (A2 · ratio) )        (4.1)

where A2 is the constant scaling gain, A3 is a fixed offset and ratio is a fixed ratio (for PALSAR always ), for the j-th pixel.
The values can be found in the metadata of the files. To extract backscatter values (σ, Sigma Nought), the Beta Nought has to be corrected for the incidence angle with the formula

    σ_j = β_j + 10 log10( sin(I_j) )        (4.2)

where I_j is the incidence angle of the j-th pixel.

Calculating the incidence angle is actually the most difficult part of the process, and includes calculations of Earth radius, satellite altitude and slant range for each pixel (JAXA, 2008). Fortunately, Geomatica includes a set of toolboxes for these calculations: CDSAR, SARINCD and SARSIGM (PCI Geomatica). These toolboxes were used to extract Sigma Nought for each pixel in the four images (June 18 HH and HV, July 17 HH and HV).

To extract backscatter profiles, training areas of each land-cover type must be selected. To be able to select the same training areas in all of the Sigma Nought images, the images must be geometrically corrected. This was done with the parameters described in section 4.1, using nearest neighbour resampling. From each land-cover type, about 2000 pixels were selected, and the mean of each class was calculated in Matlab.

4.4 Image Processing

4.4.1 Speckle Filter

Four of the most commonly used speckle filters are Lee, Kuan, Frost and Gamma. Lee and Kuan work in the same way, but with different signal model assumptions: they compute linear combinations of the centre pixel and the average of the other pixels in the window. This way the filter maintains a balance between an averaging filter and an identity filter. The Frost filter also adapts to the amount of variation within the window, but by forming an exponentially shaped filter kernel; if the window contains large variations, little averaging is done, while in more homogeneous windows the filter averages. The Gamma filter and the enhanced Lee and Frost filters have three different modes: if the variation is below a lower threshold, pure averaging is used; if it is above a higher threshold, strict all-pass filtering is performed; and if the variance is between the two thresholds, a balance of the two is applied, as with the Lee, Kuan and Frost filters (Yu & Acton, 2002).
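The basic Lee behaviour described above, a local-statistics weighting between the window mean and the centre pixel, can be sketched with numpy and scipy. The noise-variance parameter and the test image are assumptions for illustration:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=5, noise_var=0.25):
    """Basic Lee filter: a linear combination of the local mean and the
    centre pixel, weighted by how much the local variance exceeds the
    assumed speckle noise variance."""
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img ** 2, size)
    var = np.maximum(sq_mean - mean ** 2, 0.0)
    weight = var / (var + noise_var)      # ~0 in flat areas, ~1 on edges
    return mean + weight * (img - mean)

# Speckled flat area: filtering should reduce the variance markedly.
rng = np.random.default_rng(0)
speckled = 5.0 + 0.5 * rng.standard_normal((64, 64))
filtered = lee_filter(speckled, size=5, noise_var=0.25)
```

In homogeneous regions the weight stays near zero and the output approaches the window mean, while high-variance edges pass through almost unchanged, which is exactly the averaging/identity balance described in the text.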
To choose among the speckle filters, a comparison between Enhanced Frost, Enhanced Lee, Frost, Gamma, Kuan and Lee filters was made. All of the filters were used for a test classification, using the ANN classifier, of a sample area of the PALSAR scene. In the test, four classes were used: Open Field, Forest, Built-up Area and Water. Five different window sizes were tested for every speckle filter: 3, 5, 7, 9 and 11 pixels.

4.4.2 Texture Filter

Two types of texture analysis often used are first- and second-order statistics. First-order statistics include filters like mean, variance, standard deviation and entropy (Jensen, 2005, chap. 8). Second-order statistics are based on the grey-level co-occurrence matrix (GLCM). The GLCM method involves two steps. First, the spatial information of an image is calculated by a co-occurrence matrix moving over the image. Second, the GLCM information calculated in the first step is used to compute statistics describing the spatial information according to the relative positions of the matrix elements, typically including measures like angular second moment, inverse difference moment, contrast, entropy and correlation (Gong et al., 1992).

In this research, the information extracted from the texture in the SAR image should be used to extract the four classes Water, Forest, Urban and Open Area. Visual examination of the image made it clear that the largest difference in texture was that between Forest and Open Area, so the main purpose of the texture filter was to extract this difference clearly. A comparison similar to the one made with the speckle filters was done to evaluate which filter or filters to use; as described in section 5.3.1, the chosen filters were dissimilarity, entropy and variance.

4.4.3 PCA Analysis

The neural network classifier in Geomatica can take a maximum of 16 layers as input. With all three texture filters applied to all four images (June 18 and July 17, both HH and HV), 12 layers were produced from the texture analysis alone, and as speckle-filtered images and optical images were also used, the layers had to be compressed. A commonly used technique to compress large datasets is Principal Component Analysis (PCA). Without losing much of the original information, PCA compresses the data into smaller datasets. The first component of the analysis contains most of the variance of the original dataset, and subsequent orthogonal components hold the maximum of the remaining variance (Jensen, 2005, chap. 8).
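The PCA compression can be sketched as an eigendecomposition of the band covariance matrix. The twelve strongly correlated synthetic "texture" layers below are invented for illustration:

```python
import numpy as np

def pca_compress(layers, n_components=3):
    """Compress a stack of image layers (bands, rows, cols) to the first
    few principal components."""
    bands, rows, cols = layers.shape
    X = layers.reshape(bands, -1).T                # pixels x bands
    X = X - X.mean(axis=0)                         # centre each band
    eigval, eigvec = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(eigval)[::-1]               # descending variance
    pcs = X @ eigvec[:, order[:n_components]]      # project onto top PCs
    return pcs.T.reshape(n_components, rows, cols), eigval[order]

rng = np.random.default_rng(1)
base = rng.standard_normal((32, 32))
# Twelve highly correlated layers plus a little independent noise:
stack = np.stack([base * (i + 1) + 0.01 * rng.standard_normal((32, 32))
                  for i in range(12)])
pcs, eigval = pca_compress(stack, n_components=3)
explained = eigval[:3].sum() / eigval.sum()
```

Because the twelve layers share most of their signal, the first few components capture nearly all of the variance, which is why compressing 12 correlated texture layers to a handful of components loses little information.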
The PCA was used to compress the 12 texture layers to three components.

4.4.4 Cloud Masking

In the final classification, a land-cover map produced using only the PALSAR data should fill in where the SPOT data was covered by clouds. Therefore, clouds were masked out from the dataset used in the joint classification of SPOT and PALSAR. This was done manually, by drawing polygons covering all clouds and setting the pixels inside those polygons to zero.

4.5 ANN Classification

The first step of the classification was to separate Water, Forest, Open Area and Urban. As discussed in the literature review (section 2.5.1), the most suitable method for this research is the ANN classifier.

The feed-forward back-propagation ANN typically has an input layer, hidden layers and an output layer. The input layer consists of the data being classified, for example SAR data, optical data, texture layers and so on. The hidden layers simulate non-linear patterns of the input data, and the output layer is the classified map. The first part of the classification process is the training. A training pixel is selected by the user, and its type (class) is sent to the output layer at the same time as the values of that pixel in the input layers are passed through the network. The output neuron for this class is assigned a membership value of 1, while the other output neurons are set to 0. The second step of the process is learning. In this step, the areas selected as training areas by the user are sent through the network. For each training example, the output of the system is compared with the true value, and the differences between the two are regarded as errors. The weights in the hidden layer are updated in proportion to the error, and the learning process starts again. This continues until the errors are smaller than a predefined threshold, when the classification is considered to have converged. The final weights are stored in the hidden layer. The last part of the classification is the testing. In this phase, every pixel of the input data is sent through the neurons in the hidden layers and assigned a membership value between 0 and 1 for each output class, showing the degree of membership of the pixel to that class. Finally, the pixel is assigned to the class where its membership value is highest (Jensen, 2005, chap. 10). In this research, the neural network classifier in Geomatica was used.
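The training, learning and testing cycle described above can be illustrated with a minimal one-hidden-layer network in numpy. This is a toy sketch on invented two-band data, not Geomatica's implementation; the network size, learning rate and data are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: two "bands" per pixel, two classes (e.g. Water, Forest).
X = np.vstack([rng.normal(-1.0, 0.3, (50, 2)), rng.normal(1.0, 0.3, (50, 2))])
y = np.zeros((100, 2))
y[:50, 0] = 1.0                          # one-hot membership targets:
y[50:, 1] = 1.0                          # 1 for the true class, 0 otherwise

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W1 = 0.5 * rng.standard_normal((2, 8))   # input -> hidden weights
W2 = 0.5 * rng.standard_normal((8, 2))   # hidden -> output weights

for _ in range(1500):                    # iteration cap, as in NNTRAIN
    h = sigmoid(X @ W1)                  # hidden activations
    out = sigmoid(h @ W2)                # output memberships in [0, 1]
    # Back-propagate the membership error through both weight layers.
    d2 = (out - y) * out * (1.0 - out) / len(X)
    d1 = (d2 @ W2.T) * h * (1.0 - h)
    W2 -= h.T @ d2
    W1 -= X.T @ d1

# Testing: assign every pixel to the class with the highest membership.
pred = np.argmax(sigmoid(sigmoid(X @ W1) @ W2), axis=1)
accuracy = (pred == np.argmax(y, axis=1)).mean()
```

On these well-separated clusters the network converges long before the iteration cap; real multi-band imagery behaves the same way in principle, just with more input layers and output classes.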
Geomatica's neural network classifier is a supervised classifier consisting of three modules: NNCREATE, NNTRAIN and NNCLASS. NNCREATE creates a neural network segment for back-propagation, NNTRAIN trains the network and NNCLASS uses the trained network to classify the images. Input to the system was the training areas for each class and the image files. The maximum number of iterations was set to 1500, but the classifier often converges before this limit is reached. For each classification, a model was built in Geomatica Modeller.

As some of the areas in the SPOT image are covered by clouds, one classification using only the PALSAR image was done; this classification was used to fill the information gaps caused by clouds in the optical image. For the rest of the study area, a joint classification of PALSAR and SPOT was used. In the joint classification of SPOT and PALSAR, all four optical layers (Green, Red, NIR and SWIR), the Kuan-filtered SAR images

and the first three PCA components of the texture measures were used. In the classification of the PALSAR image alone, a texture layer with a larger scale was added, and instead of the Kuan speckle filter, an 11x11 pixel Enhanced Frost filter was used. Finally, a classification using only SPOT data was created as a reference, to see whether including SAR data had improved the classification compared to using optical images alone.

4.6 Rule-Based / Object-Based Approach

The second step in the hierarchy was to further separate the four classes from the ANN classifier. Urban was classified into High Density Built-up (HD), Low Density Built-up (LD) and Roads. Open Area was separated into Open Field and Recreational Area. Some parts of the ANN-classified areas also had to be cleaned, as they had been misclassified in the first step of the hierarchy. For this second level of the hierarchy, an object-based approach using the software Definiens Professional 5 was used. Definiens Professional 5, also called eCognition, is a program for object-based classification.

As in the ANN part, two classifications were done: one using only the PALSAR data, and one using both PALSAR and SPOT data. In both classifications, the first step is to segment the images based only on the previous ANN classification; this way, the objects reflect the first step in the hierarchy. The second step is to add more levels to the segmentation. The lower levels are based on the first segmentation and cannot cross its boundaries. This means that a segment in a lower level cannot belong to more than one segment from a higher level, and therefore the lower segments will also belong to only one of the ANN classes.

4.6.1 Segmentation

In object-based approaches, the image is segmented into objects based on three criteria: shape, scale and homogeneity in spectral value. The scale parameter defines how large or small the objects should be.
A larger number will result in larger segments. The shape measure is divided into two subcategories: smoothness and compactness. Compactness is a ratio between the object's border length and its total number of pixels, while smoothness is a ratio between the border length and the length of the object's bounding box, and is at its minimum when the object is not frayed. The brightness criterion states that every object should have as high homogeneity in brightness as possible (Ban & Hu, 2007; eCognition).

In the joint classification of PALSAR and SPOT, two sublevels were created in the segmentation. Both levels were segmented based on the four SPOT layers. The second level of segmentation used a scale of 100, a shape weight of 0.3 and a compactness of 0.5. The third level was segmented

using a scale of 45, with the shape weight set to 0.9 and the compactness to 0.1. The high shape weight in the third level was motivated by the fact that roads were best segmented with a high weight on shape. In the segmentation of PALSAR alone, the four original SAR layers together with a texture layer were used. The lower resolution of the PALSAR image called for a higher scale, which was therefore set to 150. The shape weight was set to 0.9 and the compactness weight to 0.2. Only two levels of segmentation were used here.

4.6.2 Rules for Water

In the joint classification of PALSAR and SPOT, Water was separated well by the ANN classifier, and nothing had to be done to this class here. Using only PALSAR, however, some Water had been misclassified as Open Area. This was caused by the texture filter which, because of its large kernel size (11x11), created a border where Forest and Built-up, with high backscatter, bordered Water, with very low backscatter. With an object-based approach, these misclassified areas could be turned into Water. In the lower level of segmentation, a rule was set up: if an object had

- a high length/width ratio, and
- about 50% of its boundary bordering Water and 50% bordering Urban, or
- about 50% of its boundary bordering Water and 50% bordering Forest,

then the object was changed from Open Area to Water.

4.6.3 Rules for Forest

The Forest areas were classified with high accuracy in both projects already by the ANN classifier, and no further classification needed to be done.

4.6.4 Rules for Low Density Built-up

To extract Low Density Built-up (LD), a special context measure was developed. The idea is that LD can be distinguished from High Density Built-up (HD) by the number and size of the houses: in an LD area, houses are generally more sparsely distributed, with areas of forest, grass and roads between them.
To be able to describe this relation mathematically, a layer showing the number of pixels classified as Built-up within a specific distance from each pixel was developed. First, every pixel that had been classified as Urban by the ANN classifier was set to 255, while all other pixels were set to 0. Then an ordinary mean filter was run over the image. This way, every pixel was assigned a value

between 0 and 255, where 255 means that all pixels within a distance equal to the kernel size of the mean filter are classified as Urban, and 0 means that no pixels within the same distance are classified as Urban. Three different kernel sizes were evaluated: 11x11, 25x25 and 51x51 pixels. After some testing, it was concluded that the 25x25 pixel kernel was best suited to the objective. The new layer was then added to eCognition, and thresholds were set for when to consider an object Low Density Built-up. For the joint classification, this threshold was set to 121 (< 47% Urban), while for the PALSAR project it was set to 50 (< 18% Urban). The difference in threshold values is due to the different classification patterns of the two projects.

4.6.5 Rules for High Density Built-up

All areas classified as Urban by the ANN classifier, and not classified as LD or Road in the second step of the hierarchical classifier, were set to High Density Built-up.

4.6.6 Rules for Roads

When using only the SPOT image, Roads are difficult to separate from other man-made features, as they all have similar spectral signatures. On the other hand, Roads could not be separated from Open Area when using only PALSAR. But when combining PALSAR and SPOT in the same classification, Roads can, at least to some degree, be separated. When also including the fact that road segments have a high length/width ratio, Roads were separated well.

In the joint project, Roads were classified into the Urban class by the ANN classifier. When using PALSAR alone, however, the Roads were spread over several classes (Open Area, Urban and Forest), so here the Roads had to be extracted at a higher level. A new project was created in eCognition, including the PALSAR images and some texture layers. These were then segmented with the scale parameter set to 30 and a high weight on shape, with extracting Roads as the only objective.
The rules for this project included the standard deviation of the brightness values, the mean brightness value and a length/width measure. After the original PALSAR project was classified into all other classes, the extra Road layer was put on top of it, with the rule that pixels classified as Road in the special road project should also be Road in the final classified map; all other pixels kept their classification. In the joint classification of SPOT and PALSAR, Roads were extracted from the Urban class of the ANN classifier by the conditions that the length/width

ratio should be higher than 18 and that the backscatter in the HH images of PALSAR should be low.

4.6.7 Rules for Recreational Areas

Recreational Areas were separated from Open Field by their distance to HD and LD. The more Urban area within a certain distance from an Open Area segment, the more likely the object is to be a Recreational Area. This relation was set as a rule in eCognition.

4.6.8 Rules for Open Fields

A common problem when using pixel-based classifiers is that bare wet soil and urban areas have the same spectral signatures. Even though PALSAR helped to separate the two features to some degree, many bare fields were still misclassified as Urban by the ANN classifier. In an object-based environment, most of these misclassified fields can be recognized. A field is often rectangular, or at least more rectangular than most other features in a remotely sensed image. The standard deviation of its brightness values is also low, as a field is often quite homogeneous. Finally, fields generally have lower backscatter than Urban in the PALSAR image. When using only the PALSAR image for the first classification, the problem was instead that some small Open Areas were misclassified as Water; the fact that these misclassified features were totally surrounded by Open Area made it possible to recognize them. The parameters mentioned above were set as rules in eCognition, and with some adjustment of the parameter values, most of the misclassified features were recognized. Beyond this, Open Field was defined as not being Recreational Area, i.e. all Open Areas not classified as Recreational Area were set to Open Field.

4.7 Accuracy Assessment

To evaluate the different classifications in an objective way, an accuracy assessment must be performed after each classification. There are two main types of accuracy assessment: qualitative confidence-building assessment and statistical measures.
Qualitative confidence-building assessment involves visually examining the classification to find gross errors, i.e. areas that are clearly misclassified. This method is mostly used in the iterative process of improving the classification. Statistical measures are divided into two subcategories: model-based inference and design-based inference. Model-based inference investigates

the actual classification model, for example by checking the probabilities for each pixel to be classified to a certain class. Design-based inference statistically measures how accurate the classification is based on samples (Jensen, 2005, chap. 13). In this study, design-based inference was used: 500 pixels from each class were randomly selected as true values from the SPOT image. These true pixels were then compared, using Geomatica's post-classification tool, with the classified images. The measures calculated were overall accuracy, overall Kappa coefficient, producer's accuracy, user's accuracy and per-class Kappa coefficient.

Producer's accuracy describes how many of the pixels in the sample data were classified correctly in the produced map. User's accuracy is calculated by dividing the number of pixels correctly classified to a class by the number of sample pixels classified to that class; it measures how much a certain class is over-classified. The Kappa coefficient is a measure of agreement between the remote sensing derived classification map and the reference data. Kappa values above 0.8 represent strong agreement, while values between 0.4 and 0.8 represent moderate agreement (Jensen, 2005, chap. 11). To be able to use the same true pixels for both the SPOT and PALSAR classifications, they were selected only in areas not covered by clouds in the SPOT image.
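The design-based measures listed above can be computed directly from a confusion matrix. A sketch with an invented two-class matrix (the measures are the standard definitions; the numbers are not from the thesis):

```python
import numpy as np

def accuracy_measures(confusion):
    """Overall accuracy, producer's/user's accuracy and Kappa from a
    confusion matrix (rows: reference classes, columns: classified)."""
    n = confusion.sum()
    diag = np.diag(confusion)
    overall = diag.sum() / n
    producers = diag / confusion.sum(axis=1)   # per reference class
    users = diag / confusion.sum(axis=0)       # per classified class
    # Chance agreement from the row/column marginals:
    expected = (confusion.sum(axis=1) * confusion.sum(axis=0)).sum() / n ** 2
    kappa = (overall - expected) / (1.0 - expected)
    return overall, producers, users, kappa

# Hypothetical two-class confusion matrix (e.g. Water vs. Forest):
cm = np.array([[45.0, 5.0],
               [10.0, 40.0]])
overall, producers, users, kappa = accuracy_measures(cm)
```

For this matrix the overall accuracy is 0.85 and Kappa is 0.70, which by the thresholds cited above would count as moderate agreement.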


Chapter 5

Results and Discussion

5.1 Geometric Correction

For this study, a three-coefficient rational model was used for the geometric correction. The residuals for all images were about 1 pixel in the X direction and 0.5 pixels in Y. As it was easier to collect GCPs from the optical image, the RMS is a bit lower in the SPOT image than in the PALSAR images (table 5.1).

5.2 Backscatter Profiles

Backscatter profiles for all classes were extracted from the Sigma Nought corrected PALSAR images (figure 5.1). From the plot of these values, together with their standard deviations (table 5.2), some conclusions can be drawn. First, it is interesting to see that Low Density Built-up (LD) and Forest have almost identical backscatter properties in the HH-polarized images; they differ only in the third decimal. Their separability in the cross-polarized layers is better, but considering the standard deviations of the two classes (about 3 dB), it is obvious that they are not clearly separated in any image. The plot also shows that High Density Built-up (HD) can be separated from the other classes in the like-polarized products. This is probably

Table 5.1: Results of the geometric correction. Note that the accuracy is a bit lower for the SAR images.

Sensor  Date     X RMS (pixels)  Y RMS (pixels)  No. of points
SPOT    July 26  0.62            0.52            30
PALSAR  June 18  1.27            0.57            27
PALSAR  July 17  1.46            0.

48 Figure 5.1: Backscatter profiles from PALSAR. 34

49 Table 5.2: Standard deviation of Backscatter Profiles. July 17 June 18 Class HH (db) HV (db) HH (db) HV (db) Forest 3,12 3,04 3,06 3,18 HD 6,71 5,66 6,22 6,22 LD 4,17 3,63 4,13 3,60 Open Field 3,50 3,37 3,60 3,21 Recreational 3,32 3,53 3,44 3,53 Road 6,49 5,11 6,31 5,05 Water 3,28 2,91 3,40 2,97 due to the corner reflectors present in the built-up environment. A bit surprising though is that the separability between HD and Forest is so low in the cross-polarized images. The high standard deviation in the High Density Built-up class might be explained by the heterogeneity of the city centers, and large number of corner reflectors, while the high standard deviation in the road class might be explained by the difficulty of selecting clean road pixels. As Roads where classified as Urban in the first step of the hierarchy in this research, it is interesting to se how the backscatter of this class differs from the other urban classes. From the diagram, it is obvious that Roads, because of its low backscatter, can be separated well from HD and LD. Water has lower backscatter than all other classes, but looking also at the standard deviation, the Water seems to be totally separate only in the HH June 18 images. The most interesting result of the backscatter profiling is that the backscatter of Open Area change so dramatically between the two dates. This change can maybe be explained by the phenological cycles of the crop growing on the fields (Ban & Howarth, 1999). If more SAR-images where available for this research, crops might have been able to separate based on their phenological cycles. 5.3 Image processing Texture This test comparing different texture filters showed that a mean filter with 11 pixels window size gave the best result. The mean filter, however, was highly correlated with the speckle filtered image, and therefore, this filter alone was rejected. 
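Texture filters of this kind summarise the grey-level distribution in a sliding window around each pixel. As an illustration, an occurrence-based entropy filter might be sketched as follows (plain numpy, for illustration only — this is not the Geomatica implementation used here, and the quantisation to a fixed number of grey levels is an assumption):

```python
import numpy as np

def entropy_texture(img, win=11, levels=16):
    """Occurrence-based entropy texture over a sliding window.

    The image is quantised to `levels` grey levels; each output pixel
    is the Shannon entropy (in bits) of the grey-level histogram in
    its win x win neighbourhood. Borders use edge padding. The nested
    loops make this a slow sketch, not production code.
    """
    img = np.asarray(img, dtype=float)
    span = np.ptp(img)
    q = np.floor((img - img.min()) / (span + 1e-12) * (levels - 1)).astype(int)
    pad = win // 2
    qp = np.pad(q, pad, mode="edge")
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            counts = np.bincount(qp[i:i + win, j:j + win].ravel(),
                                 minlength=levels)
            p = counts[counts > 0] / counts.sum()
            out[i, j] = -(p * np.log2(p)).sum()
    return out
```

A homogeneous window yields zero entropy, while a mixed window (for example at a forest edge) yields a high value, which is what makes this filter useful for separating heterogeneous built-up texture from smooth open fields.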
Instead, it was decided to use a combination of the best texture filters (excluding mean) - dissimilarity, entropy and variance - in the classification process. When classifying PALSAR alone, an extra texture layer was added, namely the 11x11 pixel Entropy filter (figure 5.2b). This large filter could not be used in the joint classification, as it degraded the resolution of the classification too much, but when classifying PALSAR alone it improved the separation of forest and open area significantly.

Figure 5.2: The two texture filters used for this study, here applied to the PALSAR images; (a) is the first three principal components of the texture filters Dissimilarity, Entropy and Mean, all with a kernel size of 5x5; (b) is the 11x11 Entropy filter.

Figure 5.3: The two speckle filters used for this study, here applied to the two PALSAR images; (a) Kuan 5x5; (b) Enhanced Frost 11x11. The Kuan filter is used for the joint classification of SPOT and PALSAR, while the Enhanced Frost filter is used for the classification of PALSAR alone.

Speckle

In the test of speckle filters, the Lee 11x11, Lee 9x9, Enhanced Frost 11x11 and Gamma 9x9 gave the best results. After visually comparing these four filters, it was decided to use the Enhanced Frost filter with an 11x11 window (figure 5.3b) for the ANN classification of the PALSAR image. However, when classifying using both PALSAR and SPOT at the same time, the 11x11 window size proved too big - the low resolution of the speckle filtered image compared to the SPOT image decreased the accuracy of the classification. As the information from the SAR part of the spectrum was still wanted, a smaller speckle filter was used instead. Comparing only the filters with window sizes 5x5 and 3x3, the Kuan 5x5 (figure 5.3a) gave the best result, and it was therefore used in the joint ANN classification of SPOT and PALSAR.

5.4 Reference Classifications

To be able to evaluate the hierarchical approach described in section 4.6, and also to evaluate whether PALSAR managed to improve the classification, three reference images were created: one using only PALSAR (table 5.6), one using only SPOT (table 5.5) and one where SPOT and PALSAR were used together (table 5.4). They were all classified using Geomatica's neural network classifier (described in section 4.5), but this time with all seven classes (Water, Forest, LD, HD, Road, Recreational Area and Open Field). The same training areas were used for the SPOT classification and the joint classification of SPOT and PALSAR, but for the classification of PALSAR alone, some adjustments had to be made to get a good classification. As mentioned in section 4.5, a reference map using only SPOT and only four classes was also created, to evaluate the performance of the first broad classifier (table 5.3).

Table 5.3: Accuracies (Producer's accuracy, User's accuracy and Kappa, per class: Water, Forest, Urban, Open, and Overall) for the ANN classification using SPOT. This classification is used as a reference to evaluate the performance of the first broad classifier.

Table 5.4: Accuracies (per class: Water, Forest, LD, HD, Road, Recreational, Open Field, and Overall) for the ANN classification using both SPOT and PALSAR. This classification is used as a reference to control the performance of the hierarchical classifier.

Table 5.5: Accuracies (per class: Water, Forest, LD, HD, Road, Recreational, Open, and Overall) for the ANN classification using only SPOT. This classification is used as a reference to evaluate the benefit of including PALSAR in the classification process.

Table 5.6: Accuracies (per class: Water, Forest, LD, HD, Road, Recreational, Open, and Overall) for the ANN classification using only PALSAR. This classification is used as a reference to control the performance of the hierarchical classifier.

5.5 ANN Classification

One ANN classification using the fusion of PALSAR and SPOT (figure 5.4d) and one using only PALSAR (figure 5.4b) were made. In both, four classes were separated: Water, Forest, Urban and Open Area. These classifications formed the base for the next step in the hierarchical classifier, where the second step was the object-based/hybrid classifier. The joint classification of PALSAR and SPOT formed the base for the final classification, while the classification of PALSAR alone was used to fill in for information gaps caused by clouds in the SPOT image. One ANN classification using only SPOT was also created (figure 5.4c). This last classification was used as a reference against which the accuracy of the joint classification of PALSAR and SPOT was evaluated.

Figure 5.4: The initial classification using ANN and only four classes (Water, Forest, Urban, Open Area). (a) is the original SPOT image, (b) is classified using PALSAR, (c) is classified using SPOT and (d) is classified using a fusion of both SPOT and PALSAR.

Table 5.7: Accuracies (per class: Water, Forest, Urban, Open, and Overall) for the ANN classification using SPOT and PALSAR.

Table 5.8: Accuracies (per class: Water, Forest, Urban, Open, and Overall) for the ANN classification using only PALSAR.

The classification of PALSAR alone (table 5.8) gave high classification accuracy for the Forest class (99%) and relatively high accuracy for Water and Open (90,5% and 84,5% respectively). For distinguishing urban areas, however, the result was a bit lower (67,0%). The high accuracy of the Forest class can be understood by looking at the texture filtered images (see figure 5.2); the texture of this class differs strongly from that of all other classes except Low Density Built-up. In fact, the circumstance that Forest shares texture values with LD, and also has a very similar backscatter profile (see figure 5.1), is the main reason for the lower accuracy of the Urban class; some of the LD objects, which should have been classified as Urban, are misclassified as Forest. The Open Areas are better classified using PALSAR alone than with SPOT alone (table 5.3). This is one indication that PALSAR is better suited than the optical data for extracting Open Areas, and as we shall see later, this is an important contribution to the joint classification. By visual inspection, a systematic error can be detected in the PALSAR classification; on the borders between Water/Forest and Water/Urban, a small strip has been classified as Open Area. This is caused by the large-scale texture filter used for this classification. But as this strip can be removed quite easily in the next step of the hierarchy, it is not a big problem. When comparing the joint classification of SPOT and PALSAR with the other two classifications (table 5.7), it is obvious that PALSAR contributes valuable information to this project; when including PALSAR, the classification accuracy of Open Areas increases from 68,7% to 74,0%. Visual examination of the results, and a look at the User's accuracy, show that the inclusion of PALSAR helped even more; in both the joint classification and the one using SPOT alone, some bare wet fields are misclassified as Urban. But when using SPOT alone (figure 5.4c), not only are the bare wet fields misclassified, but a lot of the forested areas are also confused with Open.
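The thesis used Geomatica's neural network classifier; conceptually, the per-pixel setup resembles a small feed-forward network trained on stacked band values, which can be sketched with scikit-learn (a hypothetical stand-in, not the tool actually used; the hidden-layer size is an assumption):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_pixel_ann(features, labels, hidden=16):
    """Train a small feed-forward network on per-pixel samples.

    features: (n_pixels, n_bands) array of stacked image values
              (e.g. SPOT bands plus PALSAR backscatter/texture layers)
    labels:   (n_pixels,) integer class codes from the training areas
    """
    net = MLPClassifier(hidden_layer_sizes=(hidden,),
                        max_iter=2000, random_state=0)
    net.fit(features, labels)
    return net
```

The trained network is then applied to every pixel (`net.predict(all_pixel_features)`) to produce the broad four-class map that feeds the second, object-based step of the hierarchy.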
As can be seen in figure 5.4, this error is almost completely removed by including PALSAR in the classification. The wet fields misclassified as Urban can quite easily be changed to their correct class by object-based methods, and apart from this error, the joint ANN classification is almost free of errors.

5.6 Segmentation Results

Two levels of segmentation were used for the joint classification of SPOT and PALSAR (figures 5.5a and 5.5b), and two for the classification of PALSAR alone (figures 5.5c and 5.5d).

Figure 5.5: Images segmented in eCognition; (a) SPOT and PALSAR, level 1; (b) SPOT and PALSAR, level 2; (c) PALSAR, level 1; (d) PALSAR, level 2. The images are coloured by each segment's mean colour in the PALSAR and SPOT image respectively.

5.7 Rule-Based Classification

The two ANN-classified images, with only four classes, were used as the base for the hybrid approach, the second step in the hierarchical classifier tested in this research. The four classes from the previous step were here separated into seven: Water, Forest, LD, HD, Road, Recreational and Open Field. For every class, a different classification approach was used. As in the ANN classification, two different classifications were developed: one using only PALSAR (figure 5.7) and one using a combination of PALSAR and SPOT (figure 5.6).

Figure 5.6: Classification result using the hierarchical approach and both SPOT and PALSAR.

The accuracy of the joint classification was generally high (table 5.9; confusion matrix in table A.1 in Appendix A). Water, Forest, HD and Open Field were all classified with accuracies over 90%. As can be expected, the highest accuracy was reached for the Water class (99,8%), and as can be seen in the confusion matrix, both the omission and commission errors for this class are very low.

Table 5.9: Accuracies (per class: Water, Forest, LD, HD, Road, Recreational, Open, and Overall) of the hierarchical classification using both SPOT and PALSAR.

Compared to the ANN-classified reference image (table 5.4; confusion matrix in table A.5 in Appendix A), where both SPOT and PALSAR were used, the accuracy of all classes except LD and Road increased dramatically. The classification accuracy of Roads was approximately the same (71,0% in the reference image and 69,0% in the hierarchical classification), but looking also at the User's accuracy, it is obvious that the hierarchical classification gives the better result. The User's accuracy increased from 77,4% to 98,3%, indicating that the Road class is more precise in the classification approach developed for this research; few pixels that were not Road were classified as Road here. The same thing can be seen in the confusion matrix (table 5.9); the commission error for this class is very low. When it comes to the LD class, which was the only class that gave significantly worse results with the hierarchical/hybrid approach, the accuracy of 74,4% is a bit disappointing. But to fully understand the decrease in accuracy compared to the reference classification, we must also here look at the User's accuracy; in the reference classification, a very large portion of the image was (wrongly) classified as Low Density Built-up (large commission error, see the confusion matrix, table 5.9). As the class is clearly over-classified, the chance of reaching a higher accuracy of course increases. This is one of the reasons why the accuracy of the LD class is higher in the reference image. The overall classification accuracy increased from 75,4% to 87,6%, and the overall Kappa statistic improved from 0,61. This must be considered a large improvement.

In the classification where only PALSAR was used, the accuracy was, as expected, lower (table 5.10; confusion matrix in table A.3 in Appendix A). Water, Open Field and Forest were the only three classes without a large decrease in accuracy compared to the joint classification. Low Density Built-up was the class with the lowest accuracy, only 7,0%. However, looking at the backscatter profiles, this result could largely be expected; the separability between Forest and LD was very low (figure 5.1). In the ANN classification where only PALSAR was used, the accuracy of LD was higher, but this can be explained by the fact that much of the forest in that image was misclassified as LD, and thus LD was largely over-classified there. Still, the accuracy of LD was disappointing. Another class standing out with low accuracy is the Road class, with only 34,6% accuracy. This, however, is not surprising, as the resolution of PALSAR prevents even the human eye from distinguishing any roads except the largest ones. Compared to the reference classification (table 5.6; confusion matrix in table A.2 in Appendix A), the accuracy is approximately the same.

Table 5.10: Accuracies (per class: Water, Forest, LD, HD, Road, Recreational, Open, and Overall) of the hierarchical classification using only PALSAR.

Figure 5.7: Classification result using the hierarchical approach and only PALSAR.

Looking again at the reference classification, it can be seen that the hierarchical approach managed to increase the accuracy of many classes: the accuracy of Water increased from 58,5% to 96,8%, Forest from 86,1% to 98,8%, High Density Built-up from 45,4% to 74,0% and Recreational areas from 46,9% to 57,6%. The overall accuracy increased from 55,7% to 66,0%, and the Kappa increased from 0,48 to 0,6. Although this result cannot compete with that where SPOT data was included, it is still a good result, and the land-cover map produced can successfully be used to fill in for clouded areas in the joint classification.

5.8 Fusion of SPOT and PALSAR

When looking at the classification done using the hierarchical approach with SPOT data combined with PALSAR data, it is difficult to distinguish specifically how the SAR data contributed to the high classification accuracy. Therefore, an ANN classification using only SPOT data was developed. This classification is here compared with the ANN reference classification used to evaluate the SPOT/PALSAR hierarchical classification, to see how SAR data can increase the accuracy in land-cover mapping.

Figure 5.8: Comparison of classification with and without PALSAR. (a) is the original SPOT image, (b) shows the ANN classification using only SPOT and (c) shows the result of the ANN classification using both SPOT and PALSAR. For legend, see page 45.

By visual examination of the PALSAR data, and of the results of the texture filters (figure 5.2), an obvious conclusion is that Forest and Open Area are easily separated in the PALSAR data. This conclusion is confirmed by the accuracy tables for the two classifications (tables 5.5 and 5.4); the main benefit of including PALSAR data in the classification is the accuracy increase in those two classes. The accuracy of Open Field increased from 34,1% to 72,3% and the accuracy of Forest increased from 68,6% to 94,0%. The main drawback of including the SAR data is the accuracy of Water, which actually decreases from 99,6% to 75,4%. This can be explained by the confusion of Water, Open Field and Roads, which all have similar backscatter, in the PALSAR image. Somewhat confusing at first sight is the dramatic improvement in accuracy for Low Density Built-up (LD). The explanation is not that PALSAR improves the LD separability as such; rather, it is PALSAR's ability to separate Roads and Open Areas from urban that increases the accuracy of LD. In the classification using only SPOT, many pixels in the LD areas are misclassified as Roads and Open Areas. When combining the ability of SPOT to distinguish man-made features in general with PALSAR's ability to separate Roads and Open Areas from built-up by their low backscatter, the ability to separate LD increases. Neither the SAR data nor the optical data can do this by itself; this is a synergistic effect of combining the two datasets. It must, however, be said that LD is over-classified in both projects. The overall accuracy when combining PALSAR and SPOT increases from 66,3% to 75,4%, and the Kappa statistic increases from 0,61 to 0,71, compared to only using SPOT.
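The pixel-level fusion discussed here amounts to layer stacking: the co-registered SPOT bands and PALSAR layers are concatenated into one feature vector per pixel before classification. A minimal sketch (assuming all layers have already been resampled to the same grid; the function name is illustrative):

```python
import numpy as np

def stack_layers(optical_bands, sar_layers):
    """Layer-stacking fusion of co-registered imagery.

    optical_bands, sar_layers: lists of 2-D arrays on the same pixel
    grid. Returns an (n_pixels, n_layers) feature array, one row per
    pixel, ready to feed to a per-pixel classifier.
    """
    layers = [np.asarray(b, dtype=float)
              for b in list(optical_bands) + list(sar_layers)]
    shape = layers[0].shape
    if any(l.shape != shape for l in layers):
        raise ValueError("all layers must be co-registered to the same grid")
    return np.stack([l.ravel() for l in layers], axis=1)
```

This is why the geometric correction in section 5.1 matters so much for the fusion: any residual misregistration mixes backscatter and reflectance values from different ground locations within the same feature vector.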

Table 5.11: Overall accuracies of the different classifiers.

Classification   Sensors            Accuracy (%)   Kappa Statistic
Hierarchical     SPOT and PALSAR    87,6
ANN              SPOT and PALSAR    75,4           0,71
ANN              SPOT               66,3           0,61
Hierarchical     PALSAR             66,0           0,60
ANN              PALSAR             55,7           0,48

Summary

The hierarchical classifier using a fusion of SPOT and PALSAR gave the best overall accuracy of the tested classifiers (table 5.11). The second best classifier, the ANN classifier also using SPOT and PALSAR, had more than 10 percentage points lower accuracy. When comparing separate classes, the hierarchical SPOT and PALSAR classification had the best results of all classifiers for all classes except Low Density Built-up and Roads. Roads had only 2% lower accuracy in the hierarchical classifier than in the best road classifier (the ANN classifier using SPOT and PALSAR), but at the same time a much higher User's accuracy (98,3% compared to 77,4%). Also for the classification of PALSAR alone, the hierarchical classifier gave the best result. Compared to the ANN classification, all classes except Roads and LD had better results in the hierarchical classifier. And again, the User's accuracy for Roads was dramatically higher in the hierarchical classification than in the ANN classification.

Figure 5.9: Final classifications. To the left is the result of the classification using the fusion of SPOT and PALSAR; here the clouds are masked out. In the image to the right, the clouds are filled in by the classification where only PALSAR was used.
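The fill-in shown in figure 5.9 is a per-pixel choice between the two maps, driven by the cloud mask. A minimal sketch (the variable names are hypothetical):

```python
import numpy as np

def fill_clouds(joint_map, palsar_map, cloud_mask):
    """Replace cloud-covered pixels of the SPOT+PALSAR classification
    with the PALSAR-only classification.

    joint_map, palsar_map: 2-D integer class rasters on the same grid
    cloud_mask: boolean 2-D array, True where the SPOT image is clouded
    """
    return np.where(cloud_mask, palsar_map, joint_map)
```

Because the PALSAR-only map uses the same class scheme as the joint map, the result is a single seamless land-cover raster, at the cost of lower accuracy inside the formerly clouded areas.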


More information

What is Remote Sensing? Contents. Image Fusion in Remote Sensing. 1. Optical imagery in remote sensing. Electromagnetic Spectrum

What is Remote Sensing? Contents. Image Fusion in Remote Sensing. 1. Optical imagery in remote sensing. Electromagnetic Spectrum Contents Image Fusion in Remote Sensing Optical imagery in remote sensing Image fusion in remote sensing New development on image fusion Linhai Jing Applications Feb. 17, 2011 2 1. Optical imagery in remote

More information

Urban Classification of Metro Manila for Seismic Risk Assessment using Satellite Images

Urban Classification of Metro Manila for Seismic Risk Assessment using Satellite Images Urban Classification of Metro Manila for Seismic Risk Assessment using Satellite Images Fumio YAMAZAKI/ yamazaki@edm.bosai.go.jp Hajime MITOMI/ mitomi@edm.bosai.go.jp Yalkun YUSUF/ yalkun@edm.bosai.go.jp

More information

San Diego State University Department of Geography, San Diego, CA. USA b. University of California, Department of Geography, Santa Barbara, CA.

San Diego State University Department of Geography, San Diego, CA. USA b. University of California, Department of Geography, Santa Barbara, CA. 1 Plurimondi, VII, No 14: 1-9 Land Cover/Land Use Change analysis using multispatial resolution data and object-based image analysis Sory Toure a Douglas Stow a Lloyd Coulter a Avery Sandborn c David Lopez-Carr

More information

Basic Digital Image Processing. The Structure of Digital Images. An Overview of Image Processing. Image Restoration: Line Drop-outs

Basic Digital Image Processing. The Structure of Digital Images. An Overview of Image Processing. Image Restoration: Line Drop-outs Basic Digital Image Processing A Basic Introduction to Digital Image Processing ~~~~~~~~~~ Rev. Ronald J. Wasowski, C.S.C. Associate Professor of Environmental Science University of Portland Portland,

More information

ACTIVE MICROWAVE REMOTE SENSING OF LAND SURFACE HYDROLOGY

ACTIVE MICROWAVE REMOTE SENSING OF LAND SURFACE HYDROLOGY Basics, methods & applications ACTIVE MICROWAVE REMOTE SENSING OF LAND SURFACE HYDROLOGY Annett.Bartsch@polarresearch.at Active microwave remote sensing of land surface hydrology Landsurface hydrology:

More information

Aral Sea profile Selection of area 24 February April May 1998

Aral Sea profile Selection of area 24 February April May 1998 250 km Aral Sea profile 1960 1960 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 2010? Selection of area Area of interest Kzyl-Orda Dried seabed 185 km Syrdarya river Aral Sea Salt

More information

DISTINGUISHING URBAN BUILT-UP AND BARE SOIL FEATURES FROM LANDSAT 8 OLI IMAGERY USING DIFFERENT DEVELOPED BAND INDICES

DISTINGUISHING URBAN BUILT-UP AND BARE SOIL FEATURES FROM LANDSAT 8 OLI IMAGERY USING DIFFERENT DEVELOPED BAND INDICES DISTINGUISHING URBAN BUILT-UP AND BARE SOIL FEATURES FROM LANDSAT 8 OLI IMAGERY USING DIFFERENT DEVELOPED BAND INDICES Mark Daryl C. Janiola (1), Jigg L. Pelayo (1), John Louis J. Gacad (1) (1) Central

More information

Keywords: Agriculture, Olive Trees, Supervised Classification, Landsat TM, QuickBird, Remote Sensing.

Keywords: Agriculture, Olive Trees, Supervised Classification, Landsat TM, QuickBird, Remote Sensing. Classification of agricultural fields by using Landsat TM and QuickBird sensors. The case study of olive trees in Lesvos island. Christos Vasilakos, University of the Aegean, Department of Environmental

More information

Radar Imagery for Forest Cover Mapping

Radar Imagery for Forest Cover Mapping Purdue University Purdue e-pubs LARS Symposia Laboratory for Applications of Remote Sensing 1-1-1981 Radar magery for Forest Cover Mapping D. J. Knowlton R. M. Hoffer Follow this and additional works at:

More information

TEMPORAL ANALYSIS OF MULTI EPOCH LANDSAT GEOCOVER IMAGES IN ZONGULDAK TESTFIELD

TEMPORAL ANALYSIS OF MULTI EPOCH LANDSAT GEOCOVER IMAGES IN ZONGULDAK TESTFIELD TEMPORAL ANALYSIS OF MULTI EPOCH LANDSAT GEOCOVER IMAGES IN ZONGULDAK TESTFIELD Şahin, H. a*, Oruç, M. a, Büyüksalih, G. a a Zonguldak Karaelmas University, Zonguldak, Turkey - (sahin@karaelmas.edu.tr,

More information

Land Cover Analysis to Determine Areas of Clear-cut and Forest Cover in Olney, Montana. Geob 373 Remote Sensing. Dr Andreas Varhola, Kathry De Rego

Land Cover Analysis to Determine Areas of Clear-cut and Forest Cover in Olney, Montana. Geob 373 Remote Sensing. Dr Andreas Varhola, Kathry De Rego 1 Land Cover Analysis to Determine Areas of Clear-cut and Forest Cover in Olney, Montana Geob 373 Remote Sensing Dr Andreas Varhola, Kathry De Rego Zhu an Lim (14292149) L2B 17 Apr 2016 2 Abstract Montana

More information

US Commercial Imaging Satellites

US Commercial Imaging Satellites US Commercial Imaging Satellites In the early 1990s, Russia began selling 2-meter resolution product from its archives of collected spy satellite imagery. Some of this product was down-sampled to provide

More information

Remote Sensing Platforms

Remote Sensing Platforms Types of Platforms Lighter-than-air Remote Sensing Platforms Free floating balloons Restricted by atmospheric conditions Used to acquire meteorological/atmospheric data Blimps/dirigibles Major role - news

More information

Introduction to RADAR Remote Sensing for Vegetation Mapping and Monitoring. Wayne Walker, Ph.D.

Introduction to RADAR Remote Sensing for Vegetation Mapping and Monitoring. Wayne Walker, Ph.D. Introduction to RADAR Remote Sensing for Vegetation Mapping and Monitoring Wayne Walker, Ph.D. Outline What is RADAR (and what does it measure)? RADAR as an active sensor Applications of RADAR to vegetation

More information

Govt. Engineering College Jhalawar Model Question Paper Subject- Remote Sensing & GIS

Govt. Engineering College Jhalawar Model Question Paper Subject- Remote Sensing & GIS Govt. Engineering College Jhalawar Model Question Paper Subject- Remote Sensing & GIS Time: Max. Marks: Q1. What is remote Sensing? Explain the basic components of a Remote Sensing system. Q2. What is

More information

GE 113 REMOTE SENSING

GE 113 REMOTE SENSING GE 113 REMOTE SENSING Topic 8. Image Classification and Accuracy Assessment Lecturer: Engr. Jojene R. Santillan jrsantillan@carsu.edu.ph Division of Geodetic Engineering College of Engineering and Information

More information

KEY TECHNOLOGY DEVELOPMENT FOR THE ADVENACED LAND OBSERVING SATELLITE

KEY TECHNOLOGY DEVELOPMENT FOR THE ADVENACED LAND OBSERVING SATELLITE KEY TECHNOLOGY DEVELOPMENT FOR THE ADVENACED LAND OBSERVING SATELLITE Takashi HAMAZAKI, and Yuji OSAWA National Space Development Agency of Japan (NASDA) hamazaki.takashi@nasda.go.jp yuji.osawa@nasda.go.jp

More information

FUZZY-BASED FROST FILTER FOR SPECKLE NOISE REDUCTION OF SYNTHETIC APERTURE RADAR (SAR) IMAGE ARDHI WICAKSONO SANTOSO

FUZZY-BASED FROST FILTER FOR SPECKLE NOISE REDUCTION OF SYNTHETIC APERTURE RADAR (SAR) IMAGE ARDHI WICAKSONO SANTOSO FUZZY-BASED FROST FILTER FOR SPECKLE NOISE REDUCTION OF SYNTHETIC APERTURE RADAR (SAR) IMAGE ARDHI WICAKSONO SANTOSO Master of Science (COMPUTER SCIENCE) UNIVERSITI MALAYSIA PAHANG SUPERVISOR S DECLARATION

More information

Active and Passive Microwave Remote Sensing

Active and Passive Microwave Remote Sensing Active and Passive Microwave Remote Sensing Passive remote sensing system record EMR that was reflected (e.g., blue, green, red, and near IR) or emitted (e.g., thermal IR) from the surface of the Earth.

More information

Some Basic Concepts of Remote Sensing. Lecture 2 August 31, 2005

Some Basic Concepts of Remote Sensing. Lecture 2 August 31, 2005 Some Basic Concepts of Remote Sensing Lecture 2 August 31, 2005 What is remote sensing Remote Sensing: remote sensing is science of acquiring, processing, and interpreting images and related data that

More information

The Radar Ortho Suite is an add-on to Geomatica. It requires Geomatica Core or Geomatica Prime as a pre-requisite.

The Radar Ortho Suite is an add-on to Geomatica. It requires Geomatica Core or Geomatica Prime as a pre-requisite. Technical Specifications Radar Ortho Suite The Radar Ortho Suite includes rigorous and rational function models developed to compensate for distortions and produce orthorectified radar images. Distortions

More information

Satellite Remote Sensing: Earth System Observations

Satellite Remote Sensing: Earth System Observations Satellite Remote Sensing: Earth System Observations Land surface Water Atmosphere Climate Ecosystems 1 EOS (Earth Observing System) Develop an understanding of the total Earth system, and the effects of

More information

Topographic mapping from space K. Jacobsen*, G. Büyüksalih**

Topographic mapping from space K. Jacobsen*, G. Büyüksalih** Topographic mapping from space K. Jacobsen*, G. Büyüksalih** * Institute of Photogrammetry and Geoinformation, Leibniz University Hannover ** BIMTAS, Altunizade-Istanbul, Turkey KEYWORDS: WorldView-1,

More information

DEMS BASED ON SPACE IMAGES VERSUS SRTM HEIGHT MODELS. Karsten Jacobsen. University of Hannover, Germany

DEMS BASED ON SPACE IMAGES VERSUS SRTM HEIGHT MODELS. Karsten Jacobsen. University of Hannover, Germany DEMS BASED ON SPACE IMAGES VERSUS SRTM HEIGHT MODELS Karsten Jacobsen University of Hannover, Germany jacobsen@ipi.uni-hannover.de Key words: DEM, space images, SRTM InSAR, quality assessment ABSTRACT

More information

IMPROVEMENT IN THE DETECTION OF LAND COVER CLASSES USING THE WORLDVIEW-2 IMAGERY

IMPROVEMENT IN THE DETECTION OF LAND COVER CLASSES USING THE WORLDVIEW-2 IMAGERY IMPROVEMENT IN THE DETECTION OF LAND COVER CLASSES USING THE WORLDVIEW-2 IMAGERY Ahmed Elsharkawy 1,2, Mohamed Elhabiby 1,3 & Naser El-Sheimy 1,4 1 Dept. of Geomatics Engineering, University of Calgary

More information

Int n r t o r d o u d c u ti t on o n to t o Remote Sensing

Int n r t o r d o u d c u ti t on o n to t o Remote Sensing Introduction to Remote Sensing Definition of Remote Sensing Remote sensing refers to the activities of recording/observing/perceiving(sensing)objects or events at far away (remote) places. In remote sensing,

More information

Introduction of Satellite Remote Sensing

Introduction of Satellite Remote Sensing Introduction of Satellite Remote Sensing Spatial Resolution (Pixel size) Spectral Resolution (Bands) Resolutions of Remote Sensing 1. Spatial (what area and how detailed) 2. Spectral (what colors bands)

More information

Image Fusion. Pan Sharpening. Pan Sharpening. Pan Sharpening: ENVI. Multi-spectral and PAN. Magsud Mehdiyev Geoinfomatics Center, AIT

Image Fusion. Pan Sharpening. Pan Sharpening. Pan Sharpening: ENVI. Multi-spectral and PAN. Magsud Mehdiyev Geoinfomatics Center, AIT 1 Image Fusion Sensor Merging Magsud Mehdiyev Geoinfomatics Center, AIT Image Fusion is a combination of two or more different images to form a new image by using certain algorithms. ( Pohl et al 1998)

More information

Francesco Holecz. TUBE II meeting - 17 June Land Degradation. Land Degradation

Francesco Holecz. TUBE II meeting - 17 June Land Degradation. Land Degradation Land Degradation Francesco Holecz Objective To identify and monitor land degraded areas, in particular those related to agricultural and pastoral activities. Following products are generated: Land cover

More information

Introduction to Remote Sensing

Introduction to Remote Sensing Introduction to Remote Sensing Spatial, spectral, temporal resolutions Image display alternatives Vegetation Indices Image classifications Image change detections Accuracy assessment Satellites & Air-Photos

More information

Advanced Optical Satellite (ALOS-3) Overviews

Advanced Optical Satellite (ALOS-3) Overviews K&C Science Team meeting #24 Tokyo, Japan, January 29-31, 2018 Advanced Optical Satellite (ALOS-3) Overviews January 30, 2018 Takeo Tadono 1, Hidenori Watarai 1, Ayano Oka 1, Yousei Mizukami 1, Junichi

More information

FOREST MAPPING IN MONGOLIA USING OPTICAL AND SAR IMAGES

FOREST MAPPING IN MONGOLIA USING OPTICAL AND SAR IMAGES FOREST MAPPING IN MONGOLIA USING OPTICAL AND SAR IMAGES D.Enkhjargal 1, D.Amarsaikhan 1, G.Bolor 1, N.Tsetsegjargal 1 and G.Tsogzol 1 1 Institute of Geography and Geoecology, Mongolian Academy of Sciences

More information

EXAMPLES OF TOPOGRAPHIC MAPS PRODUCED FROM SPACE AND ACHIEVED ACCURACY CARAVAN Workshop on Mapping from Space, Phnom Penh, June 2000

EXAMPLES OF TOPOGRAPHIC MAPS PRODUCED FROM SPACE AND ACHIEVED ACCURACY CARAVAN Workshop on Mapping from Space, Phnom Penh, June 2000 EXAMPLES OF TOPOGRAPHIC MAPS PRODUCED FROM SPACE AND ACHIEVED ACCURACY CARAVAN Workshop on Mapping from Space, Phnom Penh, June 2000 Jacobsen, Karsten University of Hannover Email: karsten@ipi.uni-hannover.de

More information

CHAPTER 7: Multispectral Remote Sensing

CHAPTER 7: Multispectral Remote Sensing CHAPTER 7: Multispectral Remote Sensing REFERENCE: Remote Sensing of the Environment John R. Jensen (2007) Second Edition Pearson Prentice Hall Overview of How Digital Remotely Sensed Data are Transformed

More information

Image interpretation I and II

Image interpretation I and II Image interpretation I and II Looking at satellite image, identifying different objects, according to scale and associated information and to communicate this information to others is what we call as IMAGE

More information

REMOTE SENSING. Topic 10 Fundamentals of Digital Multispectral Remote Sensing MULTISPECTRAL SCANNERS MULTISPECTRAL SCANNERS

REMOTE SENSING. Topic 10 Fundamentals of Digital Multispectral Remote Sensing MULTISPECTRAL SCANNERS MULTISPECTRAL SCANNERS REMOTE SENSING Topic 10 Fundamentals of Digital Multispectral Remote Sensing Chapter 5: Lillesand and Keifer Chapter 6: Avery and Berlin MULTISPECTRAL SCANNERS Record EMR in a number of discrete portions

More information

CanImage. (Landsat 7 Orthoimages at the 1: Scale) Standards and Specifications Edition 1.0

CanImage. (Landsat 7 Orthoimages at the 1: Scale) Standards and Specifications Edition 1.0 CanImage (Landsat 7 Orthoimages at the 1:50 000 Scale) Standards and Specifications Edition 1.0 Centre for Topographic Information Customer Support Group 2144 King Street West, Suite 010 Sherbrooke, QC

More information

Image interpretation. Aliens create Indian Head with an ipod? Badlands Guardian (CBC) This feature can be found 300 KMs SE of Calgary.

Image interpretation. Aliens create Indian Head with an ipod? Badlands Guardian (CBC) This feature can be found 300 KMs SE of Calgary. Image interpretation Aliens create Indian Head with an ipod? Badlands Guardian (CBC) This feature can be found 300 KMs SE of Calgary. 50 1 N 110 7 W Milestones in the History of Remote Sensing 19 th century

More information

Module 11 Digital image processing

Module 11 Digital image processing Introduction Geo-Information Science Practical Manual Module 11 Digital image processing 11. INTRODUCTION 11-1 START THE PROGRAM ERDAS IMAGINE 11-2 PART 1: DISPLAYING AN IMAGE DATA FILE 11-3 Display of

More information

The availability of cloud free Landsat TM and ETM+ land observations and implications for global Landsat data production

The availability of cloud free Landsat TM and ETM+ land observations and implications for global Landsat data production 14475 The availability of cloud free Landsat TM and ETM+ land observations and implications for global Landsat data production *V. Kovalskyy, D. Roy (South Dakota State University) SUMMARY The NASA funded

More information

Radiometric and Geometric Correction Methods for Active Radar and SAR Imageries

Radiometric and Geometric Correction Methods for Active Radar and SAR Imageries Radiometric and Geometric Correction Methods for Active Radar and SAR Imageries M. Mansourpour 1, M.A. Rajabi 1, Z. Rezaee 2 1 Dept. of Geomatics Eng., University of Tehran, Tehran, Iran mansourpour@gmail.com,

More information

Co-ReSyF RA lecture: Vessel detection and oil spill detection

Co-ReSyF RA lecture: Vessel detection and oil spill detection This project has received funding from the European Union s Horizon 2020 Research and Innovation Programme under grant agreement no 687289 Co-ReSyF RA lecture: Vessel detection and oil spill detection

More information

Change Detection using SAR Data

Change Detection using SAR Data White Paper Change Detection using SAR Data John Wessels: Senior Scientist PCI Geomatics Change Detection using SAR Data The ability to identify and measure significant changes in target scattering and/or

More information

Multispectral Fusion for Synthetic Aperture Radar (SAR) Image Based Framelet Transform

Multispectral Fusion for Synthetic Aperture Radar (SAR) Image Based Framelet Transform Radar (SAR) Image Based Transform Department of Electrical and Electronic Engineering, University of Technology email: Mohammed_miry@yahoo.Com Received: 10/1/011 Accepted: 9 /3/011 Abstract-The technique

More information

Remote Sensing for Rangeland Applications

Remote Sensing for Rangeland Applications Remote Sensing for Rangeland Applications Jay Angerer Ecological Training June 16, 2012 Remote Sensing The term "remote sensing," first used in the United States in the 1950s by Ms. Evelyn Pruitt of the

More information

Important Missions. weather forecasting and monitoring communication navigation military earth resource observation LANDSAT SEASAT SPOT IRS

Important Missions. weather forecasting and monitoring communication navigation military earth resource observation LANDSAT SEASAT SPOT IRS Fundamentals of Remote Sensing Pranjit Kr. Sarma, Ph.D. Assistant Professor Department of Geography Mangaldai College Email: prangis@gmail.com Ph. No +91 94357 04398 Remote Sensing Remote sensing is defined

More information

Section 2 Image quality, radiometric analysis, preprocessing

Section 2 Image quality, radiometric analysis, preprocessing Section 2 Image quality, radiometric analysis, preprocessing Emmanuel Baltsavias Radiometric Quality (refers mostly to Ikonos) Preprocessing by Space Imaging (similar by other firms too): Modulation Transfer

More information

Sommersemester Prof. Dr. Christoph Kleinn Institut für Waldinventur und Waldwachstum Arbeitsbereich Fernerkundung und Waldinventur.

Sommersemester Prof. Dr. Christoph Kleinn Institut für Waldinventur und Waldwachstum Arbeitsbereich Fernerkundung und Waldinventur. Basics of Remote Sensing Some literature references Franklin, SE 2001 Remote Sensing for Sustainable Forest Management Lewis Publishers 407p Lillesand, Kiefer 2000 Remote Sensing and Image Interpretation

More information

Active and Passive Microwave Remote Sensing

Active and Passive Microwave Remote Sensing Active and Passive Microwave Remote Sensing Passive remote sensing system record EMR that was reflected (e.g., blue, green, red, and near IR) or emitted (e.g., thermal IR) from the surface of the Earth.

More information

Monitoring agricultural plantations with remote sensing imagery

Monitoring agricultural plantations with remote sensing imagery MPRA Munich Personal RePEc Archive Monitoring agricultural plantations with remote sensing imagery Camelia Slave and Anca Rotman University of Agronomic Sciences and Veterinary Medicine - Bucharest Romania,

More information

remote sensing? What are the remote sensing principles behind these Definition

remote sensing? What are the remote sensing principles behind these Definition Introduction to remote sensing: Content (1/2) Definition: photogrammetry and remote sensing (PRS) Radiation sources: solar radiation (passive optical RS) earth emission (passive microwave or thermal infrared

More information

SAR Remote Sensing (Microwave Remote Sensing)

SAR Remote Sensing (Microwave Remote Sensing) iirs SAR Remote Sensing (Microwave Remote Sensing) Synthetic Aperture Radar Shashi Kumar shashi@iirs.gov.in Electromagnetic Radiation Electromagnetic radiation consists of an electrical field(e) which

More information

1. Theory of remote sensing and spectrum

1. Theory of remote sensing and spectrum 1. Theory of remote sensing and spectrum 7 August 2014 ONUMA Takumi Outline of Presentation Electromagnetic wave and wavelength Sensor type Spectrum Spatial resolution Spectral resolution Mineral mapping

More information

IMPACT OF BAQ LEVEL ON INSAR PERFORMANCE OF RADARSAT-2 EXTENDED SWATH BEAM MODES

IMPACT OF BAQ LEVEL ON INSAR PERFORMANCE OF RADARSAT-2 EXTENDED SWATH BEAM MODES IMPACT OF BAQ LEVEL ON INSAR PERFORMANCE OF RADARSAT-2 EXTENDED SWATH BEAM MODES Jayson Eppler (1), Mike Kubanski (1) (1) MDA Systems Ltd., 13800 Commerce Parkway, Richmond, British Columbia, Canada, V6V

More information

Microwave Remote Sensing

Microwave Remote Sensing Provide copy on a CD of the UCAR multi-media tutorial to all in class. Assign Ch-7 and Ch-9 (for two weeks) as reading material for this class. HW#4 (Due in two weeks) Problems 1,2,3 and 4 (Chapter 7)

More information

Acknowledgment. Process of Atmospheric Radiation. Atmospheric Transmittance. Microwaves used by Radar GMAT Principles of Remote Sensing

Acknowledgment. Process of Atmospheric Radiation. Atmospheric Transmittance. Microwaves used by Radar GMAT Principles of Remote Sensing GMAT 9600 Principles of Remote Sensing Week 4 Radar Background & Surface Interactions Acknowledgment Mike Chang Natural Resources Canada Process of Atmospheric Radiation Dr. Linlin Ge and Prof Bruce Forster

More information

Imaging radar Imaging radars provide map-like coverage to one or both sides of the aircraft.

Imaging radar Imaging radars provide map-like coverage to one or both sides of the aircraft. CEE 6100 / CSS 6600 Remote Sensing Fundamentals 1 Imaging radar Imaging radars provide map-like coverage to one or both sides of the aircraft. Acronyms: RAR real aperture radar ("brute force", "incoherent")

More information