EXERCISE 1 - REMOTE SENSING: SENSORS WITH DIFFERENT RESOLUTION


Program: ArcView 3.x

1. Copy the folder FYS_FA with its whole contents from Kursdata (L:\FA\FYS_FA) to C:\Tempdata.
2. Open the folder and double-click the file spysat.apr.
3. You have now entered the GIS software ArcView. In this environment it is possible to view, analyse and print satellite images as well as other types of geographical data (e.g. digital maps).
4. Maximize the ArcView window by clicking at the top right.
5. We now start with Exercise 1. Maximize the window entitled Exercise 1.
6. Check the box to the left of jan88.tif.
7. What do you see?
8. This is a low-resolution satellite image. What features can you identify?
9. Click the 5th button from the left in the lower row ( ). This is the zoom tool. Use it to zoom in and out in the image. What happens when you zoom to the maximum?
10. How big are the picture elements (in miles)? Measure them using the tool 8th from the left ( ), lower row. The distance is displayed in the lower left corner.
11. Use the zoom-out button to zoom out until you see the whole of Africa again.
12. Mark the box in front of apr88.tif. What happens? Mark and unmark to see the differences. Do the same with jul88.tif and okt88.tif. Why do the images change, and what can they be used for?
13. Close the Exercise 1 window.

Jean-Nicolas Poussart, Nov. 2004. Modified by Emilie Stroh, Nov. 2006.

14. We now start Exercise 2. Maximize the window entitled Exercise 2.
15. You are now looking at a satellite image with a resolution of 30 by 30 meters.
16. Type 500 000 in the scale window (top right). What does this mean?
17. Use the pan tool ( ) to move around in the image. Can you see any signs of human activity? Where do you think the image is from?
18. Can you find rivers, roads and cultivated fields?
19. Use the zoom tool to zoom to full extent. Zoom in on the lower left corner. What do you think this is?
20. Close the window.
21. Open Exercise 3.
22. Maximize the window.
23. The resolution of this satellite image is also 30 x 30 meters.
24. Where do you think humans are living in this area?
25. Type 500 000 in the scale window.
26. Use the pan tool to move around in the image. Can you see any signs of human activity? From what part of the earth do you think the image is?
27. Can you find rivers, roads and cultivated fields?
28. Use the zoom tool to zoom to full extent. Zoom in on the spot located a little to the right of the two lakes. What do you think this is?
29. Close the window.
30. Open Exercise 4.
31. Maximize the window.
32. Try to find Lund and investigate what you can see at this resolution.

33. Can you find Sturup (the airport)?
34. Have a look at other cities and airports as well; you may try Copenhagen or Stockholm.
35. Close the window.
36. Open Exercise 5.
37. Maximize the window.
38. Why do you think this image looks strange?
39. What information can you get?
40. What is the resolution?
41. Where do you think you are in the world?
42. Now check the box next to the layer IR-colour.
43. Zoom in and take a close look at the data.
44. Which are the smallest objects you can find in the image?
45. Try to locate rivers, roads and buildings.
46. Identify an object and try to zoom in as much as possible. At what level does the zooming become useless?
47. Check the third box (next to the layer Black and White).
48. Try to locate rivers, roads and buildings.
49. Identify an object and try to zoom in as much as possible. At what level does the zooming become useless?
50. Find some buildings and zoom in as much as possible (but you must still be able to identify the objects).
51. Uncheck the layer Black and White. What happens?
52. What do you think the resolution of the third image is?
53. The first image in this exercise was a satellite image of medium-low resolution, the second an air photo with 2 m resolution and the third a 1 m resolution air photo.
54. Close the window.
55. Open Exercise 6.

56. Maximize the window.
57. Explore the image and determine what are the smallest objects you can identify. For what purposes would this image be useful?
58. Now mark the second image in this exercise.
59. Explore the information content of the image.
60. Can you identify the aircraft types at the airport?
61. This is 1 m resolution commercially available satellite data. Current spy satellites may have resolutions of 0.25-0.5 m or less. Imagine this and think back on Exercise 5, where you saw the difference between 1 and 2 m resolution data!
62. Close the program ArcView.

EXERCISE 2 - INTRODUCTION TO MULTI-SPECTRAL REMOTE SENSING 1

Programs: IDRISI32 / MS Excel

Objectives: The exercise introduces satellite-based remote sensing and is divided into two parts.

Tasks in part 1 include:
- Exploring images captured by different sensors/platforms;
- Introducing the concept of false color composites;
- Exploring reflectance values.

Part 2 includes:
- Developing training sites (main land use / land cover classes);
- Calculating the spectral signatures of the main land use classes;
- Performing a number of supervised classifications;
- Performing a radiometric correction (calculation of "at-satellite reflectance").

Data: The data used for this exercise includes images from the following sensors:
- NOAA AVHRR
- Landsat MSS
- Landsat TM
- Spot X
- Aerial photographs

PART 1 - all data for part 1 are in the folder SENSORS

1. Getting Started

- You can start IDRISI32 through Start > Programs > Idrisi32 > Idrisi32.

1 This practical exercise contains modified parts of the Introductory Image Processing Exercises from the IDRISI Tutorial (J. Ronald Eastman, 1999).

Once IDRISI is launched, the first step is to specify where the input data is located on the computer, and where the output data (new files created during the exercises) are to be saved. This is known in IDRISI as the Working Folder. Normally, the Working Folder is the same for both the input and output data.

- Click on the File menu and choose the Data Paths option. In the new dialogue box for Project environment, click Browse to specify your Main working folder. Choose C:\Tempdata\FYS_FA.

2. Loading and Exploring Images

a) NOAA-AVHRR Images

The first satellite images you will look at are from the AVHRR (Advanced Very High Resolution Radiometer) sensor, mounted on a NOAA satellite. The pixel resolution is approx. 1 km. AVHRR sensors record information in 5 bands, of which band 1 (red) captures the wavelengths between 580-680 nm, and band 2 (NIR) between 725-1100 nm.

- From the Display menu, choose Display Launcher.
- With File type to be displayed set to Raster files, select the image NOAARED (red band) by clicking the small icon to the right of the file name box. Make sure the autoscale, title and legend are OFF, and select Grey Scale as the Palette fill.

Notice that the contrast is very low, and almost the entire image is dark grey. The Grey Scale palette ranges from black (0) to white (255). To see why the contrast is so low, we can explore the range of reflectance values in greater detail by looking at a histogram of all pixel values.

- Choose Analysis > Database Query > HISTO, select NOAARED as the Input filename and click OK (ignore the warning message by clicking OK again).

The horizontal axis of the histogram may be interpreted as if it were the Grey Palette (0 = black, 255 = white). The vertical axis shows how many pixels in the image have that value. Almost all pixel values range between 25 and 75 (with a mean of 35), and this clearly

explains why the contrast was so low (the Grey Scale ranges from 0 to 255).

- Close the histogram window.
- You can increase the contrast under Layer Properties in the Composer window. The autoscale option will accomplish a simple linear stretch for display purposes. When autoscaling is used, the minimum value in the image is displayed with the lowest color in the palette and the maximum with the highest. Activate autoscale and change the Display Min and Display Max values to what you believe is reasonable.

You will now explore reflectance values at different locations in the image.

- You can read the reflectance value of a given pixel by clicking on it with the cursor inquiry mode.

2.1) What are the average reflectance values in the red part of the electromagnetic spectrum (i.e. the NOAARED image) for:

Water:
South Sweden (mainly agricultural land):
Småland (mainly coniferous forest):
Clouds:

- Now do the same looking at the near infra-red (NIR) part of the electromagnetic spectrum (image NOAANIR; choose Grey Scale as Palette fill).

2.2) What are the average reflectance values in the NIR part of the electromagnetic spectrum for:

Water:
South Sweden (mainly agricultural land):
Småland (mainly coniferous forest):
Clouds:
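The simple linear stretch that the autoscale option performs (mapping Display Min to black and Display Max to white) can be sketched outside IDRISI. This is only an illustration: the array `band` below is a made-up stand-in for the NOAARED pixel values, using the 25-75 range seen in the histogram.

```python
import numpy as np

def linear_stretch(dn, display_min, display_max):
    """Linearly rescale pixel values so that display_min maps to 0
    and display_max maps to 255, clipping values outside the range."""
    dn = np.asarray(dn, dtype=float)
    scaled = (dn - display_min) / (display_max - display_min) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)

# A low-contrast band whose values sit between 25 and 75,
# stretched over that range for display.
band = np.array([25, 35, 50, 75])
print(linear_stretch(band, 25, 75))   # -> [  0  51 127 255]
```

Without the stretch, these pixels would all display as nearly the same dark grey; with it, the full black-to-white range is used.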

Now think about the spectral reflectance signature of green vegetation by looking at the values you wrote above and the figure below.

- Now display the image NOAAKVOT, but this time using Quantitative (Standard IDRISI Palette) as Palette fill, and activate (turn on) autoscale and Legend.

This is a so-called simple vegetation index, created by dividing the NIR band by the red band. The result was then scaled to have values between 0-255. Radiation in the red wavelength region is absorbed by pigments in the leaves, chiefly chlorophyll, and used for photosynthesis. Conversely, the high reflection in the NIR is induced by the internal leaf structure and leaf components. It is common practice to combine the reflectances in these two bands into an index that exploits these differences and is therefore sensitive to the amount of vegetation. You can now clearly see the differences in vegetation cover between southern Skåne and Småland.

2.3) What is the smallest object you think you can see on the NOAA images? (Tip: you can zoom in using )

2.4) Can you see man-made objects or features on the images? If so, what?
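The simple vegetation index behind NOAAKVOT (NIR divided by red, rescaled to 0-255) can be sketched as follows. The two small arrays are invented reflectance values chosen to mimic vegetation, water and sparse vegetation; they are not the NOAA data.

```python
import numpy as np

def ratio_index(nir, red):
    """Divide the NIR band by the red band, then rescale the result
    linearly to 0-255 (as in the NOAAKVOT image)."""
    ratio = nir.astype(float) / red.astype(float)
    lo, hi = ratio.min(), ratio.max()
    return ((ratio - lo) / (hi - lo) * 255.0).astype(np.uint8)

# Vegetation reflects strongly in NIR and little in red, so vegetated
# pixels end up near 255; water (low NIR) ends up near 0.
nir = np.array([60, 10, 40])   # vegetation, water, sparse vegetation
red = np.array([10,  8, 20])
print(ratio_index(nir, red))   # -> [255   0  40]
```

The vegetated pixel dominates the index because its NIR/red ratio is much larger than that of water, which is exactly the contrast the NOAAKVOT display exploits.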

2.5) Look at the island Öland in the Baltic Sea. Why do you think there is such a distinct difference between the southern and northern parts of the island? What does this imply?

b) Landsat MSS Images

- Now load the image MSS_FCC.

This is a so-called false color composite (FCC) over the Helsingborg area, from the MSS (Multispectral Scanner) sensor mounted on a Landsat satellite. The pixel size is 80 m. It is called a false color composite because the colors displayed do not represent the colors we see with our eyes. On this image, the green wavelengths are presented in blue, the red in green and the NIR in red. This is done to allow us to see the NIR wavelengths.

On a false color composite (FCC) image:
Green wavelengths → blue on image
Red wavelengths → green on image
NIR wavelengths → red on image

This color scheme makes it very easy to see the vegetated areas, which appear red on the image.

2.6) Which areas are the reddest? What type of vegetation is there?

2.7) Locate an area near Söderåsen which you think is a deciduous (broadleaf) forest. By clicking on the image, you can get the pixel values for all three bands (R:G:B). Note the approximate values in all three bands. Do the same for coniferous forest and areas dominated by agricultural land.

2.8) Can you see man-made features in the landscape? Approximately how big are they?

c) Landsat TM Images

- Load the image TM_FCC.

This is an FCC image from the TM (Thematic Mapper) sensor, also mounted on Landsat satellites, with a resolution of 30 m. You partly see Söderåsen, with agricultural land to the south.

2.9) Try to locate the same area you studied in the MSS image. What are the biggest differences?

2.10) Locate roads of different sizes. How wide do you think they are in reality? Is it possible to see roads on the image that are smaller than the pixel size (resolution)? If so, how can this be possible?

2.11) The possibility to see roads depends greatly on the conditions. In what type of landscape is it easiest to see the roads, and where do they disappear?

d) SPOT Images

- Load the image SPOT_FCC.

This is also an FCC image, from the European SPOT satellite over Helsingborg, with a resolution of 20 m.

2.12) Notice the differences in colors between the city centre and the outer residential areas (suburbs), where houses have gardens. Why are the colors different?

2.13) At what period of the year (month/season) do you think the image was taken, and why? (Tip: look at the agricultural fields.)

e) Aerial Photographs

- Load the image REV_FCC, which is an FCC aerial photograph showing an area outside Revingehed.

2.14) Try to estimate the resolution (pixel size) of this image. The widest tractor tracks are about 3 m wide.

- Close all images in IDRISI.

PART 2 - all data used for Part 2 are in the folder H_BY

Don't forget to change your Main working folder to specify the location of your new data! (File > Data Paths)

For the second part of the exercise, you will be using part of a Landsat TM scene over the area of Hörby, Skåne. You will first create what are known as training sites for the main landuse classes, and thereafter automatically explore the spectral signatures of these landuse classes. Once the spectral signatures are calculated, you will classify the image using a number of algorithms. In the last part of the exercise, you will calculate the "at-satellite reflectance" by transforming digital numbers (0-255) to reflectance values (%).

3. Training Site Development

Training sites are areas (selections of pixels) that represent a specific landuse or land cover class. For example, you will identify areas in the image that represent water, and then select a number of pixels (by digitizing) which are to be representative of that class. Training sites should be as homogeneous as possible, and many training sites (polygons) can be created for the same landuse class. Each known landuse class will be assigned a unique integer number. The classes you should identify and the training sites you will create (with their unique numbers) are as follows:

1 - deciduous forest (broadleaf trees) *
2 - coniferous forest (pine, spruce, etc.) *
3 - agricultural fields with vegetation
4 - agricultural fields without vegetation (bare soil)
5 - urban (built-up) areas
6 - water without visible traces of algae
7 - water with visible traces of algae blooming or other particles

* Tip: look back at question 2.7 to see how you can distinguish coniferous from deciduous forests: one of them appears darker, mainly due to lower reflectance in the NIR.

You will base your training sites on an FCC image (TM_FCC).

- Display TM_FCC.

You will use the on-screen digitizing features to digitize polygons around your training sites. On-screen digitizing in IDRISI is made available through the following toolbar icons:

Digitize | Delete Feature | Save Digitized Data

You can use the navigation buttons at the bottom of Composer to pan and to zoom in or out.

- Select the Digitize icon from the toolbar, and enter TRAININGSITES as the name of the layer to be created. Then enter the feature identifier (number) for the class you want to begin with. Press OK. The vector polygon layer TRAININGSITES is automatically added to the composition and is listed in Composer. The cursor will now appear as the digitizing icon in the image.
- Move the cursor to the starting point for the boundary of your training site and press the left mouse button. Then move to the next point along the boundary and press the left mouse button again. To finish digitizing a polygon, press the right mouse button.
- You can save your digitized work at any time by pressing the Save Digitized Data icon on the toolbar. If you make a mistake and wish to delete a polygon, select the Delete Feature icon, select the polygon you wish to delete and press Delete on the keyboard.

Any number of training sites, or polygons, with the same ID number may be created for each cover type, but there should be an adequate number of pixels for each cover type for statistical characterization. For this exercise, aim to have at least 100 pixels for each training set; approximately 3-4 training sites per cover type should be enough.

- Create all the training sites in the same vector layer (a question will appear before you begin to digitize a new polygon; choose to Add features to the currently active vector layer and press OK).
- Check the ID value in the window that appears: make sure it is the ID number for the cover type that you are going to digitize.
- Continue until you have training sites for each cover class.
- SAVE!

4. Signature Development

Once you have a training site vector file, you are ready to create the signature file.

- Run MAKESIG from the Analysis > Image Processing > Signature Development menu.
- Choose Vector as the training site file type and enter TRAININGSITES as the file defining the training sites. Click the Enter Signature Filenames button. A separate signature file will be created for each identifier (class) in the vector file. Enter the name for each identifier shown (e.g. 1 = forest). When you have entered all file names, click OK.
- Indicate that 6 bands of imagery will be processed by pressing the up arrow on the spin button. Click the pick list button in the first box and choose TM1. In the second box, choose TM2, and so on until all six boxes represent an image.

MAKESIG automatically creates a spectral signature group file that contains all the signature file names. To compare these signatures, you can graph them.

- Run SIGCOMP from the Analysis > Image Processing > Signature Development menu. Choose to use a signature group file and choose TRAININGSITES. Display them by their means.

4.1) Which band seems to be the best for differentiating the landuse classes?

5. Supervised Classification

There are several possible algorithms (statistical techniques) for classifying images. When the classification is based on training sites selected by the user, it is said to be a supervised classification. There are also unsupervised classification schemes, where the classes are based purely on statistics and the number of classes to be made.
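A supervised classifier works from per-class statistics like the ones MAKESIG derives from the training sites. The mean-signature part of that computation can be sketched as below; the arrays are invented toy data (two bands, six pixels), not the Hörby scene, and real signatures would be built from all six TM bands.

```python
import numpy as np

def make_signatures(bands, class_ids):
    """Return {class_id: per-band mean vector} for the training pixels.
    `bands` has shape (n_bands, n_pixels); `class_ids` has shape
    (n_pixels,), with 0 meaning the pixel is not in any training site."""
    sigs = {}
    for cid in np.unique(class_ids):
        if cid == 0:
            continue
        mask = class_ids == cid
        sigs[int(cid)] = bands[:, mask].mean(axis=1)
    return sigs

# Toy example: IDs 1 (water) and 3 (vegetated fields), as in the exercise.
bands = np.array([[10, 12, 80, 82, 11, 81],
                  [ 5,  7, 60, 62,  6, 61]])
ids = np.array([1, 1, 3, 3, 1, 3])
sigs = make_signatures(bands, ids)
print(sigs[1], sigs[3])   # -> [11.  6.] [81. 61.]
```

Graphing these mean vectors band by band is essentially what SIGCOMP's "display by means" view shows.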

Some of the possible algorithms in IDRISI are:

- Minimum distance to means classifier
- Maximum likelihood classifier
- Parallelepiped classifier

All of the classifiers may be found under Analysis > Image Processing > Hard Classifiers. (Read the help to learn more about these techniques.)

- Run MINDIS (minimum distance to means) first and indicate that you will use raw distances and an infinite maximum search distance. Click on the Signature Group button and choose the TRAININGSITES signature group file. Give an appropriate output name and click the Next button. By default, all bands are to be included in the classification. Click OK.
- Now try MAXLIKE (maximum likelihood). Here, the distributions of reflectance values are described by a probability density function developed on the basis of Bayesian statistics. Choose to use equal prior probabilities for each signature. Choose again the signature group TRAININGSITES; the input grid will then fill automatically. Choose to classify all pixels, and use all bands in the classification.
- Now try the PIPED (parallelepiped) classification, defined by the min/max values, using all bands.

As you can see, the results differ considerably depending on the algorithm used. It is therefore important to select the training areas accurately. In this case the urban areas, for example, most probably include pixels with a wide range of reflectance values in most bands, making this training site (or these sites) rather poor.

- To see the range of reflectance values of the urban training sites, display them graphically with Analysis > Image Processing > Signature Development > SIGCOMP, choosing the filename urban only, displayed by minimum, maximum and mean.

4.2) Which classes are the most difficult to classify (classified wrongly), and why?
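Of the three algorithms, the minimum distance to means rule is the simplest to sketch: each pixel is assigned to the class whose mean signature is nearest in Euclidean distance. The signatures and pixels below are hypothetical two-band values for illustration only.

```python
import numpy as np

def mindist_classify(pixels, signatures):
    """Assign each pixel (a row of band values) to the class whose
    mean signature vector is closest in Euclidean distance."""
    class_ids = list(signatures)
    means = np.array([signatures[c] for c in class_ids])   # (n_classes, n_bands)
    # Distance from every pixel to every class mean, then take the nearest.
    d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return [class_ids[i] for i in d.argmin(axis=1)]

# Hypothetical mean signatures for two classes in two bands.
signatures = {1: np.array([11.0, 6.0]),    # water
              3: np.array([81.0, 61.0])}   # vegetated fields
pixels = np.array([[12.0, 5.0], [79.0, 63.0]])
print(mindist_classify(pixels, signatures))   # -> [1, 3]
```

MAXLIKE replaces the plain distance with a per-class probability density, and PIPED replaces it with a min/max box test per band, which is why the three maps can differ so much for the same training sites.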

6. Performing a Radiometric Correction (calculating the "at-satellite reflectance")

The Landsat TM (Thematic Mapper) sensors record the reflectance by allocating a value between 0 and 255 to each pixel (1 byte per pixel). These values are known as digital numbers (DN). The brightness values (DN) recorded by the sensor are a function of atmospheric properties, target properties and location, as well as sun angle and solar irradiance at the time of sensing. If mosaics of satellite images are made, or if images from different time periods and/or locations are to be compared to each other, the differences in sun angle and solar irradiance should be corrected. Also, the distance between the Sun and the Earth varies in time (e.g. with the seasons), which consequently affects solar irradiance.

A common method to correct these differences and take the above factors into consideration is to calculate the "at-satellite reflectance", a unitless value between 0 and 1 representing the fraction of the incoming solar energy that is reflected back to the sensor. Sun angle correction is applied by calculating pixel brightness values as if the sun had been at zenith on each date of sensing. The easiest way to calculate this is with MS Excel.

The first step is to convert the digital numbers (0-255) to a spectral radiance (mW cm^-2 sr^-1 μm^-1), using formula 1:

    L_i = Lmin_i + ((Lmax_i - Lmin_i) / DNmax_i) * DN        (1)

L_i = Spectral radiance (mW cm^-2 sr^-1 μm^-1) (i = band)
Lmin_i = Min. spectral radiance (mW cm^-2 sr^-1 μm^-1) (sensor response that gives DNmin)
Lmax_i = Max. spectral radiance (mW cm^-2 sr^-1 μm^-1) (sensor response that gives DNmax)
DN = Digital number (0-255 for Landsat TM)
DNmax_i = Maximum digital number (255 for Landsat TM)
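Formula 1 is straightforward to evaluate outside Excel as well. A minimal sketch, using the TM 3 calibration constants from Table 1 as given in this handout:

```python
def dn_to_radiance(dn, lmin, lmax, dn_max=255):
    """Formula 1: convert a digital number to spectral radiance
    (mW cm^-2 sr^-1 um^-1): L = Lmin + (Lmax - Lmin) / DNmax * DN."""
    return lmin + (lmax - lmin) / dn_max * dn

# TM 3 constants from Table 1: Lmin = -5.0, Lmax = 234.4.
print(dn_to_radiance(0, -5.0, 234.4))     # -> -5.0 (Lmin)
print(dn_to_radiance(255, -5.0, 234.4))   # approximately 234.4 (Lmax)
```

By construction, DN = 0 returns Lmin and DN = DNmax returns Lmax, which is a quick sanity check on the constants you type into the spreadsheet.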

- In Excel, start by making a column with all digital values from 0 to 255.
- Using the values in Table 1, then calculate the spectral radiance and the at-satellite reflectance in other columns. Do this for three spectral bands (of your own choice).

Table 1: Spectral radiances Lmin and Lmax in mW cm^-2 sr^-1 μm^-1, and Esun in mW/(cm^2 μm), for the Landsat 7 ETM+ satellite image.

Band    Band width (μm)    Lmin_i    Lmax_i    DNmax    Esun_i
TM 1    0.45-0.52          -6.2      293.7     255      1970
TM 2    0.52-0.60          -6.4      300.9     255      1843
TM 3    0.63-0.69          -5.0      234.4     255      1555
TM 4    0.76-0.90          -5.1      241.1     255      1047
TM 5    1.55-1.75          -1.0      47.57     255      227.1
TM 7    2.08-2.35          -0.35     16.54     255      80.53

With the spectral radiance known, the at-satellite reflectance can be calculated with formula 2.

    ρ_pλ = (π · L_i · d²) / (Esun_i · cos(Θ))        (2)

ρ_pλ = Unitless effective at-satellite reflectance (fraction of energy reaching the sensor)
L_i = Spectral radiance (mW cm^-2 sr^-1 μm^-1) calculated with formula 1
d = Earth-Sun distance in astronomical units (from an astronomical almanac)
Esun_i = Mean solar exoatmospheric spectral irradiance in mW/(cm^2 μm)
Θ = Solar zenith angle in degrees (here Θ = 90° - 49.31° = 40.69°)

The distance between the Earth and the Sun varies in time, but can be roughly approximated as 1. The solar zenith angle is the angle between the vertical and the position of the Sun, while the solar height is the angle between the ground (horizontal) and the Sun. The solar height for the Landsat image is 49.31°.

* NOTE: Excel by default uses angles in radians, not degrees!

- Once the calculations are completed, you must find the relation (equation) required to transform the DN numbers (0-255) to at-satellite reflectances (%). This can be done by, for example, plotting (XY scatter plot) all DN numbers on the X-axis and all at-satellite reflectance values on the Y-axis, and extracting the equation (right-click on the points in the graph, choose Add Trendline; in the trendline menu, select Display equation on chart under Options). The trend is linear.

Write down the equations below:

TM1:
TM2:

TM3:
TM4:
TM5:
TM7:

With these equations you are now able to transform the digital numbers (recorded by the satellite) in each band into reflectance values.

WELL DONE!

Extra resources: The following sites provide interesting learning material/tutorials regarding remote sensing:

The Canada Centre for Remote Sensing: http://www.ccrs.nrcan.gc.ca/ccrs/learn/learn_e.html
NASA: http://rst.gsfc.nasa.gov/start.html
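The whole Excel task in section 6 (radiance via formula 1, reflectance via formula 2, then the linear DN-to-reflectance equation from the trendline) can be cross-checked with a short script. This is a sketch, not part of the exercise: it uses the TM 3 constants from Table 1, d = 1, and the 49.31° solar height stated above, and it recovers the trendline's slope and intercept exactly because both formulas are linear in DN.

```python
import math

LMIN, LMAX, DN_MAX, ESUN = -5.0, 234.4, 255, 1555.0   # TM 3, Table 1
D = 1.0                                    # Earth-Sun distance, approx. 1 AU
THETA = math.radians(90.0 - 49.31)         # zenith angle from solar height

def reflectance(dn):
    """Formulas 1 and 2 chained: DN -> radiance -> at-satellite reflectance."""
    radiance = LMIN + (LMAX - LMIN) / DN_MAX * dn
    return math.pi * radiance * D**2 / (ESUN * math.cos(THETA))

# Because both formulas are linear in DN, reflectance = a*DN + b exactly;
# a and b are what the Excel trendline equation would report for TM 3.
b = reflectance(0)
a = reflectance(1) - b
print(f"TM3: reflectance = {a:.6f} * DN + {b:.6f}")
```

Note that DN = 0 gives a slightly negative reflectance (because Lmin is negative) and DN = 255 stays below 1, as expected for a fractional reflectance.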