Remote Sensing Odyssey 7 Jun 2012 Benjamin Post
Outline
- Definitions
- Applications
- Physics
- Image Processing
- Classifiers
- Ancillary Data
- Data Sources
- Related Concepts
Big Picture
Definitions
Remote Sensing
- Remote sensing is the collection of information about an object without physical contact with the object.
- Traditionally regarded as the use of satellites that collect information about the Earth and its environment.
- Other applications: microscopes, MRIs, etc.
- Both passive and active sensing.
Resolutions
- Spatial: how many meters per pixel?
- Radiometric: how many bits store the recorded spectral information? Typically 8 bits; some platforms have 12 bits.
- Temporal: how often is the data collected? A combination of spatial resolution and platform capabilities. Landsat revisits roughly every 16 days.
- Spectral: what bands of information are collected? It is important to know both the spectral signature of the object in question and the spectral bands of the platform. Landsat: 5 ER bands, 1-2 thermal bands.
Multispectral
- Multispectral imagery captures multiple bands of spectral wavelengths, typically 4-8 bands of information.
- Usually includes visible (blue, green, red) light, and always includes one NIR band.
- Each band is typically an average of the intensities captured across a range of wavelengths.
Applications
Vegetation Monitoring
- Forestry: canopy structure/height, Leaf Area Index, biomass
- Agriculture: crop yield predictions, soil moisture, crop disease
Environment
- Atmosphere: aerosols/ozone, clouds, precipitation
- Water: algae growth/chlorophyll, sediment, temperature
- Surface: moisture, mineral composition, temperature
Administrative
- Land use: commercial, residential, transportation, etc.; tax mapping
- Land cover: NLCD is derived from Landsat imagery
- Pollution detection and monitoring: bodies of water
- Disaster assessment: BWCA blowdown/fire, Katrina
Physics
Basics
- Governed by the laws of electromagnetic radiation (ER); both particle and wave theory apply.
- c = f * λ: frequency is inversely proportional to wavelength.
- E = h * f = h * c / λ: a photon's (AKA quantum) energy is proportional to its frequency.
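The two relations above can be put to work in a few lines. A minimal sketch; the function names and the example wavelengths (0.45 µm blue, 0.85 µm NIR) are illustrative choices, not from the slides:

```python
# Photon energy and frequency from wavelength, using c = f * lambda
# and E = h * c / lambda.
H = 6.626e-34   # Planck's constant (J*s)
C = 2.998e8     # speed of light (m/s)

def frequency(wavelength_m):
    """Frequency (Hz) of ER with the given wavelength, from c = f * lambda."""
    return C / wavelength_m

def photon_energy(wavelength_m):
    """Energy (J) of a single photon, from E = h * c / lambda."""
    return H * C / wavelength_m

blue = 0.45e-6  # 0.45 um, visible blue
nir = 0.85e-6   # 0.85 um, a typical NIR wavelength
# Shorter wavelength -> higher frequency -> higher photon energy.
assert frequency(blue) > frequency(nir)
assert photon_energy(blue) > photon_energy(nir)
```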
ER Spectrum
Interaction
- When ER comes in contact with an object, 4 possibilities occur: reflection, absorption, transmission, and refraction (atmospheric, usually ignored).
- Energy balance: Incident E = Reflected + Absorbed + Transmitted
Reflection
- The main source of RS information, as reflected ER is what the sensor collects.
- Gives an object its color; in imagery, it produces a higher digital number (DN).
- Two types of reflector surfaces to consider: specular and diffuse.
Absorption
- Another key source of RS information; in imagery, it produces smaller DN values.
- Atmospheric molecules (H₂O, CO₂, O₃) absorb and scatter specific wavelengths, leaving behind atmospheric windows suitable for RS.
- Water completely absorbs near infrared (NIR).
- Impervious surfaces absorb NIR, then emit the energy as thermal IR that can be detected.
Atmospheric Window
Transmittance
- Not as important as absorption and reflection, but a present phenomenon.
- Some of the ER passes through the object and can be reflected/absorbed by underlying objects.
- Major uses: measuring tree canopy density using NIR; mapping sea floors (bathymetry).
Refraction/Scattering
- Introduces error/noise in the resulting image.
- Scattering occurs when light interacts with atmospheric particles:
  - Rayleigh: blue wavelengths are scattered in the atmosphere
  - Mie: haze
  - Non-selective: dust
- Usually accounted for, and not much of a concern for general classification of imagery.
Remote Sensing Equation
Image Processing
Image Processing
- Image correction: compensates for distortion, errors, and noise
- Image enhancement: improves or increases visual appearance/quality
- Information extraction: classification of pixels based on spectral response
Image Correction
- Geometric corrections
  - Parametric: fixes systematic errors (scan skew, mirror and platform velocity variations, perspective geometry)
  - Non-parametric: establishes mathematical relationships between the coordinates of pixels and ground control points; involves both rectification and resampling
Rectification
- We know how this works, but how do we tell if it is accurate?
- We calculate the root mean square (RMS) error of the differences between the map and image coordinates:
  RMS = sqrt((X - X_orig)^2 + (Y - Y_orig)^2)
- Acceptable RMS error is <= 0.5 pixel; at z15, this represents roughly 2.3 m of error.
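The RMS check above is easy to script. A minimal sketch over a set of ground control points; the control-point coordinates below are hypothetical, and the 0.5-pixel tolerance is the one cited on the slide:

```python
import math

def rms_error(gcps):
    """Root mean square error over a set of ground control points.

    gcps: list of ((x_map, y_map), (x_orig, y_orig)) pairs, in pixel units.
    Averages the squared offsets, then takes the square root.
    """
    sq = [(xm - xo) ** 2 + (ym - yo) ** 2
          for (xm, ym), (xo, yo) in gcps]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical control points: each rectified position is a fraction
# of a pixel off from its reference location.
pts = [((10.0, 20.0), (10.3, 20.1)),
       ((55.0, 42.0), (54.8, 42.2)),
       ((90.0, 15.0), (90.1, 14.9))]
err = rms_error(pts)
assert err <= 0.5  # within the commonly cited 0.5-pixel tolerance
```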
Resampling
- Nearest neighbor: least precise in terms of placement, but preserves the original radiometry.
- Bilinear interpolation: smooths the image by averaging the 4 closest pixels.
- Cubic convolution: smooths the image by averaging the (16, 25, 36) closest pixels; produces the sharpest image.
- In RS, we always want to preserve radiometry.
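The radiometry trade-off is visible in code. A minimal sketch of the first two resampling methods on a tiny hypothetical raster (the 2x2 DN values are made up; samples assume 0 <= x, y < size-1):

```python
import numpy as np

img = np.array([[10, 20],
                [30, 40]], dtype=float)  # tiny hypothetical 2x2 raster of DNs

def nearest(img, x, y):
    """Nearest-neighbor sample: always returns an original DN unchanged,
    which is why it preserves radiometry."""
    return img[int(round(y)), int(round(x))]

def bilinear(img, x, y):
    """Bilinear sample: distance-weighted average of the 4 surrounding
    pixels, which smooths the image but alters the DNs."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return (img[y0, x0]         * (1 - dx) * (1 - dy) +
            img[y0, x0 + 1]     * dx       * (1 - dy) +
            img[y0 + 1, x0]     * (1 - dx) * dy +
            img[y0 + 1, x0 + 1] * dx       * dy)

# Nearest neighbor keeps radiometry: the result is always an input DN.
assert nearest(img, 0.4, 0.4) == 10.0
# Bilinear smooths: sampling the centre blends all four DNs.
assert bilinear(img, 0.5, 0.5) == 25.0
```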
Image Enhancement
- Processing or transformation of an image to make information extraction easier or more accurate.
- Falls into 2 categories:
  - Spectral-radiometric enhancement: designed to accentuate contrast between information classes at a per-pixel level
  - Spatial enhancement: uses area/neighborhood operations to determine each new pixel value, e.g. convolution filters (high/low pass)
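A neighborhood operation of the kind named above can be sketched as a 3x3 mean (low-pass) convolution; the toy image and the choice to leave borders unfiltered are mine:

```python
import numpy as np

def low_pass_3x3(img):
    """3x3 mean (low-pass) filter: each interior output pixel is the
    average of its 3x3 neighborhood, smoothing high-frequency detail.
    Border pixels are left unfiltered for simplicity."""
    out = img.astype(float).copy()
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            out[r, c] = img[r - 1:r + 2, c - 1:c + 2].mean()
    return out

img = np.zeros((5, 5))
img[2, 2] = 90.0            # a single bright pixel (high-frequency noise)
smoothed = low_pass_3x3(img)
assert smoothed[2, 2] == 10.0   # 90 spread over a 9-pixel neighborhood
```

A high-pass filter would do the opposite, accentuating edges, e.g. by subtracting this smoothed result from the original.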
Spectral-Radiometric Enhancements
- Contrast stretching: distributes the DN values across a larger range of output values.
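A minimal linear contrast stretch, one common form of the technique; the low-contrast example band and the 0-255 output range (typical for 8-bit radiometric resolution) are illustrative:

```python
import numpy as np

def linear_stretch(band, out_min=0, out_max=255):
    """Linear contrast stretch: remap the band's occupied DN range onto
    the full output range so small differences become easier to see."""
    lo, hi = band.min(), band.max()
    scaled = (band - lo) / (hi - lo) * (out_max - out_min) + out_min
    return scaled.round().astype(np.uint8)

# Hypothetical low-contrast band occupying only DNs 100-140.
band = np.array([[100, 110],
                 [120, 140]], dtype=float)
stretched = linear_stretch(band)
assert stretched.min() == 0 and stretched.max() == 255
```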
Spectral-Radiometric Enhancements
- Principal components: shows the correlation between 2 or more bands of information.
- Useful in suburban areas, where there are many mixed pixels and a lot of variance.
- A summary of how it works wouldn't do it justice here; if you are interested, read up on it: http://en.wikipedia.org/wiki/principal_component_analysis
Principal Components
Spectral-Radiometric Enhancements
- Spectral band ratios: used to accentuate differential spectral responses in 2 bands AND to normalize unwanted effects.
- NDVI = (NIR - red) / (NIR + red) is the most common.
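The NDVI ratio above is a one-liner over band arrays. A minimal sketch; the DN values are hypothetical, chosen to reflect the physics described earlier (vegetation reflects NIR strongly and absorbs red, water absorbs NIR):

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - red) / (NIR + red). Values near +1 suggest dense
    healthy vegetation; values near 0 or below suggest soil or water."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red)

# Hypothetical DNs for two pixels: [vegetation, non-vegetated].
nir = np.array([200.0, 60.0])
red = np.array([50.0, 55.0])
v = ndvi(nir, red)
assert v[0] > 0.5        # vegetated pixel ratios high
assert abs(v[1]) < 0.1   # non-vegetated pixel stays near zero
```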
Pan Sharpening
- Panchromatic images are often higher resolution, but black and white.
  - Landsat: pan 15 m/pixel, bands 30 m/pixel
  - IKONOS: pan 1 m/pixel, bands 4 m/pixel
- Pan sharpening ties the colored bands to the pixels of a panchromatic image.
- The resulting image is not as accurate as information natively captured at that resolution.
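The slides don't name a specific algorithm, so as one common example here is a Brovey-style ratio sharpening sketch. It assumes the multispectral bands have already been resampled to the pan grid; the single-pixel arrays are illustrative:

```python
import numpy as np

def brovey_sharpen(r, g, b, pan):
    """Brovey-style pan sharpening: scale each multispectral band by the
    ratio of the high-resolution panchromatic value to the bands' mean
    intensity, so the output inherits the pan image's spatial detail.
    Assumes r, g, b are already resampled to the pan grid."""
    intensity = (r + g + b) / 3.0
    scale = pan / np.where(intensity == 0, 1.0, intensity)  # avoid /0
    return r * scale, g * scale, b * scale

r = np.array([[30.0]])
g = np.array([[60.0]])
b = np.array([[90.0]])
pan = np.array([[120.0]])  # brighter, sharper pan pixel
r2, g2, b2 = brovey_sharpen(r, g, b, pan)
# The sharpened intensity matches the pan image while band ratios persist.
assert np.isclose((r2 + g2 + b2) / 3.0, pan).all()
```

Note how this illustrates the caveat on the slide: the output DNs are synthesized, not measured at that resolution.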
Pan Sharpening
Classifiers
Before we dive in
- Careful consideration must be given to the class types and the number of classes we want to extract from the imagery.
- Classes should be mutually exclusive, exhaustive, and hierarchical.
- The overall goal is to minimize the differences within a class and to maximize the differences between classes.
Classifiers
- Hard classifier: Boolean; you're in or you're out.
- Soft classifier: uses a membership vector to determine classification.
Hard Classifiers - Unsupervised
- Uses cluster analysis to group natural clusters of similar brightness from a multispectral image.
- The clusters are not yet classified; the user must go through and apply the classification scheme.
- ISODATA is the most common algorithm.
- Advantages: quick and easy to classify (depending on GIS platform); requires no previous knowledge of the area being extracted.
- Disadvantages: the number of classes greatly affects clustering accuracy, and the exact number is only found by trial and error.
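ISODATA itself adds cluster splitting and merging on top of a k-means core; this sketch shows only that core assign-then-update loop, with made-up 2-band pixel spectra:

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Plain k-means clustering of pixel spectra (the core of ISODATA,
    which additionally splits and merges clusters between iterations)."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest cluster center.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned pixels.
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# Two obvious spectral clusters in a hypothetical 2-band feature space.
pixels = np.array([[10.0, 12.0], [11.0, 13.0],
                   [200.0, 190.0], [198.0, 195.0]])
labels, _ = kmeans(pixels, k=2)
assert labels[0] == labels[1] and labels[2] == labels[3]
assert labels[0] != labels[2]
```

As the slide notes, the resulting cluster labels still carry no meaning; the analyst must map each cluster to a class in the chosen scheme.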
Hard Classifiers - Supervised
- Parallelepiped: bounding box around class clusters. Quick and considers variance, but insensitive to boundary overlap.
- Minimum distance: measures the Euclidean distance from a pixel to each class mean. Quick, but insensitive to variance.
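A minimal minimum-distance classifier; the 2-band class means are hypothetical training values:

```python
import numpy as np

def minimum_distance(pixel, class_means):
    """Assign the pixel to the class whose mean spectrum is closest in
    Euclidean distance. Fast, but ignores per-class variance entirely."""
    names = list(class_means)
    d = [np.linalg.norm(pixel - class_means[n]) for n in names]
    return names[int(np.argmin(d))]

# Hypothetical 2-band (e.g. red, NIR) training means.
means = {"water":  np.array([30.0, 10.0]),
         "forest": np.array([40.0, 180.0])}
assert minimum_distance(np.array([35.0, 15.0]), means) == "water"
assert minimum_distance(np.array([45.0, 170.0]), means) == "forest"
```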
Hard Classifiers - Supervised
- Maximum likelihood: computes the probability that a pixel belongs to each class.
- Assumes that training sets are normally distributed, so accurate training sets are the key.
- Considered the best per-pixel classifier to use.
- Works extremely well with multispectral data, because it handles shared probabilities across band combinations.
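Under the normality assumption on the slide, the classifier reduces to comparing Gaussian log-likelihoods. A sketch with hypothetical class statistics (diagonal covariances chosen so "urban" is much more variable than "water"):

```python
import numpy as np

def max_likelihood(pixel, classes):
    """Pick the class with the highest Gaussian log-likelihood, assuming
    each class's training pixels are normally distributed.
    classes: {name: (mean vector, covariance matrix)}."""
    best, best_ll = None, -np.inf
    for name, (mu, cov) in classes.items():
        diff = pixel - mu
        inv = np.linalg.inv(cov)
        # Log-likelihood up to a constant shared by all classes.
        ll = -0.5 * (np.log(np.linalg.det(cov)) + diff @ inv @ diff)
        if ll > best_ll:
            best, best_ll = name, ll
    return best

# Hypothetical 2-band classes: "urban" spectra vary far more than "water".
classes = {
    "water": (np.array([30.0, 10.0]), np.eye(2) * 4.0),
    "urban": (np.array([80.0, 60.0]), np.eye(2) * 400.0),
}
assert max_likelihood(np.array([31.0, 11.0]), classes) == "water"
assert max_likelihood(np.array([100.0, 80.0]), classes) == "urban"
```

Unlike minimum distance, the covariance term lets a high-variance class claim pixels that sit far from its mean.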
Soft Classifiers
- Contextual: creates a membership vector from neighboring pixels, usually a 3x3 window.
- Neural networks: can be effective at dealing with mixed pixels. (Professor not impressed with NN results.)
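One simple way to build such a membership vector is the fraction of each class within the 3x3 window; the label grid below is hypothetical:

```python
import numpy as np

def contextual_membership(labels, r, c, n_classes):
    """Membership vector for pixel (r, c): the fraction of each class
    present in its 3x3 neighborhood (a simple contextual soft classifier).
    Edge windows are clipped to the image bounds."""
    window = labels[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
    counts = np.bincount(window.ravel(), minlength=n_classes)
    return counts / counts.sum()

# Hypothetical hard-classified labels for a 3x3 patch (2 classes).
labels = np.array([[0, 0, 1],
                   [0, 1, 1],
                   [0, 1, 1]])
m = contextual_membership(labels, 1, 1, n_classes=2)
assert np.allclose(m, [4 / 9, 5 / 9])  # the centre pixel leans class 1
```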
Other Classifiers
- Object-oriented image segmentation is emerging as a viable option for high-resolution imagery.
- The image is segmented into objects (polygons) with shape and spectral homogeneity.
- Objects are then classified using one of the classifiers previously mentioned; rule-based decision algorithms can also be used.
Ancillary Data
Ancillary Data
- Definition: the inclusion of outside information to assist in imagery classification.
- Typically involves the use of elevation or elevation-derived products.
- Ancillary data can be brought in at any time:
  - Pre-processing: geographical stratification
  - Classification: as additional bands
  - Post-processing: sorting
Example: Tree Canopy Assessment
- Segmented QuickBird imagery
- Combined with an nDSM derived from LIDAR
Data Sources
EROS
- Earth Resources Observation and Science, administered by the USGS.
- Responsible for Landsat; Landsat imagery is used in the NLCD.
- Landsat 5 is on its last legs; Landsat 7 has a scan line issue; Landsat 8 launches in 231 days.
- http://glovis.usgs.gov/
Other Governments
- France/Europe: SPOT, 20 m resolution, not free
- Canada: Radarsat, variable resolution, as low as 5.6 m
- India: ResourceSat-1, 5.8 m
Private Companies
- DigitalGlobe: QuickBird (4 bands, 2.6 m), WorldView-1 (pan, 50 cm), WorldView-2 (8 bands, 1.85 m)
- GeoEye: GeoEye-1 (4 bands, 1.65 m), GeoEye-2 (operational 2013, 1.36 m), IKONOS (4 bands, 3.2 m)
- RapidEye: cluster of 5 satellites, 5 bands, 6.5 m
Related Concepts
Hyperspectral RS
- Captures around 200 bands of information, giving a better approximation of an object's spectral signature.
- If the signature is known, the spectral curve can simply be referenced to identify the object.
- Large amounts of data; very complex.
- Sensors are expensive and often limited to coarse resolutions.
- Fields: geology, agriculture
Change Detection
- Change detection stems from the temporal nature of imagery: identifying the changes that have taken place in an area over time.
- Two major change detection algorithms:
  - Image difference: MapA - MapB, with a threshold applied to the resulting map to determine areas of change. We don't learn the nature of the change.
  - Post-classification difference: both images are classified, then analyzed to determine where the thematic values have changed. We learn what MapA changed to in MapB.
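The image-difference approach can be sketched in a few lines; the two-date DN arrays and the threshold value are hypothetical:

```python
import numpy as np

def image_difference(map_a, map_b, threshold):
    """Image-difference change detection: subtract the two dates and flag
    pixels whose absolute difference exceeds the threshold. This tells us
    WHERE change occurred, but not what it changed into."""
    diff = map_a.astype(float) - map_b.astype(float)
    return np.abs(diff) > threshold

a = np.array([[100, 100],
              [100, 100]])
b = np.array([[102, 100],
              [100, 160]])  # one pixel changed markedly between dates
changed = image_difference(a, b, threshold=20)
assert changed.sum() == 1   # only the markedly changed pixel is flagged
assert changed[1, 1]
```

Post-classification difference would instead classify `a` and `b` first and compare the resulting thematic labels, recovering the "from/to" information at the cost of compounding both maps' classification errors.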
Questions? I'll have books available on my shelf if you are interested.