GEOI 313: Digital Image Processing - I


Module I: Image Representation

Image Display

For remote sensing computing, the image display is especially important because the analyst must be able to examine images and inspect the results of analysis, which often are themselves images. At the simplest level, an image display can be thought of as a high-quality television screen, although displays tailored specifically for image processing have image display processors, special computers designed to receive digital data rapidly from the main computer and display them as brightness on the screen.

The capabilities of an image display are determined by several factors. First is the size of the image it can display, usually specified by the number of rows and columns it can show at any one time. A large display might show 1,024 rows and 1,280 columns. A smaller display with respectable capabilities could show a 1,024 x 708 image; others of much more limited capability could show only smaller sizes, perhaps 640 x 480 (typical of displays for the IBM PC). Second, a display has a given radiometric resolution; that is, for each pixel, it has the capability to show a range of brightness. One-bit resolution would give the capability to show either black or white, certainly not enough detail to be useful for most purposes. In practice, six bits (64 brightness levels) are probably necessary for images to appear 'natural', and high-quality displays typically show eight bits (256 brightness levels) or more. A third factor controls the rendition of color in the displayed images. The method of depicting color is closely related to the design of the image display and the display processor.

Image display data are held in the frame buffer, a large segment of computer memory dedicated to handling data for display. The frame buffer provides one or more bits to record the brightness of each pixel to be shown on the screen (the bit plane); thus, the displayed image is generated, bit by bit, in the frame buffer. The more bits that are assigned in the frame buffer to each pixel, the greater the range of brightnesses that can be shown for that pixel, as explained above. For actual display on the screen, the digital value for each pixel is converted into an electrical signal that controls the brightness of the pixel on the screen. This requires a digital-to-analog (D-to-A) converter that translates discrete digital values into continuous electrical signals (the opposite function of the A-to-D converter mentioned previously).
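As a rough sketch of the bit-depth arithmetic and the lookup-table stage described above (all values and array names are illustrative, not tied to any particular display hardware):

```python
import numpy as np

# Number of distinct brightness levels for a given frame-buffer bit depth.
for bits in (1, 6, 8):
    print(f"{bits}-bit frame buffer -> {2 ** bits} brightness levels")

# A display lookup table (LUT) maps each stored DN to an output brightness before
# digital-to-analog conversion; here an identity LUT for an 8-bit display.
lut = np.arange(256, dtype=np.uint8)

image = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)  # synthetic 8-bit image
displayed = lut[image]  # every pixel of the frame buffer is looked up in the LUT
```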

Different types of images and acquisition

Multispectral Scanning

There are two main modes or methods of scanning employed to acquire multispectral image data: across-track scanning and along-track scanning.

Across-track scanners scan the Earth in a series of lines. The lines are oriented perpendicular to the direction of motion of the sensor platform (i.e. across the swath). Each line is scanned from one side of the sensor to the other, using a rotating mirror (A). As the platform moves forward over the Earth, successive scans build up a two-dimensional image of the Earth's surface. The incoming reflected or emitted radiation is separated into several spectral components that are detected independently. The UV, visible, near-infrared, and thermal radiation are dispersed into their constituent wavelengths. A bank of internal detectors (B), each sensitive to a specific range of wavelengths, detects and measures the energy for each spectral band; the signals are then converted to digital data and recorded for subsequent computer processing.

The IFOV (C) of the sensor and the altitude of the platform determine the ground resolution cell viewed (D), and thus the spatial resolution. The angular field of view (E) is the sweep of the mirror, measured in degrees, used to record a scan line, and determines the width of the imaged swath (F). Airborne scanners typically sweep large angles (between 90° and 120°), while satellites, because of their higher altitude, need only sweep fairly small angles (10-20°) to cover a broad region. Because the distance from the sensor to the target increases towards the edges of the swath, the ground resolution cells also become larger and introduce geometric distortions into the images. Also, the length of time the IFOV "sees" a ground resolution cell as the rotating mirror scans (called the dwell time) is generally quite short and influences the design of the spatial, spectral, and radiometric resolution of the sensor.

Along-track scanners also use the forward motion of the platform to record successive scan lines and build up a two-dimensional image, perpendicular to the flight direction. However, instead of a scanning mirror, they use a linear array of detectors (A) located at the focal plane of the image (B) formed by lens systems (C), which are "pushed" along in the flight track direction (i.e. along track). These systems are also referred to as pushbroom scanners, as the motion of the detector array is analogous to the bristles of a broom being pushed along a floor. Each individual detector measures the energy for a single ground resolution cell (D), and thus the size and IFOV of the detectors determine the spatial resolution of the system. A separate linear array is required to measure each spectral band or channel. For each scan line, the energy detected by each detector of each linear array is sampled electronically and digitally recorded.

Along-track scanners with linear arrays have several advantages over across-track mirror scanners. The array of detectors combined with the pushbroom motion allows each detector to "see" and measure the energy from each ground resolution cell for a longer period of time (dwell time). This allows more energy to be detected and improves the radiometric resolution. The increased dwell time also facilitates smaller IFOVs and narrower bandwidths for each detector. Thus, finer spatial and spectral resolution can be achieved without impacting radiometric resolution. Because detectors are usually solid-state microelectronic devices, they are generally smaller, lighter, require less power, and are more reliable and last longer because they have no moving parts. On the other hand, cross-calibrating thousands of detectors to achieve uniform sensitivity across the array is necessary and complicated.
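A hedged numeric sketch of how IFOV, altitude and angular field of view translate into the ground resolution cell and swath width; the figures below are illustrative, not the specification of any particular sensor:

```python
import math

# Illustrative values only, loosely in the range quoted for satellite scanners in the text.
altitude_m = 900_000      # platform altitude h
ifov_rad = 0.086e-3       # angular IFOV in radians
fov_deg = 11.6            # total angular field of view of the mirror sweep

# Ground resolution cell at nadir: D = h * IFOV
ground_cell_m = altitude_m * ifov_rad

# Swath width from the angular field of view: F = 2 * h * tan(FOV / 2)
swath_m = 2 * altitude_m * math.tan(math.radians(fov_deg / 2))

print(f"ground resolution cell at nadir ~ {ground_cell_m:.0f} m")   # ~77 m
print(f"swath width ~ {swath_m / 1000:.0f} km")                     # ~183 km
```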

Module II: Image Enhancement-I

Contrast manipulation

Contrast stretching

Raw images are often lacking in contrast or are over-bright. Contrast stretching translates the image pixel values from the observed range, DNmin to DNmax, to the full range of the display device (generally 0-255, the range of values representable on an 8-bit display). There are several types of contrast enhancement, which can be subdivided into linear and non-linear procedures.

Linear Contrast Stretch: This technique involves the translation of the image pixel values from the observed range DNmin to DNmax to the full range of the display device. It can be applied to a single-band, grey-scale image, where the image data are mapped to the display via all three color LUTs. It is not necessary to stretch between DNmin and DNmax: the inflection points for a linear contrast stretch may instead be taken from the 5th and 95th percentiles of the histogram, or ±2 standard deviations from the mean (for instance), or chosen to cover the class of land cover of interest (e.g. water at the expense of land, or vice versa). It is also straightforward to have more than two inflection points in a linear stretch, yielding a piecewise linear stretch.

Equalized Contrast Stretch (Histogram Equalization): The underlying principle of histogram equalisation is straightforward: it is assumed that each level in the displayed image should contain an approximately equal number of pixel values, so that the histogram of the displayed values is almost uniform (though not all 256 classes are necessarily occupied). The objective of histogram equalisation is to spread the range of pixel values present in the input image over the full range of the display device.
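A minimal sketch of a percentile-based linear contrast stretch, assuming a single 8-bit band held in a NumPy array; the function name and inflection points are illustrative:

```python
import numpy as np

def linear_stretch(band, low_pct=5, high_pct=95):
    """Linear contrast stretch of a single band to the 0-255 display range."""
    lo, hi = np.percentile(band, (low_pct, high_pct))        # inflection points
    stretched = (band.astype(np.float32) - lo) / (hi - lo)   # map [lo, hi] -> [0, 1]
    return (np.clip(stretched, 0, 1) * 255).astype(np.uint8)

# Example: a synthetic low-contrast band occupying only DN 60-120.
band = np.random.randint(60, 121, size=(400, 400), dtype=np.uint8)
display = linear_stretch(band)
print(band.min(), band.max(), "->", display.min(), display.max())
```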

Gaussian Stretch

This method of contrast enhancement is based upon the histogram of the pixel values. It is called a Gaussian stretch because it involves fitting the observed histogram to a normal, or Gaussian, histogram.

Gray-level thresholding: Grey-level thresholding is used to segment an input image into two classes: one for those pixels having values below an analyst-defined grey level and one for those above this value. Thresholding can be used, for example, to prepare a binary mask for an image. Such masks are used to segment an image into two classes so that additional processing can then be applied to each class independently.

Level slicing: Level (density) slicing is the mapping of a range of contiguous grey levels of a single-band image to a point in the RGB color cube. The DNs of a given band are "sliced" into distinct classes. For example, for band 4 of an 8-bit TM image, we might divide the 0-255 continuous range into discrete intervals of 0-63, 64-127, 128-191 and 192-255. These four classes are displayed as four different grey levels. This kind of level slicing is often used in displaying temperature maps.
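The thresholding and level-slicing operations above can be sketched as follows (NumPy arrays are assumed; the slice boundaries simply reproduce the four intervals mentioned above, and all names are illustrative):

```python
import numpy as np

def threshold_mask(band, level):
    """Binary mask: 1 where the DN exceeds the analyst-defined grey level, 0 elsewhere."""
    return (band > level).astype(np.uint8)

def density_slice(band, bounds=(0, 64, 128, 192, 256), levels=(0, 85, 170, 255)):
    """Map contiguous DN ranges (0-63, 64-127, 128-191, 192-255) to discrete grey levels."""
    sliced = np.zeros_like(band)
    for (lo, hi), out in zip(zip(bounds[:-1], bounds[1:]), levels):
        sliced[(band >= lo) & (band < hi)] = out
    return sliced

band = np.random.randint(0, 256, size=(300, 300), dtype=np.uint8)
mask = threshold_mask(band, 128)   # two-class segmentation
slices = density_slice(band)       # four-class density slice
```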

Spatial feature manipulation

Spatial Filtering

Spatial filtering can be described as selectively emphasizing or suppressing information at different spatial scales over an image. Filtering techniques can be implemented through the Fourier transform in the frequency domain, or in the spatial domain by convolution.

i) Convolution Filters: One class of filtering methods is based upon the transformation of the image into its scale or spatial frequency components using the Fourier transform. The spatial-domain filters, or convolution filters, are generally classed as either high-pass (sharpening) or low-pass (smoothing) filters.

ii) Low-Pass (Smoothing) Filters: Low-pass filters reveal the underlying two-dimensional waveform with a long wavelength, or low-frequency image contrast, at the expense of higher spatial frequencies. Low-frequency information allows the identification of the background pattern, and produces an output image in which the detail has been smoothed or removed from the original. A two-dimensional moving-average filter is defined in terms of its dimensions, which must be odd, positive and integral but not necessarily equal, and its coefficients; the convolution kernel can be regarded as a description of the PSF weights. The output DN is found from the sum of the products of corresponding convolution-kernel and image elements, often divided by the number of kernel elements. A similar smoothing effect is given by a median filter: choosing the median value from the moving window does a better job of suppressing noise and preserving edges than the mean filter. Adaptive filters have kernel coefficients calculated for each window position, based on the mean and variance of the original DN in the underlying image.

iii) High-Pass (Sharpening) Filters: Simply subtracting the low-frequency image resulting from a low-pass filter from the original image can enhance high spatial frequencies. High-frequency information allows us either to isolate or to amplify the local detail. If the high-frequency detail is amplified by adding back to the image some multiple of the high-frequency component extracted by the filter, then the result is a sharper, de-blurred image. High-pass convolution filters can be designed by representing a PSF with a positive centre weight and negative surrounding weights. A typical 3x3 Laplacian filter has a kernel with a high central value, 0 at each corner, and -1 at the centre of each edge. Such filters can be biased in certain directions for the enhancement of edges. High-pass filtering can also be performed simply on the basis of the mathematical concept of derivatives, i.e., gradients in DN throughout the image. Since images are not continuous functions, calculus is dispensed with and derivatives are instead estimated from the differences in the DN of adjacent pixels in the x, y or diagonal directions. Directional first differencing aims at emphasising edges in the image.
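A sketch of the smoothing and sharpening filters discussed above, assuming SciPy is available; kernel sizes and the gain applied to the high-frequency component are illustrative choices:

```python
import numpy as np
from scipy import ndimage

band = np.random.randint(0, 256, size=(300, 300)).astype(np.float32)  # synthetic band

# Low-pass (smoothing): 3x3 moving-average kernel; the output DN is the sum of the
# products of kernel and image elements divided by the number of kernel elements.
mean_kernel = np.ones((3, 3)) / 9.0
low_pass = ndimage.convolve(band, mean_kernel, mode="reflect")

# Median filter: better at suppressing noise while preserving edges than the mean filter.
median = ndimage.median_filter(band, size=3)

# High-pass (sharpening), two equivalent views:
high_pass = band - low_pass                      # subtract the low-frequency image
laplacian = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=np.float32)  # high centre, -1 at edge centres
edges = ndimage.convolve(band, laplacian, mode="reflect")

sharpened = band + 1.0 * high_pass               # add back a multiple of the high-frequency detail
```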

Edge enhancement

Most interpreters are concerned with recognizing linear features in images. Geologists map faults, joints, and lineaments. Geographers map man-made linear features such as highways and canals. Some linear features occur as narrow lines against a background of contrasting brightness; others are the linear contact between adjacent areas of different brightness. In all cases, linear features are formed by edges. Some edges are marked by pronounced differences in brightness and are readily recognized. More typically, however, edges are marked by subtle brightness differences that may be difficult to recognize. Contrast enhancement may emphasize brightness differences associated with some linear features. This procedure, however, is not specific to linear features, because all elements of the scene are enhanced equally, not just the linear elements.

Fourier analysis

An image is separated into its various spatial frequency components through the application of a mathematical operation known as the Fourier transform. This operation amounts to fitting a continuous function through the discrete DN values as if they were plotted along each row and column of the image. The 'peaks and valleys' along any given row or column can be described mathematically by a combination of sine and cosine waves with various amplitudes, frequencies, and phases. A Fourier transform results from the calculation of the amplitude and phase for each possible spatial frequency in the image. After an image is separated into its component spatial frequencies, it is possible to display these values in a two-dimensional scatter plot known as a Fourier spectrum. Fourier analysis is useful in a host of image processing operations in addition to the spatial filtering and image restoration applications.
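A brief sketch of computing a Fourier spectrum, and of filtering in the frequency domain, using NumPy's FFT routines on a synthetic band; the cut-off radius is an arbitrary illustrative value:

```python
import numpy as np

band = np.random.rand(256, 256).astype(np.float32)   # synthetic single-band image

# 2-D Fourier transform: amplitude and phase for each spatial frequency in the image.
spectrum = np.fft.fftshift(np.fft.fft2(band))         # zero frequency moved to the centre
amplitude = np.abs(spectrum)
phase = np.angle(spectrum)

# The log-scaled amplitude is what is usually displayed as the Fourier spectrum.
fourier_spectrum = np.log1p(amplitude)

# A crude low-pass filter in the frequency domain: keep only low spatial frequencies.
rows, cols = band.shape
r, c = np.ogrid[:rows, :cols]
mask = (r - rows // 2) ** 2 + (c - cols // 2) ** 2 <= 20 ** 2
smoothed = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))
```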

Module III: Image Enhancement-II

Multi-image manipulation

Image Arithmetic Operations

The operations of addition, subtraction, multiplication and division are performed on two or more co-registered images of the same geographical area. These techniques are applied to images from separate spectral bands of a single multispectral data set, or to individual bands from image data sets that have been collected at different dates. More complicated algebra is sometimes encountered in the derivation of sea-surface temperature from multispectral thermal infrared data (the so-called split-window and multichannel techniques).

Addition of images is generally carried out so that the dynamic range of the output image equals that of the input images. Band subtraction is sometimes carried out on co-registered scenes of the same area acquired at different times, for change detection. Multiplication of images normally involves the use of a single 'real' image and a binary image made up of ones and zeros. Band ratioing, or division of images, is probably the most common arithmetic operation and the one most widely applied to images in geological, ecological and agricultural applications of remote sensing.

Ratio images are enhancements resulting from the division of the DN values of one spectral band by the corresponding DN values of another band. One motivation for this is to iron out differences in scene illumination due to cloud or topographic shadow. Ratio images also bring out spectral variation between different target materials. Multiple ratio images can be used to drive the red, green and blue monitor guns for color images. Interpretation of ratio images must consider that they are "intensity blind", i.e., dissimilar materials with different absolute reflectances but similar relative reflectances in the two or more utilised bands will look the same in the output image.

Principal components and canonical components

PCA is appropriate when little prior information about the scene is available. Canonical component analysis, also referred to as multiple discriminant analysis, may be appropriate when information about particular features of interest is available. Canonical component axes are located so as to maximize the separability of different user-defined feature types.
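A minimal band-ratio sketch; the small epsilon and the percentile stretch used for display are illustrative safeguards, not part of any standard definition:

```python
import numpy as np

def band_ratio(numerator, denominator, eps=1e-6):
    """Ratio image: DN of one spectral band divided by the corresponding DN of another.

    eps guards against division by zero; the result is rescaled to 0-255 for display.
    """
    ratio = numerator.astype(np.float32) / (denominator.astype(np.float32) + eps)
    lo, hi = np.percentile(ratio, (2, 98))
    return (np.clip((ratio - lo) / (hi - lo), 0, 1) * 255).astype(np.uint8)

nir = np.random.randint(0, 256, size=(200, 200), dtype=np.uint8)   # e.g. a near-infrared band
red = np.random.randint(0, 256, size=(200, 200), dtype=np.uint8)   # e.g. a visible red band
ratio_image = band_ratio(nir, red)
```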

Vegetation components

Live green plants absorb solar radiation in the photosynthetically active radiation (PAR) spectral region, which they use as a source of energy in the process of photosynthesis. Leaf cells have also evolved to scatter (i.e., reflect and transmit) solar radiation in the near-infrared spectral region (which carries approximately half of the total incoming solar energy), because the energy level per photon in that domain (wavelengths longer than about 700 nanometers) is not sufficient to be useful in synthesizing organic molecules. A strong absorption at these wavelengths would only result in overheating the plant and possibly damaging the tissues. Hence, live green plants appear relatively dark in the PAR and relatively bright in the near-infrared.[3] By contrast, clouds and snow tend to be rather bright in the red (as well as other visible wavelengths) and quite dark in the near-infrared. The pigment in plant leaves, chlorophyll, strongly absorbs visible light (from 0.4 to 0.7 µm) for use in photosynthesis. The cell structure of the leaves, on the other hand, strongly reflects near-infrared light (from 0.7 to 1.1 µm). The more leaves a plant has, the more these wavelengths of light are affected.

Since early instruments of Earth observation, such as NASA's ERTS and NOAA's AVHRR, acquired data in the visible and near-infrared, it was natural to exploit these strong differences in plant reflectance to determine the spatial distribution of vegetation in satellite images. The NDVI is calculated from these individual measurements as follows:

NDVI = (NIR - VIS) / (NIR + VIS)

where VIS and NIR stand for the spectral reflectance measurements acquired in the visible (red) and near-infrared regions, respectively (http://earthobservatory.nasa.gov/features/measuringvegetation/measuring_vegetation_2.php). These spectral reflectances are themselves ratios of the reflected over the incoming radiation in each spectral band individually; hence they take on values between 0.0 and 1.0. By design, the NDVI itself thus varies between -1.0 and +1.0.

It should be noted that NDVI is functionally, but not linearly, equivalent to the simple infrared/red ratio (NIR/VIS). The advantage of NDVI over a simple infrared/red ratio is therefore generally limited to any possible linearity of its functional relationship with vegetation properties (e.g. biomass). The simple ratio (unlike NDVI) is always positive, which may have practical advantages, but it also has a mathematically infinite range (0 to infinity), which can be a practical disadvantage compared with NDVI. Also in this regard, note that the VIS term in the numerator of NDVI only scales the result, thereby creating negative values. NDVI is functionally and linearly equivalent to the ratio NIR / (NIR + VIS), which ranges from 0 to 1 and is thus never negative nor limitless in range.[4] But the most important concept in the understanding of the NDVI algebraic formula is that, despite its name, it is a transformation of a spectral ratio (NIR/VIS), and it has no functional relationship to a spectral difference (NIR - VIS).
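A direct sketch of the NDVI formula above, assuming reflectance bands scaled 0.0-1.0; the sample values are invented to mimic a well-vegetated pixel:

```python
import numpy as np

def ndvi(nir, vis):
    """NDVI = (NIR - VIS) / (NIR + VIS), computed per pixel on reflectance bands (0.0-1.0)."""
    nir = nir.astype(np.float32)
    vis = vis.astype(np.float32)
    return (nir - vis) / (nir + vis + 1e-6)   # small constant avoids division by zero

# Synthetic reflectance bands: dense vegetation is bright in NIR and dark in red.
nir = np.full((100, 100), 0.45, dtype=np.float32)
red = np.full((100, 100), 0.08, dtype=np.float32)
print(ndvi(nir, red).mean())   # ~0.7, typical of dense green vegetation
```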

In general, if there is much more reflected radiation in near-infrared wavelengths than in visible wavelengths, then the vegetation in that pixel is likely to be dense and may contain some type of forest. Subsequent work has shown that the NDVI is directly related to the photosynthetic capacity, and hence the energy absorption, of plant canopies.

Intensity-hue-saturation (IHS) colour space transformation

Colours generated by mixing red, green and blue light are characterised by coordinates on the red, green and blue axes of the colour cube. An alternative is the hue-saturation-intensity hexcone model, in which hue, the dominant wavelength of the perceived colour, is represented by angular position around the top of a hexcone; saturation, or purity, is given by distance from the central, vertical axis of the hexcone; and intensity, or value, is represented by distance above the apex of the hexcone. Hue is what we perceive as colour. Saturation is the degree of purity of the colour and may be considered to be the amount of white mixed in with the colour. It is sometimes useful to convert from RGB colour cube coordinates to IHS hexcone coordinates and vice versa. The hue, saturation and intensity transform is useful in two ways: first as a method of image enhancement, and secondly as a means of combining co-registered images from different sources. The advantage of the IHS system is that it is a more precise representation of human colour vision than the RGB system. This transformation has been quite useful for geological applications.
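As a rough stand-in for the RGB-to-IHS conversion described above, the sketch below uses the standard-library hexcone (HSV) model from colorsys; treating value as intensity is an approximation, and the pixel-by-pixel loop is kept for clarity only:

```python
import colorsys
import numpy as np

# Convert an RGB composite (values 0-1) to hue, saturation, value, stretch the
# intensity/value component only, and convert back - a common enhancement route.
rgb = np.random.rand(64, 64, 3).astype(np.float32)    # synthetic three-band composite

hsv = np.empty_like(rgb)
for i in range(rgb.shape[0]):
    for j in range(rgb.shape[1]):
        hsv[i, j] = colorsys.rgb_to_hsv(*rgb[i, j])

hsv[..., 2] = np.clip(hsv[..., 2] * 1.3, 0, 1)         # enhance intensity; hue is untouched

enhanced = np.empty_like(rgb)
for i in range(rgb.shape[0]):
    for j in range(rgb.shape[1]):
        enhanced[i, j] = colorsys.hsv_to_rgb(*hsv[i, j])
```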

Module IV: Image Analysis

Digital Analysis - image rectification and restoration: radiometric, atmospheric and geometric corrections; correction in the spatial spectrum of the images

Radiometric and Geometric Corrections

Introduction

In their raw form, as received from imaging sensors mounted on satellite platforms, remotely sensed data may contain flaws or deficiencies. The correction of deficiencies and the removal of flaws present in the data is termed pre-processing. Image pre-processing can be classified into three functional categories: i) radiometric corrections, ii) atmospheric corrections, and iii) geometric corrections. The intent of image correction is to correct image data for distortions or degradations that stem from the image acquisition process. Image radiometry generally refers to the digital representation of the sensed data, while radiometric correction involves the rearrangement of the digital numbers (DN) in an image so that all areas in the image have the same linear relationship between the DN and either radiance or backscatter. The digital number (DN value) is also known as the pixel value. Image geometry refers to the projection, scale and orientation of the image, while geometric correction refers to the modification of the input geometry to achieve the desired geometry.

Radiometric Corrections

Radiometric errors are caused by detector imbalance and atmospheric deficiencies. Radiometric corrections are transformations applied to the data in order to remove errors that are geometry-independent. Radiometric corrections are also called cosmetic corrections and are done to improve the visual appearance of the image. Multiple detectors are used in the sensor system to simultaneously sense several image lines during each sweep of the mirror. This configuration requires an array of 24 detectors (6 lines x 4 bands) in the case of MSS. As the detectors are not precisely equivalent in their output characteristics, their output changes gradually over time. Because of these variations there will be different outputs for the same ground radiance, so the detectors must be calibrated. To accomplish this, the scanner views an electrically illuminated step-wedge filter during each mirror sweep. Once per orbit, the scanner views the Sun to provide a more absolute calibration. These calibration values are used to develop radiometric correction functions for each detector. The correction functions yield digital numbers that correspond linearly with radiance and are applied to all data prior to dissemination. Some of the radiometric corrections are as follows: (1) correction for missing lines, (2) correction for periodic line striping, (3) random noise correction and (4) atmospheric correction.

Correction for Missing Scan Lines (scan-line drop-out): Although detectors on board orbiting satellites are well tested and calibrated before launch, breakdown of any of the detectors may take place. Such defects are due to errors in the scanning or sampling equipment, in the transmission or recording of image data, or in the reproduction of CCTs. The missing scan lines are seen as horizontal black (pixel value 0) or white (pixel value 255) lines on the image. Techniques are available to locate these bad lines by looking for unusually large discrepancies in image values between sequential lines. The first step in the restoration process is to calculate the average DN value per scan line for the entire scene. The average DN value for each scan line is then compared with the scene average. Any scan line deviating from the average by more than a designated threshold value is identified as defective. Once detected, defective lines may be cosmetically corrected in three ways: (a) replacement by either the preceding or the succeeding line, (b) averaging of the neighbouring pixel values, or (c) replacing the line with the corresponding line from another highly correlated band.
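A sketch of the drop-out detection and replacement logic just described; the threshold value and array sizes are invented for illustration:

```python
import numpy as np

def repair_dropouts(band, threshold=30):
    """Detect and repair scan-line drop-outs in a single band.

    A line whose average DN deviates from the scene average by more than `threshold`
    is flagged as defective and replaced by the average of its neighbouring lines.
    """
    band = band.astype(np.float32).copy()
    line_means = band.mean(axis=1)
    scene_mean = band.mean()
    bad = np.abs(line_means - scene_mean) > threshold
    for i in np.where(bad)[0]:
        above = band[i - 1] if i > 0 else band[i + 1]
        below = band[i + 1] if i < band.shape[0] - 1 else band[i - 1]
        band[i] = (above + below) / 2.0          # average of neighbouring lines
    return band.astype(np.uint8)

scene = np.random.randint(80, 160, size=(100, 100), dtype=np.uint8)
scene[40, :] = 0                                  # simulate a dropped scan line
repaired = repair_dropouts(scene)
```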

Correction for Line Striping (De-striping): A sensor is called ideal when there is a linear relationship between its input and output. Although all the detectors are well calibrated prior to launch, the response of some of the detectors may shift towards the lower or higher end. The presence of a systematic horizontal banding pattern is frequently seen on images produced by electronic scanners, such as sixth-line banding on MSS and sixteenth-line banding on TM. Banding is a cosmetic defect and it interferes with the visual appreciation of the patterns and features in the image, so corrections for these bandings are applied to improve the visual appearance and interpretability of the image. Two methods of de-striping are commonly considered; both are based upon the shape of the histograms of pixel values generated by the individual detectors in a particular band.

Atmospheric correction: The value recorded at any pixel location on a remotely sensed image is not a record of the true ground-leaving radiance at that point, for the signal is attenuated due to absorption and its directional properties are altered by scattering. Figure 8 depicts the effects the atmosphere has on the measured brightness value of a single pixel for a passive remote sensing system. Scattering at S2 redirects some of the incident radiance within the atmosphere into the field of view of the sensor (the atmospheric path radiance), and some of the energy reflected from point Q is scattered at S1 so that it is seen as coming from P. In addition to these effects, the radiance from P and Q is attenuated as it passes through the atmosphere. Other difficulties are caused by variations in the illumination geometry (the Sun's elevation and azimuth angles). The relationship between the radiance received at a sensor above the atmosphere and the radiance leaving the ground surface can be given as

Ls = Htot ρ T + Lp

where Htot is the total downwelling radiance in a specified spectral band, ρ is the reflectance of the target, T is the atmospheric transmittance, and Lp is the atmospheric path radiance.

Geometric Errors and Corrections

Remotely sensed data usually contain both systematic and unsystematic geometric errors. Distortions whose effects are systematic in nature, constant, and predictable in advance are called systematic distortions; they include scan skew, mirror-scan velocity variations and panoramic distortion. Non-systematic distortions include errors due to platform altitude and attitude, platform velocity, Earth rotation and the perspective projection (Fig. 9). Moreover, remotely sensed images are not maps. The transformation of a remotely sensed image so that it has the scale and projection of a map is called geometric correction. A related technique, called registration, is the fitting of the coordinate system of an image to that of a second image of the same area. These errors can be divided into two classes: (a) those that can be corrected using data from the platform ephemeris and knowledge of internal sensor distortion, and (b) those that cannot be corrected with acceptable accuracy without a sufficient number of ground control points (GCPs).
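Rearranging the relationship Ls = Htot ρ T + Lp gives an estimate of target reflectance, ρ = (Ls - Lp) / (Htot T); the numbers below are purely illustrative, not calibration constants for any real sensor:

```python
# Rearranging Ls = Htot * rho * T + Lp to recover target reflectance:
#   rho = (Ls - Lp) / (Htot * T)
# All values below are illustrative only.

Ls = 95.0     # radiance received at the sensor in the band
Lp = 15.0     # atmospheric path radiance
Htot = 400.0  # total downwelling radiance in the band
T = 0.85      # atmospheric transmittance

rho = (Ls - Lp) / (Htot * T)
print(f"estimated target reflectance: {rho:.3f}")   # ~0.235
```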

Distortions evaluated from tracking data

1. Earth Rotation: As the scanning mirror completes successive scans, the Earth rotates beneath the sensor. Thus there is a gradual westward shift of the ground swath being scanned, which causes along-scan distortion. To give the pixels their correct position relative to the ground, it is necessary to offset the bottom of the image to the west by the amount of movement of the ground during image acquisition. The amount by which the image has to be skewed to the west depends upon the relative velocities of the satellite and the Earth and the length of the image frame recorded.

2. Spacecraft Velocity: If the spacecraft velocity departs from nominal, the ground track covered by a fixed number of successive mirror sweeps changes. This produces along-track scale distortion.

3. Scan-Time Skew: During the time required for the scanning mirror to complete an active scan, the spacecraft moves along the ground track. Thus, the ground swath scanned is not normal to the ground track but is slightly skewed, which produces cross-scan geometric distortion. The known velocity of the satellite is used to correct this geometric distortion. The magnitude of the correction is 0.082 km for MSS.

4. Sensor Mirror Sweep: The mirror-scanning rate varies non-linearly across a scan because of imperfections in the electromechanical driving mechanism. Since data samples are taken at regular intervals of time, the varying scan rate produces along-scan distortion. The magnitude of the correction is 0.37 km for MSS.

5. Panoramic Distortion: For scanners used on spaceborne and airborne remote sensing platforms, the angular instantaneous field of view (IFOV) is constant. As a result, the effective pixel size on the ground is larger at the extremities of the scan line than at nadir, which produces along-scan distortion. If the instantaneous field of view is β and the pixel dimension at nadir is p, then its dimension in the scan direction at a scan angle of θ is pθ = βh sec²θ = p sec²θ, where h is the altitude.

6. Perspective Projection: For some applications it is desired that Landsat images represent the projection of points on the Earth upon a plane tangent to the Earth, with all projection lines normal to the plane. The sensor data represent perspective projections, in which all projection lines meet at a point above the tangent plane. For the MSS, this produces only along-scan distortion.

Distortions evaluated from ground control

1. Altitude: Departures of the spacecraft altitude from nominal produce scale distortions in the sensor data. For MSS, the distortion is along-scan only and varies with time. The magnitude of the correction is up to 1.5 km for MSS.

2. Attitude: Normally, the sensor axis system is maintained with one axis normal to the Earth's surface and another parallel to the spacecraft velocity vector. As the sensor departs from this attitude, geometric error results. Roll and pitch errors shift the image linearly, while yaw error rotates each image line about its centre; the maximum shift occurs at the edge pixels under yaw. For LISS-II, a roll error of 0.1 degree will shift the image line by 1.57 km across the track. For a pitch error of the same magnitude, the line is shifted along the track by 1.57 km.
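The panoramic-distortion relation pθ = p sec²θ and the quoted roll-error shift can be checked numerically; the nadir pixel size and altitude below are illustrative, approximate values:

```python
import math

# Panoramic distortion: pixel dimension in the scan direction grows as p_theta = p * sec^2(theta).
p_nadir = 79.0                     # nominal pixel size at nadir in metres (illustrative)
for theta_deg in (0, 10, 20, 30):
    sec = 1.0 / math.cos(math.radians(theta_deg))
    print(f"scan angle {theta_deg:2d} deg -> pixel size {p_nadir * sec ** 2:.1f} m")

# Attitude (roll) error: a small roll angle shifts the image line across track by roughly h * angle.
h = 904_000                        # platform altitude in metres (roughly that implied for LISS-II)
roll_rad = math.radians(0.1)
print(f"0.1 deg roll shifts the line by about {h * roll_rad / 1000:.2f} km")   # ~1.58 km
```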

Module V: Image Analysis

Principal Component Analysis and Discriminant Analysis

Spectrally adjacent bands in a multispectral remotely sensed image are often highly correlated. Multiband visible/near-infrared images of vegetated areas show negative correlations between the near-infrared and visible red bands, and positive correlations among the visible bands, because the spectral characteristics of vegetation are such that as the vigour or greenness of the vegetation increases, the red reflectance diminishes and the near-infrared reflectance increases. The presence of correlations among the bands of a multispectral image thus implies that there is redundancy in the data, and Principal Component Analysis (PCA) aims at removing this redundancy.

PCA is related to another statistical technique called factor analysis and can be used to transform a set of image bands such that the new bands (called principal components) are uncorrelated with one another and are ordered in terms of the amount of image variation they explain. The components are thus a statistical abstraction of the variability inherent in the original band set. To transform the original data onto the new principal component axes, transformation coefficients (eigenvalues and eigenvectors) are obtained and applied in a linear fashion to the original pixel values. This linear transformation is derived from the covariance matrix of the original data set. These transformation coefficients describe the lengths and directions of the principal axes. Such transformations are generally applied either as an enhancement operation or prior to classification of the data. In the context of PCA, information means variance, or scatter about the mean. Multispectral data generally have a dimensionality that is less than the number of spectral bands. The purpose of PCA is to define that dimensionality and to fix the coefficients that specify the set of axes which point in the directions of greatest variability. The bands of a PCA are often more interpretable than the source data.
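A compact sketch of PCA on a multiband image using the covariance matrix and its eigenvectors, as described above; the band count and image size are arbitrary:

```python
import numpy as np

def principal_components(bands):
    """Transform a (rows, cols, n_bands) image onto its principal component axes.

    The transformation coefficients (eigenvalues and eigenvectors) are derived from the
    covariance matrix of the original bands and applied linearly to the pixel values.
    """
    rows, cols, n = bands.shape
    flat = bands.reshape(-1, n).astype(np.float64)
    flat -= flat.mean(axis=0)                      # centre each band about its mean
    cov = np.cov(flat, rowvar=False)               # n x n covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]              # order components by explained variance
    pcs = flat @ eigvecs[:, order]
    return pcs.reshape(rows, cols, n), eigvals[order]

image = np.random.rand(120, 120, 4)                # synthetic 4-band image
components, variances = principal_components(image)
print("variance explained by each component:", variances / variances.sum())
```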