Remote Sensing: Fundamentals, Types and Monitoring Applications of Environmental Consequences of War


Hdb Env Chem (2009), DOI /698_2008_6

Remote Sensing: Fundamentals, Types and Monitoring Applications of Environmental Consequences of War

Dhari Al Ajmi and Saif ud din
Kuwait Institute for Scientific Research
© Springer-Verlag Berlin Heidelberg 2009

Abstract  This chapter deals with the fundamentals of remote sensing, the basic principles of electromagnetic radiation, and the interaction of that radiation with the Earth's atmosphere and surface materials. The types of sensors, digital data formats, and basic image processing techniques, including image enhancement and classification methods, are explained in detail. Beyond the image processing techniques themselves, applications of remote sensing in Kuwait are presented in a dedicated section; these include techniques for mapping subsidence in oil fields, estimating recharge to shallow aquifers and freshwater lenses, calibrating satellite precipitation data, and mapping hydrocarbon contamination using land surface temperature estimates.

Keywords  EMR, Hydrocarbon pollution, Image enhancement, Interferometry, LST, TRMM

Contents
1 Fundamentals of Remote Sensing
1.1 Electromagnetic Spectrum: The Photon and Radiometric Quantities
1.2 Electromagnetic Spectrum: Transmittance, Absorptance, and Reflectance
2 Electromagnetic Spectrum: Distribution of Radiant Energies
3 Electromagnetic Spectrum: Spectral Signatures
4 Sensor Technology
4.1 Some Basic Image Processing Procedures
5 Processing and Classification of Remotely Sensed Data; Pattern Recognition; Approaches to Data/Image Interpretation
5.1 Digital Image
5.2 Digital Image Analysis System
5.3 Digital Data Formats

5.4 Preprocessing
5.5 Image Enhancement
5.6 Single Bands Linear Stretch
5.7 Filtering
5.8 Low-Frequency Filtering in the Spatial Domain
5.9 High-Frequency Filtering in the Spatial Domain
5.10 Edge Enhancement in the Spatial Domain
5.11 Non-Linear Edge Enhancement
5.12 Principal Components Analysis
5.13 Band Ratio
6 Classification
6.1 Image Classification
6.2 Unsupervised Classification
6.3 Supervised Classification
6.4 Parallelepiped Classification
6.5 The Minimum Distance to Mean Classification
6.6 The Maximum Likelihood Classification
7 Monitoring Applications of Environmental Consequences of War
  Application I: Hydrocarbon Pollution Consequent to the 1991 Gulf War
  Application II: Precipitation Estimate Using the Tropical Rainfall Measuring Mission
  Application III: Feature Extraction Technique for Palm Tree Census
  Application IV: Subsidence in Oil Fields
  Application V: Sustainable Development of Fresh Water Resources
References

1 Fundamentals of Remote Sensing

Remote sensing is the study of a process, object, or phenomenon without being in physical contact with it. The simplest example of remote sensing in daily life is viewing the screen of your computer monitor (Fig. 1). A physical quantity (light) emanates from the screen, which is a source of radiation. The radiated light passes over a distance, and thus is remote to some extent, until it is encountered and captured by a sensor (your eyes). Each eye sends a signal to a processor (your brain), which records the data and interprets it into information. Several of the human senses gather their awareness of the external world almost entirely by perceiving a variety of signals, either emitted or reflected, actively or passively, from objects that transmit this information in waves or pulses. Remote sensing thus represents the acquisition and measurement of data or information on some property of a phenomenon, object, or material by a recording device not in physical, intimate contact with the features under surveillance. Its techniques amass knowledge about environments by measuring force fields, electromagnetic radiation, or acoustic energy, employing cameras, radiometers and scanners, lasers, radio-frequency receivers, radar systems, sonar, thermal devices, seismographs, magnetometers, gravimeters, scintillometers, and other instruments.

Fig. 1  Satellite image of Kuwait City

1.1 Electromagnetic Spectrum: The Photon and Radiometric Quantities

The underlying basis for most remote sensing methods and systems is measuring the varying energy levels of photons. The variations in photon energy are tied to wavelength or frequency. Electromagnetic radiation spans high to low energy levels, which together form the electromagnetic spectrum (EMS). Radiation from specific parts of the EMS contains photons of different wavelengths whose energy levels fall within a discrete range of values. When any target material is excited by internal processes or by interaction with incoming electromagnetic radiation, it emits photons of varying wavelengths whose radiometric quantities differ at different wavelengths in a way diagnostic of the material. The photon can be described as the messenger particle of the EM force, or as the smallest bundle of light. This subatomic, massless particle comprises radiation emitted by matter when it is excited thermally, by nuclear processes (fusion, fission), or by bombardment with other radiation; it can also appear as reflected or absorbed radiation. Photons move at the speed of light, 299,792 km s⁻¹. These particles also move as waves and hence have a dual nature. The waves follow a pattern described in terms of a sine (trigonometric) function, as shown in two dimensions (Fig. 2).
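As a worked illustration of the tie between wavelength and photon energy, the sketch below evaluates E = hc/λ for a few illustrative wavelengths (the values are chosen here, not taken from the chapter):

```python
# Photon energy E = h*c / wavelength; physical constants rounded.
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m s^-1

def photon_energy_ev(wavelength_m):
    """Energy of one photon, converted from joules to electron-volts."""
    return H * C / wavelength_m / 1.602e-19

for name, wl in [("blue", 450e-9), ("red", 650e-9), ("thermal IR", 10e-6)]:
    print(f"{name:10s} {wl:.2e} m -> {photon_energy_ev(wl):.3f} eV")
```

Shorter wavelengths carry proportionally more energy per photon, which is why the gamma-ray end of the spectrum is the high-energy end.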

Fig. 2  Wave propagation and frequency in different bands: wavelength, amplitude, and the electric and magnetic wave components

Fig. 3  Electromagnetic spectrum: transmittance, absorptance, and reflectance

The distance between two adjacent peaks on a wave is its wavelength. The total number of peaks that pass a reference point in one second is its frequency. A photon travels as a wave having two components oscillating as sine waves mutually at right angles, one the varying electric field and the other the varying magnetic field. Both have the same amplitude, and their maxima and minima coincide. Unlike other wave types that require a carrier (e.g., water waves), photon waves can transmit through a vacuum (such as space). When photons pass from one medium to another, e.g., from air to glass, refraction is observed (Fig. 3).

1.2 Electromagnetic Spectrum: Transmittance, Absorptance, and Reflectance

Any beam of photons from some source passing from one medium to another will experience one, or a combination, of four phenomena: transmission, reflection, absorption, and scattering. The primary source of energy that illuminates natural targets is the Sun. The main wavelength interval is between 2,000 and 34,000 Ångstroms (Å), with the maximum power input close to 4,800 Å, in the visible green region. As solar rays arrive at the Earth, the atmosphere absorbs or backscatters a fraction of them and transmits the remainder. Upon striking the land and ocean surface, and atmospheric targets such as air, moisture, and clouds, the incoming radiation partitions into three modes of energy-interaction response:

1. Transmittance (t): some fraction of the radiation penetrates into certain surface materials, such as water, and, if the material is transparent and thin in one dimension, normally passes through, generally with some diminution.
2. Absorptance (a): some radiation is absorbed through electron or molecular reactions within the medium; a portion of this energy is then re-emitted, usually at longer wavelengths, and some of it remains and heats the target.
3. Reflectance (r): some radiation reflects and scatters away from the target at various angles, depending on the surface roughness and the angle of incidence of the rays.

Because they involve ratios (to irradiance), these three parameters are dimensionless numbers between 0 and 1, but they are commonly expressed as percentages. Following the law of conservation of energy: t + a + r = 1. A fourth situation, in which the emitted radiation results from internal atomic or molecular excitation, usually related to the heat state of a body, is a thermal process.

There are two general types of reflecting surfaces that interact with EMR: specular (smooth) and diffuse (rough). These terms are defined geometrically, not physically. A surface may appear smooth in a physical sense, i.e., it appears and feels smooth, but at the scale of the wavelengths of light many irregularities might occur throughout that surface. (A concrete roadway may appear smooth and flat from a distance but feels rough when a finger passes over it, owing to small grooves, pits, and protuberances.) Radiation impinging on a diffuse surface tends to be reflected in many directions (scattered). The Rayleigh criterion is used to determine surface roughness with respect to radiation. A specular surface reflects radiation according to the law of reflection, which states that the angle of incidence equals the angle of reflection. Specular reflectances within the visible wavelength range vary from as high as 0.99 for a very good mirror to as low as about 0.02 for a very smooth water surface. In general, natural surfaces are almost always diffuse, depart significantly from specular at shorter wavelengths (into the infrared), and may still be somewhat diffuse in the microwave region.

The term bidirectional reflectance describes the common observational condition in remote sensing in which the viewing angle φ differs from the angle of incidence θ, and the incoming and outgoing rays have different azimuths. Thus, reflectances from the same target type change in value for various combinations of θ and φ; this is particularly important when the sensor operates off-nadir and the Sun angle and azimuth vary during the period of operation.

2 Electromagnetic Spectrum: Distribution of Radiant Energies

Electromagnetic radiation (EMR) extends over a wide range of wavelengths. A narrow range of EMR, extending from 0.4 to 0.7 μm, is detected by the human eye and is called the visible region. The EMR is divided into different regions based on wavelength (Fig. 4). At the very short wavelength end are gamma rays and X-rays, normally measured in Ångstroms, which in the metric scale are units of 10⁻¹⁰ m. Radiation in the ultraviolet extends from about 300 Å to about 4,000 Å. The visible region occupies the range between 4,000 and 7,000 Å. The infrared region, spanning 0.7 to 1,000 μm (i.e., up to 1 mm), has four subintervals of special interest: (1) the reflected IR (0.7–3.0 μm) and (2) its film-responsive subset, the photographic IR (0.7–0.9 μm); and the thermal bands at (3) 3–5 μm and (4) 8–14 μm. The microwave region spreads across roughly 0.1–100 cm. The lowest-frequency, longest-wavelength region beyond 100 cm comprises the radio bands, from VHF (very high frequency) to ELF (extremely low frequency); units applied to this region are often stated as frequencies in Hertz (1 Hz = 1 cycle per second; kHz, MHz, and GHz are kilo-, mega-, and giga-Hertz, respectively). The transmission of the energy bands through the atmosphere is variable: some wavelengths are transmitted almost completely, while others are absorbed completely. Figure 5 depicts relative atmospheric radiation transmission and absorption at different wavelengths; blue zones mark minimal passage of incoming and/or outgoing radiation, whereas white areas denote atmospheric transparency, known as windows.

Fig. 4  Different regions of the electromagnetic spectrum, from gamma rays and X-rays through the ultraviolet, visible, infrared, and microwave regions to the radio bands (wavelengths from about 1 nm to beyond 1 km)
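Because wavelength and frequency are linked by c = λν, the wavelength conventions used for the optical regions and the frequency units used for the radio bands are interchangeable; a minimal sketch, with wavelengths chosen purely for illustration:

```python
# c = wavelength * frequency; converting a few representative wavelengths.
C = 2.998e8  # speed of light, m s^-1

def frequency_hz(wavelength_m):
    return C / wavelength_m

# Illustrative wavelengths only (chosen here, not quoted from the text).
for label, wl in [("visible red, 0.7 um", 0.7e-6),
                  ("microwave, 1 cm", 1e-2),
                  ("radio, 100 cm", 1.0)]:
    print(f"{label:22s} -> {frequency_hz(wl):.3e} Hz")
```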

Fig. 5  Atmospheric windows: transmission along a kilometres-long atmospheric path as a function of wavelength, from the visible through the infrared to the microwave region (radar bands K, X, C, S, L, and P), showing the absorption features of H₂O, CO₂, O₂, and O₃

Most remote-sensing instruments on board operate in one or more of these windows by making their measurements with detectors tuned to specific wavelengths that pass through the atmosphere. However, some sensors, especially those on meteorological satellites, directly measure absorption phenomena, such as those associated with carbon dioxide (CO₂) and other gaseous molecules. The atmosphere is nearly opaque to EM radiation in part of the mid-IR and all of the far-IR regions. In the microwave region, by contrast, most of the radiation moves through unimpeded, so radar waves reach the surface. Fortunately, absorption and other interactions occur over many of the shorter wavelength regions, so that only a fraction of the incoming radiation reaches the surface.

Backscattering is a major phenomenon in the atmosphere. There are two prominent types of scattering: Mie and Rayleigh. Mie scattering refers to the reflection and refraction of radiation by atmospheric constituents, such as smoke, whose dimensions are comparable to the radiation wavelengths. Rayleigh scattering results from constituents, such as atmospheric gases and water vapor, that are much smaller than the radiation wavelengths. Rayleigh scattering increases with shorter wavelengths (intensity varies roughly as λ⁻⁴, so blue light at about 450 nm is scattered roughly four times more strongly than red light at about 650 nm); the blue sky is a result of Rayleigh scattering.

Remote sensing of the Earth has traditionally used reflected energy in the visible and infrared, and emitted energy in the thermal infrared and microwave regions, to gather radiation that can be used to generate images whose variations represent different processes, phenomena, or objects. Images made from the varying wavelength/intensity signals show variations in gray tones in black-and-white versions, or in colors (in terms of hue, saturation, and intensity) in colored versions.

3 Electromagnetic Spectrum: Spectral Signatures

The amount of solar radiation that is reflected, absorbed, or transmitted varies with wavelength for any given material. This important property of matter makes it possible to identify different substances or classes and to separate them by their spectral signatures. For example, at some wavelengths sand reflects more energy than green vegetation, while at other wavelengths it absorbs more than the vegetation does. Using such reflectance differences, different crop types (wheat, alfalfa, potato, tomato) can be distinguished (Fig. 6).
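A minimal sketch of separation by spectral signature: an observed spectrum is assigned to the closest reference curve in Euclidean distance. All band centres and reflectance values below are invented stand-ins for curves like those in Fig. 6:

```python
import numpy as np

# Hypothetical reference signatures sampled at four band centres (micrometres).
bands = [0.55, 0.65, 0.85, 1.65]
signatures = {
    "dry sand":   np.array([0.30, 0.35, 0.40, 0.45]),
    "vegetation": np.array([0.10, 0.05, 0.50, 0.30]),
    "water":      np.array([0.06, 0.04, 0.02, 0.01]),
}
observed = np.array([0.12, 0.06, 0.47, 0.28])   # an invented measurement

# Assign the observation to the nearest signature in spectral space.
best = min(signatures, key=lambda m: np.linalg.norm(signatures[m] - observed))
print(best)  # -> vegetation
```

The same nearest-signature idea reappears later in the chapter as the minimum-distance-to-mean classifier.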

Fig. 6  Spectral signatures (percent reflectance against wavelength) of different surface materials: wet sand, dry sand, wheat, alfalfa, potato, and tomato

4 Sensor Technology

Most remote sensing sensors are designed to measure the frequency or wavelength of the impinging radiation. Remote sensing sensors are grouped into several classes according to their functionality and characteristics. A functional treatment of several classes of sensors can be plotted as a triangle diagram, in which the corner members are determined by the principal parameter measured: spectral, spatial, or intensity (Fig. 7). Figure 8 presents a wider array of sensor types, including active and passive sensors. Passive sensors acquire energy from an external source, e.g., the Sun; in active sensors, the energy is generated within the sensor system, beamed outward, and the fraction returned is measured. Other types of sensors commonly deployed in remote sensing studies are as follows. A radiometer is an instrument that quantitatively measures EM radiation in some interval of the EM spectrum. A spectrometer is an instrument with a component, such as a prism or diffraction grating, that can break radiation extending over a part of the spectrum into discrete wavelengths and/or separate them. The term spectroradiometer tends to imply that the dispersed radiation is recorded in bands rather than at discrete wavelengths. Most remote sensing satellites carry spectroradiometers on board.

Fig. 7  Different devices classified by the principal parameters they measure

Fig. 8  Types of sensors: a classification tree dividing sensors into passive and active, scanning and non-scanning, and imaging and non-imaging types. Passive non-imaging sensors include microwave radiometers, magnetic sensors, gravimeters, Fourier spectrometers, and others (e.g., resistivity instruments); passive imaging sensors include cameras (monochrome, natural color, infrared, color infrared, and others) and scanners (image-plane scanning TV cameras and solid scanners; object-plane scanning optical-mechanical scanners and microwave radiometers). Active non-imaging sensors include microwave radiometers, microwave altimeters, and laser water-depth and laser distance meters; active imaging sensors include image-plane scanning passive phased-array radar and object-plane scanning real-aperture and synthetic-aperture radar

Sensors that instantaneously measure radiation coming from the entire scene at once are called framing systems; the eye, the camera, and the TV are examples. The size of the scene that is framed is determined by the apertures and optics in the system, which define the field of view. If the scene is sensed point by point along successive lines over a finite time, this mode of measurement is a scanning system. Most non-camera sensors operating from moving platforms image the scene by scanning. The radiation, normally visible, near and short-wave IR, and/or thermal emissive in nature, must then be broken into its spectral elements, in broad to narrow bands. The width in wavelength units of a band or channel is defined by the instrument's spectral resolution.

A more vital aspect of sensor characteristics and performance is spatial resolution, which represents the ability to recognize and separate features of specific sizes. The common definition of spatial resolution is often simply stated as the smallest size of an object that can be picked out from its surrounding objects or features; this separation from neighbors or background may or may not be sufficient to identify the object. Three variables control the achieved spatial resolution: (1) the nature of the target features, the most important being size; (2) the distance between the target and the sensing device; and (3) some inherent properties of the sensor, embodied in the term resolving power (a simple numerical sketch of this geometry appears at the end of this section).

The spectral properties of materials help in differentiating them from one another. Typical spectra are shown in Fig. 9: reflectance for vegetation rises abruptly at about 0.7 μm, followed by a gradual drop beyond about 1.1 μm. The first set of spectral signatures indicates a gradual rise in reflectance with increasing wavelength for the common manmade materials on the ground. Concrete, being light-colored and bright, has a notably higher average than dark asphalt; the other materials fall in between. The shingles are probably bluish in color, as suggested by a rise in reflectance at the blue end and a flat response over the remainder of the visible light region. The second set of curves indicates that most vegetation types are very similar in response between 0.3 and 0.5 μm, show moderate variations at mid-visible wavelengths, and display maximum variability (hence optimum discrimination) in the near-infrared range (Fig. 9).

4.1 Some Basic Image Processing Procedures

Basic image processing involves computer-based procedures for highlighting and extracting information about scene content, that is, the recognition, appearance, and identification of materials, objects, features, and classes. The processing procedures can broadly be characterized into three categories: image restoration (preprocessing); image enhancement; and classification and information extraction.
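Picking up the spatial-resolution variables above: under the simplifying assumption of nadir viewing, the ground patch resolved by a single detector element is roughly the platform altitude multiplied by the detector's instantaneous field of view (IFOV), an angular term introduced here, not used in the chapter's text. The numbers below are invented, though chosen to be roughly Landsat-like:

```python
# Approximate ground resolution at nadir: altitude x IFOV (IFOV in radians).
# Simplified geometry; both values below are illustrative assumptions.
def ground_resolution_m(altitude_m: float, ifov_rad: float) -> float:
    return altitude_m * ifov_rad

print(f"{ground_resolution_m(705e3, 42.5e-6):.1f} m")  # -> ~30.0 m
```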

Fig. 9  Spectral signatures, plotted as percent reflectance against wavelength (μm), of (a) non-vegetated land areas (concrete, asphalt, bare soil, gravel, shingles; after Root and Miller, 1971) and (b) vegetated land areas (grass, trees, sugar beet, wheat stubble, fallow fields)

5 Processing and Classification of Remotely Sensed Data; Pattern Recognition; Approaches to Data/Image Interpretation

The remotely sensed data from the satellites are analyzed to identify objects, extract information on surface features, and deduce their properties through observations of the reflected/scattered energy from the earth in different spectral bands. The images recorded in digital form are processed on the computer to produce images for interpretation. Digital images contain large amounts of data, and computers with large data-handling capabilities are used for digital image processing (DIP) work. Digital image processing involves techniques for the computer-based manipulation of digital image data, and comprises operations for noise removal, geometric and radiometric correction, enhancement of images, data compaction, image display and recording, and image data manipulation and management. In India, the NRSA provides data that are corrected for noise and for geometric and radiometric distortions. The digital image processing operations carried out during the present study involved conversion of data from band interleaved by line (BIL) to Intergraph format (.cot) and numerical operations for image enhancement, information extraction, data compaction, image display and recording, and image data manipulation and management.

5.1 Digital Image

A digital image consists of discrete picture elements called pixels. Associated with each pixel is a number, represented as Dn (digital number), that depicts the average radiance of a relatively small area within the scene (the pixel). The size of this area determines the reproduction of detail within the scene: as the pixel size is reduced, more scene detail is preserved in the digital representation. The pixel has a gray-scale value, where 0 corresponds to black and 255 to white, with shades of gray in between [1–5]. A Dn is simply one of a set of numbers based on powers of 2, such as 2⁶ = 64. The range of radiances can be recorded instrumentally as varying voltages, for example when the sensor signal is the conversion of photons counted at a specific wavelength or over wavelength intervals. The lower and upper limits of the sensor's response capability form the end members of the Dn range selected, and the voltages are divided into equal whole-number units based on the digitizing range selected. Thus, a Landsat TM band can have its voltage values (the maximum and minimum that can be measured) subdivided into 2⁸ = 256 equal units. These are arbitrarily set at 0 for the lowest value, so the range is then 0–255.
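A minimal sketch of this quantization, assuming a hypothetical sensor whose signal spans 0–5 V digitized to 8 bits:

```python
import numpy as np

# Linearly map a sensor voltage range onto 8-bit digital numbers (Dn).
# The 0-5 V range is an invented example, not a sensor specification.
def to_dn(signal, lo, hi, bits=8):
    """Map [lo, hi] onto 0 .. 2**bits - 1 and round to integers."""
    levels = 2 ** bits - 1
    scaled = (np.asarray(signal, dtype=float) - lo) / (hi - lo)
    return np.clip(np.round(scaled * levels), 0, levels).astype(np.uint8)

volts = np.array([0.0, 1.2, 2.5, 4.9, 5.0])
print(to_dn(volts, lo=0.0, hi=5.0))   # -> [  0  61 128 250 255]
```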

5.2 Digital Image Analysis System

The digital image processing system comprises hardware and software elements that help the analyst extract meaningful information from the data. The basic system components include input devices, processing elements, interactive devices, and output devices. DIP systems are capable of processing large volumes of data at a very fast rate. Speed of operation is of immense importance in DIP, as some image analysis techniques, such as classification and spatial filtering, are highly computation-bound; to facilitate fast processing, high-speed pipeline processors, such as array processors, may be used in conjunction with the CPU. A display system is an interactive device necessary for the user to interact with the computer. Normally a color scrolling display system with the ability to zoom, shrink, and pan is provided. The display area could be 512 × 512 (pixels × scan lines), 1,024 × 1,024, or even more in the case of high-resolution display systems. State-of-the-art display systems are microprocessor-controlled and provide a wide range of image analysis capabilities at increased speed. The analyzed outputs are stored in a form that aids further analysis by the users. Outputs can be any of the following:

- Computer compatible tapes (CCTs), floppies, cartridges, or CDs
- Gray-scale maps on a line printer or color plotter outputs
- Photographs of color monitor displays
- High-precision film output/imagery

5.3 Digital Data Formats

The remotely sensed data acquired from the satellites are stored in different types of formats:

- Band sequential (BSQ)
- Band interleaved by line (BIL)
- Band interleaved by pixel (BIP)

Each of these formats is preceded on tape by header or trailer information, which consists of ancillary data about the date, the altitude of the sensor, the sun angle, and so on. This information is useful when the data are corrected geometrically or radiometrically. The data are normally recorded on nine-track CCTs with a data density on the tape of 800, 1,600, or 6,250 bits per inch (bpi).

Band Sequential Format (BSQ)

This format requires that all data for a single band covering the entire scene be written as one file. Thus, if one wants the area in the center of the scene in four bands,

it would be necessary to read into this location in four separate files to extract the desired information. The format is preferred when certain bands are of no value, as it is then not necessary to read serially past the unwanted information. The number of tapes may depend on the number of bands provided for the scene.

Band Interleaved by Line Format (BIL)

In this format the data for the bands are written line by line onto the tape (i.e., line 1 band 1, line 1 band 2, line 1 band 3, line 1 band 4). It is a useful format if all the bands are to be used in the analysis. If some bands are not of interest, this format is inefficient, since in BIL it is necessary to read serially past all the unwanted data.

Band Interleaved by Pixel Format (BIP)

In this format, the data for each pixel are written together across all bands. Taking the example of four-band LANDSAT MSS data, every element in the matrix has four pixel values: pixel (1,1) of band 1, pixel (1,1) of band 2, pixel (1,1) of band 3, pixel (1,1) of band 4; then pixel (1,2) of band 1, pixel (1,2) of band 2, pixel (1,2) of band 3, pixel (1,2) of band 4; and so on. This data format is of use if all the bands are to be used; otherwise it is inefficient. The format is not very popular now; it was extensively used by the EROS Data Center for LANDSAT scenes in the early stages.
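A minimal numpy sketch of the three interleaving orders, using a hypothetical 4-band, 2-line, 3-pixel scene; only the flattening order is meant to mirror the tape layouts:

```python
import numpy as np

# A tiny invented scene: `bands` spectral bands, `rows` lines, `cols` pixels.
bands, rows, cols = 4, 2, 3
cube = np.arange(bands * rows * cols).reshape(bands, rows, cols)

bsq = cube.reshape(-1)                     # band 1 whole, then band 2, ...
bil = cube.transpose(1, 0, 2).reshape(-1)  # line 1 of each band, line 2, ...
bip = cube.transpose(1, 2, 0).reshape(-1)  # all bands of pixel (1,1), (1,2), ...

print(bsq[:6])  # [0 1 2 3 4 5]      : first line of band 1, start of band 1 file
print(bil[:6])  # [0 1 2 6 7 8]      : line 1 of band 1, then line 1 of band 2
print(bip[:6])  # [ 0  6 12 18  1  7]: four band values of pixel (1,1), then (1,2)
```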

5.4 Preprocessing

Preprocessing is an important and diverse set of image preparation programs that act to offset problems with the band data and recalculate Dn values to minimize these problems. Among the problems these programs correct for are atmospheric effects, sun illumination geometry, surface-induced geometric distortions, spacecraft velocity and attitude variations, the effects of Earth rotation, elevation, and curvature, abnormalities of instrument performance, the loss of specific scan lines, and others. Once performed on the raw data, these adjustments require appropriate radiometric and geometric corrections.

Resampling is one approach commonly used to produce better estimates of the Dn values for individual pixels. After the various geometric corrections and translations have been applied, the net effect is that the resulting redistribution of pixels involves their spatial displacement to new, more accurate relative positions. However, the radiometric values of the displaced pixels no longer represent the real-world values that would be obtained if this new pixel array could be re-sensed by the scanner, because the particular mixture of surface objects or materials in the original pixel has changed somewhat. The three most common transformations are:

- The nearest neighbor technique: the transformed pixel takes the value of the closest pixel in the pre-shifted array.
- The bilinear interpolation approach: the average of the Dn values of the four pixels surrounding the transformed output pixel is used.
- The cubic convolution technique: the sixteen closest input pixels are averaged; this usually leads to the sharpest image.

5.5 Image Enhancement

The principal objective of enhancement techniques is to process a given image so that the result is better suited to interpretation than the original image. Image enhancement techniques improve the apparent quality of the image; they are most useful because many satellite images, when examined on a color display, give inadequate information for image interpretation. A wide variety of techniques exists for improving image quality. Normally, image enhancement involves techniques for increasing the visual distinction between features in a scene: new images are created from the original image data in order to increase the interpretability of the information that can be visually extracted. The following techniques were used:

- Contrast enhancement
- Density slicing
- Edge enhancement
- Spatial filtering

Contrast Enhancement

Sensors record reflected and radiant flux from the earth's surface materials. The reflectance of materials differs between bands of electromagnetic radiation, and this results in contrast between the materials when recorded by remote sensing systems. Sometimes different materials reflect similar amounts of radiant flux in certain bands of the electromagnetic spectrum, resulting in a relatively low-contrast image. Besides this low-contrast characteristic of the materials themselves, lowered detector sensitivity often results in low-contrast imagery. Contrast can be defined as the ratio of the maximum intensity to the minimum intensity over an image:

C = I_max / I_min.

The contrast ratio has a strong bearing on the resolving power and detectability of an image: the larger this ratio, the easier it is to interpret the image. Most satellite images lack adequate contrast and require contrast improvement. Low contrast may result from the following:

- The object and the background of the terrain have a nearly uniform electromagnetic response in the wavelength band of energy that is recorded by the remote sensing system; i.e., the scene itself has low contrast.
- Scattering of electromagnetic energy by the atmosphere can reduce the contrast of a scene. This effect is most pronounced in the shorter wavelength portions of the EMR.
- The remote sensing system may lack sufficient sensitivity to detect and record the contrast of the terrain. Also, incorrect recording techniques can result in low-contrast imagery even though the scene itself has high contrast.

Images with low contrast are commonly referred to as "washed out," with nearly uniform tones of gray. Detectors on board are designed to record a wide range of scene brightness values without becoming saturated; consequently, the image of a particular study area seldom utilizes the full range of brightness unless it is subjected to enhancement. The contrast enhancement technique expands the range of brightness values in an image so that the image can be efficiently displayed in a desired manner. The Dn values are literally pulled apart, that is, expanded over the full range of Dn values. The effect is to increase the visual contrast between two areas of initially nearly uniform Dn values, which eases identification. Contrast modification is the most commonly applied image enhancement technique. Some enhancement is possible by photographic duplication on a high-contrast film, but this photographic enhancement results in an overall loss of information: the limited dynamic range of film loses information at the bright and dark extremes of the image. To avoid loss of information at the tails, the gray scale is instead enhanced by digital image enhancement. Contrast enhancements are of two types: linear and non-linear.

Linear Contrast Enhancement

The linear contrast enhancement expands the original Dn values to make use of the total range of 256 gray levels of the output device. Linear contrast enhancement is best applied to remotely sensed images with Gaussian or near-Gaussian histograms, that is, when all the brightness values fall within a single narrow range and only one mode is apparent [2, 5]. To perform linear contrast enhancement, the image statistics are examined to determine the minimum and maximum brightness values in the band (min_k and max_k). The output brightness value, BV_out, is computed according to the following equation:

BV_out = [(BV_in − min_k) / (max_k − min_k)] × quant_k,

where BV_in is the original input brightness value and quant_k is the range of brightness values that can be displayed on the CRT (e.g., 256).
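A minimal sketch of this equation for an 8-bit display (quant_k = 255); the 2 × 2 input band is invented:

```python
import numpy as np

# Linear contrast stretch: map [min_k, max_k] onto the full display range.
def linear_stretch(band, quant=255):
    lo, hi = band.min(), band.max()
    return ((band.astype(float) - lo) / (hi - lo) * quant).astype(np.uint8)

band = np.array([[20, 30], [45, 60]])
print(linear_stretch(band))
# [[  0  63]
#  [159 255]]
```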

Min–Max Contrast Stretch

In this contrast enhancement, the minimum and maximum Dn values are redistributed over 0 to 255, with the minimum placed at 0 and the maximum at 255.

Percentage Linear Contrast Stretch

The min_k and max_k that lie at a certain percentage of pixels from the mean of the histogram can be specified; this is known as a percentage linear contrast stretch. In this enhancement, if the minimum and maximum are placed at ±1 standard deviation from the mean, corresponding to Dn values of 20 and 60, then on stretching, the Dn value 20 becomes 0 and 60 becomes 255, while all values from 0 to 19 become 0 and all values from 61 to 255 become 255. This results in more pure black and pure white pixels in the scene, increasing the contrast of the image, but the information content of the pixels saturated at 0 and 255 is lost. The slope of a percentage linear contrast stretch is much greater than that of a simple min–max stretch [2, 5]. The linear stretch of an image having a non-Gaussian histogram is performed piecewise: a number of linear segments are identified and stretched separately.

5.6 Single Bands Linear Stretch

Non-Linear Contrast Enhancement

In this type of enhancement, the input and output data values follow a non-linear transformation.

Gaussian Stretch

The brightness levels of the individual bands are redistributed to an approximately normal distribution. The approximation to a normal distribution is achieved by changing the mean and standard deviation of the stretched histogram. The mean of the Dn values of the histogram is adjusted by shifting the center of the normal distribution within the Dn range; the standard deviation varies between 1 and 255 Dn. Increasing the standard deviation widens the normal distribution curve and increases the contrast of the image, while decreasing the standard deviation bunches the stretched histogram up about the mean, which reduces the contrast of the image.

Histogram Equalization

One of the most useful techniques is histogram equalization. Here the algorithm passes through the individual bands of the dataset and assigns approximately an

equal number of pixels to each of the user-specified gray-scale classes. Histogram equalization provides the greatest contrast enhancement to the most populated range of brightness values in the image, and it automatically reduces contrast in the very light or dark parts of the image associated with the tails of a normally distributed histogram. The frequency of occurrence of a brightness value, f(BV_i), is the ratio of the number of pixels in the scene with that value to the total number of pixels. The probability is p_i = f(BV_i)/n, where p_i is the probability and n is the total number of pixels. For each brightness value level BV_i in the quant range 0 to quant_k of the original histogram, a new cumulative frequency K_i is calculated:

K_i = Σ_{i=0}^{quant_k} f(BV_i) / n,

where the summation counts the frequency of pixels in the image with brightness values equal to or less than BV_i, and n is the total number of pixels in the entire scene. The histogram transformation function compares K_i with the original values L_i to determine the closest match, and each brightness value is reassigned to an appropriate new brightness value. Histogram equalization results in rescaling the brightness levels of the scene into a lower number of brightness levels, and this quantization of the gray levels reduces the smoothness of edges.

Logarithmic Contrast Enhancement

The logarithmic contrast enhancement is non-linear; it has the greatest impact on the brightness values found in the darker parts of the histogram. It can be reversed to enhance values in the brighter part of the histogram by scaling the input data using an inverse log function.
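A minimal sketch of histogram equalization via the cumulative frequencies K_i above; the synthetic image bunches its brightness values into a narrow range so that the effect is visible:

```python
import numpy as np

# Histogram equalization: map each brightness value through its cumulative
# frequency K_i, rescaled to the display range.
def equalize(band, levels=256):
    hist = np.bincount(band.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / band.size          # K_i for each brightness value
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[band]

# Invented test image with values crowded into 90..109.
img = np.random.default_rng(0).integers(90, 110, size=(64, 64), dtype=np.uint8)
out = equalize(img)
print(img.min(), img.max(), "->", out.min(), out.max())  # narrow range spread out
```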

5.7 Filtering

Filtering of the data can be carried out in both the spatial and frequency domains. Spatial filters are applied as masks. Enhancement in the frequency domain involves the Fourier transform of the image and its multiplication by a filter transfer function; the inverse transform of the result is then taken to produce the enhanced image. The most common spatial filters include low-pass, high-pass, and median filters.

Spatial Filtering

Spatial frequency is defined as the number of changes in brightness value per unit distance in any particular part of the image. If there are very few changes in brightness value over a given area of an image, it is called a low-frequency image; if the changes are frequent over a short distance, it is called a high-frequency image. Spatial frequency describes brightness values by taking into account the brightness of the neighboring pixels. The spatial frequencies in an image can be enhanced or subdued using two different approaches, i.e., spatial convolution (spatial filtering) and Fourier analysis (frequency domain). Spatial convolution is used for enhancing the low- and high-frequency detail in imagery.

5.8 Low-Frequency Filtering in the Spatial Domain

Low-Pass Filters

Enhancements that block high spatial frequency detail are called low-pass filters. Such a filter evaluates a particular input pixel brightness value, BV_in, together with the pixels surrounding it, and outputs a new brightness value, BV_out, for the central pixel that is the mean of this convolution. The size of the kernel (n) is usually 3 × 3, 5 × 5, or 7 × 7. In a low-frequency mask the coefficients are all set equal to 1:

Mask A =
  1 1 1
  1 1 1
  1 1 1

The coefficients c_i in the mask are multiplied by the corresponding individual brightness values BV_i in the input image:

Mask template =
  c_1·BV_1  c_2·BV_2  c_3·BV_3
  c_4·BV_4  c_5·BV_5  c_6·BV_6
  c_7·BV_7  c_8·BV_8  c_9·BV_9

where BV_1 = BV_{i−1,j−1}, BV_2 = BV_{i−1,j}, BV_3 = BV_{i−1,j+1}, BV_4 = BV_{i,j−1}, BV_5 = BV_{i,j}, BV_6 = BV_{i,j+1}, BV_7 = BV_{i+1,j−1}, BV_8 = BV_{i+1,j}, and BV_9 = BV_{i+1,j+1}. The primary pixel under investigation is BV_5 = BV_{i,j}. The low-frequency image is obtained from the original data by the expression

Low-frequency image = Int[Σ_{i=1}^{n} c_i·BV_i / n] = Int[(BV_1 + BV_2 + BV_3 + … + BV_9) / 9].

The values for each pixel are calculated in turn; this is called image smoothing. Image smoothing is useful for removing the periodic noise recorded by remote sensing systems. The effect of low-pass filtering is to smooth the image by cutting off the high-frequency components and passing only the low frequencies.
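A minimal sketch of the 3 × 3 mean convolution above; the border pixels are simply left unfiltered, one of several common edge conventions, and the test array is invented:

```python
import numpy as np

# 3x3 low-pass (mean) filter; each interior pixel becomes the mean of its
# 3x3 neighbourhood, which smooths isolated noise spikes.
def low_pass(band):
    out = band.astype(float)
    for i in range(1, band.shape[0] - 1):
        for j in range(1, band.shape[1] - 1):
            out[i, j] = band[i-1:i+2, j-1:j+2].mean()
    return np.round(out).astype(band.dtype)

noisy = np.array([[10, 10, 10, 10],
                  [10, 90, 10, 10],
                  [10, 10, 10, 10],
                  [10, 10, 10, 10]], dtype=np.uint8)
print(low_pass(noisy))  # the 90 'spike' is averaged down toward its neighbours
```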

Weighted Low-Pass Filters

The weighted low-pass mask carries an option of giving a center weight to an individual pixel, and can be used for selective smoothing.

Median Filter

The median filter is useful for removing noise in an image, especially shot noise, in which individual pixels are corrupted or missing. Instead of taking the mean of the nine pixels in a 3 × 3 mask, the median filter ranks the pixels in the neighborhood from lowest to highest and selects the median value. The median filter has several advantages over weighted convolution filters: it does not shift boundaries and causes minimal degradation to edges, which allows the median filter to be applied repeatedly; as a result, fine details are erased and large regions acquire the same brightness value. The standard median filter will erase lines in the image that are narrower than the half-width of the neighborhood, and will round or clip corners.

Edge-Preserving Median Filter

In the edge-preserving median filter, the median value of the black pixels and that of the gray pixels are computed in a 5 × 5 array; these two values and the central original brightness value are ranked in ascending order, and a final median value is selected to replace the central pixel. This filter preserves edges and corners. The minimum–maximum filter operates on one pixel at a time: it examines the brightness values within a user-specified radius (e.g., a 3 × 3 kernel) and replaces the brightness value of the current pixel with the minimum or maximum brightness value encountered.

Sigma Filter

The adaptive box (sigma) filter is of value for removing noise in digital images. Adaptive box filters are used to remove random bit errors, where pixel values have no relation to the image scene (i.e., shot noise), and to smooth noisy data, where the pixels are related to the image scene but carry an additive or multiplicative component of noise. The procedure relies on computing the standard deviation s of only those pixels within a local box surrounding the central pixel (i.e., the eight values surrounding the fifth, central pixel in a 3 × 3 mask). The original brightness value at location 5 is considered a bit error if it deviates from the box mean of the eight values by more than s, in which case it is replaced by the box mean. This is called an adaptive filter because it is based on the computation of the standard deviation for each 3 × 3 window rather than the standard deviation of the entire scene.

In this way, very minor bit errors are removed from low-variance areas, but valid data along sharp edges and corners are not replaced.

Lee Filter

The Lee adaptive filter can be used for cleaning up extremely noisy images and is based on the sigma filter [6]. Lee's filter computes the standard deviation for the entire scene; the fifth pixel in a 3 × 3 moving window is then replaced by the average of those neighboring pixels whose intensity lies within a fixed s range of the central pixel. Because the filter averages only the pixels within the box whose intensities fall within s of the central pixel, it effectively reduces speckle and the salt-and-pepper texture in images without eliminating fine details [7]. The sigma and Lee filters can be combined into a single program for processing images with both random bit errors and noisy data.

5.9 High-Frequency Filtering in the Spatial Domain

High-pass filtering is applied to imagery to remove the slowly varying components and enhance the high-frequency local variations. The high-frequency filter (HFF) is computed by subtracting the output of the low-frequency filter (LFF) from twice the value of the original central pixel in a 3 × 3, 5 × 5, or 7 × 7 matrix. In a 3 × 3 matrix, the output value for the fifth (central) pixel is computed by the expression

HFF_{5,out} = (2 × BV_5) − LFF_{5,out}.

Brightness values tend to be highly correlated in a 3 × 3 window; thus high-frequency-filtered images have a relatively narrow intensity histogram, and the output from most high-frequency-filtered images must be contrast stretched prior to visual analysis. This high-pass filter sharpens edges.
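A minimal sketch of the HFF expression above, reusing the 3 × 3 mean as the low-frequency term; the synthetic ramp makes the characteristic overshoot at edges visible:

```python
import numpy as np

# High-frequency filter: HFF5 = 2*BV5 - LFF5, with LFF5 the 3x3 mean.
def high_pass(band):
    out = band.astype(float)
    for i in range(1, band.shape[0] - 1):
        for j in range(1, band.shape[1] - 1):
            lff = band[i-1:i+2, j-1:j+2].mean()
            out[i, j] = 2 * band[i, j] - lff
    return out

ramp = np.tile(np.array([10., 10., 50., 50.]), (4, 1))
print(high_pass(ramp))   # values overshoot on either side of the edge
```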

5.10 Edge Enhancement in the Spatial Domain

Edge enhancement delineates edges and makes the shapes and details comprising the image more conspicuous and perhaps easier to analyze. Generally, pictorial edges are simply sharp changes in brightness value between two adjacent pixels. In earth science applications, the most valuable information that may be derived from an image is often contained in the edges surrounding various objects of interest. Edges may be enhanced using either linear or non-linear edge-enhancement techniques.

Linear Edge Enhancement

A straightforward method of extracting edges in remotely sensed imagery is the application of a directional first-difference algorithm that approximates the first derivative between two adjacent pixels. The algorithm produces the first difference of the image input in the horizontal, vertical, and diagonal directions:

Vertical:     BV_{i,j} = BV_{i,j} − BV_{i,j+1} + K
Horizontal:   BV_{i,j} = BV_{i,j} − BV_{i−1,j} + K
NE diagonal:  BV_{i,j} = BV_{i,j} − BV_{i+1,j+1} + K
SE diagonal:  BV_{i,j} = BV_{i,j} − BV_{i−1,j+1} + K

The result of the subtraction can be either negative or positive, so a constant K, usually 127, is added to make all values positive and centered between 0 and 255. This enhances the contrast between adjacent pixels. The resultant image is normally min–max contrast stretched to enhance the edges even more; it is best to make the min–max values in the contrast stretch a uniform distance from the midrange value (127). This causes the uniform areas to appear in shades of gray, while the important edges become black or white. The edge enhancement is carried out by convolving the original data with a kernel. Chavez et al. [8] suggested an optimum kernel size of 3 × 3, 5 × 5, or 7 × 7, based on the surface roughness and sun angle characteristics of the data. They developed a procedure based on the first difference in the horizontal direction; an offset of 127 is added to the result and the data are contrast stretched. The direction of the embossing is controlled by changing the coefficients around the periphery of the mask.

Gradient Filter

It has been observed that low-pass filtering based on the averaging of pixel values results in smoothing; the process is analogous to integration. Differentiation can be expected to have the opposite effect, i.e., sharpening of the image. An object is recognized in the image not only because of differences in gray values but also because of differences in the pattern and orientation of pixels. The gradient filter was developed on the logic that the local orientation of a pattern is the property that describes the edges of image features. For an image function f(x, y), the gradient of f at coordinates (x, y) can be defined as the vector

∇f = [∂f/∂x, ∂f/∂y]ᵀ.

The magnitude of this vector is

mag(∇f) = [(∂f/∂x)² + (∂f/∂y)²]^{1/2}.
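A minimal sketch of the gradient magnitude above, approximating the partial derivatives with finite differences via numpy's gradient; the two-block test image is invented:

```python
import numpy as np

# Gradient magnitude mag(grad f) = sqrt(fx^2 + fy^2), with the partial
# derivatives estimated by finite differences.
def gradient_magnitude(band):
    fy, fx = np.gradient(band.astype(float))   # d/drow, d/dcol
    return np.hypot(fx, fy)

step = np.hstack([np.full((4, 2), 40.), np.full((4, 2), 200.)])
print(gradient_magnitude(step))   # large values along the vertical edge
```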

Compass Filter

The compass mask is based on gradient filtering and may be used to perform two-dimensional, discrete, directional edge enhancement. The compass filter's name indicates the slope direction of maximum response: thus, the east gradient mask produces a maximum output for horizontal changes in brightness value from west to east. Because the gradient masks have zero weight, they give no output response over regions with constant brightness values, i.e., where no edges are present.

Laplacian Filter

The Laplacian filter is a special high-pass filter that effectively enhances plumes and other subtle features, as well as sensor noise, in the image. A Laplacian convolution mask is applied to perform edge enhancement; it is insensitive to direction and invariant to rotation. The Laplacian operator is

L = ∂²/∂x² + ∂²/∂y²,

with the discrete transfer function

L̂ = 2cos(πk_x) + 2cos(πk_y) − 4.

Subtracting the Laplacian edge enhancement from the original image restores the overall gray-level variation and sharpens the image by locally increasing the contrast at discontinuities. The Laplacian operator highlights points, lines, and edges in the image and suppresses uniform and smoothly varying regions; by itself, a Laplacian image is difficult to interpret. A combination of the gradient and Laplacian edge operators may be used for edge enhancement, which may be superior to either enhancement alone.

5.11 Non-Linear Edge Enhancement

Non-linear edge enhancements are performed using non-linear combinations of pixels. Many algorithms are applied using either 2 × 2 or 3 × 3 kernels.

Sobel Edge Detector

The Sobel edge detector is based on the notion of the 3 × 3 window and is computed according to the relationship

Sobel_{5,out} = (X² + Y²)^{1/2}.

Here

X = (BV_3 + 2BV_6 + BV_9) − (BV_1 + 2BV_4 + BV_7),
Y = (BV_1 + 2BV_2 + BV_3) − (BV_7 + 2BV_8 + BV_9).

The operator detects horizontal, vertical, and diagonal edges. Each pixel in an image is declared an edge if its Sobel value exceeds some user-specified threshold. This information may be used to create an edge map, which often appears as white lines on a black background, or vice versa.

Roberts Edge Detector

The Roberts edge detector is based on the use of only four elements of the 3 × 3 mask [9]. The new pixel value at pixel location BV_{5,out} is computed according to the equation

Roberts_{5,out} = (X² + Y²)^{1/2},

where X = BV_5 − BV_9 and Y = BV_6 − BV_8.

Kirsch Filter

The gradient filters above imply enhancement of the edge gradient in two orthogonal directions, i.e., rows and columns. In the Kirsch filter, the edge enhancement is carried out modulo 8, with gradient directions East, North-east, North, North-west, West, South-west, South, and South-east. Template gradient 3 × 3 impulse response arrays are used, with a scale factor of 1/15 [10]. The Kirsch non-linear edge enhancement calculates the gradient at pixel location BV_{i,j} by the algorithm

BV_{i,j} = max{1, max_{i=0..7}[Abs(5S_i − 3T_i)]},

where S_i = BV_i + BV_{i+1} + BV_{i+2} and T_i = BV_{i+3} + BV_{i+4} + BV_{i+5} + BV_{i+6} + BV_{i+7}. The subscripts of BV are evaluated modulo 8, meaning that the computation moves around the perimeter of the mask in eight steps. The edge enhancement computes the maximal compass gradient magnitude about the input image point BV_{i,j}. The value of S_i equals the sum of three adjacent pixels, while T_i equals the sum of the remaining five adjacent pixels; the input pixel value at BV_{i,j} itself is never used in the computation. The effect of the Kirsch operator is enhancement of linear edges and boundaries.
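A minimal sketch of the Kirsch computation above for a single 3 × 3 window. The perimeter pixels are taken clockwise from the top-left corner, one possible numbering convention, and the 1/15 scale factor is omitted:

```python
import numpy as np

# Kirsch response: max{1, max_i |5*S_i - 3*T_i|} over the 8 perimeter pixels
# (the central pixel is never used; scale factor 1/15 omitted).
def kirsch_response(window):
    w = np.asarray(window, dtype=float)
    p = [w[0, 0], w[0, 1], w[0, 2], w[1, 2],
         w[2, 2], w[2, 1], w[2, 0], w[1, 0]]   # perimeter, clockwise
    best = 1.0
    for i in range(8):
        S = p[i] + p[(i + 1) % 8] + p[(i + 2) % 8]
        T = sum(p[(i + j) % 8] for j in range(3, 8))
        best = max(best, abs(5 * S - 3 * T))
    return best

flat = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
edge = [[10, 10, 200], [10, 10, 200], [10, 10, 200]]
print(kirsch_response(flat), kirsch_response(edge))  # 1.0 vs a large response
```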

Wallis Filter

Edge crisping can also be achieved through statistical differentiation, which involves generating an image by dividing each pixel value by its estimated standard deviation according to the basic relation

G(j,k) = F(j,k) / S(j,k),

where F(j,k) is the brightness value of the pixel, S(j,k) is its expected standard deviation, and G(j,k) is the computed output Dn value. The statistical differentiation operator in which the enhanced image takes values from desired first-order and second-order moments is referred to as the Wallis filter [11].

Prewitt Filter

The Prewitt filter is an edge gradient operator described by a pixel numbering convention [12]. In the Prewitt filter, the row and column gradients are normalized to provide unit-gain positive-weighted and unit-gain negative-weighted averages about a separated edge position. The Prewitt edge detector has K = 1, whereas the Sobel detector has K = 2; as a result, the north, south, east, and west pixel values remain the same in the Prewitt mask, and its output usually shows weaker edges than the Sobel output, in which the north, south, east, and west pixel values are doubled.

Frost Filter

The Frost filter removes high-frequency noise while preserving edges. It removes speckle noise from images, and it can also be used to remove high-frequency noise from any type of image. Areas of the image having low spatial frequencies are smoothed, while areas containing high spatial frequencies are not affected; the result is that speckle noise is removed from the smooth areas of the image while the edges are kept clean.

5.12 Principal Components Analysis

Principal components analysis (PCA), or Karhunen–Loève analysis, is useful for the analysis of highly correlated multi-spectral remotely sensed data [13, 14]. The transformation of raw remote sensor data using PCA can result in new principal component images that may be more interpretable than the original data [14, 15]. In PCA the transformation is applied to a correlated set of multi-spectral data; applying the transformation to the correlated remote sensor data yields a new, uncorrelated multi-spectral dataset with certain ordered variance properties. The transformation is conceptualized by considering the two-dimensional distribution of pixel values obtained in two bands, which can be labeled X_1 and X_2. The spread, or variance, of the distribution of points is an indication of the correlation and quality of information associated with both bands; if all the points are clustered in an extremely tight zone in two-dimensional space, the data provide very little information.
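As a numerical preview of the transformation formalized below, the sketch re-projects two strongly correlated synthetic bands onto uncorrelated principal component axes via the eigenvectors of their covariance matrix:

```python
import numpy as np

# Unstandardized PCA on two invented, strongly correlated bands.
rng = np.random.default_rng(1)
band1 = rng.normal(100, 20, 5000)
band2 = 0.8 * band1 + rng.normal(0, 5, 5000)     # correlated with band1

X = np.vstack([band1, band2])                    # shape (bands, pixels)
cov = np.cov(X)
eigvals, eigvecs = np.linalg.eigh(cov)           # ascending eigenvalues
pcs = eigvecs.T[::-1] @ (X - X.mean(axis=1, keepdims=True))  # PC1 first

print(np.round(cov, 1))
print("variance on PC1:", round(eigvals[-1] / eigvals.sum(), 3))  # ~0.98
```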

The initial measurement coordinate axes X_1 and X_2 may not be the best arrangement in multi-spectral feature space for analyzing the remotely sensed data associated with these two bands. PCA translates and/or rotates the original axes so that the original brightness values on axes X_1 and X_2 are redistributed (re-projected) onto a new set of axes, X′_1 and X′_2. The coordinate system might be rotated about its new origin (m_1, m_2) by some angle φ so that the first axis X′_1 is associated with the maximum amount of variance in the scatter of points; this new axis is called the first principal component axis, PC_1 = λ_1. The second principal component axis, PC_2 = λ_2, is orthogonal to PC_1. Thus, the major and minor axes of the ellipsoid of points in bands X_1 and X_2 are called the principal components. To re-project the original data on the X_1 and X_2 axes onto the PC_1 and PC_2 axes, certain linear transformations must be applied to the original pixel values. The required linear transformation is derived from the covariance matrix of the original data set; thus, this is a data-dependent process, with each data set yielding different transformation coefficients. The transformation is calculated from the original spectral statistics [16] as follows:

1. The n × n covariance matrix, Cov, of the n-dimensional remote sensing data set to be transformed is computed. Use of the covariance matrix results in an unstandardized PCA, whereas use of the correlation matrix results in a standardized PCA.

2. The eigenvalues E = [λ_{1,1}, λ_{2,2}, λ_{3,3}, …, λ_{n,n}] and eigenvectors EV = [a_{kp}, for k = 1 to n bands and p = 1 to n components] are computed, such that

EV · Cov · EVᵀ = E,

where EVᵀ is the transpose of the eigenvector matrix EV, and E is a diagonal matrix whose diagonal elements λ_{ii}, called eigenvalues, are the variances of the pth principal components, for p = 1 to n components. The off-diagonal elements of E are equal to zero and can therefore be ignored. The number of nonzero eigenvalues in an n × n covariance matrix always equals n, the number of bands examined. The eigenvalues are often referred to by component (eigenvalue 1 may be referred to as PC_1), where λ_p is the pth eigenvalue out of the possible n eigenvalues.

3. By calculating the correlation of each band k with each component p, it is possible to determine how each band is associated with each principal component. The equation is

R_{kp} = (a_{kp} × √λ_p) / √Var_k,

where a_{kp} is the eigenvector element for band k and component p, λ_p is the pth eigenvalue, and Var_k is the variance of band k in the covariance matrix.

This computation results in a new n × n matrix. Each component contributes different information, which can be seen in different images; to form these images it is necessary to identify the brightness values (BV_{i,j,k}) associated with each pixel. Loève [17] and Castleman [18] suggest that standardized PCA, based on the computation of eigenvalues from correlation matrices, is superior to unstandardized PCA

computed from covariance matrices when analyzing change in multi-temporal image datasets.

Another use of PCA, mainly as a means to improve image enhancement, is known as the decorrelation stretch (DS). The DS optimizes the assignment of colors to bring out subtle differences not readily distinguished in natural and false color composites. This reduction in interband correlations emphasizes small but often diagnostic reflectance or emittance variations owing to topography and temperature. The first step is to transform the band data into at least the first three PCs. Each component is rescaled by normalizing the variance of the PC vectors; then each PC image is stretched, usually following the Gaussian mode. The stretched PC data are then projected back into the original channels, which are enhanced to maximize spectral sensitivity (Fig. 10). Users of ASTER data have found decorrelation stretching to be particularly effective in image display. The stretch is effective whether the bands used are in the visible, the SWIR, or the thermal IR interval. The three ASTER scenes in Fig. 11 (again, of an unidentified area) show the effects of a DS. The difference between the PC color composites and the DS color composites is generally not large, but the extra statistical data manipulation in the latter often leads to a better product.

5.13 Band Ratio

The band ratio minimizes the effects of environmental factors, such as slope and aspect, shadows, or seasonal changes, which affect the brightness values of an identical surface. Band ratios can provide unique information that is not available in any single band. The mathematical expression of the ratio function is

BV_{i,j,r} = BV_{i,j,k} / BV_{i,j,l},

Fig. 10  An original image (right) and PC-enhanced imagery (left)

where BV_ijr is the output ratio value for the pixel at row i, column j in ratio image r; BV_ijk is the brightness value at the same location in band k; and BV_ijl is the brightness value in band l. If a brightness value in the denominator is equal to 0, the alternatives are to restrict the mathematical domain of the function to 1/255 to 255 (excluding 0), to assign the 0 a value of 1, or to add a small value of the order of 0.1 to the denominator. Normalization is done to encode the values in a standard eight-bit format. Using the normalization function, the ratio value 1 is assigned the brightness value 128, and ratio values between 1/255 and 1 are assigned values between 1 and 128 by the function

BV_ijn = Int[(BV_ijr x 127) + 1].

Ratio values from 1 to 255 are assigned values within the range 128 to 255 by the function

BV_ijn = Int[(BV_ijr / 2) + 128].
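A minimal Python/NumPy sketch of this ratioing and eight-bit normalization follows; the small 0.1 offset in the denominator is the zero-handling alternative mentioned above, and the function name is illustrative.

import numpy as np

def band_ratio(band_k, band_l):
    k = band_k.astype(np.float64)
    l = band_l.astype(np.float64) + 0.1       # avoids division by zero
    r = k / l
    scaled = np.where(r < 1.0,
                      r * 127.0 + 1.0,        # ratios in (1/255, 1) -> 1..128
                      r / 2.0 + 128.0)        # ratios in (1, 255)   -> 128..255
    return np.clip(np.rint(scaled), 0, 255).astype(np.uint8)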

Fig. 12 Band ratio image showing mineralization in lithology

Three pairs of ratio images can be co-registered (aligned) and projected as color composites. In individual ratio images and in these composites, certain ground features tend to be highlighted, based on unusual or anomalous ratio values. For example, an ore deposit may be weathered or altered so that a diagnostic surface staining, called gossan, develops. This stain consists of hydrated iron oxide (rust) that is normally yellow-brown. In band 3 this material reflects strongly in the red, but it is apt to be dark in band 4. The ratio quotient values for this situation therefore tend to exceed 2-3, giving rise to a bright spot pattern in a 3/4 image (Fig. 12). The yellows and reds in this composite (Fig. 12) denote areas of rock alteration and mineralization.

6 Classification

6.1 Image Classification

The computer-aided classification of remotely sensed data is based on the concept that a pixel is characterized by its spectral signatures, which vary in different wave bands. The spectral signatures of themes are assigned a gray level or color which relates the Dn values to thematic information. The information extraction process which analyzes the spectral signatures and assigns pixels to thematic categories based on the similarity of their Dn values is referred to as classification. There are two types of classification:

Unsupervised classification
Supervised classification

6.2 Unsupervised Classification

In this classification, numerical operations search for natural groupings of the spectral properties of the pixels in an image. The computer selects the class mean vectors and covariance matrices to be used in the classification. Once the data are classified, they are assigned to natural spectral classes, and the spectral classes are then converted to information classes of interest. Some of the clusters are meaningless, as they represent mixed classes of earth surface materials. Unsupervised classification attempts to cluster the Dn values of the scene into natural boundaries using numerical operations.

An unsupervised classification operating on the color composite made from bands 2, 3, and 4, specifying just six clusters, is shown in Fig. 13. The light buff colors associate with the marine waters but are also found in the mountains, where shadows are evident in the individual band and color composite images. Red occurs where there is some heavy vegetation. Dark olive is found almost exclusively in the ocean against the beach. The orange, green, and blue colors have less discrete associations. The image in Fig. 14 shows bands 2, 3, and 4 with 15 clusters and a different color scheme. In this image many individual areas represented by clusters do not appear to correlate well; what is happening is a rather artificial subdivision of spectral responses from small segments of the surface. Another composite, of bands 4, 7, and 1, shows a new classification with the same problems as the first, although sediment variation in the ocean is better discriminated (Fig. 15).

Fig. 13 Imagery depicting unsupervised classification

Fig. 14 An unsupervised classification image with 15 classes

Fig. 15 An unsupervised classification image

The unsupervised classification is too much of a generalization, and the clusters only roughly match some of the actual classes. Its value is mainly as a guide to the spectral content of a scene, to aid in making a preliminary interpretation prior to conducting the much more powerful supervised classification procedures. Several types of unsupervised classification are explained briefly below.

Sequential Clustering

This clustering algorithm operates in a two-pass mode. In the first pass, the program reads through the data set and sequentially builds clusters (groups of points in spectral space); a mean vector is associated with each cluster. In the second pass, a minimum-distance-to-mean-vector classification algorithm is applied pixel-wise, and each pixel is assigned to one of the mean vectors created in pass 1.

Pass 1: Cluster Building

During the first pass, the analyst may be required to supply four types of information:

1. R, a radius in spectral space used to determine when a new cluster should be formed.
2. C, a spectral space distance parameter used when merging clusters.
3. N, the number of pixels to be evaluated between each merging of the clusters.
4. C_max, the maximum number of clusters to be identified by the algorithm.

These can be set to default values if no initial human interaction is desired. In sequential clustering the data set is evaluated sequentially from left to right, beginning at line 1, column 1. The brightness values associated with each pixel represent an n-dimensional mean data vector, where n is the number of bands. The spectral distance (D) between a pixel and a cluster mean is computed using the Pythagorean theorem. If D between a pixel and the nearest existing cluster mean is within R, the mean data vector of that cluster becomes the average of the contributing pixel brightness values; otherwise the pixel begins a new cluster. If the distance between two cluster means falls below C, the two clusters are merged.

Pass 2: Assignment of Pixels to One of the C_max Clusters Using Minimum-Distance Classification Logic

The final cluster mean data vectors are used in a minimum-distance-to-means classification algorithm to classify all the pixels in the image into one of the C_max clusters. It is necessary to evaluate the location of the clusters in the image, to label them, and usually to combine some clusters. Cluster labeling is performed by interactively displaying all the pixels assigned to an individual cluster, making it possible to identify their location and spatial association with other clusters. This interactive visual analysis, in conjunction with the information provided in the scatter plot, allows one to group clusters into information classes.

Statistical Clustering

This method of unsupervised classification takes into account the homogeneity of neighboring groups of pixels instead of considering individual pixels equally. The algorithm uses 3 x 3 sets of contiguous pixels that have similar measurement vectors. The assumption behind this algorithm is that contiguous, homogeneous pixels usually indicate a spatial pattern within the data that is worth classifying. The process consists of two parts: testing homogeneity within the window of pixels being considered, and cluster merging and deletion.

Homogeneity Parameters

Windows of nine pixels (3 x 3) are tested for homogeneity, from the upper left corner of the data, moving one window at a time so that the windows do not overlap. A skip factor may be specified so that every x-th window across and every y-th window down is tested. The mean and standard deviation are calculated for the nine pixels in each band, and these values are compared to the values entered:

L, a lower bound for the standard deviation. This value is usually small, but not equal to zero. Its primary purpose is to prevent clusters with a standard deviation of zero in any band; a cluster with a standard deviation of zero would also have a covariance of zero with every band, causing zeros to appear in the covariance matrix.

U, an upper bound for the standard deviation. The higher the standard deviation in a band, the less homogeneous the data are in that band; the upper bound is therefore a ceiling on the amount of variation within a window.

The coefficient of variation, V, is an alternative test for homogeneity, based on the mean of the cluster.

Isodata Classification

Isodata stands for iterative self-organizing data analysis technique. It is iterative in that it repeatedly performs an entire classification and recalculates the statistics. Self-organizing refers to the way in which it locates clusters with minimum user input. The Isodata classification uses minimum spectral distance to assign a cluster to each pixel; it begins with a specified number of arbitrary cluster means and then processes repetitively, so that those arbitrary means shift to the means of the clusters in the data. Clusters with large variances are split, and clusters with small variances are merged.
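The iterative assign-and-update loop at the heart of these clustering schemes can be sketched in a few lines of Python/NumPy, shown below without the split/merge refinements of Isodata or the R, C, and N parameters of sequential clustering; the names and defaults are illustrative only.

import numpy as np

def iterative_clustering(pixels, n_clusters, n_iter=20):
    # pixels: array of shape (n_pixels, n_bands); suitable for modest scenes
    rng = np.random.default_rng(0)
    # begin with arbitrary cluster means drawn from the data itself
    means = pixels[rng.choice(len(pixels), n_clusters, replace=False)].astype(float)
    for _ in range(n_iter):
        # minimum spectral (Euclidean) distance assignment
        d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # the arbitrary means shift to the means of their assigned pixels
        for c in range(n_clusters):
            if np.any(labels == c):
                means[c] = pixels[labels == c].mean(axis=0)
    return labels, means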

6.3 Supervised Classification

In supervised classification the analyst has to specify all the parameters. The steps to be taken when attempting a supervised classification are:

An appropriate classification scheme must be adopted.
Representative training sites must be selected.
Statistics must be extracted from the training site spectral data.
The statistics are analyzed to select the appropriate features (bands) to be used in the classification process.
Select the appropriate classification algorithm.
Classify the imagery into m classes.
Statistically evaluate the classification accuracy.

The Classification Scheme

The categories of interest must be carefully selected and defined to successfully perform digital image classification. It is essential to realize the fundamental difference between information classes (defined by the analyst) and spectral classes (inherent to the sensor). The major point of difference between various classification schemes is their emphasis on, and ability for, converting the spectral classes of remote sensing data into information classes.

Training

The aim of training is to obtain sets of spectral data that can be used to determine decision rules for the classification of each pixel in the whole image data set. The training data for each class must be representative of all data for that class. Each training site consists of many pixels; conventionally, if there are n bands, each training class should contain at least n + 1 pixels. The mean, standard deviation, variance, minimum value, maximum value, variance-covariance matrix, and correlation matrix for the training classes are calculated; these represent the fundamental information on the spectral characteristics of all classes. Since this information alone is not enough for the selection of appropriate bands, feature selection is used. The training sites are presented on a true color map of bands 1, 2, and 3 for 13 classes (Fig. 16).

Feature Selection

Feature selection is the process of discriminating each class of interest and determining the bands in which a particular class is highlighted. Feature selection involves statistical and/or graphical analysis to determine the degree of separability between classes.

Combinations of bands are normally ranked according to their ability to discriminate each class from all others using n bands at a time (Table 1). We can deduce from this table that most of the signatures have combinations of Dn values that allow us to distinguish one from another, depending on the actual standard deviations (not shown). Two classes, urban 1 and cleared (ground), are quite similar in the first four bands but apparently are different enough in bands 5 and 7 to suppose that they are separable. The range of variation in the thermal band 6 is much smaller than in the other bands, suggesting its limitation as an efficient separator. On this basis the training sets are prepared and the image can be classified; there are several types of classification methods.

Fig. 16 Selection of training data from satellite image (training classes: seawater, sediment1, sediment2, baysediment, marsh, wavesurf, sand, urban1, urban2, sunlits1, shadows1, scrublands, grass, fields, trees, cleared)

6.4 Parallelepiped Classification

This classification is based on simple Boolean logic. Training data in n spectral bands are used in performing the classification. Brightness values from each pixel of the multi-spectral imagery are used to produce an n-dimensional mean vector for each class.

Table 1 Band means and sample size for each class training set (band means in bands 1-7, including the thermal band, and pixel counts for the classes: seawater, sediments 1, sediments 2, bay sediment, marsh, waves surf, sand, urban 1, urban 2, sun slope, shade slope, scrublands, grass, fields, trees, cleared)

The mean vector is M_c = (m_c1, m_c2, m_c3, ..., m_cn), with m_ck being the mean value of the training data obtained for class c in band k, out of m possible classes. S_ck is the standard deviation of training class c in band k out of m possible classes. Using a one-standard-deviation threshold, a parallelepiped algorithm decides that BV_ijk is in class c if

m_ck - S_ck <= BV_ijk <= m_ck + S_ck,

where c = 1, 2, 3, ..., m is the number of classes and k = 1, 2, 3, ..., n is the number of bands. The low and high decision boundaries are thus defined as

Low_ck = m_ck - S_ck and High_ck = m_ck + S_ck,

and the parallelepiped algorithm becomes

Low_ck <= BV_ijk <= High_ck.

These decision boundaries form an n-dimensional parallelepiped in feature space. If the pixel value lies above the lower threshold and below the high threshold for all n bands evaluated, it is assigned to that class. When an unknown pixel does not satisfy any of the Boolean logic criteria, it is assigned to an unclassified category.
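A minimal Python/NumPy sketch of the one-standard-deviation parallelepiped rule follows; assigning overlap ties to the first matching class follows the convention described in the text, while the array layout and names are assumptions.

import numpy as np

def parallelepiped_classify(image, means, stds):
    # image: (bands, rows, cols); means, stds: (classes, bands)
    bands, rows, cols = image.shape
    pixels = image.reshape(bands, -1).T            # (pixels, bands)
    out = np.full(pixels.shape[0], -1, dtype=int)  # -1 = unclassified category
    low, high = means - stds, means + stds         # decision boundaries
    for c in range(means.shape[0]):
        inside = np.all((pixels >= low[c]) & (pixels <= high[c]), axis=1)
        # a pixel keeps the first class for which it meets all criteria
        out[(out == -1) & inside] = c
    return out.reshape(rows, cols)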

Increasing the size of the thresholds to classify the unclassified pixels would increase the size of the parallelepipeds and introduce a significant amount of overlap among many parallelepipeds, resulting in classification error; perhaps these pixels belong to a class that has not been trained. The parallelepiped algorithm is a computationally efficient method; unfortunately, because some parallelepipeds overlap, it is possible that an unknown candidate pixel might satisfy the criteria of more than one class, in which case the pixel is assigned to the first class for which it meets all criteria [3]. Parallelepiped classification of multi-band data sometimes results in the loss of thematic classes owing to the similar spectral expression of more than one class.

6.5 The Minimum Distance to Mean Classification

This is a simple and commonly used classification algorithm, with classification accuracies comparable to those of any other classifier. The user has to provide the mean vectors m_ck for each class c in each band k, from the training sets. The distance from each class mean to each unknown pixel (BV_ijk) is then calculated, using either the Euclidean distance based on the Pythagorean theorem or the around-the-block distance measure. The computation of the Euclidean distance from point a (x, y) to the mean of class c measured in bands k and l relies on the equation

Distance = sqrt[(BV_ijk - m_ck)^2 + (BV_ijl - m_cl)^2],

where m_ck and m_cl represent the mean vectors of class c measured in bands k and l. The subscript c is incremented from 1 to m; by calculating the Euclidean distance from point a to the mean of every class, it is possible to determine which distance is shortest. In minimum distance classification, a threshold can be assigned around the class means, beyond which a pixel will not be assigned to a category even if it is nearest to the mean of that category. When more than two bands are evaluated, the logic of computing the distance between two points extends to n-space using the equation

Distance = sqrt[ sum from i = 1 to n of (a_i - b_i)^2 ].

Each unknown pixel is then placed in the class whose mean vector is closest in this band space. For the classified image there were 16 gray levels, each representing a class, to which a color is assigned. This minimum distance classification used all seven TM bands, including the thermal (Fig. 17).
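As an illustration, a compact Python/NumPy version of the minimum-distance rule, including the optional distance threshold mentioned above, might look as follows; the names are illustrative.

import numpy as np

def minimum_distance_classify(pixels, class_means, threshold=None):
    # pixels: (n_pixels, n_bands); class_means: (n_classes, n_bands)
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    labels = d.argmin(axis=1)            # class with the shortest distance
    if threshold is not None:
        labels[d.min(axis=1) > threshold] = -1   # beyond threshold: unclassified
    return labels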

Fig. 17 Supervised classification of satellite image

6.6 The Maximum Likelihood Classification

This classification assigns each pixel having feature vector x to the class c whose units are most probable to have given rise to x. It assumes that the training data statistics for each class in each band are Gaussian (unimodal). Bi-modal or tri-modal histograms in a single band are not ideal for maximum likelihood classification; in such cases the individual modes probably represent individual classes that should be trained upon individually and labeled as separate classes, thus producing unimodal data. Maximum likelihood classification makes use of the mean measurement vector M_c for each class and the covariance matrix V_c of class c for bands k through i. The decision rule assigns x to class c if p_c >= p_i for all i = 1, 2, 3, ..., m possible classes, where

p_c = -0.5 log[det(V_c)] - 0.5 (X - M_c)^T (V_c)^-1 (X - M_c)

and det(V_c) is the determinant of the covariance matrix V_c. To classify the measurement vector X of an unknown pixel, the maximum likelihood decision rule computes the value p_c for each class and assigns the pixel to the class with the maximum value. In this image 16 classes are identified. These classes are identical to the previous ones recorded in the minimum distance image. In both instances, the sediment class has been subdivided into three levels and two urban classes are attempted, to account for visual differences between them (Fig. 18).
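A minimal Python/NumPy sketch of this decision rule follows; it computes p_c from the training means and covariance matrices and assigns each pixel to the class with the largest value. The array shapes are assumptions for the sketch.

import numpy as np

def maximum_likelihood_classify(pixels, means, covs):
    # pixels: (n_pixels, n_bands); means: (n_classes, n_bands);
    # covs: (n_classes, n_bands, n_bands) training covariance matrices
    scores = []
    for Mc, Vc in zip(means, covs):
        _, logdet = np.linalg.slogdet(Vc)
        inv = np.linalg.inv(Vc)
        diff = pixels - Mc
        # Mahalanobis term (X - Mc)^T Vc^-1 (X - Mc) for every pixel
        maha = np.einsum('ij,jk,ik->i', diff, inv, diff)
        scores.append(-0.5 * logdet - 0.5 * maha)   # p_c per pixel
    return np.argmax(np.stack(scores, axis=1), axis=1)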

Fig. 18 Supervised classification images with additional classes in sediments and urban area

7 Monitoring Applications of Environmental Consequences of War

7.1 Application I Hydrocarbon Pollution Consequent to the 1991 Gulf War

A unique environmental catastrophe affected Kuwait during the 1991 Gulf War, severely impacting environmental conditions. The most conspicuous changes were the development of oil lakes and oil polluted surfaces. The oil lakes were formed by oil gushing from the free flowing oil wells, while thick hydrocarbon deposits from the burning oil covered large areas of the desert surface in the form of tarmats and tarcrete. Studies after the 1991 Gulf War estimated around 300 oil lakes covering an area of 36 km2. These interpretations were largely based on remote sensing studies with selective ground checks. The temporal variation in the oil lakes has been studied in detail by Kwarteng and Al-Ajmi [19]; their study shows a decrease in spatial coverage in recent years. Monitoring the near surface existence of hydrocarbon polluted surfaces in the Burgan Oil field area was attempted by Saif ud din et al. [20, 21]. The researchers used the land surface temperature (LST) as an indicator for hydrocarbon pollution.

The LST and total petroleum hydrocarbon (TPH) are positively correlated. The methodology followed is based on the fact that most satellites carry a thermal infra-red band which can be used for LST estimation. The spatio-temporal variation in the thermodynamic properties of surface material has been mapped in order to identify hydrocarbon polluted surfaces using Landsat TM data. Emissivity is a strong indicator of compositional variation in silicate minerals, which make up the bulk of the earth's surface material. Emissivity affects the apparent temperature owing to changes in the thermal properties of materials (conductivity, density, capacity, and inertia).

Several algorithms have been proposed to estimate LST from remotely sensed data. The most common of these are the mono-window and split window methods [22-25]; the latter was used initially to estimate sea surface temperature. Land surface temperature measurement is a complicated task because of the high spatio-temporal variation of surface emissivities and atmospheric water vapor, as both affect the thermal radiance reaching the sensor [26]. Landsat TM band 6 data, with a wavelength range of 10.4-12.5 μm, can be utilized to estimate land surface temperature. The Landsat TM has a single thermal band, therefore the split window method [27] and the temperature emissivity method [28] could not be applied. For measurement of LST from Landsat TM data, the mono-window algorithm, the single channel algorithm, and radiative transfer equations can be used [23, 25, 29, 30]. The algorithm proposed by Richter [23] is simple and accurate.

The logic of LST retrieval from satellite data rests on the ground emissivity being known; LST can then be calculated by accounting for atmospheric correction. Computation of the atmospheric correction is a complex process. To simplify it, the upward atmospheric thermal radiance and the reflected atmospheric radiance are subtracted from the radiance observed at the satellite to compute the brightness temperature at ground level. The standard atmospheric profiles provided in the ATCOR program of PCI, which is based on LOWTRAN, can be used to obtain the atmospheric radiance for surface temperature estimation using band 6 data from Landsat TM. The mono-window algorithm proposed by Qin et al. [25] is based on the thermal radiance transfer equation and utilizes transmittance and mean atmospheric temperature to estimate LST. The estimation relies on the fact that the brightness temperature at the satellite can be computed by deriving radiance from the Dn value and converting radiance into brightness temperature. The radiance calculation from the Dn of TM data utilizes an equation developed by Markham and Barker [31]:

L(l) = L_min(l) + [(L_max(l) - L_min(l)) / Q_max] x Q_Dn,

where L(l) is the spectral radiance received at the sensor; Q_max is the maximum Dn value, i.e., 255; Q_Dn is the Dn value of the selected pixel, i.e., anywhere between 0 and 255; and L_min(l) and L_max(l) are the minimum and maximum spectral radiances, for Q_Dn = 0 and Q_max = 255, respectively.

The ATCOR program of PCI uses the constants defined by Schneider and Mauser [32] for the effective wavelength of Landsat TM band 6, with L_min(l) = 0.1238 mW cm^-2 sr^-1 μm^-1 for Dn value 0 and L_max(l) = 1.56 mW cm^-2 sr^-1 μm^-1 for Dn value 255. Substituting these values in the equation above gives

L(l) = 0.1238 + 0.005632 x Q_Dn.

The brightness temperature correction for true LST is based on the radiative transfer equation [33]. The concept accounts for thermal emittance from an object in accordance with blackbody theory, which states:

B(l)(T) = C1 / {l^5 [e^(C2 / l T) - 1]},

where B(l)(T) is the spectral radiance of a blackbody measured in W m^-2 sr^-1 μm^-1; l is the wavelength; C1 and C2 are the spectral radiation constants (approximately C1 = 1.191 x 10^-16 W m^2 sr^-1 and C2 = 1.4388 x 10^4 μm K for spectral radiance); and T is the temperature in K. This concept of a blackbody is theoretical; in real-life situations materials do not behave as blackbodies because of their emissivity:

Emissivity = F_r / F_b,

where F_r is the radiant flux exiting a real-world body and F_b is the radiant flux exiting a blackbody. Since the materials in the study area do not behave as blackbodies, emissivities have to be considered, which account for emittance modified by atmospheric absorption and reflection. The LST estimates take into account the upward and downward atmospheric radiances; the former is greater than the latter, and their difference under a clear sky is within 5 °C [25]. In the present study, the emissivity for the sand in a desert environment is assumed to be 0.76, that for the carbonate lithologies in the study area 0.92, and a separate value is assigned to the heavy oil surfaces. The band-6 radiance is approximated as B6(T) ~ B6(T_a), where T_a is the effective mean atmospheric temperature. To get the LST, T_s, Planck's radiance function is linearized [25]:

L6 = a + b T6,

where a and b are coefficients valid for a temperature range of 0-70 °C (a = -67.355351 and b = 0.458606 in Qin et al. [25]). The ATCOR program used for estimation of the LST assumes two surface temperatures T_s1 and T_s2 with their corresponding blackbody temperatures T_b1 and T_b2.
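For illustration, the chain from Dn to at-satellite brightness temperature can be sketched as below. The calibration constants are those quoted above, while the effective band-6 wavelength (11.457 μm) and the radiance radiation constants are assumed values for the sketch; no atmospheric or emissivity correction is applied here.

import numpy as np

LMIN, LMAX = 0.1238, 1.56   # mW cm^-2 sr^-1 um^-1 at Dn 0 and 255 (from the text)
C1 = 1.191e-16              # W m^2 sr^-1, first radiation constant for radiance
C2 = 1.4388e-2              # m K, second radiation constant
LAMBDA = 11.457e-6          # m; assumed effective TM band-6 wavelength

def tm6_brightness_temperature(dn):
    # Dn -> spectral radiance (Markham and Barker), then invert Planck's law
    rad = LMIN + (LMAX - LMIN) * np.asarray(dn, dtype=float) / 255.0
    L = rad * 1.0e7         # mW cm^-2 sr^-1 um^-1 -> W m^-2 sr^-1 m^-1
    return C2 / (LAMBDA * np.log(C1 / (LAMBDA ** 5 * L) + 1.0))

For a mid-range Dn of 128 this returns roughly 293 K, consistent with typical at-satellite temperatures for the region.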

The atmospheric correction and elevation functions are included in ATCOR, which provides a very good approximation; the temperature images are derived from ATCOR T1. The digital image processing of Landsat TM band 6 for land surface temperature measurement carried out on post-1991 Gulf War images shows a remarkable pattern coincident with the surface hydrocarbon disposition in the area, expressed as variable LST within the image. The area shows an increase in LST as compared to the pre-war image of 1989. Intra-image LST variations depict oil polluted surfaces, since the emissivity and thermodynamic behavior of oil differ from those of soil or water. The 1989 image shows LST differences due to the different emissive properties of the lithology and sand. In the post Gulf War images the area exhibits LST variations in the sand cover, due to the hydrocarbon polluted surfaces formed during the 1991 Gulf War. The persistent spatial intra-image LST pattern correlates with the oil lakes, tarmats, and soot fallout. Selective ground truth has been collected by the Kuwait Institute for Scientific Research [34-38] (Figs. 19-28). The intra-image LST variations were checked against the collated ground truth data to verify the temperature anomalies in the images (Fig. 29), and they correlate very well with the hydrocarbon polluted sites in the study area.

The algorithm applied to study intra-image spatial variations in LST shows the temperature variations in 1991, 1995, 1998, and 2000 in the Burgan Oil Field of Kuwait. These figures show a similar LST pattern in successive years even though the images were obtained at different times of the year. The LST pattern correlates with the hydrocarbon polluted surfaces in the study area. A sketch map is included to show the different pollution zones, and the locations of some field photographs are marked on the LST map. The temporal changes have resulted in a reduction of the spatial extent and pollution intensity of these zones. The reduction in intensity is primarily due to weathering of the hydrocarbon, leaching during rains, and downward migration in summers, when the surface temperature is extremely high (exceeding 50 °C).

Intra-scene LST values are directly proportional to pollution intensity; the higher temperatures in the LST maps indicate relatively higher pollution. It is clear from Figs. 19-28 that the intensity of pollution and its spatial extent have reduced in successive years. This decrease is probably due to several factors: increase in depth of the pollutants; bioremediation; sludge recovery; leaching; and chemical alteration. Some of the oil lakes and tarcrete deposits are covered by a thin veneer of sand, which has obscured them. The depth of burial has increased over the years; however, the thickness of the sand sheet has not increased in recent years in the study area [39]. High temperatures in the summer months impart mobility to the tar and soot deposits at the surface, and winter rains leach out hydrocarbon fractions, which percolate down with the water [40]. Consequently, the depth of pollution observed at several places varies from a surface veneer to a maximum of about 4 m.

Fig. 19 Landsat image over Burgan Oil field (1989)

The analyses of the pre- and post-1991 Gulf War images of the study area show a very elegant correlation of LST with the spatial locations of oil lakes, tarcrete, tarmat, and soot in the Burgan oil field. Higher intra-image LST corresponds to higher total petroleum hydrocarbon (TPH) and/or near surface burial.

Fig. 20 LST image over Burgan Oil field (1989)

Very high values are also observed over the flaring chimneys of the oil separators, while lower LST correlates well with lower TPH and/or deep burial, and with water pits. The TPH concentration in the uncontaminated soil varies from 7 to 20 μg/g, whereas the oil lakes show a TPH concentration of around 25,000 μg/g. These oil lakes correspond to the high temperature zones in the LST maps.

Fig. 21 Landsat image over Burgan Oil field (1991)

The tarmats are solidified thick sludge deposits with TPH concentrations of 12,000-19,000 μg/g; they correspond to slightly lower temperatures than the oil lakes. There is a good correlation between the hydrocarbon concentrations and LST in all the images.

Fig. 22 LST image over Burgan Oil field (1991)

The LST accuracy is within ±0.9 to 1.1 °C. In this study, however, the accuracy of the absolute LST estimate is not a primary concern, since the relative LST patterns are sufficient to identify the different hydrocarbon polluted surfaces; actual LST measurement and calibration is part of ongoing research. The methodology developed can be adopted for detecting oil polluted surfaces on land and as spills in the oceans and near ports [20].

Fig. 23 Landsat image over Burgan Oil field (1995)

Fig. 24 LST image over Burgan Oil field (1995)

The technology can be used for the quick monitoring of spatially large areas, using similar algorithms for other satellite data where the temporal coverage is rapid. An important application can be near real time monitoring of oil spills. Darkness does not affect the LST observations, so waste dumping during night-time can be easily identified for pollution control and environmental management (Fig. 30).

Fig. 25 Landsat image over Burgan Oil field (1998)

Fig. 26 LST image over Burgan Oil field (1998)

7.2 Application II Precipitation Estimate Using Tropical Rainfall Measuring Mission

In the post-war scenario the precipitation patterns changed. Tropical Rainfall Measuring Mission data have been calibrated over Kuwait to provide continuous spatial and temporal precipitation coverage over the country.

Fig. 27 Landsat image over Burgan Oil field (2000)

Fig. 28 LST image over Burgan Oil field (2000)

Precipitation retrievals from remote sensing satellite data are an acceptable alternative to the use of rain gauges for global and regional climatic/hydrological models. The Tropical Rainfall Measuring Mission (TRMM) is a low altitude satellite equipped with an active precipitation radar (PR) along with a multi-channel passive TRMM microwave imager (TMI), a visible and infrared scanner (VIRS), an earth radiant energy flux system, and a lightning imaging sensor. TRMM provides better spatio-temporal precipitation coverage on global and regional scales, as it provides a wide variety of data.

Fig. 29 Spatial distribution of soot, tarmats, tarcrete and oil lakes in southern Kuwait

The TRMM precipitation retrievals are based on the emission of microwave radiation from raindrops. The PR is the first space-based radar to measure precipitation [41-43]. It operates at a frequency of 13.8 GHz, with a 2.17 cm wavelength, and is capable of detecting echoes of >17 dBZ. However, due to the strong reflection of the earth's surface, the PR measures precipitation rates close to the ground surface, which affects the reflectivity profile above the surface [43].

In developing countries, and particularly in arid countries, there is a paucity of the continuous rain gauge precipitation measurements from which time series analyses can be carried out. An attempt is being made to redress this balance in Kuwait: a technique to resample the monthly TRMM precipitation rate on the 0.25 degree grid is described for calculating precipitation at any spatial location with limited or no ground control [44].

Fig. 30 Correlation between surface temperature and hydrocarbon pollution (TPH concentration in microgram/gram against temperature ranges in degrees Celsius)

The technique has been calibrated by testing the modeled data against actual rain gauge measurements at eight different locations in Kuwait for which sufficient ground measurements are available over a 96-month period, between 1st January 1998 and 31st December 2005. The Kuwait Institute for Scientific Research operates eight weather stations for gathering baseline meteorological data over Kuwait. The stations are equipped with MetOne 370 rain gauges that operate on a tipping bucket principle, allowing accurate, repeatable measurement with minimum maintenance. The data are recorded every 10 minutes and stored in attached data loggers.

TRMM dataset 3B43 V6 is used to obtain the satellite precipitation estimates. The 3B43 is executed once every calendar month and gives a single best estimate of the precipitation rate and the root mean square precipitation error estimate, by combining the three-hourly integrated high quality IR estimate (3B42) with the monthly accumulated climate assessment monitoring system (CAMS) or global precipitation climatology center (GPCC) rain gauge analyses (3S45). This dataset is corrected on both global and regional scales.

In this application, an attempt has been made to validate weighted bi-linear interpolation for precipitation retrievals over a point of interest. The source dataset is the TRMM 3B43 V6 data on a 0.25 degree grid, which is somewhat coarse for local interpretation. Instead of using the grid value for a particular location, or averaging the adjacent pixel values on the 0.25 degree grid, we found the bi-linear weighted interpolation presented here to be more realistic. The TRMM 3B43 dataset was used for bi-linear weighted interpolation to resample the data for a specific location [45], which in the present study is the spatial location of the rain gauge. The logic adopted is to average the four pixels adjacent to the location of interest [46]. The location (X, Y) is kept in a 2 x 2 grid in such a manner that it occupies a central position in the grid; an illustration is included (Fig. 31).

Fig. 31 Grid considered for estimation of point values

In the bi-linear interpolation a simple rationale is followed: to calculate the value at a particular position (X, Y), the four adjacent pixel values are used [21]. The closer a pixel is to the position (X, Y), the more influence (weight) it carries. The method is not merely a falling function of distance from the pixel; rather, it follows a weighted approach based on the spatial locations in two-dimensional space [47]. The bi-linear interpolation weights can be derived as follows [48, 49]:

X = (1 - S_x) X1 + S_x X2, with S_x = (X - X1) / (X2 - X1),

Y = (1 - S_y) Y1 + S_y Y2, with S_y = (Y - Y1) / (Y2 - Y1),

where 0 <= S_x, S_y <= 1 for X1 <= X <= X2 and Y1 <= Y <= Y2. On this basis, the interpolated value at any point (X, Y) in two-dimensional space can be computed [49] as

I(X, Y) = (1 - S_x)(1 - S_y) I(X1, Y1) + S_x (1 - S_y) I(X2, Y1) + (1 - S_x) S_y I(X1, Y2) + S_x S_y I(X2, Y2),

where I is the actual pixel value. The comparisons of the satellite precipitation following the bi-linear weighted interpolation with the gauge measurements are presented below for eight different spatial locations. The measured data from the rain gauges were not continuous, and at certain times were not recorded owing to instrument malfunction and/or calibration errors; the TRMM data, however, are continuous (Figs. 32-39). The bi-linear weighted interpolation was applied to the entire 96-month TRMM time series to obtain calculated data over each station even when no gauge measurement was made. It is obvious from the plots that the measured and computed precipitation data are very highly correlated, which suggests that the methodology can be a reliable alternative for assessing precipitation in areas where rain gauge measurements are unavailable. Normally the 3B43 data set is corrected using ground truth measurements, but the validation sites are spaced far apart, with some even located on ocean buoys, and so may not account for precipitation variation on a local scale due to the urban heat island effect, the variable emissivity of surface materials, and surface topography. Rain gauge measurements at all eight locations were compared to: (1) the pixel value for the grid cell within which the rain gauge was sited, (2) an average 2 x 2 grid value around the particular rain gauge location, and (3) a bi-linearly weighted interpolation estimate around the rain gauge of interest.
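A minimal Python sketch of this weighting follows; the corner coordinates and rainfall values in the usage lines are hypothetical, chosen only to show a 0.25 degree cell around a gauge.

def bilinear_weighted(x, y, corners):
    # corners maps the four cell nodes (x1, y1), (x2, y1), (x1, y2), (x2, y2)
    # to their pixel values, with x1 < x2 and y1 < y2
    (x1, y1), (x2, y2) = min(corners), max(corners)
    sx = (x - x1) / (x2 - x1)   # weight grows toward the nearer node
    sy = (y - y1) / (y2 - y1)
    return ((1 - sx) * (1 - sy) * corners[(x1, y1)]
            + sx * (1 - sy) * corners[(x2, y1)]
            + (1 - sx) * sy * corners[(x1, y2)]
            + sx * sy * corners[(x2, y2)])

# Hypothetical 0.25 degree cell around a gauge (values in mm/month)
cell = {(47.00, 29.00): 12.1, (47.25, 29.00): 10.8,
        (47.00, 29.25): 11.5, (47.25, 29.25): 9.9}
print(bilinear_weighted(47.11, 29.07, cell))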

Fig. 32 Correlation between precipitation estimates obtained from Tropical rainfall measuring mission satellite and rain gauge at Taweel

Fig. 33 Correlation between precipitation estimates obtained from Tropical rainfall measuring mission satellite and rain gauge at Salmi

Fig. 34 Correlation between precipitation estimates obtained from Tropical rainfall measuring mission satellite and rain gauge at KISR

It was found that the rain gauge measurements at all eight locations compared extremely well with method (3) above. This suggests that more accurate estimates of precipitation at any location are obtained by the use of the bi-linear weighted interpolation methodology. This study therefore validates the use of a bi-linear weighted interpolation method on the TRMM 3B43 V6 dataset.

Fig. 35 Correlation between precipitation estimates obtained from Tropical rainfall measuring mission satellite and rain gauge at Azor

Fig. 36 Correlation between precipitation estimates obtained from Tropical rainfall measuring mission satellite and rain gauge at Mutla

Fig. 37 Correlation between precipitation estimates obtained from Tropical rainfall measuring mission satellite and rain gauge at Subbiya

Fig. 38 Correlation between precipitation estimates obtained from Tropical rainfall measuring mission satellite and rain gauge at Haiman

The computed precipitation rates are slightly higher than the measured values, albeit within +2% of them. This is extremely good considering that rain gauges are believed to underestimate precipitation by on the order of 10-20%, owing to wind speed, precipitation rates, and sometimes the apparatus itself [50-54].

Fig. 39 Correlation between precipitation estimates obtained from Tropical rainfall measuring mission satellite and rain gauge at Wafra

Fig. 40 An oil logged farm

7.3 Application III Feature Extraction Technique for Palm Tree Census

Vegetation monitoring after the 1991 Gulf War is of prime importance, as many date palm trees and farms were extensively damaged (Fig. 40). Remote sensing technology is being used to automatically classify the vegetation type for an agricultural census. The Kuwait Institute for Scientific Research took a step in this direction by developing a methodology for the automatic mapping of urban treed areas, which is now being directed towards the mapping of date palm trees.

Palm trees are very common in Middle Eastern countries and are of significant environmental and commercial importance [55]. In recent decades the Middle Eastern region has witnessed extensive planting of date palm trees, both in urban and agricultural areas; millions of trees are estimated to have been planted in these arid deserts. Such extensive plantation in urban areas rarely gives a clue that these are arid and hyper-arid countries. The most common among these species are date palm trees, which are planted along roads, in front of houses, in parks, and in organized plantations in agricultural areas. However, there is limited knowledge of actual tree counts and their exact spatial locations, which is a requirement for any agricultural census.

Remote sensing data have been used for the identification of urban treed areas, but with limited classification accuracies. These lower accuracies are attributed to the variety of spectral and textural properties involved [56]. Medium resolution satellites including LANDSAT, SPOT, and ASTER have been used in urban treed classification, but their spatial resolution permits only larger patches of treed areas to be classified [56-58]. With the advancement in satellite technology and the availability of high spatial resolution images, it is now feasible to achieve higher classification accuracies in urban and agricultural areas. The present study is an attempt to map the date palm trees in urban and agricultural areas of Kuwait. An accuracy assessment is also made to compare results from the maximum likelihood and the Laplacian blob classifications of date palm trees within the test areas.

Similar studies for classifying and quantifying olive trees were taken up by European Union countries. The European Economic Community (EEC) realized the need to quantify olive plantations and launched the OLISTAT project in September 1997 to estimate the number of olive trees in France, Italy, Spain, Portugal, and Greece [59]. The counting of trees is a classic example of remote sensing applications in forestry. However, crown counting of trees is not an easy or straightforward task, as there are limitations of satellite data resolution as well as problems related to the subjective nature of interpretation. Howard [60] indicated that the capacity to distinguish different objects is governed by the size of the object relative to the pixel. Multi-spectral classification methods have provided reasonably good results, but there is still room for improvement in classification accuracy if textural parameters are taken into account. It was believed that with the availability of higher resolution satellite data classification accuracies would improve, but ~100% accuracies are yet to be seen. Since higher spatial resolutions increase the intra-class variability in an image, Marceau et al. [61] suggested that the optimal spatial resolution for the classification of temperate forest should be 10 m, as resolutions finer than 10 m increased the intra-class variability and decreased the overall classification accuracy. The inclusion of textural parameters improved the accuracy of urban and forest area image classification on high resolution images [56, 62]. Previous investigations have attempted the extraction of tree textures from high-resolution satellite data using neural networks, co-occurrence matrices, semi-variograms, threshold based spatial clustering, local variance, and local maximum filtering

[63-68]. Some of these approaches worked well, but the success rate was limited in urban areas. Urban area classification is important in this region, since there is extensive date palm plantation in urban areas throughout the Middle East. As these trees are planted along roads, inside and outside private properties, and on roadside pavements, gathering information on them for agricultural census purposes is a cumbersome task, and there is no systematic database documenting their spatial information. Researchers have therefore started to develop algorithms and application software to count trees [69-71]. In this communication, selective filtering and the Laplacian blob are used for classifying date palm trees in both urban and agricultural areas.

Two Quickbird scenes of April 2005 were selected over the study area. One of the scenes is over an urban area in central Kuwait, while the other is from an agricultural area in northern Kuwait. The Quickbird data set used is pan-fused, with a spatial resolution of 0.6 m (Fig. 41). There are two steps involved in palm tree counting: the first is selective smoothing by non-linear diffusion, which results in a sharp contrast among a number of different features, followed by Laplacian filtering.

Fig. 41 Location of the two selected scenes

The first step involves the application of a non-linear parabolic equation [72] to the two selected Quickbird scenes. This equation allows selective enhancement and smoothing while simultaneously preventing the blurring of edges. The processing is quite effective for image classification in urban areas. The equation is stated as

dI(x, y, t)/dt = g(|grad(G * I)|) |grad I| div(grad I / |grad I|),

where the divergence term div(grad I / |grad I|) diffuses the image I(x, y) in the direction orthogonal to its gradient, grad I, and not in all directions, and g(|grad(G * I)|) is used for edge enhancement. This anisotropic filtering is basically the statistical interpretation of anisotropic diffusion, but it takes slightly more time in processing and implementation (Black and Sapiro 1998).

The second step is Laplacian filtering, which can be regarded as an irreducible differential invariant. It is expressed mathematically by the equation

Laplacian(I) = I_xx + I_yy,

where for a gray scale image the first-order derivative is defined as grad I = (I_x, I_y) and the second-order derivative by the Hessian matrix H(I) = [[I_xx, I_xy], [I_yx, I_yy]]. When the gradient vanishes (I_x^2 + I_y^2 = 0), the point is referred to as an elliptic point, due to its appearance. The sign of the Laplacian indicates the maxima and minima: a dark blob indicates a minimum, with I_xx + I_yy > 0 (a), and a bright blob indicates a maximum, with I_xx + I_yy < 0 (b) (Fig. 42).

The proposed methodology is morphometric, and thus spatial resolution is critically important in palm tree mapping. The use of 0.6-m spatial resolution data to map palm trees with 3-4 m crown sizes and 3-8 m inter-tree spacing was sufficient. A similar methodology has been successfully used in the European Union countries for the mapping of olive trees. This methodology was applied to classify date palm trees in arid urban and agricultural areas using the two Quickbird scenes. The first scene, with different species of trees and bushes of similar spectral characteristics, together with streets and cars, and the second scene, with greenhouses, buildings, and trees, were classified using the maximum likelihood classification.
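A rough Python/SciPy sketch of the blob-detection step is given below; a Gaussian-smoothed Laplacian stands in for the full selective-diffusion pre-filter, and sigma and the strength threshold are illustrative tuning parameters, not values from the study.

import numpy as np
from scipy import ndimage

def palm_tree_blobs(gray, sigma=3.0, strength=0.05):
    # Laplacian-of-Gaussian response over the (pre-smoothed) gray image
    log = ndimage.gaussian_laplace(gray.astype(float), sigma)
    # Bright blobs satisfy Ixx + Iyy < 0, i.e. strong negative LoG minima
    size = int(4 * sigma) | 1                      # odd neighbourhood size
    local_min = log == ndimage.minimum_filter(log, size=size)
    strong = log < -strength * np.abs(log).max()
    return np.argwhere(local_min & strong)         # (row, col) blob centers

The returned blob centers would then coincide with the tree crowns, with sigma chosen to roughly match the crown radius in pixels.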

Fig. 42 Laplacian blob (a) minima and (b) maxima

The results of this classification are not perfect. A simple accuracy assessment was carried out using the random point method in an area of 1,000 x 1,000 pixels for the images. One hundred randomly selected points were assessed in each image to ascertain whether each pixel was correctly assigned to its class or misassigned to another class, in the confusion matrix given in Table 2. The errors are stated as commission and omission errors: a commission error results from the incorrect identification of a pixel, while an omission error occurs when we simply fail to recognize a pixel that should have been identified as belonging to a particular class. The Quickbird true color image was used for visual reference.

The accuracy assessment of the two classifications shows that the proposed methodology leads to a significant increase in classification accuracy. The accuracy achieved for the maximum likelihood classification is 67 and 79% in urban and peri-urban areas, respectively. The textural characteristics of the trees play a significant role in identification. The small area of a tree and the similar spectral signatures of grass, playgrounds, and lawns make it imperative to integrate textural parameters into classification schemes. In order to achieve this, a selective smoothing procedure was adopted on the high resolution imagery to isolate and characterize individual trees. In the Laplacian maxima filtering this pre-processing step is crucial, since it requires a processing system able to encapsulate the image content.

Table 2 Accuracy of the image classifications

Multi-spectral classification:
Urban area, Palm: commission error 33%, mapping accuracy 51.12%; Non-palm: commission error 32%, mapping accuracy 50.75%
Peri-urban area, Palm: commission error 24%, mapping accuracy 66.12%; Non-palm: commission error 18%, mapping accuracy 64.40%

Laplacian maxima filtering:
Urban area, Palm: commission error 5%, mapping accuracy 92.38%; Non-palm: commission error 3%, mapping accuracy 92.23%
Peri-urban area, Palm: commission error 3%, mapping accuracy 96.12%; Non-palm: commission error 1%, mapping accuracy 96.04%

Feature extraction results are directly related to the performance of the initial enhancement and smoothing stage. The methodology enabled accurate classification of date palm trees in urban and agricultural areas: an accuracy of 96% was achieved for the urban area and 98% for the agricultural area. The advantages of this methodology are clearly the higher accuracies for similar scenes and the simple processing. The problems related to shadowing and irregular stands are solved to a large extent using this approach. The image processing was carried out using PCI Geomatica software; however, it is possible to use any other image processing software (Figs. 43-48).

It was demonstrated that the proposed methodology surpasses the conventional maximum likelihood classification in terms of palm tree identification in the study area. The methodology is precise in palm tree classification in both the urban and agricultural areas. The classification in this instance is simple and straightforward, and the subjective nature of data interpretation, which relies to a great degree on the interpreter's perception, is greatly minimized. These high classification accuracies are also due to the absence of tree undergrowth and the uniformity of palm tree crown sizes. The analyses show that the palm trees are highlighted as blobs and the crown patterns are very clearly segregated from other vegetation types in the area. The Laplacian blob maxima coincide with the centers of the trees. The success of the blob detection technique in the spatial mapping of palm trees in urban areas of Kuwait shows the potential of the methodology for use in other Middle Eastern countries with similar climatic and vegetation patterns.

Fig. 43 Quickbird image, Kuwait city

Fig. 44 Maximum likelihood classified image, Kuwait city

Fig. 45 Date palm tree blobs using Laplacian maxima (inverted LUT), Kuwait city

7.4 Application IV Subsidence in Oil Fields

Consequent to the 1991 Gulf War there was extensive damage to the oil facilities. After the war, large-scale development was carried out, leading to further expansion of exploration and production facilities. Remote sensing techniques have been successively employed in oil field areas to map micro-elevation changes.

Fig. 46 Quickbird image showing farm in north Kuwait

Fig. 47 Maximum likelihood classified image, showing farm in north Kuwait

Radar interferometry has been used successfully for elevation mapping; the interferometric techniques give very precise measurements using highly correlated radar images [39]. Surface sagging and subsidence in oil fields have been reported from all over the world [73-80]. The subsidence is usually gradual and often so small that it remains undetected in a conventional ground survey. Surface subsidence in oil fields has caused enormous financial losses due to the disruption of production and the loss of infrastructure.

Fig. 48 Date palm tree blobs using Laplacian maxima (inverted LUT), showing farm in north Kuwait

Studies have been carried out to evaluate the impact of subsidence in oil fields in California, the Netherlands, Ekofisk (Norway), and Brent (United Kingdom) [81]; Shell-operated production in these areas has been badly affected by surface subsidence. The InSAR technique is used in Kuwait to map the elevation changes within the oil fields. Subsidence is typically explained by tectonic activity, but in areas like the Burgan Oil field, Kuwait, where there is little evidence of tectonic activity in the historical past, it is believed to be caused by mass balance adjustments. The mass balance can lead to sagging of the surface and slow subsidence as hydrocarbon reserves are removed from the host formation. The host formation undergoes compaction due to the overburden and the loss of pressure from within the formation. This can also reactivate sub-surface geological structures, which may compromise the integrity of the reservoir seal, resulting in the natural migration of hydrocarbons to other formations. Furthermore, the compaction consequent to subsidence may reduce porosity, which can lower the production levels in a formation.

Since the launch of the ERS-1 satellite by the European Space Agency (ESA) in 1991, the topic of Synthetic Aperture Radar (SAR) interferometric processing of signals has gained attention in the remote sensing community all over the world. The interferometric technique using SAR data is a very precise means of measuring the deformation of the earth's surface, with sub-centimeter accuracy; it is reported to be accurate even at the millimeter level [73, 80, 82]. Radar interferometry is an alternative to the conventional stereoscopic method for extracting topographic and deformation information. Synthetic aperture radar is an imaging radar device which images the radar backscatter of the earth's surface over large areas with a spatial resolution of 10-20 m.

InSAR uses the change in phase to map the relative position of a point in space. In radar interferometry the change in phase is measured between the backscatter in two different SAR images of the same area, taken from slightly different positions or at different times. The second set of SAR data is taken with the same perspective and is co-registered and combined into an interferogram; the pixel-wise phase difference indicates the change in the relative position of each pixel. The basic idea of radar interferometry rests on the interference of electromagnetic waves. The interferometric technique relies on the processing of two SAR images of the same area acquired from slightly displaced passes at different times, or from two antennas on the same platform at the same time; in the present communication, images from slightly displaced passes at different times are used. SAR is an active remote sensing system in which the sensor acts as both transmitter and receiver antenna during the image acquisitions.

Six SAR scenes from ERS-1 and ERS-2 were analyzed to map the elevation changes. One scene was not helpful, since its pair had a perpendicular baseline of 1,504 m, which leads to loss of coherence; the other five scenes were used. The 1996 scenes are a tandem pair and were the most useful for extracting the topographic phase information, since the deformation, atmospheric, and noise phases are eliminated owing to their insignificant contribution to the phase information. The topographic information from the SAR data showed very good correlation with the DEM data derived from the ground information. The phase information used in interferometry gives millimeter-level vertical precision [83]; because of such fine vertical resolution, it has been utilized in mapping subsidence, tectonic deformation before, during, and after earthquakes, and the inflation and deflation of volcanoes due to magma movements [73]. It utilizes the correlation of small-scale roughness to provide a coherent scattered signal; because of this high sensitivity, care is taken that the signals are not decorrelated by long time intervals between passes, vegetation growth, or anthropogenic activity. In interferometry the relative positional accuracy of each pixel is very high. The coherence values of the three image pairs in the Burgan oil field area are high; high coherence indicates low phase noise and allows very precise estimates, while at lower coherence values the phase noise is pronounced and precise deformation estimation is less likely. Figure 49 shows the three image pairs with their coherence values and subsidence information; all three show good coherence in the oil field area. The coherence values are low in the western part of the scene, probably due to the presence of a sand sheet.

In the present study repeat-pass radar interferometry was used, and the interferogram is formed from two image signals, I1 and I2, which can be expressed mathematically as

I = I1 I2* = A1 e^(i phi1) A2 e^(-i phi2) = A1 A2 e^(i(phi1 - phi2)) = A e^(i Phi),

where * denotes the complex conjugate, A = A1 A2 is the interferogram amplitude, and Phi = phi1 - phi2 is the interferogram phase.

where the asterisk denotes the complex conjugate, $A = A_1 A_2$ is the interferogram amplitude, and $\Phi = \varphi_1 - \varphi_2$ is the interferogram phase. For identical imaging geometry and an unchanged surface $\Phi = 0$, but this is not observed in practice.

Fig. 49 Correlation (a) and subsidence (b) image pairs from the Burgan Oil field for the three scene combinations
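To make the pixel-wise operations concrete, the following minimal Python/NumPy sketch (not from the chapter; the array names, the 5 x 5 averaging window, and the use of SciPy's uniform filter are illustrative assumptions) forms the complex interferogram $I = I_1 I_2^{*}$ from two co-registered single-look complex images and estimates the sample coherence used above to judge phase quality:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def interferogram(i1: np.ndarray, i2: np.ndarray) -> np.ndarray:
    """Pixel-wise interferogram I = I1 * conj(I2) of two co-registered
    complex SAR images; np.angle(I) gives the wrapped phase difference."""
    return i1 * np.conj(i2)

def coherence(i1: np.ndarray, i2: np.ndarray, win: int = 5) -> np.ndarray:
    """Sample coherence |<I1 I2*>| / sqrt(<|I1|^2><|I2|^2>), with <.>
    estimated over a win x win moving window. Values near 1 indicate low
    phase noise; low values (e.g., over a shifting sand sheet) indicate
    decorrelation."""
    ifg = i1 * np.conj(i2)
    num = uniform_filter(ifg.real, win) + 1j * uniform_filter(ifg.imag, win)
    den = np.sqrt(uniform_filter(np.abs(i1) ** 2, win) *
                  uniform_filter(np.abs(i2) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)
```

In practice a coherence threshold would mask out the unreliable, low-coherence parts of the scene before any deformation is interpreted.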

The phase difference is thus the sum of contributions from different processes [84, 85], expressed as:

$$ \Phi = \Phi_{\mathrm{topo}} + \Phi_{\mathrm{def}} + \Phi_{\mathrm{atm}} + \Phi_{n} + \Phi_{\mathrm{dis}}, $$

where $\Phi_{\mathrm{topo}}$ is the phase contribution due to viewing the topography from different perspectives, $\Phi_{\mathrm{def}}$ the contribution due to movement along the line of sight, $\Phi_{\mathrm{dis}}$ the contribution due to displacement, $\Phi_{\mathrm{atm}}$ the contribution due to atmospheric effects, and $\Phi_{n}$ the phase noise. In order to determine the surface movement, the assumption is made that $\Phi_{\mathrm{orbit}}$, $\Phi_{\mathrm{atm}}$, $\Phi_{\mathrm{def}}$, and $\Phi_{n}$ are negligible, so that $\Phi_{\mathrm{topo}}$ can be separated from $\Phi_{\mathrm{dis}}$. The topographic phase is removed to yield the displacement phase, the interferometric phase component resulting from surface displacement:

$$ \Phi_{\mathrm{dis}} = \Phi_{\mathrm{unwrapped}} - \Phi_{\mathrm{topo}}. $$

The coherence values across the study area vary, with the Burgan Oil field showing high coherence, which is good for subsidence measurements. The tandem pair (12 and 13 May 1996) shows almost no subsidence in the highly correlated area (Burgan Oil field). The 12 May 1996 scene was also combined with the 29 May data, and again no detectable subsidence was observed. In the third case, the scenes of 12 May 1996 and 10 March 1999 were combined and yield a maximum displacement of 27 mm in the Burgan oil field area. The ASTER mosaic of the study area with the corresponding subsidence rates gives a general overview of where subsidence exists (Fig. 50).

Fig. 50 ASTER image of study area (left) and corresponding subsidence map (right)

The subsidence rates in areas with low correlation are high, which may be attributed to sand movement, anthropogenic activity, or simply to errors from atmospheric artifacts or baseline approximations. But in the Burgan oil field area, where the correlation value is near unity, the displacement values appear very reliable.
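To illustrate how a subsidence map such as Fig. 50 follows from the displacement phase, the sketch below (an illustrative assumption rather than the authors' processing chain; the ERS-1/2 C-band wavelength of roughly 5.66 cm is assumed) scales the unwrapped, topography-corrected phase to line-of-sight displacement using the repeat-pass relation $d = \lambda \Phi_{\mathrm{dis}} / (4\pi)$:

```python
import numpy as np

ERS_WAVELENGTH_M = 0.0566  # assumed ERS-1/2 C-band wavelength (~5.66 cm)

def los_displacement_mm(phi_unwrapped: np.ndarray,
                        phi_topo: np.ndarray) -> np.ndarray:
    """Subtract the simulated topographic phase from the unwrapped phase
    (phi_dis = phi_unwrapped - phi_topo) and convert to millimeters of
    line-of-sight displacement via d = lambda * phi_dis / (4 * pi)."""
    phi_dis = phi_unwrapped - phi_topo
    return 1000.0 * ERS_WAVELENGTH_M * phi_dis / (4.0 * np.pi)
```

Under this scaling one full fringe ($2\pi$) corresponds to $\lambda/2 \approx 28$ mm of line-of-sight motion, so the 27 mm maximum displacement reported above amounts to just under one fringe over the 1996–1999 interval.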

The technique shows that subsidence exists in the oil fields of Kuwait. The diffuse seismic activity in the region can be partly attributed to mass-balance changes in the oil reservoirs; however, the seismic activity is seen more often in the Dahar area, where it too can be attributed to mass-balance changes in the carbonates. In the Dahar and Managish oil field areas these changes are manifested at the surface as sinkholes, owing to brittle failure of the carbonate lithology. In the case of Burgan the lithology is primarily clastics with intercalated shales. The Burgan Formation, which is Albian in age, is 330 m thick and forms part of the Wasia Group [86, 87], which is overlain by the Aruma, Hasa, and Kuwait Groups. Burgan is the largest oil reservoir of Kuwait and has produced for over seven decades. The wells are under artesian conditions, which implies that the pressure in the Formation is maintained; this pressure in the host formations can be due to the superincumbent strata of the overlying formations. The mass-balance adjustment due to hydrocarbon extraction leads to sagging of the surface and slow subsidence as the host formation adjusts to the displacement of the hydrocarbon reserves. The infinitesimal deformation accumulated over the years has apparently kept the oil facilities undisturbed and unaffected.

7.5 Application V Sustainable Development of Fresh Water Resources

The arid lands of the Middle East suffer from water scarcity, which affects the human population directly and can become a serious source of conflict. The freshwater resources are limited mostly to fossil water from the humid past; only a few shallow aquifers receive recharge through precipitation in the arid ecosystem of the State of Kuwait and the adjacent countries, which calls for sustainable management of fresh water resources. Kuwait has a two-tier aquifer system comprising the Dibdibah Formation of the Kuwait Group at the top and the underlying Dammam Formation, which supports the lower aquifer. The Kuwait Group hosts a shallow, unconfined aquifer system of Quaternary age; lithologically it comprises silt and gravelly sand. The Dammam Aquifer underlies the Kuwait Group; it is a chalky dolomitic limestone of middle Miocene age under confined conditions, although upward leakage from the Dammam Formation is reported in places [88]. The groundwater level varies from 90 m above mean sea level in the southwest to zero at the Arabian Gulf in the northeast, and the flow follows the regional northeastern dip. The groundwater quality varies from brackish in the southwest to highly saline in the northeast: the brackish water in the south and central parts, with TDS around 4,000 ppm, is used for irrigation and landscaping, but in the north the water is highly saline, with TDS exceeding 130,000 ppm [89, 90], which renders it unfit for direct use.

Only a few shallow aquifers in the Middle East receive groundwater recharge; the Raudatain Basin in Kuwait is one of them. In arid lands, shallow aquifers are irreparably damaged by overexploitation. Quantification of the hydrological budget is extremely difficult over large spatial and temporal domains through direct observation; therefore remote sensing has been used extensively to estimate critical factors such as land use/land cover, runoff, evaporation, and evapotranspiration [91–99]. In the present study remote sensing has been used to estimate the recharge to fresh water shallow aquifers for their sustainable management in arid ecosystems. The fresh water in northern Kuwait occurs as a lens in the Raudatain area. This lens floats over the highly

saline groundwater in the northern part of the country. The Raudatain fresh water lens is used for bottling drinking water; however, the recharge to the lens has not been estimated before. The area lies to the north of Kuwait City (Fig. 51).

Fig. 51 Location map of the study area in northern Kuwait

Geologically the area is of Upper Miocene–Lower Pleistocene age and belongs to the upper Dibdibah Formation of the Kuwait Group. Lithologically the formation consists of coarse-grained pebbly sand with thin intercalations of clayey sand and clay, pebbles, cobbles, gravel, and conglomerate; the assemblage resembles a fan deposit. The surface geology shows a consolidated calcretic deposit, which largely protects against aeolian and fluvial erosion but impedes recharge to the shallow aquifers (Fig. 52). The Dammam Formation underlies the Kuwait Group. It is Middle Miocene in age and lithologically comprises inter-bedded marine marls, limestone, and clays; its thickness varies from 30 to 101 m. The limestone horizons of the Dammam Formation are karstified, and their yield, estimated in the adjacent Al Hasa area of eastern Saudi Arabia, varies between 12 and 14 m³ s⁻¹ [100]. The study area shows signs of secondary salinity along the drainage channels and low infiltration into the bedrock (20 cm h⁻¹) [101], as is evident from the seasonal playas that form after a rain event. The entire State of Kuwait is dissected by a network of channels, most of which are confined to the Dibdibah Formation (Kuwait Group) of Upper Miocene–Lower Pleistocene age [101].

Fig. 52 Simplified geological map (after Saif ud din et al., 2007); mapped units include aeolian sand, alluvium, desert floor deposits, sabkha deposits, the upper and lower Dibdibah Formation, the upper and lower members of the Fars Formation, and undifferentiated Fars and Ghar Formations

The drainage mapping of northern Kuwait demarcates the catchment boundary of the Raudatain Basin using Landsat ETM data. The Raudatain Basin shows centripetal drainage with a low gradient. The hydrological conditions suggest that the Raudatain lens has received some recharge over the years, which could be only a fraction of the actual precipitation, possibly because the development of secondary salinity and playas over the recent past has considerably reduced the permeability of the Dibdibah Formation (Fig. 53). The recharge of the basin has been estimated by integrating the precipitation data with geological, geomorphological, and hydrological parameters [44]. The precipitation was estimated through interpretation of Tropical Rainfall Measuring Mission (TRMM) data; scenes in which rainfall events were observed over the study area in the year 2003 were selected, and the rainfall in millimeters was computed for each month over the basin (Table 3). The DEM was created from SRTM data. The relief setting of the watershed is an indicator of runoff/recharge potential, and the terrain slope is used to estimate the transmission loss, since channel slope affects the depth and duration of inundation.
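The chapter does not spell out the recharge formula itself; purely as a hedged sketch of the kind of integration described above, the following assumes a simple coefficient-based water balance in which the TRMM monthly rainfall depth over the basin is multiplied by the catchment area and by a recharge coefficient lumping together runoff, evaporation, and transmission losses. All names and numbers here are illustrative placeholders, not values from the chapter or from TRMM:

```python
# Placeholder monthly TRMM rainfall accumulations over the basin (mm).
TRMM_MONTHLY_RAIN_MM = {"Jan": 25.0, "Feb": 12.0, "Mar": 18.0}

def monthly_recharge_m3(rain_mm: float, catchment_km2: float,
                        recharge_coeff: float) -> float:
    """Recharge volume for one month: rainfall depth (mm -> m) times the
    catchment area (km^2 -> m^2), scaled by a coefficient accounting for
    runoff, evaporation, and transmission loss; the coefficient would be
    small for the calcrete-capped, playa-affected Dibdibah surface."""
    return (rain_mm / 1000.0) * (catchment_km2 * 1.0e6) * recharge_coeff

# Hypothetical annual total for an assumed 480 km^2 catchment and a 5%
# recharge coefficient.
annual_recharge_m3 = sum(
    monthly_recharge_m3(r, catchment_km2=480.0, recharge_coeff=0.05)
    for r in TRMM_MONTHLY_RAIN_MM.values()
)
```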

Fig. 53 Drainage map of Raudatain and Umm al Aish, north Kuwait

Table 3 Precipitation data over the study area (accumulated rainfall in millimeters): columns for latitude, longitude, monthly totals for January, February, March, April, May, November, and December, mean monthly rainfall, and net yearly rainfall
