MODULE 4 LECTURE NOTES 4 DENSITY SLICING, THRESHOLDING, IHS, TIME COMPOSITE AND SYNERGIC IMAGES


1. Introduction

Digital image processing involves the manipulation and interpretation of digital images so as to extract the maximum information from them. Image enhancement improves the image display so that different features can be easily differentiated. In addition to the contrast stretching and edge enhancement covered in the previous lectures, image enhancement also includes color manipulation and the use of other data sets. This lecture covers a few such methods:

- Density slicing
- Thresholding
- Intensity-Hue-Saturation (IHS) images
- Time composite images
- Synergic images

2. Density slicing

Density slicing is the process in which the pixel values are sliced into different ranges, and each range is assigned a single value or color in the output image. It is also known as level slicing. For example, Fig. 1(a) shows the ASTER GDEM for a small watershed in the Krishna River Basin. Elevation values in the DEM range from 591 to 770 m above mean sea level, but the contrast in the image is not sufficient to clearly identify the variations. The pixel values are therefore sliced into 14 ranges and a color is assigned to each range; the resulting image is shown in Fig. 1(b). Density slicing may thus be used to introduce color into a single-band image.

Density slicing is useful for enhancing images, particularly when the pixel values lie within a narrow range, as it enhances the contrast between different ranges of pixel values.

However, a disadvantage of density slicing is a subtle loss of information, since a single color is assigned to each range: variations in the pixel values within a range cannot be identified from the density-sliced image.

Fig. 1. (a) ASTER GDEM and (b) density-sliced image showing 14 levels of elevation
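As an illustration, here is a minimal sketch of density slicing with NumPy; the synthetic DEM, the number of slices and the function name are assumptions for illustration only, not the exact procedure used to produce Fig. 1.

    import numpy as np

    def density_slice(band, n_slices=14):
        """Assign each pixel an integer slice index (0..n_slices-1) by value range."""
        edges = np.linspace(band.min(), band.max(), n_slices + 1)
        # interior edges only; np.digitize then yields indices 0..n_slices-1
        return np.digitize(band, edges[1:-1])

    # Synthetic DEM with elevations between 591 and 770 m
    dem = np.random.uniform(591, 770, size=(200, 200))
    sliced = density_slice(dem, n_slices=14)
    # Display with a discrete color map, e.g. plt.imshow(sliced, cmap='terrain')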

3. Thresholding

Thresholding is used to divide the input image into two classes: pixels with values below the threshold and pixels with values above it. The output image may then be used for detailed analysis of each class separately. For example, consider estimating the total area of the lakes in the Landsat band-4 image in Fig. 2(a). This is easier if the non-water pixels are de-emphasized and the water pixels are emphasized. In this image the highest DN for water is 35, so a threshold of 35 is used to mask out the water bodies: all pixels with DN greater than 35 are assigned 255 (saturated to white) and those with DN less than or equal to 35 are assigned zero (black). The output image is shown in Fig. 2(b). The lakes are highlighted, whereas the other features are suppressed, so the area of the water bodies can be easily estimated.

Fig. 2. (a) Landsat TM band-4 image and (b) output image after applying a threshold DN value of 35 to mask out the water bodies
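A minimal sketch of this thresholding step with NumPy (the synthetic band and the function name are illustrative assumptions):

    import numpy as np

    def threshold_mask(band, threshold=35):
        """Water (DN <= threshold) -> 0 (black); everything else -> 255 (white)."""
        return np.where(band > threshold, 255, 0).astype(np.uint8)

    # Synthetic 8-bit band standing in for the Landsat band-4 image
    band4 = np.random.randint(0, 256, size=(100, 100), dtype=np.uint8)
    mask = threshold_mask(band4, threshold=35)
    water_area_pixels = int(np.count_nonzero(mask == 0))  # lake area in pixels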

4. Intensity-Hue-Saturation (IHS) images

An image is generally a color composite of the three basic colors: red, green and blue. Any color in the image is obtained through a combination of the three basic colors at varying intensities. For example, each basic color can vary from 0 to 255 in an 8-bit display system, so a very large number of combinations of the three colors is possible. A color cube (Fig. 3), with red, green and blue as its axes, is one way of representing the color composite obtained by adding the three basic colors. This is called the RGB color scheme; more details are given in lecture 1.

Fig. 3. A color cube used to represent the RGB color scheme

An alternative way of describing colors is the intensity-hue-saturation (IHS) system, which has the following components.

- Intensity: the brightness of the color. It varies from black (corresponding to 0) to white (corresponding to 255 in an 8-bit system).
- Hue: the dominant wavelength of light contributing to the color. It varies from 0 to 255, corresponding to various ranges of red, green and blue.
- Saturation: the purity of the color. A value of 0 represents a completely impure color, with all wavelengths equally represented (grey tones); the maximum value (255 in an 8-bit system) represents a completely pure color (red, green or blue).

Any color is described by a combination of its intensity (I), hue (H) and saturation (S) components, as shown in Fig. 4.

Fig. 4. Representation of color in the IHS scheme

4.1 Transformation from the RGB scheme into the IHS scheme

The RGB color components may be transformed into the corresponding IHS components by projecting the RGB color cube onto a plane perpendicular to the gray line of the color cube and tangent to the cube at the corner farthest from the origin, as shown in Fig. 5(a). The projection is a hexagon. If the plane of projection is moved from black to white, the size of the hexagon increases: it is minimal at black, where the projection reduces to a point, and maximal at white. The series of hexagons developed by moving the plane of projection from black to white combine to form the hexacone shown in Fig. 5(b). In this representation, the size of the hexagon at any point along the cone is determined by the intensity. Within each hexagon, hue and saturation are represented as shown in Fig. 5(c): hue increases counterclockwise from the axis corresponding to red, and saturation is the length of the vector from the origin.

Fig. 5. (a) Projection of the color cube onto a plane through black, (b) hexacone representing the IHS color scheme, and (c) hexagon showing the intensity, hue and saturation components in the IHS representation (Source: http://en.wikipedia.org/wiki/hsl_and_hsv)

Instead of hexagonal planes, circular planes may also be used to represent the IHS transformation; the resulting solid is called an IHS cone (Fig. 6).

Fig. 6. IHS cone representing the color scheme

In the IHS color scheme, the relationship between the IHS components and the corresponding RGB components is established as shown in Fig. 7. Consider an equilateral triangle in the circular plane with its corners located at the positions of the red, green and blue hues. Hue changes in a counterclockwise direction around the triangle, from red (H = 0) to green (H = 1) to blue (H = 2) and back to red (H = 3). Saturation is 0 at the center of the triangle and increases to a maximum of 1 at the corners.

Fig. 7. Relationship between the RGB and IHS systems
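To make the hexacone construction concrete, here is a minimal sketch of a hexcone-style (HSV-like) conversion for a single pixel. The function name and the HSV-style conventions (hue in degrees, intensity taken as the maximum component) are illustrative assumptions, not the exact transformation referenced in the next paragraph.

    def rgb_to_ihs_hexcone(r, g, b):
        """Convert one pixel (r, g, b in [0, 1]) to (intensity, hue, saturation)."""
        mx, mn = max(r, g, b), min(r, g, b)
        intensity = mx                          # size of the hexagon along the cone
        saturation = 0.0 if mx == 0 else (mx - mn) / mx
        if mx == mn:
            hue = 0.0                           # grey tones: hue is undefined
        elif mx == r:
            hue = (60.0 * (g - b) / (mx - mn)) % 360.0
        elif mx == g:
            hue = 60.0 * (b - r) / (mx - mn) + 120.0
        else:
            hue = 60.0 * (r - g) / (mx - mn) + 240.0
        return intensity, hue, saturation

    print(rgb_to_ihs_hexcone(1.0, 0.0, 0.0))    # pure red -> (1.0, 0.0, 1.0)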

IHS values can be derived from RGB values through the transformations given by Gonzalez and Woods (2006); the inverses of these relationships may be used for mapping IHS values back into RGB values. These transformations have been covered in Section 2.4 of module 4, lecture 1 and are therefore not repeated here.

4.2 Image enhancement through IHS transformation

When any three spectral bands of MSS (multispectral scanner) data are combined in the RGB system, the resulting color image typically lacks saturation, even when the bands have been contrast-stretched. This under-saturation is due to the high degree of correlation between spectral bands: high reflectance values in the green band, for example, are accompanied by high values in the blue and red bands, so pure colors are not produced. To correct this problem, a method of enhancing saturation was developed that consists of the following steps (a code sketch of the full pipeline is given at the end of this discussion).

- Transform any three bands of data from the RGB system into the IHS system, in which the three component images represent intensity, hue and saturation. Typically, the intensity image is dominated by albedo and topography: sunlit slopes have high intensity values (bright tones) and shadowed areas have low values (dark tones). The saturation image will be dark because of the lack of saturation in the original data.
- Apply a linear contrast stretch to the saturation image.

- Transform the intensity, hue and enhanced saturation images from the IHS system back into three images of the RGB system. These enhanced RGB images may be used to prepare the new color composite image.

A schematic of the steps involved in image enhancement through IHS transformation is shown in Fig. 8. At this point, we assume that the reader is familiar with the RGB-to-IHS transformation. In Fig. 8, the original RGB components are first transformed into their corresponding IHS components (encode), these IHS components are then manipulated to enhance the desired characteristics of the image (manipulate), and finally the modified IHS components are transformed back into the RGB color system for display (decode).

Fig. 8. Schematic of the steps involved in image enhancement through IHS transformation

The color composite produced after the saturation enhancement gives better color contrast within the image. For example, Fig. 9(a) shows a Landsat ETM+ standard FCC image (bands 2, 3 and 4 used as the blue, green and red components). The color contrast between the features is not significant, which makes feature identification difficult. The image is converted from the RGB scheme to the IHS scheme; Fig. 9(b) shows the IHS transformation of the image, with intensity displayed through red, hue through green and saturation through blue. From this display it is evident that the saturation is poor (as indicated by the small contribution of blue). The saturation component is therefore linearly stretched, and the intensity, hue and stretched saturation components are then transformed back into the corresponding RGB scheme.
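A minimal sketch of this encode-manipulate-decode pipeline, using HSV from scikit-image as a readily available stand-in for IHS; the percentile stretch limits and the function name are illustrative assumptions:

    import numpy as np
    from skimage import color

    def enhance_saturation(rgb, low_pct=2, high_pct=98):
        """Encode RGB -> HSV, linearly stretch S, decode back to RGB.

        rgb: float array in [0, 1] with shape (rows, cols, 3).
        """
        hsv = color.rgb2hsv(rgb)                                   # encode
        s = hsv[..., 1]
        lo, hi = np.percentile(s, (low_pct, high_pct))
        hsv[..., 1] = np.clip((s - lo) / (hi - lo + 1e-12), 0, 1)  # manipulate
        return color.hsv2rgb(hsv)                                  # decode

    # Synthetic under-saturated composite standing in for the FCC image
    rgb = np.random.uniform(0.4, 0.6, size=(64, 64, 3))
    enhanced = enhance_saturation(rgb)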

Fig. 10 shows the image displayed using the modified RGB color scheme. A comparison with the original FCC image reveals the contrast enhancement achieved through the IHS transformation.

Fig. 9. (a) Standard FCC of the Landsat ETM+ image and (b) the corresponding IHS-transformed image

Fig. 10. Landsat ETM+ image enhanced through IHS transformation

4.3 Advantages of IHS transformation in image enhancement

The IHS system more closely mimics the way the human visual system perceives color. The following are some of the advantages of IHS transformation in image enhancement.

- IHS transformation gives more control over the color enhancement.
- Transformation from the RGB scheme to the IHS scheme gives the flexibility to vary each component of the IHS system separately without affecting the others.
- An IHS-transformed image can be used to generate synergic images. Using this approach, data from different sensors, with different spatial and spectral resolutions, can be merged to enhance the information content: high resolution data from one source may be displayed as the intensity component, and low resolution data from another source as the hue and saturation components.

5. Synergic images

Synergic images are generated by combining information from different data sources: images of different spatial and spectral resolutions are merged to enhance the information contained in an image. For synergic image generation, it is important that the separate bands are co-registered with each other and contain the same number of rows and columns. An FCC can be produced from any three bands, which may be of different spectral or spatial resolution. Examples include PAN data merged with LISS data (substituted for the intensity image), TM data merged with SPOT PAN data, and radar data merged with IRS LISS data. Fig. 11 shows the synergic image produced by combining an IRS LISS-III image with a high resolution PAN image.

Fig. 11. IRS LISS-III and PAN merged and enhanced image of Hyderabad

The IRS LISS-III and PAN images are of different spatial and spectral resolutions. The LISS-III image has 23 m spatial resolution and uses 4 narrow wavelength bands, whereas the PAN image has coarse spectral resolution (a single band) but fine spatial resolution (5.8 m). Combining the benefits of both, a synergic image can be produced using the IHS transformation: the intensity component derived from the LISS-III image is replaced by the PAN image, and the result is transformed back into the RGB scheme, giving the image shown in Fig. 11. In this image, the spectral information of the LISS-III data is merged with the fine spatial resolution of the PAN data.
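A minimal sketch of this IHS-substitution merging, again using HSV as a stand-in for IHS; the resampling choice, array shapes and function name are illustrative assumptions:

    import numpy as np
    from skimage import color
    from skimage.transform import resize

    def ihs_pansharpen(ms_rgb, pan):
        """Replace the intensity of a low resolution composite with a PAN band.

        ms_rgb: low resolution 3-band composite, floats in [0, 1], (r, c, 3)
        pan:    high resolution panchromatic band, floats in [0, 1]
        """
        ms_up = resize(ms_rgb, pan.shape + (3,), anti_aliasing=True)  # to PAN grid
        hsv = color.rgb2hsv(ms_up)        # encode
        hsv[..., 2] = pan                 # substitute intensity (V) with PAN
        return color.hsv2rgb(hsv)         # decode

    # Synthetic stand-ins: coarse multispectral composite and a 4x finer PAN band
    ms = np.random.uniform(0, 1, size=(50, 50, 3))
    pan = np.random.uniform(0, 1, size=(200, 200))
    sharpened = ihs_pansharpen(ms, pan)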

Non-remote sensing data, such as topographic and elevation information, may also be merged through a DEM, and non-remote sensing data such as location names can be merged as well. For example, Fig. 12 shows a perspective view of the area southeast of Los Angeles, produced by draping TM and radar data over a digital elevation model and viewing from the southwest.

Fig. 12. Perspective view of the area southeast of Los Angeles, produced by draping TM and radar data over a digital elevation model and viewing from the southwest

Fig. 13 compares a Landsat TM image with TM/SPOT fused data for an airport southeast of Los Angeles. The fused image is considerably sharper than the standard TM image.

Fig. 13. (a) Landsat TM image and (b) TM/SPOT fused data for an airport southeast of Los Angeles

6. Time composite images

Cloud cover often restricts the visibility of the land surface in optical images. However, if only a portion of an image is cloud covered and imagery of the area is acquired every day, as with NOAA AVHRR, a cloud-free time composite image can be produced: for the cloud covered areas, the information is extracted from successive images. The following steps are used to generate a time composite image (a code sketch is given at the end of this section).

- Co-register the images acquired over a number of days (say 15 days).
- Identify the cloud covered areas in the first image and replace them with data from the second image of the same area.
- Replace any remaining cloud cover in the composite with data from the third image, and repeat the procedure over the full sequence (say 15 daily images).

The National Remote Sensing Centre (NRSC) has used such 15-day time composites of NOAA AVHRR imagery for agricultural drought assessment and analysis.
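A minimal sketch of this first-clear-pixel compositing rule (the synthetic data, the externally supplied cloud mask and the function name are illustrative assumptions):

    import numpy as np

    def time_composite(stack, cloud_mask):
        """Cloud-free composite from co-registered daily images.

        stack:      array of shape (days, rows, cols)
        cloud_mask: boolean array of the same shape, True where cloudy
        Each pixel keeps the value from the first cloud-free day.
        """
        composite = stack[0].astype(float)
        filled = ~cloud_mask[0]                    # already cloud-free pixels
        for day in range(1, stack.shape[0]):
            todo = ~filled & ~cloud_mask[day]      # still unfilled, clear today
            composite[todo] = stack[day][todo]
            filled |= todo
        composite[~filled] = np.nan                # cloudy on every single day
        return composite

    # 15 synthetic daily images with random cloud masks
    stack = np.random.uniform(0, 255, size=(15, 50, 50))
    clouds = np.random.rand(15, 50, 50) < 0.3
    composite = time_composite(stack, clouds)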

Bibliography / Further reading

1. Blom, R. G. and Daily, M., 1982. Radar image processing for rock type discrimination. IEEE Transactions on Geoscience Electronics, 20, 343-351.
2. Buchanan, M. D., 1979. Effective utilization of color in multidimensional data presentation. Proceedings of the Society of Photo-Optical Instrumentation Engineers, Vol. 199, pp. 9-19.
3. Foley, J. D., van Dam, A., Feiner, S. K. and Hughes, J. F., 1990. Computer Graphics: Principles and Practice, Second Edition in C. Addison-Wesley, Reading, MA.
4. Gonzalez, R. C. and Woods, R. E., 2006. Digital Image Processing. Prentice-Hall of India, New Delhi.
5. Kiver, M. S., 1965. Color Television Fundamentals. McGraw-Hill, New York.
6. Lillesand, T. M., Kiefer, R. W. and Chipman, J. W., 2004. Remote Sensing and Image Interpretation. Wiley India (P) Ltd., New Delhi.
7. Massonet, D., 1993. Geoscientific applications at CNES. In: Schreier, G. (ed.), 397-415.
8. Mulder, N. J., 1980. A view on digital image processing. ITC Journal, 1980-1983, 452-476.
9. Poynton, C. A., 1996. A Technical Introduction to Digital Video. John Wiley & Sons, New York.
10. Walsh, J. W. T., 1958. Photometry. Dover, New York.