CEE 615: DIGITAL IMAGE PROCESSING
Topic 2: The Digital Image
W. Philpot, Cornell University, January 20, 2015


THE DIGITAL IMAGE

As common as images are in our experience, it is surprisingly difficult to arrive at a definition of an image that is adequate for the purposes of image processing. For most purposes it is reasonable to restrict the notion of an image to a two-dimensional representation of an object, scene or area. That suggests that a black & white (grayscale) image could be defined as a single-valued function of two variables, f(x_1, x_2). Such a definition would include grayscale drawings, photographs, maps, and television images, but would exclude such things as virtual optical images, mental images, sculpture and other three-dimensional images, and color. Indeed, a standard 3-color image is a 3-valued function of two variables, f(x_1, x_2), where f is a vector.

The techniques of image processing can be applied to data that are not normally considered to be image data. For example, consider a data set of measurements of the surface temperature of the soil at a set of sample sites in an agricultural field, collected over a 5-day period. A typical, graphical representation of these data is shown as the top graph in Figure 2.1. An alternative, but atypical, representation is shown in the center (single-line) graph, in which each data point is shown as a square cell, with dark representing colder temperatures and light representing warmer temperatures. For simplicity, only four temperature ranges are shown. Suppose that the same series of temperature measurements had been made during the same period for many plots, each identical except for the amount of moisture in the soil. These data could be represented as a matrix of cells whose brightness or density is proportional to temperature and whose position is related to the time of the measurement and the amount of water available (Figure 2.1, bottom).
When presented in this way, these data can be treated as an image even though the axes represent time and soil moisture rather than distance or position. In Figure 2.1, temperature is represented as a function of time and soil moisture: temperature is the dependent variable, while time and soil moisture are independent variables. There is actually no reason to limit the number of independent variables to two. In this example, soil salinity and soil type could well be significant factors affecting soil temperature. In general, the operations that we perform on a 2-D image are equally applicable in multiple dimensions. This implies that there may be justification for extending the definition of a grayscale image to include functions of more than two variables, f(x_1, x_2, ..., x_n). For most purposes in the following discussions, however, n will be assumed to be equal to two, i.e., the discussions will be limited to two-dimensional images.

DEFINITION: A grayscale image is a single-valued function of two spatial variables, f(x_1, x_2).

Except where otherwise noted, the dimensions will be assumed to be strictly spatial. An image may be abstractly represented as a continuous function of two variables, f(x, y), defined on some bounded region of a plane. For example, a photograph is an image in which the information is recorded as continuous gradations in tone or color across the two-dimensional surface of the film. Information is contained in the relative locations of all the points as well as in their color and/or intensity.

Figure 1.1: Representations of time vs. temperature; top: typical graph of soil temperature as a function of time over days 1-4; middle: the same function represented by changes in print density, the digitized data quantized to gray values and sampled as averages over 2 hr 40 min intervals; bottom: extension of the middle graphic to two dimensions: an "image" of temperature as a function of time and soil moisture, one row per soil sample.

1.1 Digitization

In order to be in a form suitable for computer processing, an image function, f(x, y), must be digitized. Digitization, the process of representing a continuous function by a finite set of discrete observations, is a two-step process when applied to images: digitization of the spatial coordinates (the independent variables) is called sampling, while digitization of the amplitude of the function is referred to as quantization. Figure 2.1c is an example of a digitized image.
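The two steps can be sketched in code. The Python fragment below is illustrative only; the test function, the number of samples, and the number of gray levels are arbitrary choices, not values from the text. It samples a continuous 1-D function at a fixed interval and then quantizes each amplitude into a small set of gray values:

```python
import math

def digitize(f, x0, x1, n_samples, n_levels, f_min, f_max):
    """Digitize a continuous 1-D function f on [x0, x1]: sampling
    discretizes the spatial coordinate at a fixed interval, and
    quantization maps each amplitude to one of n_levels gray values."""
    s = (x1 - x0) / (n_samples - 1)        # sampling interval
    dv = (f_max - f_min) / n_levels        # quantization interval
    samples = [f(x0 + i * s) for i in range(n_samples)]                 # sampling
    return [min(int((v - f_min) / dv), n_levels - 1) for v in samples]  # quantization

# A smooth "image" function digitized to 9 samples and 4 gray values
gv = digitize(math.sin, 0.0, math.pi, 9, 4, 0.0, 1.0)
```

The two steps are independent: the sampling interval s controls spatial fidelity, while the quantization interval dv controls radiometric fidelity, exactly the two design choices discussed in the sections that follow.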

Sampling

In sampling an image, one is attempting to represent an entity which is continuous in the (x, y) or independent-variable space (and therefore contains an infinite number of points) by a finite, ordered set of measurements. Where the image is represented by the continuous function f(x, y), the sampled image is represented by the array s(i, j), where i = 1, 2, 3, ..., m and j = 1, 2, 3, ..., n. A typical sampling pattern is illustrated in Figure 1.2. The requirement that the set of sampled points be ordered is made so that the position of a sampled point in the image may be related to the location of that point in an array.

Figure 1.2: A typical sampling pattern for image data: a regular m x n array of samples indexed (i, j), from (1,1) at the upper left to (m,n) at the lower right.

Using a finite number of samples to represent the continuous image function implies that there will be a loss of information. It is the aim of sampling design to minimize this information loss while also minimizing the total number of samples in the image. The advantage of reducing the number of samples is that it simplifies data storage and transmission and, to a certain extent, later processing. On the other hand, to ensure that no information is lost, the number of samples must approach infinity (m, n → ∞). Some criteria are needed to guide the selection of an appropriate sampling scheme. In establishing these criteria, we will begin by considering four different sampling schemes:

1. Minimum change in magnitude - In this sampling scheme a sample is collected whenever there is a change in the magnitude of the image function greater than some minimum amount, c_m.
Thus, if the last sample point selected was at (x, y), the next sampling point in the x-direction will occur at the point x + Δx where:

|f(x + Δx, y) - f(x, y)| > c_m    (2.1)

2. Minimum change in slope - This scheme uses the slope of the image function, requiring a sample whenever the slope changes by more than a minimum amount, c_s:

|∂f(x, y)/∂x - ∂f(x + Δx, y)/∂x| > c_s    (2.2)

3. Critical point sampling - In some instances the maxima and minima may characterize the information of importance in an image. These points may be located by looking for points at which the slope of the image function is 0, i.e., those points (x, y) which satisfy the condition:

∂f(x, y)/∂x = 0    (2.3)

4. Uniform interval sampling - When the smallest feature of interest in an image is of a known size, d, then any regular sampling pattern with a sampling interval, s, such that s < d/2 is assured of detecting the smallest feature of interest.

One-dimensional examples

There are advantages and disadvantages associated with each of these sampling schemes; each one would be optimal in a different situation. For example, consider Figure 1.3a, which shows an image of the diffraction pattern of a circular aperture. Taking a cross-section through the center of the image at y_j = (y_0 + y_f)/2 yields the one-dimensional function of Figure 1.3b. Some of the differences among the sampling schemes described above can be seen by examining their effect on this one-dimensional function.

Figure 1.3c shows the pattern resulting from sampling based on a minimum change in magnitude of the image function. This sampling pattern is insensitive to changes in magnitude smaller than c_m, a valuable characteristic when variations on the order of c_m are due primarily to noise, but awkward when such variations are real, as at A in Figure 1.3c. The minimum change in magnitude criterion tends to result in over-sampling of large-magnitude features and under-sampling of small-magnitude features.

If the change in slope is used as a sampling criterion, a very different sampling pattern results (Figure 1.3d). For this example, the sampling pattern is quite good, since it would not be difficult to reconstruct the original function accurately from the sampled points. However, in a noisy image, for which changes in slope would be frequent and meaningless, this strategy could easily result in serious over-sampling. Over-sampling due to noise could also be a problem for the critical point sampling strategy.
In the critical point example (Figure 1.3e), the image function is smooth, continuous and free of any appreciable noise, and the sampling is nearly ideal: all the major characteristics of the image are captured by relatively few samples. Furthermore, this is the only sampling strategy of the four considered that accurately portrays the symmetry of the image function. Unfortunately, this strategy does not respond well to noisy image functions or to functions containing discontinuities. Most importantly, the image function is ill defined between the critical points, i.e., large regions where the brightness is monotonically increasing or decreasing will be left blank.

The first three sampling strategies all share two flaws for sampling image data: 1) all use irregularly spaced sampling points, and 2) all are difficult to implement in two or more dimensions. Irregular spacing makes it more difficult to locate individual points. Although only one number is required to represent each point's value, two more numbers are required to specify the (x, y) location of that point in the original image. Thus, the improvement in the efficiency of transmission, storage or processing that would result from the reduction of the number of sample points is at least partially lost due to the need to include location information for every point. In contrast, the last sampling strategy, which uses uniform spacing, allows each sample to be located solely by reference to its position in the array of sample points.
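Sampling scheme 1 (minimum change in magnitude, Eq. 2.1) is simple to sketch in one dimension. The step size, threshold and test function below are hypothetical, chosen only to make the behavior visible; note that each retained sample must carry its location as well as its value:

```python
def sample_by_magnitude(f, x0, x1, dx, c_m):
    """Sampling scheme 1 (Eq. 2.1): march along x in small steps dx and
    keep a sample whenever the function has changed by more than c_m
    since the last kept sample. Each sample stores (location, value)."""
    points = [(x0, f(x0))]
    x = x0
    while x + dx <= x1:
        x += dx
        if abs(f(x) - points[-1][1]) > c_m:
            points.append((x, f(x)))
    return points

# A linear ramp triggers a new sample roughly every c_m / slope in x
pts = sample_by_magnitude(lambda x: x, 0.0, 1.0, 0.001, 0.1)
```

Run on a steep function this scheme produces many samples, and on a flat one very few, which is the over-/under-sampling behavior described for Figure 1.3c.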

Figure 1.3: Sampling strategies illustrated using a 1-dimensional function derived from the image of a diffraction pattern: a. diffraction pattern for a circular aperture; b. cross section of the diffraction pattern at y_1; c. minimum change in amplitude; d. minimum change in slope; e. critical point sampling; f. uniform sampling.

The equidistant sampling pattern is illustrated in Figure 1.3f. The separation of consecutive samples is called the sampling interval. In this example the equidistant sampling pattern compares well with the other sampling strategies. For the particular sampling interval chosen, the equidistant sampling actually results in fewer sample points than required by the strategies based on changes in amplitude or slope of the image function, and the same number of samples as required by the critical point sampling strategy. All of the major features of the image function, except for the symmetry, are reasonably reproducible. The major disadvantage of uniform sampling is that its effectiveness is critically dependent on the sampling interval. Great care must be taken to keep the sampling interval small enough to pick up significant detail, yet large enough to keep the total number of samples at a manageable level.
The typical sampling pattern for most image data is a regular rectangular array such as that illustrated in Figure 1.2, where the sampling interval in the x-direction (horizontal), s_x, need not be the same as the sampling interval in the y-direction (vertical), s_y. This type of sampling pattern is used in part because of the simplicity of the approach and the general efficiency of the data storage and handling, and in part because a rectangular array is well adapted to the ways in which image digitizers operate (Castleman, Chap. 2).
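A regular rectangular grid of the kind in Figure 1.2 can be generated from the array indices alone, which is exactly why no per-sample coordinates need to be stored. A sketch (the interval values are borrowed from the MSS example later in the text purely for illustration):

```python
def uniform_grid(x0, y0, s_x, s_y, m, n):
    """Regular rectangular sampling pattern (Figure 1.2): the location
    of sample (i, j) follows from the array indices alone, so no
    per-sample coordinates need to be stored."""
    return [[(x0 + j * s_x, y0 + i * s_y) for j in range(n)] for i in range(m)]

grid = uniform_grid(0.0, 0.0, 57.0, 82.0, 3, 4)   # s_x need not equal s_y
# Storage: one number per sample on the grid, versus a value plus two
# coordinates per sample for an irregular pattern of the same size
uniform_cost = 3 * 4
irregular_cost = 3 * (3 * 4)
```

The three-fold storage difference quantifies the efficiency argument made above for uniform spacing over the irregular schemes.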

Thus far, individual samples have been discussed as if measurements were instantaneous and represented a single point on the target. Actual samples represent measurements which are integrated over space and time, a fact which implies that there will be several sample characteristics which affect the information content of the image. Definitions of the pertinent sample characteristics are given below.

sampling interval (s_x, s_y) - the separation of adjacent samples (equidistant sampling).

resolution cell size (r_x, r_y) (also resolution element size) - the area covered, or solid angle subtended, by a single sample.

Instantaneous Field of View (IFOV) - the field of view of a single detecting element when all motion is stopped. The term IFOV is frequently used as a synonym for resolution cell size. Although the usage is not proper, it is rarely misleading because the actual difference is usually quite small.

Ground Instantaneous Field of View (GIFOV) - the IFOV projected on the ground (specific to a particular altitude and viewing angle).

display cell (d_x, d_y) - a single pixel on a particular display device. For an image to appear undistorted (without prior geometric correction) the display cell size should be proportional to the sampling intervals, i.e.:

d_x / d_y = s_x / s_y    (2.4)

The ratio of the vertical dimension to the horizontal dimension, e.g., s_y / s_x, is called the aspect ratio.

picture element (pixel, pel) - a single sample.

pure pixel - a pixel which lies entirely within a single target class. The significant pixel dimension in this case is the resolution cell size.

mixed pixel - a pixel which lies partly in two or more distinct target classes.

Rule of thumb: to ensure that at least one pure pixel is obtained, the largest sampling interval should be about 1/3 the smallest dimension of the target.
This presumes that the resolution cell size is about equal to or less than the sampling interval.

The sampling pattern of the Landsat Multispectral Scanner (MSS), shown in Figure 1.4, illustrates the distinction between sampling interval, resolution cell size and IFOV. The detecting element IFOV is square; however, since the scanning mirror is constantly in motion, the IFOV of the detector moves in the scanning direction, and the resolution cell size in the

direction of scan is slightly larger than the IFOV. Note also that the samples overlap in the scan direction but are discontiguous in the orbit direction.

Figure 1.4: Landsat MSS sampling pattern (r_x = 80 m, s_x = 57 m, IFOV_x = 79 m; r_y = 79 m, s_y = 82 m, IFOV_y = 79 m). Note that adjacent samples overlap in the scan direction but are discontiguous in the orbit direction.

Before undertaking a more standard approach to describing resolution, it will be useful to consider a heuristic characterization of the problem. Consider the two targets illustrated in Figure 1.5. Figure 1.5a represents a vegetated region with boundaries that delineate different cover types, forest and meadow for example. Figure 1.5b represents an airport. When the resolution cell size of an image pixel is on the order of the smallest overall dimension of the target (r = d_1), then it will often be possible to detect a change in tone (amplitude). Detection is more likely if the tone (brightness) of the target contrasts sharply with that of the background material.

Identification of a target implies that some distinguishing feature is apparent in the samples. For the vegetated region, identification of a cover type requires that at least one pure pixel exists (s = r = d_1/3) and that the target is tonally distinct (Figure 1.5c). Identification of the airport, a feature that is defined primarily by its shape, requires that the overall shape of the airport be resolved. This means that the samples can have a spacing no greater than half the separation of the runways (s = d_2/2). Any greater spacing means that the runways could not be seen as distinct, and the airport would not be spatially defined (Figure 1.5d). These are lower limits for identification of both targets, and identification would be marginal.
For a statistically significant identification of the cover types, many pure pixels will be necessary; for definitive identification of the airport the sampling intervals should be less than the width of the runways.
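The two criteria can be captured as simple helper functions. The target dimensions in the example below are hypothetical, chosen only to illustrate the arithmetic:

```python
def max_interval_for_pure_pixel(d):
    """Rule of thumb: to ensure at least one pure pixel, the sampling
    interval should be no more than about 1/3 of the smallest target
    dimension (assuming r is about equal to or less than s)."""
    return d / 3.0

def max_interval_for_shape(separation):
    """Spatial criterion: to resolve a shape defined by parallel
    features, the interval must not exceed half their separation."""
    return separation / 2.0

# Hypothetical targets: 90 m fields, runways separated by 200 m
s_cover = max_interval_for_pure_pixel(90.0)    # 30.0 m
s_airport = max_interval_for_shape(200.0)      # 100.0 m
```

As the text notes, these are marginal lower limits; confident identification requires finer sampling than either function returns.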

Figure 1.5: Illustration of criteria for sampling resolution. a. vegetated area; b. airport (with characteristic dimensions d_1, d_2, d_3). Pure pixel criterion: s = d_1/3; spatial criterion: s <= d_2/2.

Spatial Frequency

A complete discussion of sampling would require a more extensive mathematical development than is appropriate here. Nonetheless, the general concepts are important, and we may gain a preliminary understanding of some of the important problems involved with some simple illustrations. A central concept in the problem of sampling is that of spatial frequency. A spatial frequency is the number of times per unit distance that a feature repeats. A simple example is the spatial frequency of a sine wave. The sine wave illustrated in Figure 1.6

has a wavelength of λ. Since the pattern of the sine wave repeats after a distance λ, the spatial frequency, k, of the sine wave is simply k = 1/λ.

Most images contain an infinite number of spatial frequencies. In fact, any image can be reconstructed using nothing more than an infinite sum of sine waves of different spatial frequencies, if each has the proper intensity and phase characteristics. This is not meant to imply that an image will contain any obvious periodic features (such as sand waves, water waves, rows of agricultural crops, etc.). Rather, certain types of features are characteristic of high spatial frequencies. Among these are abrupt changes in intensity, sharp boundaries, rapid small changes in intensity (e.g., the graininess or rough texture of a particular feature in a photograph), and any other "small" feature. At the other extreme are the slow changes in intensity over an image, the average intensity of an image, or any "large" features in an image, which are characteristic of the low spatial frequencies. With the exception of relatively few special images, such as the diffraction pattern of Figure 1.3, all spatial frequencies are present to some extent in an image and, thus, all spatial frequencies are required to completely reconstruct an image. Unfortunately, in sampling an image the higher spatial frequencies are eliminated, leading to characteristic degradations in the reconstruction of an image from sampled points.

The relationship of sampling and spatial frequencies may be illustrated using the simple sine function. A simple sine wave of wavelength λ (spatial frequency k = 1/λ) and amplitude A is shown in its original form in Figure 1.6a, and as it would appear when sampled at several different rates as determined by the size of the sampling interval, s. (The resolution cell size is assumed to be much smaller than the wavelength (r << λ) in this example.)
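The claim that a sampled signal can be expressed as a sum of sinusoids of different spatial frequencies can be checked with a direct discrete Fourier transform. This sketch (the signal and sample count are illustrative) uses a pure sine input, so the amplitude spectrum concentrates at a single spatial frequency component and its conjugate mirror:

```python
import math

def dft_amplitudes(samples):
    """Direct discrete Fourier transform: expresses the sampled signal
    as a sum of sinusoids and returns the amplitude of each spatial
    frequency component k = 0 .. n-1 (phase is discarded here)."""
    n = len(samples)
    amps = []
    for k in range(n):
        re = sum(samples[j] * math.cos(2 * math.pi * k * j / n) for j in range(n))
        im = sum(samples[j] * math.sin(2 * math.pi * k * j / n) for j in range(n))
        amps.append(math.hypot(re, im) / n)
    return amps

# A pure sine wave completing 3 cycles over 32 samples: the energy
# sits entirely at component k = 3 (and its mirror, k = 32 - 3)
n = 32
signal = [math.sin(2 * math.pi * 3 * j / n) for j in range(n)]
amps = dft_amplitudes(signal)
```

For a real image the spectrum would be spread over all components, with the "small" features of the text living in the high-k terms.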
In Figure 1.6b the sampling interval is almost an order of magnitude smaller than the wavelength (s = λ/7), and the waveform (frequency and phase) is easily reproduced from the sampled points. The only thing gained by increasing the sampling rate (decreasing s) is a better estimate of the amplitude of the wave. Decreasing the sampling rate (increasing s) will at first only increase the chance of misrepresenting the amplitude. With fewer samples the amplitude of the wave might appear to be lower than it actually is, and the apparent amplitude may vary if the original wavelength is not an integral multiple of the sampling interval. The frequency and phase of the wave will be accurately portrayed until the sampling rate falls to twice per wavelength (s = λ/2 = 1/(2k)). This is the critical sampling frequency, the lowest sampling frequency capable of correctly characterizing the sine wave of wavelength λ. As is illustrated in Figure 1.6c, the amplitude and phase of the wave will not necessarily be represented accurately with critical sampling; only the frequency of the wave is fixed.

If the sampling frequency is decreased beyond the critical sampling frequency for this wavelength, then there is no assurance that anything about the wave will be accurately portrayed by the samples. For the example in Figure 1.6d, the wave reconstructed from the sampled points is of a longer wavelength than the original. (It is actually the longest wavelength that will fit the sampled points.) The reconstructed waveform is an artifact of the sampling and is called aliasing error. Note that there is a direct relationship between the original wavelength, the sampling interval and the aliased wavelength. If the original wavelength, λ, is shorter than twice the sampling interval (λ < 2s), so that the original frequency exceeds 1/(2s) by an amount Δf, then the longest wavelength which can be fit to the sampled points, λ', will always be greater than 2s, corresponding to a frequency below 1/(2s) by the same amount Δf:

k = 1/λ = 1/(2s) + Δf    (2.5)  (original frequency)

k' = 1/λ' = 1/(2s) - Δf    (2.6)  (aliased frequency)

When a function is sampled at intervals of s, the sampled data are said to have a Nyquist frequency of N = 1/(2s). The Nyquist frequency is sometimes called the "folding" frequency because of the symmetry of the above equations. It represents the highest spatial frequency that can be represented by the sampled data with any accuracy.

In general one will be sampling a function which is much more complex than a simple sine wave. As was stated above, a continuous image contains an infinite number of spatial frequencies, all of which are necessary if the image is to be reconstructed with complete accuracy. It should be obvious from the above example that, for a given sampling interval, not only will the highest spatial frequencies be lost, but artifacts can also be introduced at lower frequencies because of aliasing error.

Sample size

Thus far it has been assumed that an individual sample is small compared to the shortest wavelength being sampled. All real measuring devices have finite size and sensitivity. Thus, any real sample will have a finite size, and the resulting measurement will represent an average over the area covered by the sample. This averaging has the effect of a low-pass filter: if the size of the sample is r, then any frequencies higher than 1/(2r) will be damped out quite rapidly. As an example, consider Figure 2.7, in which observations made at a sampling interval, s, but with two different sample sizes are compared. As the sample size approaches zero, the sampled value approaches the local value of the original function at the sampling point, i.e., each sample follows the local, high-frequency variations. As the sample size increases, the response of the samples to the high-frequency variations (the small features) is quickly reduced.
By filtering the higher frequencies, the larger sample size will also have the effect of damping out aliased frequencies. When the sample size is equal to the sampling interval, s, the magnitude of the sampled point is an average of the magnitude of the original sine wave over the sample size. Notice that the aliasing is still a problem, but the amplitude of the reconstructed wave at the aliased frequency has been significantly reduced. With r = s (Figure 2.7), the high frequencies most effectively damped by the finite sample size are the aliased frequencies, suggesting that there should be very little loss of information due to the finite sample size. Increasing the sample size such that r > s will result in more complete damping of the aliased frequencies, but will also damp the highest real frequencies.
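The low-pass effect of a finite sample size can be demonstrated by averaging the image function over an aperture of width r at each sample point. In this sketch the wavelength and intervals are illustrative; an aperture spanning one full wavelength (here r slightly larger than s) averages the short-wavelength wave essentially to zero, while near-point samples follow it exactly:

```python
import math

def sample_with_aperture(f, s, r, n, sub=50):
    """Sample f at interval s; each sample is the average of f over an
    aperture of width r centred on the sample point (finite sample size).
    sub is the number of sub-points used to approximate the average."""
    out = []
    for i in range(n):
        x = i * s
        vals = [f(x - r / 2 + r * (j + 0.5) / sub) for j in range(sub)]
        out.append(sum(vals) / sub)
    return out

# A short-wavelength sine (lam = 0.1) sampled at interval s = 0.75 * lam
lam = 0.1
f = lambda x: math.sin(2 * math.pi * x / lam)
point_samples = sample_with_aperture(f, 0.75 * lam, 1e-9, 8)  # r -> 0
avg_samples = sample_with_aperture(f, 0.75 * lam, lam, 8)     # r = lam
```

The point samples still alias (they trace a spurious long wavelength), but the averaged samples are damped to near zero, which is the behavior Figure 2.7 illustrates.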

Figure 1.6: The effects of sampling a particular wavelength using different sampling intervals can be summarized as follows:

A. s -> 0: Exact reproduction of the sampled wave.

B. 0 < s < λ/2, oversampling: The sampled wave may have a lower amplitude than the original. If the original wavelength is not an integral multiple of the sampling interval (λ ≠ ns, where n = 2, 3, 4, ...), then aliased frequencies may be introduced, superimposed on the primary wavelength.

C. s = λ/2, critical sampling: The frequency of the original wave will be reproduced properly, but both the phase and the amplitude will probably be lost.

D. s > λ/2, undersampling: The reconstructed wave will bear no resemblance to the original. Amplitude, phase and frequency are all likely to be misrepresented.
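Regime D can be verified numerically: a cosine above the Nyquist frequency produces exactly the same point samples as its alias folded below the Nyquist frequency, so the samples cannot distinguish the two waves. The specific frequencies here are illustrative:

```python
import math

def sampled(freq, s, n):
    """Point samples of cos(2*pi*freq*x) taken at x = 0, s, 2s, ..."""
    return [math.cos(2 * math.pi * freq * i * s) for i in range(n)]

s = 1.0                   # sampling interval
nyquist = 1 / (2 * s)     # 0.5 cycles per unit distance
f_true = 0.7              # above the Nyquist frequency: undersampled
f_alias = 1 / s - f_true  # folded alias below Nyquist: 0.3

a = sampled(f_true, s, 16)
b = sampled(f_alias, s, 16)
# a and b are indistinguishable: the samples cannot tell the waves apart
```

This is the folding symmetry of Eqs. 2.5 and 2.6: the two frequencies sit at equal distances above and below 1/(2s).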

Figure 1.7: Damping of aliased frequencies as a result of the increase in sample size. As r -> 0, the sampled value approaches the actual value of the function at the sampling point; as r -> s, frequencies on the order of the sample size are damped.

Two-dimensional case

The same concepts apply when dealing with a fully two-dimensional image function. Here an explicit example is useful. Figure 2.8a shows a two-dimensional cosine function with a spatial frequency of k. Figure 2.8a could be a schematic representation of a scene, viewed from aircraft, of some naturally occurring waveform (ocean waves, sand waves, freshly plowed fields, etc.). The spatial frequencies in the x- and y-directions are k_x and k_y respectively, where

k = (k_x^2 + k_y^2)^(1/2)    (2.7)

The sine waves found by making observations along the top edge and left side of the image are illustrated by the solid curves at the top and left side of the figure, respectively. Assuming that the sample size is much smaller than the wavelength (r << 1/k = λ), then selecting the sampling interval in the x-direction so that the wave in the x-direction is critically sampled (s_x = 1/(2k_x) = λ_x/2), the sampled wave will be as indicated by the bold curve at the top of Figure 2.8b. As was true in the one-dimensional case, the amplitude and phase are both misrepresented, but the frequency is correct. Critical sampling in the y-direction would yield the same result for the sampled wave in the y-direction. The image resulting from this particular sampling pattern (not illustrated) will be a checkerboard pattern; dark rectangles will appear at the conjunction of the minima of the sampled waves, and light rectangles will appear at the conjunction of the maxima of the sampled waves.
If the sampling interval in the y-direction, s_y, is increased until s_y > 1/(2k_y) = λ_y/2, then the wave in the y-direction will be undersampled, as is illustrated by the bold curve on the left side of Figure 2.8b. Reconstructing the image from the sampled points shown in Figure 2.8b yields an image with an aliased frequency k'. The reconstructed waveform appears to have a wavelength of λ' = 1/k' and to be oriented in an entirely different direction than the original wave.
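Equation 2.7, and the change of apparent orientation when one axis is undersampled, can be illustrated with a short sketch. The component and Nyquist frequencies below are hypothetical:

```python
import math

def wave_params(k_x, k_y):
    """Magnitude (Eq. 2.7) and orientation (degrees) of a 2-D spatial
    frequency with components k_x, k_y."""
    return math.hypot(k_x, k_y), math.degrees(math.atan2(k_y, k_x))

# Hypothetical wave: adequately sampled in x, undersampled in y
k_x, k_y = 0.3, 0.4        # actual component frequencies
n_y = 0.35                 # Nyquist frequency in y (s_y chosen too large)
k_y_alias = n_y - (k_y - n_y)   # k_y folded about the Nyquist frequency

k, theta = wave_params(k_x, k_y)              # original wave: k = 0.5
k_a, theta_a = wave_params(k_x, k_y_alias)    # aliased wave, rotated
```

Only the y-component folds, so both the magnitude and the orientation of the reconstructed wave differ from the original, as described for Figure 2.8b.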

Figure 2.8: Aliasing as a result of undersampling in a simple 2-dimensional function.

Aliasing errors in real data are not always as obvious as in the example given. The aliased frequencies that are introduced by the sampling of real images will usually have the effect of blurring edges and increasing the noise level in the sampled image. Aliasing is never entirely avoidable; however, the effect of aliasing can often be minimized by careful selection of the sampling interval and the sample size, in order to include enough of the high spatial frequencies to characterize the smallest features of interest in the image. If there is reason to suspect that the sampling is inadequate, e.g. that k_x ≈ N_x, then it may be possible to estimate the original frequency or the aliased frequencies that would be introduced along with the original frequency. Let

k' = aliased frequency          k = actual frequency
λ' = aliased wavelength         λ = actual wavelength
λ' = 1/k'                       λ = 1/k
k' = (k'_x^2 + k'_y^2)^(1/2)    k = (k_x^2 + k_y^2)^(1/2)
θ' = tan^-1(k'_y/k'_x)          θ = tan^-1(k_y/k_x)

Consider two cases:

CASE 1: Undersampling

If k_x > N_x, then the original frequency will be missing entirely, and only aliased frequencies will be apparent. Thus, the frequencies that appear in the image are:

k' = [(k'_x)^2 + (k'_y)^2]^(1/2),  λ' = 1/k',  θ' = tan^-1(k'_y/k'_x)

Since the aliased frequency is lower than the Nyquist frequency, we may write:

k'_x = N_x - Δk_x  ==>  Δk_x = N_x - k'_x
k'_y = N_y - Δk_y  ==>  Δk_y = N_y - k'_y

Knowing the frequency shifts, Δk_x and Δk_y, that would be ascribed to aliasing, it is possible to compute the original frequencies:

k_x = ±(N_x + Δk_x)
k_y = ±(N_y + Δk_y)

Notice that there is some uncertainty as to sign. Since the phase information is lost due to undersampling, there is ambiguity as to the orientation of the original frequency. Thus, there are two possible solutions to each of the above equations. There is, therefore, an uncertainty as to the spatial frequency and orientation of the original pattern:

k = [k_x^2 + k_y^2]^(1/2),  θ = tan^-1(k_y/k_x)

CASE 2: Near-critical sampling

If k_x ≈ N_x, then the original frequency will be represented, although rather weakly, and aliased frequencies will probably be present. Thus, the frequencies that appear in the image are:

k = [(k_x)^2 + (k_y)^2]^(1/2),   λ = 1/k,   θ = tan^-1(k_y/k_x)
k' = [(k'_x)^2 + (k'_y)^2]^(1/2),  λ' = 1/k',  θ' = tan^-1(k'_y/k'_x)

Obviously, these should be related by the frequency shift about the Nyquist (folding) frequency:

k_x = N_x + Δk_x,   k'_x = N_x - Δk_x
k_y = N_y + Δk_y,   k'_y = N_y - Δk_y

where Δk_x, N_x and k_x are all assumed to have the same sign, as are Δk_y, N_y and k_y. The only question is which pattern is aliased and which is real.

Quantization

Digitization of the amplitude of the image function is called quantization. More precisely, quantization is the process by which the magnitude at each point in a continuous image is assigned a new value from a finite set of gray values. The process is illustrated in Figure 2.9, where the response of an optical system (voltage) to a continuously varying radiance is plotted. The noise in the response curve represents system noise. (All of the components of the optical system contribute to the response, but it is the combined response of all the components that is ultimately of interest.)
Quantization is the process of dividing up the output voltages into ranges and assigning each range a different value. A single value is called a gray value (GV) or digital number (DN). The term DN arose because the values are usually chosen to be positive integers, a choice that simplifies many aspects of processing. The interval between adjacent gray values is the quantization interval, ΔV. The quantization interval is often chosen to be constant, although there are some conditions under which an irregular quantization is preferable.
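The assignment of a voltage to a DN can be sketched as follows. This is a minimal illustration of uniform quantization; the voltage range and function name are illustrative, not from the text:

```python
# Sketch of uniform quantization: the voltage range [v_min, v_max] is
# divided into equal intervals of width dV and each voltage is assigned
# an integer gray value (DN) on a b-bit scale.

def quantize(v, v_min=0.0, v_max=1.0, bits=8):
    """Assign voltage v a digital number on a b-bit gray scale."""
    dv = (v_max - v_min) / (2 ** bits - 1)   # quantization interval
    v = min(max(v, v_min), v_max)            # clip to the digitized range
    return round((v - v_min) / dv)

print(quantize(0.0))    # 0
print(quantize(1.0))    # 255
```

Voltages falling anywhere within one quantization interval map to the same DN, which is exactly the crosshatched-region behavior described for Figure 2.9.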

Figure 2.9: Quantization parameters.
    v0 = dark current.
    v1, v2 = range of voltages over which v is linearly proportional to the radiance, L.
    vmin, vmax = range of voltages which are to be digitized.
    Δv = quantization interval; should be greater than or equal to the noise. Voltages within the crosshatched region will be assigned to the same gray value.

Figure 2.10: Quantization: unequal interval quantization is required for the non-linear region (voltages greater than v2) if each quantization interval is to represent equal intervals of radiance. As Δv becomes smaller, detector noise becomes more prominent. The crosshatched region represents the voltage range assigned to gray value 6.

First, let us consider Figure 2.9 for the case of equal quantization. One must choose the range of possible voltages (corresponding to the desired range of radiances) to be represented and the quantization interval, ΔV. Ideally the optical system and detector have been selected so that the range of radiances that is of interest (Lmin to Lmax) is within the linear response range of the detector. If the minimum and maximum output voltages (Vmin and Vmax) are within the linear range (V1 and V2) and the gray values are separated by equal quantization intervals, then the gray value scale will be linearly related to the observed radiance. The quantization interval, ΔV, should be chosen to provide sufficient sensitivity to discern the smallest real change in radiance, but should not be less than the rms noise level.

Occasionally, unequal quantization intervals are desirable. For instance, if the range of radiances of interest extends beyond the linear range of the detector/optical system response (Figure 2.10), then unequal quantization intervals are necessary if the linear relationship between the gray value scale and radiance is to be maintained. In Figure 2.10, the required adjustment quickly reduces the quantization interval to less than the system noise, a situation that should be avoided. The ideal quantization interval is equal to or slightly greater than the rms system noise, since this choice will minimize system noise without any substantial degradation of the real signal. Unfortunately, there is often the further constraint of the total number of allowable gray values to consider. For example, most image processing hardware is designed to handle byte data efficiently (1 byte = 8 bits), which effectively limits the number of usable intervals (gray values) to 256.
The number of gray values may also be limited by the sensitivity of the detection system, the accuracy of the digitizer, or by the data transmission rate. The Landsat MSS data were digitized with equal quantization intervals to 7-bit accuracy, yielding a possible gray value range of only 0 to 127. Landsat Thematic Mapper (TM) and SPOT data are quantized to 8 bits (0 to 255), and the Advanced Very High Resolution Radiometer (AVHRR) data are quantized to 10 bits (0 to 1023). The design criteria for each of these systems are very different, and depend on the particular types of targets which are of interest for each system, the range of atmospheric conditions which must be accounted for, and the change in radiance which corresponds to a significant signal (Figure 2.10).

Obviously, the requirements for dynamic range, quantization and total number of gray values must be balanced. These quantities are related by the expression:

    ΔV = (Vmax − Vmin) / (2^b − 1)        (2.8)

where b is the number of bits in the digitized image. (See Table 2.3 for a description of bits, bytes and words.) If the quantization interval is too small, then for a fixed number of bits, the dynamic range of the system will be limited, and useful data that is too bright will saturate the detector; useful data that is too dark may be assigned a gray value of zero. On the other hand, if the dynamic range of the detector/digitizer is set to accept the full range of expected radiance, sensitivity to small changes in radiance is lost.

Selection of a quantization interval that is large compared to the detection system noise may not produce obvious flaws in a complex image which spans a large portion of the instrument's dynamic range. However, in a scene in which the brightness (radiance) is slowly varying (spatially), coarse quantization will tend to produce curved boundaries between regions
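Equation 2.8 can be evaluated for the bit depths quoted above. This is a small illustrative sketch; the 0–10 V range is an arbitrary assumption, not from the text:

```python
# Quantization interval dV = (Vmax - Vmin) / (2**b - 1), eq. (2.8).
# The 0-10 V voltage range is a made-up example.

def quant_interval(v_max, v_min, bits):
    return (v_max - v_min) / (2 ** bits - 1)

for bits, sensor in [(7, "MSS"), (8, "TM/SPOT"), (10, "AVHRR")]:
    top_dn = 2 ** bits - 1                 # highest gray value
    dv = quant_interval(10.0, 0.0, bits)
    print(f"{sensor}: DN 0-{top_dn}, dV = {dv:.4f} V")
```

The trade-off in the text is visible directly: for a fixed voltage range, more bits give a finer interval (greater sensitivity), while for a fixed interval, more bits extend the dynamic range.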

which differ by only one gray value. The position of the boundary is an artifact of the quantization, and the effect is called false contouring (Figure 2.12).

1.2 Histogram

A histogram is a function representing the number of pixels in an image, or a portion of an image, which have the same gray value:

    h(k) = number of pixels in a specified image region with gray value k

Closely related to the histogram is the probability density function (pdf), a function representing the probability that a pixel randomly selected from an image, or image region, will have a specified gray value. The histogram and the pdf are related by:

    p(k) = h(k)/N        (2.1)

where N is the number of pixels in the image region of interest. An example of an image histogram is given in Figure 2.13a. This histogram is fairly typical of a complex image with good contrast: the gray values span the dynamic range of the output, suggesting that the image has good contrast, and the distribution of gray values is smooth, suggesting that the image is fairly complex. A histogram contains no spatial information whatsoever; it simply represents the overall distribution of gray values in the image region.

The fact that there is an unequal distribution of gray values in the image histogram suggests that image quality could be improved without increasing the number of gray values by varying the quantization interval to reflect the expected distribution of scene radiance. The technique, called tapered quantization, improves the quality of the image by providing finer quantization where there is more information (Figure 2.13b). Tapered quantization also improves the visual contrast of the image. The difficulty of using tapered quantization is that the distribution of scene radiance must be known, or at least approximated, in advance. This is difficult and, for most applications, ineffective.
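The definitions of h(k) and p(k) can be made concrete with a toy image. The 4×4 array below is made-up data for illustration only:

```python
# Sketch of the histogram h(k) and pdf p(k) = h(k)/N for a tiny
# gray-value array (4 gray levels, 16 pixels).

image = [
    [0, 1, 1, 2],
    [1, 2, 2, 3],
    [2, 2, 3, 3],
    [1, 2, 3, 3],
]

pixels = [gv for row in image for gv in row]   # flatten: order is irrelevant
N = len(pixels)                                # pixels in the region

h = {k: pixels.count(k) for k in range(4)}     # histogram h(k)
p = {k: h[k] / N for k in h}                   # pdf p(k) = h(k)/N

print(h)                  # {0: 1, 1: 4, 2: 6, 3: 5}
print(sum(p.values()))    # 1.0
```

Note that flattening the image before counting makes the text's point directly: the histogram discards all spatial information and keeps only the distribution of gray values.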
1.3 Data rate

Technological limits of detectors, optical systems and digitizers are often not the limiting characteristics in determining the final radiometric and spatial resolution of the imaging system. The data rate, the rate at which data can be transmitted from the imaging system to a receiving station, is frequently the significant limiting factor. An image which has n rows and m columns and is digitized to b bits contains n*m*b bits. If more than one image is collected for the same scene, then there are n*m*b bits for each of the images. Most of the earth resource satellite systems have several bands or channels, each channel representing data collected in a different spectral region. For example, Landsat MSS has 4 bands, two in the visible and two in the near infrared. Landsat TM has 7 bands: three in the visible, one in the near infrared, two in the middle infrared and one in the thermal infrared. Thus the total number of bits in an image set of d images is n*m*d*b. Since an MSS or TM image set is collected in less than 30 seconds, data rates of gigabits per second (10^9 bits/sec) may be required for large area coverage at moderate spatial, spectral and radiometric resolution.
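The n*m*d*b bookkeeping can be sketched with rough TM-like numbers. The scene dimensions and collection time below are illustrative assumptions, not specifications from the text:

```python
# Bits in a multiband image set: n rows x m columns x d bands x b bits.
# The scene size (6000 x 6600 pixels), band count and collection time
# are rough illustrative values only.

def image_bits(n, m, d, b):
    return n * m * d * b

bits = image_bits(6000, 6600, 7, 8)   # 7 bands at 8 bits each
seconds = 30                          # approximate collection time
rate = bits / seconds                 # required sustained bit rate

print(f"{bits / 8 / 1e6:.0f} MB per scene")
print(f"{rate / 1e6:.0f} Mbit/s sustained")
```

Even these modest assumptions land in the tens of megabits per second for a single scene, which is why the downlink, rather than the detector or digitizer, so often sets the practical resolution limits.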

Figure 2.11: Dynamic range and sensitivity of selected bands of three satellite imaging systems (AVHRR, TM and MSS): radiance (mW/cm²-sr-µm) versus wavelength (µm), with the bit depth of each band indicated in parentheses.

Figure 2.12: An example of false contouring. (a) Temperature distribution map; (b) image quantized to 8 gray levels; (c) the same image quantized to 8 gray levels with a different offset. Quantization results in abrupt changes in gray level. A small offset in the quantization levels can yield different results even if the data are quantized to the same number of gray levels over the same range.

Figure 2.13: Spectral bands and nominal pixel size of aircraft and satellite multispectral imaging systems (AVIRIS, 0.95 mrad; Daedalus; MODIS, 1 km and 500 m; CZCS, 0.6 km; AVHRR, 1.1 km; SPOT, 20 m and 10 m panchromatic; TM, 30 m and 120 m thermal; MSS, 80 m), plotted against wavelength (µm).

AFTERTHOUGHTS

1.4 Compression, Antialiasing, Data integrity

Antialiasing, as the name suggests, is a technique used to combat aliasing in image display. When a line is drawn on a piece of paper, its edges appear smooth regardless of the line's orientation. When a line is drawn on a computer display, whether CRT- or LCD-based, the limited resolution of the display means that, unless it is precisely horizontal or vertical, the line will appear as if it is constructed in a series of "steps". This is aliasing, and the stepped effect is often known as "the jaggies". The various methods used to overcome the jaggies, and produce lines which appear to the viewer to be smooth, are referred to as antialiasing.

The simplest of these techniques is illustrated in the accompanying diagram, in which actual-size and enlarged versions of lines drawn on a computer display at an angle of 66°, with and without antialiasing, are shown. The line on the left, without antialiasing, shows the stepped effect of the jaggies. To make the line truly smooth-edged, portions of the white pixels in direct contact with the black pixels would also need to be black but, since only whole pixels can be addressed by displays, another method must be found. Fortunately, the human visual system is fairly easy to fool, and the image on the right shows how antialiasing can do this. By using shades which fall between black and white in the pixels which form the edge of the line, the stark contrast between the two extreme shades, which humans use to identify edges, is smoothed out, achieving the optical illusion that the jaggedness of the line has been reduced. Note that the text in the diagram has also been produced with and without antialiasing, and how this affects the appearance of the letters.
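The intermediate-shade idea can be sketched with the simplest antialiasing scheme, supersampling: render the scene at a higher resolution and average blocks of subpixels so that edge pixels take on fractional shades. The diagonal half-plane scene below is a made-up example, not the diagram from the text:

```python
# Sketch of antialiasing by supersampling: each display pixel is the
# average of a 4x4 block of subpixels, so pixels the edge passes
# through come out as intermediate gray shades (0 = white, 1 = black).

SCALE = 4   # subpixels per display pixel, per axis

def subpixel(x, y):
    """1 inside the dark region (above the diagonal y = x), else 0."""
    return 1 if y > x else 0

def pixel_shade(px, py):
    """Average subpixel coverage for display pixel (px, py), in 0..1."""
    total = sum(subpixel(px * SCALE + i, py * SCALE + j)
                for i in range(SCALE) for j in range(SCALE))
    return total / SCALE ** 2

# Pixels away from the edge are fully dark or fully light; the pixel
# the edge crosses gets an intermediate shade.
row = [pixel_shade(px, 1) for px in range(4)]
print(row)   # [1.0, 0.375, 0.0, 0.0]
```

The 0.375 value is exactly the "shade which falls between black and white" that the text describes for pixels forming the edge of the line.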
One of the advantages that CRT displays have over LCD displays is that, because the individual pixels are created by glowing areas of phosphor, they have no definite edges. Consequently, light from adjacent pixels tends to overlap partially, causing slight blurring: a built-in antialiasing effect. Aliasing can also occur when dealing with digital audio, where it produces the audio equivalent of the jaggies, a buzzing sound. Other antialiasing techniques have been proposed; one that is particularly interesting suggests using pixels of varying shape (Kirsch, 2010).

Compression is the name given to the suite of techniques designed to minimize the size of an image without affecting the visual content. This can involve clever adjustments of the color scale (since digital images can represent a greater range of colors and gray scales than humans can readily discern at one time), eliminating repetitive data (e.g., a block of pixels that are essentially identical), or much more subtle changes. Compression implies a loss of information

and most standard compression methods (JPEG, MPEG, ...) involve significant loss. The degree of compression can typically be adjusted, allowing a selectable tradeoff between storage size and image quality. JPEG typically achieves 10:1 compression with little perceptible loss in image quality. The JPEG 2000 standard, created in 2000, supersedes the original with a newly designed, wavelet-based method that induces fewer artifacts in the images. Nonetheless, compression generally implies loss of information, and compressed images are questionable as data sources.

Kirsch, Russell A. (2010). Precision and Accuracy in Scientific Imaging. J. Res. Natl. Inst. Stand. Technol., 115.


More information

Bias errors in PIV: the pixel locking effect revisited.

Bias errors in PIV: the pixel locking effect revisited. Bias errors in PIV: the pixel locking effect revisited. E.F.J. Overmars 1, N.G.W. Warncke, C. Poelma and J. Westerweel 1: Laboratory for Aero & Hydrodynamics, University of Technology, Delft, The Netherlands,

More information

Background. Computer Vision & Digital Image Processing. Improved Bartlane transmitted image. Example Bartlane transmitted image

Background. Computer Vision & Digital Image Processing. Improved Bartlane transmitted image. Example Bartlane transmitted image Background Computer Vision & Digital Image Processing Introduction to Digital Image Processing Interest comes from two primary backgrounds Improvement of pictorial information for human perception How

More information

APPLICATION OF COMPUTER VISION FOR DETERMINATION OF SYMMETRICAL OBJECT POSITION IN THREE DIMENSIONAL SPACE

APPLICATION OF COMPUTER VISION FOR DETERMINATION OF SYMMETRICAL OBJECT POSITION IN THREE DIMENSIONAL SPACE APPLICATION OF COMPUTER VISION FOR DETERMINATION OF SYMMETRICAL OBJECT POSITION IN THREE DIMENSIONAL SPACE Najirah Umar 1 1 Jurusan Teknik Informatika, STMIK Handayani Makassar Email : najirah_stmikh@yahoo.com

More information

USE OF HISTOGRAM EQUALIZATION IN IMAGE PROCESSING FOR IMAGE ENHANCEMENT

USE OF HISTOGRAM EQUALIZATION IN IMAGE PROCESSING FOR IMAGE ENHANCEMENT USE OF HISTOGRAM EQUALIZATION IN IMAGE PROCESSING FOR IMAGE ENHANCEMENT Sapana S. Bagade M.E,Computer Engineering, Sipna s C.O.E.T,Amravati, Amravati,India sapana.bagade@gmail.com Vijaya K. Shandilya Assistant

More information

ELEC Dr Reji Mathew Electrical Engineering UNSW

ELEC Dr Reji Mathew Electrical Engineering UNSW ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW Filter Design Circularly symmetric 2-D low-pass filter Pass-band radial frequency: ω p Stop-band radial frequency: ω s 1 δ p Pass-band tolerances: δ

More information

Digitization and fundamental techniques

Digitization and fundamental techniques Digitization and fundamental techniques Chapter 2.2-2.6 Robin Strand Centre for Image analysis Swedish University of Agricultural Sciences Uppsala University Outline Imaging Digitization Sampling Labeling

More information

On spatial resolution

On spatial resolution On spatial resolution Introduction How is spatial resolution defined? There are two main approaches in defining local spatial resolution. One method follows distinction criteria of pointlike objects (i.e.

More information

Texture characterization in DIRSIG

Texture characterization in DIRSIG Rochester Institute of Technology RIT Scholar Works Theses Thesis/Dissertation Collections 2001 Texture characterization in DIRSIG Christy Burtner Follow this and additional works at: http://scholarworks.rit.edu/theses

More information

RADIOMETRIC CALIBRATION

RADIOMETRIC CALIBRATION 1 RADIOMETRIC CALIBRATION Lecture 10 Digital Image Data 2 Digital data are matrices of digital numbers (DNs) There is one layer (or matrix) for each satellite band Each DN corresponds to one pixel 3 Digital

More information

Remote Sensing. The following figure is grey scale display of SPOT Panchromatic without stretching.

Remote Sensing. The following figure is grey scale display of SPOT Panchromatic without stretching. Remote Sensing Objectives This unit will briefly explain display of remote sensing image, geometric correction, spatial enhancement, spectral enhancement and classification of remote sensing image. At

More information

Outline. Introduction. Introduction: Film Emulsions. Sensor Systems. Types of Remote Sensing. A/Prof Linlin Ge. Photographic systems (cf(

Outline. Introduction. Introduction: Film Emulsions. Sensor Systems. Types of Remote Sensing. A/Prof Linlin Ge. Photographic systems (cf( GMAT x600 Remote Sensing / Earth Observation Types of Sensor Systems (1) Outline Image Sensor Systems (i) Line Scanning Sensor Systems (passive) (ii) Array Sensor Systems (passive) (iii) Antenna Radar

More information

Sampling and Reconstruction

Sampling and Reconstruction Sampling and reconstruction COMP 575/COMP 770 Fall 2010 Stephen J. Guy 1 Review What is Computer Graphics? Computer graphics: The study of creating, manipulating, and using visual images in the computer.

More information

Physics 23 Laboratory Spring 1987

Physics 23 Laboratory Spring 1987 Physics 23 Laboratory Spring 1987 DIFFRACTION AND FOURIER OPTICS Introduction This laboratory is a study of diffraction and an introduction to the concepts of Fourier optics and spatial filtering. The

More information

Digital Imaging Rochester Institute of Technology

Digital Imaging Rochester Institute of Technology Digital Imaging 1999 Rochester Institute of Technology So Far... camera AgX film processing image AgX photographic film captures image formed by the optical elements (lens). Unfortunately, the processing

More information

Resampling in hyperspectral cameras as an alternative to correcting keystone in hardware, with focus on benefits for optical design and data quality

Resampling in hyperspectral cameras as an alternative to correcting keystone in hardware, with focus on benefits for optical design and data quality Resampling in hyperspectral cameras as an alternative to correcting keystone in hardware, with focus on benefits for optical design and data quality Andrei Fridman Gudrun Høye Trond Løke Optical Engineering

More information

Unit 8: Color Image Processing

Unit 8: Color Image Processing Unit 8: Color Image Processing Colour Fundamentals In 666 Sir Isaac Newton discovered that when a beam of sunlight passes through a glass prism, the emerging beam is split into a spectrum of colours The

More information

MULTIMEDIA SYSTEMS

MULTIMEDIA SYSTEMS 1 Department of Computer Engineering, Faculty of Engineering King Mongkut s Institute of Technology Ladkrabang 01076531 MULTIMEDIA SYSTEMS Pk Pakorn Watanachaturaporn, Wt ht Ph.D. PhD pakorn@live.kmitl.ac.th,

More information

Remote Sensing Platforms

Remote Sensing Platforms Types of Platforms Lighter-than-air Remote Sensing Platforms Free floating balloons Restricted by atmospheric conditions Used to acquire meteorological/atmospheric data Blimps/dirigibles Major role - news

More information

Atmospheric interactions; Aerial Photography; Imaging systems; Intro to Spectroscopy Week #3: September 12, 2018

Atmospheric interactions; Aerial Photography; Imaging systems; Intro to Spectroscopy Week #3: September 12, 2018 GEOL 1460/2461 Ramsey Introduction/Advanced Remote Sensing Fall, 2018 Atmospheric interactions; Aerial Photography; Imaging systems; Intro to Spectroscopy Week #3: September 12, 2018 I. Quick Review from

More information

DESIGN NOTE: DIFFRACTION EFFECTS

DESIGN NOTE: DIFFRACTION EFFECTS NASA IRTF / UNIVERSITY OF HAWAII Document #: TMP-1.3.4.2-00-X.doc Template created on: 15 March 2009 Last Modified on: 5 April 2010 DESIGN NOTE: DIFFRACTION EFFECTS Original Author: John Rayner NASA Infrared

More information

MULTIMEDIA SYSTEMS

MULTIMEDIA SYSTEMS 1 Department of Computer Engineering, g, Faculty of Engineering King Mongkut s Institute of Technology Ladkrabang 01076531 MULTIMEDIA SYSTEMS Pakorn Watanachaturaporn, Ph.D. pakorn@live.kmitl.ac.th, pwatanac@gmail.com

More information

Exp No.(8) Fourier optics Optical filtering

Exp No.(8) Fourier optics Optical filtering Exp No.(8) Fourier optics Optical filtering Fig. 1a: Experimental set-up for Fourier optics (4f set-up). Related topics: Fourier transforms, lenses, Fraunhofer diffraction, index of refraction, Huygens

More information

Mod. 2 p. 1. Prof. Dr. Christoph Kleinn Institut für Waldinventur und Waldwachstum Arbeitsbereich Fernerkundung und Waldinventur

Mod. 2 p. 1. Prof. Dr. Christoph Kleinn Institut für Waldinventur und Waldwachstum Arbeitsbereich Fernerkundung und Waldinventur Histograms of gray values for TM bands 1-7 for the example image - Band 4 and 5 show more differentiation than the others (contrast=the ratio of brightest to darkest areas of a landscape). - Judging from

More information

Compression and Image Formats

Compression and Image Formats Compression Compression and Image Formats Reduce amount of data used to represent an image/video Bit rate and quality requirements Necessary to facilitate transmission and storage Required quality is application

More information

Introduction to Remote Sensing Part 1

Introduction to Remote Sensing Part 1 Introduction to Remote Sensing Part 1 A Primer on Electromagnetic Radiation Digital, Multi-Spectral Imagery The 4 Resolutions Displaying Images Corrections and Enhancements Passive vs. Active Sensors Radar

More information

The Scientist and Engineer's Guide to Digital Signal Processing By Steven W. Smith, Ph.D.

The Scientist and Engineer's Guide to Digital Signal Processing By Steven W. Smith, Ph.D. The Scientist and Engineer's Guide to Digital Signal Processing By Steven W. Smith, Ph.D. Home The Book by Chapters About the Book Steven W. Smith Blog Contact Book Search Download this chapter in PDF

More information

Digital Image Processing. Lecture # 6 Corner Detection & Color Processing

Digital Image Processing. Lecture # 6 Corner Detection & Color Processing Digital Image Processing Lecture # 6 Corner Detection & Color Processing 1 Corners Corners (interest points) Unlike edges, corners (patches of pixels surrounding the corner) do not necessarily correspond

More information

Diffraction. Interference with more than 2 beams. Diffraction gratings. Diffraction by an aperture. Diffraction of a laser beam

Diffraction. Interference with more than 2 beams. Diffraction gratings. Diffraction by an aperture. Diffraction of a laser beam Diffraction Interference with more than 2 beams 3, 4, 5 beams Large number of beams Diffraction gratings Equation Uses Diffraction by an aperture Huygen s principle again, Fresnel zones, Arago s spot Qualitative

More information

The information carrying capacity of a channel

The information carrying capacity of a channel Chapter 8 The information carrying capacity of a channel 8.1 Signals look like noise! One of the most important practical questions which arises when we are designing and using an information transmission

More information

(Refer Slide Time: 1:28)

(Refer Slide Time: 1:28) Introduction to Remote Sensing Dr. Arun K Saraf Department of Earth Sciences Indian Institute of Technology Roorkee Lecture 10 Image characteristics and different resolutions in Remote Sensing Hello everyone,

More information

Images and Graphics. 4. Images and Graphics - Copyright Denis Hamelin - Ryerson University

Images and Graphics. 4. Images and Graphics - Copyright Denis Hamelin - Ryerson University Images and Graphics Images and Graphics Graphics and images are non-textual information that can be displayed and printed. Graphics (vector graphics) are an assemblage of lines, curves or circles with

More information

Chapter 2: Digital Image Fundamentals. Digital image processing is based on. Mathematical and probabilistic models Human intuition and analysis

Chapter 2: Digital Image Fundamentals. Digital image processing is based on. Mathematical and probabilistic models Human intuition and analysis Chapter 2: Digital Image Fundamentals Digital image processing is based on Mathematical and probabilistic models Human intuition and analysis 2.1 Visual Perception How images are formed in the eye? Eye

More information

Chapter 3: Assorted notions: navigational plots, and the measurement of areas and non-linear distances

Chapter 3: Assorted notions: navigational plots, and the measurement of areas and non-linear distances : navigational plots, and the measurement of areas and non-linear distances Introduction Before we leave the basic elements of maps to explore other topics it will be useful to consider briefly two further

More information

Be aware that there is no universal notation for the various quantities.

Be aware that there is no universal notation for the various quantities. Fourier Optics v2.4 Ray tracing is limited in its ability to describe optics because it ignores the wave properties of light. Diffraction is needed to explain image spatial resolution and contrast and

More information

Image Filtering. Median Filtering

Image Filtering. Median Filtering Image Filtering Image filtering is used to: Remove noise Sharpen contrast Highlight contours Detect edges Other uses? Image filters can be classified as linear or nonlinear. Linear filters are also know

More information

Edge-Raggedness Evaluation Using Slanted-Edge Analysis

Edge-Raggedness Evaluation Using Slanted-Edge Analysis Edge-Raggedness Evaluation Using Slanted-Edge Analysis Peter D. Burns Eastman Kodak Company, Rochester, NY USA 14650-1925 ABSTRACT The standard ISO 12233 method for the measurement of spatial frequency

More information

Single, Double And N-Slit Diffraction. B.Tech I

Single, Double And N-Slit Diffraction. B.Tech I Single, Double And N-Slit Diffraction B.Tech I Diffraction by a Single Slit or Disk If light is a wave, it will diffract around a single slit or obstacle. Diffraction by a Single Slit or Disk The resulting

More information

Scanning Archival Images

Scanning Archival Images Scanning Archival Images A Guide for Community Heritage Projects A Project of the Gimli Municipal Heritage Advisory Committee Scanning Archival Images A Guide for Community Heritage Projects THIS GUIDE

More information

Robert B.Hallock Draft revised April 11, 2006 finalpaper2.doc

Robert B.Hallock Draft revised April 11, 2006 finalpaper2.doc How to Optimize the Sharpness of Your Photographic Prints: Part II - Practical Limits to Sharpness in Photography and a Useful Chart to Deteremine the Optimal f-stop. Robert B.Hallock hallock@physics.umass.edu

More information

Introduction to Visual Perception & the EM Spectrum

Introduction to Visual Perception & the EM Spectrum , Winter 2005 Digital Image Fundamentals: Visual Perception & the EM Spectrum, Image Acquisition, Sampling & Quantization Monday, September 19 2004 Overview (1): Review Some questions to consider Elements

More information

Review. Introduction to Visual Perception & the EM Spectrum. Overview (1):

Review. Introduction to Visual Perception & the EM Spectrum. Overview (1): Overview (1): Review Some questions to consider Winter 2005 Digital Image Fundamentals: Visual Perception & the EM Spectrum, Image Acquisition, Sampling & Quantization Tuesday, January 17 2006 Elements

More information

MODULE 4 LECTURE NOTES 4 DENSITY SLICING, THRESHOLDING, IHS, TIME COMPOSITE AND SYNERGIC IMAGES

MODULE 4 LECTURE NOTES 4 DENSITY SLICING, THRESHOLDING, IHS, TIME COMPOSITE AND SYNERGIC IMAGES MODULE 4 LECTURE NOTES 4 DENSITY SLICING, THRESHOLDING, IHS, TIME COMPOSITE AND SYNERGIC IMAGES 1. Introduction Digital image processing involves manipulation and interpretation of the digital images so

More information

digital film technology Resolution Matters what's in a pattern white paper standing the test of time

digital film technology Resolution Matters what's in a pattern white paper standing the test of time digital film technology Resolution Matters what's in a pattern white paper standing the test of time standing the test of time An introduction >>> Film archives are of great historical importance as they

More information

(Refer Slide Time: 3:11)

(Refer Slide Time: 3:11) Digital Communication. Professor Surendra Prasad. Department of Electrical Engineering. Indian Institute of Technology, Delhi. Lecture-2. Digital Representation of Analog Signals: Delta Modulation. Professor:

More information