Special Imaging Techniques


CHAPTER 25
Special Imaging Techniques

This chapter presents four specific aspects of image processing. First, ways to characterize the spatial resolution are discussed. This describes the minimum size an object must be to be seen in an image. Second, the signal-to-noise ratio is examined, explaining how faint an object can be and still be detected. Third, morphological techniques are introduced. These are nonlinear operations used to manipulate binary images (where each pixel is either black or white). Fourth, the remarkable technique of computed tomography is described. This has revolutionized medical diagnosis by providing detailed images of the interior of the human body.

Spatial Resolution

Suppose we want to compare two imaging systems, with the goal of determining which has the better spatial resolution. In other words, we want to know which system can detect the smallest object. To simplify things, we would like the answer to be a single number for each system. This allows a direct comparison upon which to base design decisions. Unfortunately, a single parameter is not always sufficient to characterize all the subtle aspects of imaging. This is complicated by the fact that spatial resolution is limited by two distinct but interrelated effects: sample spacing and sampling aperture size. This section contains two main topics: (1) how a single parameter can best be used to characterize spatial resolution, and (2) the relationship between sample spacing and sampling aperture size.

Figure 25-1a shows profiles from three circularly symmetric PSFs: the pillbox, the Gaussian, and the exponential. These are representative of the PSFs commonly found in imaging systems. As described in the last chapter, the pillbox can result from an improperly focused lens system. Likewise, the Gaussian is formed when random errors are combined, such as viewing stars through a turbulent atmosphere.
An exponential PSF is generated when electrons or x-rays strike a phosphor layer and are converted into light. This is used in radiation detectors, night vision light amplifiers, and CRT displays. The exact shape of these three PSFs is not important for this discussion, only that they broadly represent the PSFs seen in real world applications.

The Scientist and Engineer's Guide to Digital Signal Processing

FIGURE 25-1 FWHM versus MTF. Figure (a) shows profiles of three PSFs commonly found in imaging systems: (P) pillbox, (G) Gaussian, and (E) exponential. Each of these has a FWHM of one unit. The corresponding MTFs are shown in (b). Unfortunately, similar values of FWHM do not correspond to similar MTF curves.

The PSF contains complete information about the spatial resolution. To express the spatial resolution by a single number, we can ignore the shape of the PSF and simply measure its width. The most common way to specify this is by the Full-Width-at-Half-Maximum (FWHM) value. For example, all the PSFs in (a) have an FWHM of 1 unit. Unfortunately, this method has two significant drawbacks. First, it does not match other measures of spatial resolution, including the subjective judgement of observers viewing the images. Second, it is usually very difficult to directly measure the PSF. Imagine feeding an impulse into an imaging system; that is, taking an image of a very small white dot on a black background. By definition, the acquired image will be the PSF of the system. The problem is, the measured PSF will only contain a few pixels, and its contrast will be low. Unless you are very careful, random noise will swamp the measurement. For instance, imagine that the impulse image is an array of all zeros except for a single pixel having a value of 255. Now compare this to a normal image where all of the pixels have an average value of about 128. In loose terms, the signal in the impulse image is about 100,000 times weaker than a normal image.
No wonder the signal-to-noise ratio will be bad; there's hardly any signal! A basic theme throughout this book is that signals should be understood in the domain where the information is encoded. For instance, audio signals should be dealt with in the frequency domain, while image signals should be handled in the spatial domain. In spite of this, one way to measure image resolution is by looking at the frequency response. This goes against the fundamental philosophy of this book; however, it is a common method and you need to become familiar with it.

Taking the two-dimensional Fourier transform of the PSF provides the two-dimensional frequency response. If the PSF is circularly symmetric, its frequency response will also be circularly symmetric. In this case, complete information about the frequency response is contained in its profile. That is, after calculating the frequency domain via the FFT method, columns 0 to N/2 in row 0 are all that is needed. In imaging jargon, this display of the frequency response is called the Modulation Transfer Function (MTF). Figure 25-1b shows the MTFs for the three PSFs in (a). In cases where the PSF is not circularly symmetric, the entire two-dimensional frequency response contains information. However, it is usually sufficient to know the MTF curves in the vertical and horizontal directions (i.e., columns 0 to N/2 in row 0, and rows 0 to N/2 in column 0). Take note: this procedure of extracting a row or column from the two-dimensional frequency spectrum is not equivalent to taking the one-dimensional FFT of the profiles shown in (a). We will come back to this issue shortly. As shown in Fig. 25-1, similar values of FWHM do not correspond to similar MTF curves.

Figure 25-2 shows a line pair gauge, a device used to measure image resolution via the MTF. Line pair gauges come in different forms depending on the particular application. For example, the black and white pattern shown in this figure could be directly used to test video cameras. For an x-ray imaging system, the ribs might be made from lead, with an x-ray transparent material between. The key feature is that the black and white lines have a closer spacing toward one end. When an image is taken of a line pair gauge, the lines at the closely spaced end will be blurred together, while at the other end they will be distinct. Somewhere in the middle the lines will be just barely separable. An observer looks at the image, identifies this location, and reads the corresponding resolution on the calibrated scale.

FIGURE 25-2 Line pair gauge. The line pair gauge is a tool used to measure the resolution of imaging systems. A series of black and white ribs move together, creating a continuum of spatial frequencies. The resolution of a system is taken as the frequency where the eye can no longer distinguish the individual ribs. This example line pair gauge is shown several times larger than the calibrated scale indicates. (Panels: a. Example profile at 12 lp/mm; b. Example profile at 3 lp/mm.)
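The profile-extraction step described above (take the 2-D FFT of a circularly symmetric PSF, then read the MTF from columns 0 to N/2 of row 0) can be sketched in a few lines of Python. The Gaussian PSF and its width here are arbitrary stand-ins for a measured PSF:

```python
import numpy as np

# Illustrative Gaussian PSF standing in for a measured, circularly
# symmetric PSF; the size and width are arbitrary.
N = 64
y, x = np.mgrid[0:N, 0:N]
psf = np.exp(-((x - N//2)**2 + (y - N//2)**2) / (2 * 4.0**2))
psf /= psf.sum()                     # normalize so the MTF starts at 1.0

# Two-dimensional frequency response, then the MTF profile:
# columns 0 to N/2 in row 0 of the magnitude spectrum.
spectrum = np.abs(np.fft.fft2(psf))
mtf = spectrum[0, :N//2 + 1]

print(round(mtf[0], 6))              # 1.0 at zero spatial frequency
```

For a non-symmetric PSF, the vertical curve would be read the same way from rows 0 to N/2 of column 0 (`spectrum[:N//2 + 1, 0]`).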

The way that the ribs blur together is important in understanding the limitations of this measurement. Imagine acquiring an image of the line pair gauge in Fig. 25-2. Figures (a) and (b) show examples of the profiles at high and low spatial frequencies. At the low frequency, shown in (b), the curve is flat on the top and bottom, but the edges are blurred. At the higher spatial frequency, (a), the amplitude of the modulation has been reduced. This is exactly what the MTF curve in Fig. 25-1b describes: higher spatial frequencies are reduced in amplitude. The individual ribs will be distinguishable in the image as long as the amplitude is greater than about 3% to 10% of the original height. This is related to the eye's ability to distinguish the low contrast difference between the peaks and valleys in the presence of image noise.

A strong advantage of the line pair gauge measurement is that it is simple and fast. The strongest disadvantage is that it relies on the human eye, and therefore has a certain subjective component. Even if the entire MTF curve is measured, the most common way to express the system resolution is to quote the frequency where the MTF is reduced to either 3%, 5% or 10%. Unfortunately, you will not always be told which of these values is being used; product data sheets frequently use vague terms such as "limiting resolution." Since manufacturers like their specifications to be as good as possible (regardless of what the device actually does), be safe and interpret these ambiguous terms to mean 3% on the MTF curve.

A subtle point to notice is that the MTF is defined in terms of sine waves, while the line pair gauge uses square waves. That is, the ribs are uniformly dark regions separated by uniformly light regions. This is done for manufacturing convenience; it is very difficult to make lines that have a sinusoidally varying darkness. What are the consequences of using a square wave to measure the MTF? At high spatial frequencies, all frequency components but the fundamental of the square wave have been removed. This causes the modulation to appear sinusoidal, such as is shown in Fig. 25-2a. At low frequencies, such as shown in Fig. 25-2b, the wave appears square. The fundamental sine wave contained in a square wave has an amplitude of 4/π ≈ 1.27 times the amplitude of the square wave (see Table 13-10). The result: the line pair gauge provides a slight overestimate of the true resolution of the system, by starting with an effective amplitude of more than pure black to pure white. Interesting, but almost always ignored.

Since square waves and sine waves are used interchangeably to measure the MTF, a special terminology has arisen. Instead of the word "cycle," those in imaging use the term line pair (a dark line next to a light line). For example, a spatial frequency would be referred to as 25 line pairs per millimeter, instead of 25 cycles per millimeter.

The width of the PSF doesn't track well with human perception and is difficult to measure. The MTF methods are in the wrong domain for understanding how resolution affects the encoded information. Is there a more favorable alternative? The answer is yes, the line spread function (LSF) and the edge response. As shown in Fig. 25-3, the line spread

function is the response of the system to a thin line across the image. Similarly, the edge response is how the system responds to a sharp straight discontinuity (an edge). Since a line is the derivative (or first difference) of an edge, the LSF is the derivative (or first difference) of the edge response.

FIGURE 25-3 Line spread function and edge response. The line spread function (LSF) is the derivative of the edge response. The width of the LSF is usually expressed as the Full-Width-at-Half-Maximum (FWHM). The width of the edge response is usually quoted by the 10% to 90% distance.

The single parameter measurement used here is the distance required for the edge response to rise from 10% to 90%. There are many advantages to using the edge response for measuring resolution. First, the measurement is in the same form as the image information is encoded. In fact, the main reason for wanting to know the resolution of a system is to understand how the edges in an image are blurred. The second advantage is that the edge response is simple to measure because edges are easy to generate in images. If needed, the LSF can easily be found by taking the first difference of the edge response. The third advantage is that all common edge responses have a similar shape, even though they may originate from drastically different PSFs. This is shown in Fig. 25-4a, where the edge responses of the pillbox, Gaussian, and exponential PSFs are displayed. Since the shapes are similar, the 10%-90% distance is an excellent single parameter measure of resolution. The fourth advantage is that the MTF can be directly found by taking the one-dimensional FFT of the LSF (unlike the PSF to MTF calculation that must use a two-dimensional Fourier transform). Figure 25-4b shows the MTFs corresponding to the edge responses of (a). In other words, the curves in (a) are converted into the curves in (b) by taking the first difference (to find the LSF), and then taking the FFT.
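This edge-to-MTF procedure is short enough to sketch in Python. The Gaussian-blurred edge below is a synthetic stand-in for a measured edge profile (the width of 3 pixels is an arbitrary illustrative value):

```python
import numpy as np

# Synthetic edge response: a unit step blurred by a Gaussian
# (sigma = 3 pixels, an arbitrary illustrative value).
x = np.arange(-128, 128)
lsf_true = np.exp(-x**2 / (2 * 3.0**2))
edge = np.cumsum(lsf_true)
edge /= edge[-1]                       # normalized edge response, 0 to 1

# First difference of the edge response recovers the LSF...
lsf = np.diff(edge)

# ...and a one-dimensional FFT of the LSF gives the MTF.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                          # normalize to 1.0 at zero frequency

# Single-parameter resolution: the 10% to 90% rise distance.
i10 = np.searchsorted(edge, 0.10)
i90 = np.searchsorted(edge, 0.90)
print(i90 - i10, "pixels")             # about 2.56 * sigma for a Gaussian
```

In practice the edge profile would come from averaging many rows of an image of a sharp straight edge, not from a synthetic curve.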

FIGURE 25-4 Edge response and MTF. Figure (a) shows the edge responses of three PSFs: (P) pillbox, (G) Gaussian, and (E) exponential. Each edge response has a 10% to 90% rise distance of 1 unit. Figure (b) shows the corresponding MTF curves, which are similar above the 10% level. Limiting resolution is a vague term indicating the frequency where the MTF has an amplitude of 3% to 10%.

The fifth advantage is that similar edge responses have similar MTF curves, as shown in Figs. 25-4a and 25-4b. This allows us to easily convert between the two measurements. In particular, a system that has a 10%-90% edge response of x distance, has a limiting resolution (10% contrast) of about 1 line pair per x distance. The units of the "distance" will depend on the type of system being dealt with. For example, consider three different imaging systems that have 10%-90% edge responses of 0.05 mm, 0.2 milliradian and 3.3 pixels. The 10% contrast level on the corresponding MTF curves will occur at about: 20 lp/mm, 5 lp/milliradian and 0.3 lp/pixel, respectively.

Figure 25-5 illustrates the mathematical relationship between the PSF and the LSF. Figure (a) shows a pillbox PSF, a circular area of value 1, displayed as white, surrounded by a region of all zeros, displayed as gray. A profile of the PSF (i.e., the pixel values along a line drawn across the center of the image) will be a rectangular pulse. Figure (b) shows the corresponding LSF. As shown, the LSF is mathematically equal to the integrated profile of the PSF. This is found by sweeping across the image in some direction, as illustrated by the rays (arrows). Each value in the integrated profile is the sum of the pixel values along the corresponding ray.
In this example where the rays are vertical, each point in the integrated profile is found by adding all the pixel values in each column. This corresponds to the LSF of a line that is vertical in the image. The LSF of a line that is horizontal in the image is found by summing all of the pixel values in each row. For continuous images these concepts are the same, but the summations are replaced by integrals. As shown in this example, the LSF can be directly calculated from the PSF. However, the PSF cannot always be calculated from the LSF. This is because the PSF contains information about the spatial resolution in all directions, while the LSF is limited to only one specific direction. A system

has only one PSF, but an infinite number of LSFs, one for each angle. For example, imagine a system that has an oblong PSF. This makes the spatial resolution different in the vertical and horizontal directions, resulting in the LSF being different in these directions. Measuring the LSF at a single angle does not provide enough information to calculate the complete PSF except in the special instance where the PSF is circularly symmetric. Multiple LSF measurements at various angles make it possible to calculate a non-circular PSF; however, the mathematics is quite involved and usually not worth the effort. In fact, the problem of calculating the PSF from a number of LSF measurements is exactly the same problem faced in computed tomography, discussed later in this chapter.

FIGURE 25-5 Relationship between the PSF and LSF. A pillbox PSF is shown in (a). Any row or column through the white center will be a rectangular pulse. Figure (b) shows the corresponding LSF, equivalent to an integrated profile of the PSF. That is, the LSF is found by sweeping across the image in some direction and adding (integrating) the pixel values along each ray. In the direction shown, this is done by adding all the pixels in each column. (Panels: a. Point Spread Function; b. "Integrated" profile of the PSF, the LSF.)

As a practical matter, the LSF and the PSF are not dramatically different for most imaging systems, and it is very common to see one used as an approximation for the other. This is even more justifiable considering that there are two common cases where they are identical: the rectangular PSF has a rectangular LSF (with the same widths), and the Gaussian PSF has a Gaussian LSF (with the same standard deviations).

These concepts can be summarized into two skills: how to evaluate a resolution specification presented to you, and how to measure a resolution specification of your own.
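The integrated-profile relationship of Fig. 25-5 amounts to summing the PSF along rows or columns; a minimal sketch with a pillbox PSF (the image size and radius are arbitrary):

```python
import numpy as np

# Pillbox PSF: a circular region of 1s surrounded by 0s.
N = 65
y, x = np.mgrid[0:N, 0:N]
psf = (((x - N//2)**2 + (y - N//2)**2) <= 20**2).astype(float)

# LSF of a vertical line: sweep vertically, adding the pixel values
# in each column. For a horizontal line, sum along each row instead.
lsf_vertical = psf.sum(axis=0)
lsf_horizontal = psf.sum(axis=1)

# A profile through the center is a rectangular pulse, but the
# integrated profile (the LSF) is not: each value is the chord
# length of the circle at that position.
print(psf[N//2, N//2], lsf_vertical[N//2])   # prints: 1.0 41.0
```

By the circular symmetry of the pillbox, the vertical and horizontal LSFs here are identical; for an oblong PSF they would differ, as the text describes.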
Suppose you come across an advertisement stating: "This system will resolve 40 line pairs per millimeter." You should interpret this to mean: "A sinusoid of 40 lp/mm will have its amplitude reduced to 3%-10% of its true value, and will be just barely visible in the image." You should also do the mental calculation that 40 lp/mm at 10% contrast is equal to a 10%-90% edge response of 1/(40 lp/mm) = 0.025 mm. If the MTF specification is for a 3% contrast level, the edge response will be about 1.5 to 2 times wider. When you measure the spatial resolution of an imaging system, the steps are carried out in reverse. Place a sharp edge in the image, and measure the

resulting edge response. The 10%-90% distance of this curve is the best single parameter measurement of the system's resolution. To keep your boss and the marketing people happy, take the first difference of the edge response to find the LSF, and then use the FFT to find the MTF.

Sample Spacing and Sampling Aperture

Figure 25-6 shows two extreme examples of sampling, which we will call a perfect detector and a blurry detector. Imagine (a) being the surface of an imaging detector, such as a CCD. Light striking anywhere inside one of the square pixels will contribute only to that pixel value, and no others. This is shown in the figure by the black sampling aperture exactly filling one of the square pixels. This is an optimal situation for an image detector, because all of the light is detected, and there is no overlap or crosstalk between adjacent pixels. In other words, the sampling aperture is exactly equal to the sample spacing.

The alternative example is portrayed in (e). The sampling aperture is considerably larger than the sample spacing, and it follows a Gaussian distribution. In other words, each pixel in the detector receives a contribution from light striking the detector in a region around the pixel. This should sound familiar, because it is the output side viewpoint of convolution. From the corresponding input side viewpoint, a narrow beam of light striking the detector would contribute to the value of several neighboring pixels, also according to the Gaussian distribution.

Now turn your attention to the edge responses of the two examples. The markers in each graph indicate the actual pixel values you would find in an image, while the connecting lines show the underlying curve that is being sampled. An important concept is that the shape of this underlying curve is determined only by the sampling aperture. This means that the resolution in the final image can be limited in two ways.
First, the underlying curve may have poor resolution, resulting from the sampling aperture being too large. Second, the sample spacing may be too large, resulting in small details being lost between the samples. Two edge response curves are presented for each example, illustrating that the actual samples can fall anywhere along the underlying curve. In other words, the edge being imaged may be sitting exactly upon a pixel, or be straddling two pixels. Notice that the perfect detector has zero or one sample on the rising part of the edge. Likewise, the blurry detector has three to four samples on the rising part of the edge. What is limiting the resolution in these two systems? The answer is provided by the sampling theorem. As discussed in Chapter 3, sampling captures all frequency components below one-half of the sampling rate, while higher frequencies are lost due to aliasing. Now look at the MTF curve in (h). The sampling aperture of the blurry detector has removed all frequencies greater than one-half the sampling rate; therefore, nothing is lost during sampling. This means that the resolution of this system is

FIGURE 25-6 Two extreme examples of sampling. Example 1, a perfect detector: (a) sampling grid with square aperture, (b) and (c) edge responses, (d) MTF. Example 2, a blurry detector: (e) sampling grid with Gaussian aperture, (f) and (g) edge responses, (h) MTF.

completely limited by the sampling aperture, and not the sample spacing. Put another way, the sampling aperture has acted as an antialias filter, allowing lossless sampling to take place. In comparison, the MTF curve in (d) shows that both processes are limiting the resolution of this system. The high-frequency fall-off of the MTF curve represents information lost due to the sampling aperture. Since the MTF curve has not dropped to zero before a frequency of 0.5, there is also information lost during sampling, a result of the finite sample spacing. Which is limiting the resolution more? It is difficult to answer this question with a number, since they degrade the image in different ways. Suffice it to say that the resolution in the perfect detector (example 1) is mostly limited by the sample spacing.

While these concepts may seem difficult, they reduce to a very simple rule for practical usage. Consider a system with some 10%-90% edge response distance, for example 1 mm. If the sample spacing is greater than 1 mm (there is less than one sample along the edge), the system will be limited by the sample spacing. If the sample spacing is less than 0.33 mm (there are more than 3 samples along the edge), the resolution will be limited by the sampling aperture. When a system has 1-3 samples per edge, it will be limited by both factors.

Signal-to-Noise Ratio

An object is visible in an image because it has a different brightness than its surroundings. That is, the contrast of the object (i.e., the signal) must overcome the image noise. This can be broken into two classes: limitations of the eye, and limitations of the data. Figure 25-7 illustrates an experiment to measure the eye's ability to detect weak signals. Depending on the observation conditions, the human eye can detect a minimum contrast of 0.5% to 5%.
In other words, humans can distinguish about 20 to 200 shades of gray between the blackest black and the whitest white. The exact number depends on a variety of factors, such as the brightness of the ambient lighting, the distance between the two regions being compared, and how the grayscale image is formed (video monitor, photograph, halftone, etc.).

FIGURE 25-7 Contrast detection. The human eye can detect a minimum contrast of about 0.5% to 5%, depending on the observation conditions. 100% contrast is the difference between pure black and pure white.

The grayscale transform of Chapter 23 can be used to boost the contrast of a selected range of pixel values, providing a valuable tool in overcoming the limitations of the human eye. The contrast at one brightness level is increased, at the cost of reducing the contrast at another brightness level. However, this only works when the contrast of the object is not lost in random image noise. This is a more serious situation; the signal does not contain enough information to reveal the object, regardless of the performance of the eye.

FIGURE 25-8 Minimum detectable SNR. An object is visible in an image only if its contrast is large enough to overcome the random image noise. In this example, the three squares have SNRs of 2.0, 1.0 and 0.5 (where the SNR is defined as the contrast of the object divided by the standard deviation of the noise).

Figure 25-8 shows an image with three squares having contrasts of 5%, 10%, and 20%. The background contains normally distributed random noise with a standard deviation of about 10% contrast. The SNR is defined as the contrast divided by the standard deviation of the noise, resulting in the three squares having SNRs of 0.5, 1.0 and 2.0. In general, trouble begins when the SNR falls below about 1.0.
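This SNR definition is easy to check numerically. A sketch with illustrative values (a background at level 128 with noise standard deviation 10, mimicking the 10%-contrast noise described above); the 3×3 averaging at the end anticipates the smoothing argument in the following paragraph:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# SNR as defined here: object contrast divided by the standard
# deviation of the random image noise. All values are illustrative.
rng = np.random.default_rng(0)
noise_std = 10.0
background = rng.normal(128.0, noise_std, (256, 256))

for contrast in (5.0, 10.0, 20.0):
    print(contrast / noise_std)        # SNRs of 0.5, 1.0 and 2.0

# Averaging over a 3x3 neighborhood leaves the contrast unchanged but
# cuts the noise standard deviation by about sqrt(9) = 3.
smoothed = sliding_window_view(background, (3, 3)).mean(axis=(-2, -1))
print(round(background.std() / smoothed.std()))   # about 3
```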

The exact value for the minimum detectable SNR depends on the size of the object; the larger the object, the easier it is to detect. To understand this, imagine smoothing the image in Fig. 25-8 with a 3×3 square filter kernel. This leaves the contrast the same, but reduces the noise by a factor of three (i.e., the square root of the number of pixels in the kernel). Since the SNR is tripled, lower contrast objects can be seen. To see fainter objects, the filter kernel can be made even larger. For example, a 5×5 kernel improves the SNR by a factor of √25 = 5. This strategy can be continued until the filter kernel is equal to the size of the object being detected. This means the ability to detect an object is proportional to the square-root of its area. If an object's diameter is doubled, it can be detected in twice as much noise.

Visual processing in the brain behaves in much the same way, smoothing the viewed image with various size filter kernels in an attempt to recognize low contrast objects. The three profiles in Fig. 25-8 illustrate just how good humans are at detecting objects in noisy environments. Even though the objects can hardly be identified in the profiles, they are obvious in the image. To really appreciate the capabilities of the human visual system, try writing algorithms that operate in this low SNR environment. You'll be humbled by what your brain can do, but your code can't!

Random image noise comes in two common forms. The first type, shown in Fig. 25-9a, has a constant amplitude. In other words, dark and light regions in the image are equally noisy. In comparison, (b) illustrates noise that increases with the signal level, resulting in the bright areas being more noisy than the dark ones. Both sources of noise are present in most images, but one or the other is usually dominant.
For example, it is common for the noise to decrease as the signal level is decreased, until a plateau of constant amplitude noise is reached.

A common source of constant amplitude noise is the video preamplifier. All analog electronic circuits produce noise. However, it does the most harm where the signal being amplified is at its smallest, right at the CCD or other imaging sensor. Preamplifier noise originates from the random motion of electrons in the transistors. This makes the noise level depend on how the electronics are designed, but not on the level of the signal being amplified. For example, a typical CCD camera will have an SNR of about 300 to 1000 (50 to 60 dB), defined as the full scale signal level divided by the standard deviation of the constant amplitude noise.

Noise that increases with the signal level results when the image has been represented by a small number of individual particles. For example, this might be the x-rays passing through a patient, the light photons entering a camera, or the electrons in the well of a CCD. The mathematics governing these variations are called counting statistics or Poisson statistics. Suppose that the face of a CCD is uniformly illuminated such that an average of 10,000 electrons are generated in each well. By sheer chance, some wells will have more electrons, while some will have less. To be more exact, the number of electrons will be normally distributed with a mean of 10,000, with some standard deviation that describes how much variation there is from

well-to-well.

FIGURE 25-9 Image noise. Random noise in images takes two general forms. In (a), constant amplitude noise: the amplitude of the noise remains constant as the signal level changes. This is typical of electronic noise. In (b), noise dependent on signal level: the amplitude of the noise increases as the square-root of the signal level. This type of noise originates from the detection of a small number of particles, such as light photons, electrons, or x-rays.

A key feature of Poisson statistics is that the standard deviation is equal to the square-root of the number of individual particles. That is, if there are N particles in each pixel, the mean is equal to N and the standard deviation is equal to √N. This makes the signal-to-noise ratio equal to N/√N, or simply, √N. In equation form:

EQUATION 25-1
Poisson statistics. In a Poisson distributed signal, the mean, µ, is the average number of individual particles, N. The standard deviation, σ, is equal to the square-root of the average number of individual particles. The signal-to-noise ratio (SNR) is the mean divided by the standard deviation.

    µ = N        σ = √N        SNR = √N

In the CCD example, the standard deviation is √10,000 = 100. Likewise the signal-to-noise ratio is also √10,000 = 100. If the average number of electrons per well is increased to one million, both the standard deviation and the SNR increase to 1,000. That is, the noise becomes larger as the signal becomes

larger, as shown in Fig. 25-9b. However, the signal is becoming larger faster than the noise, resulting in an overall improvement in the SNR. Don't be confused into thinking that a lower signal will provide less noise and therefore better information. Remember, your goal is not to reduce the noise, but to extract a signal from the noise. This makes the SNR the key parameter.

Many imaging systems operate by converting one particle type to another. For example, consider what happens in a medical x-ray imaging system. Within an x-ray tube, electrons strike a metal target, producing x-rays. After passing through the patient, the x-rays strike a vacuum tube detector known as an image intensifier. Here the x-rays are subsequently converted into light photons, then electrons, and then back to light photons. These light photons enter the camera where they are converted into electrons in the well of a CCD. In each of these intermediate forms, the image is represented by a finite number of particles, resulting in added noise as dictated by Eq. 25-1. The final SNR reflects the combined noise of all stages; however, one stage is usually dominant. This is the stage with the worst SNR because it has the fewest particles. This limiting stage is called the quantum sink.

In night vision systems, the quantum sink is the number of light photons that can be captured by the camera. The darker the night, the noisier the final image. Medical x-ray imaging is a similar example; the quantum sink is the number of x-rays striking the detector. Higher radiation levels provide less noisy images at the expense of more radiation to the patient.

When is the noise from Poisson statistics the primary noise in an image? It is dominant whenever the noise resulting from the quantum sink is greater than the other sources of noise in the system, such as from the electronics. For example, consider a typical CCD camera with an SNR of 300.
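Equation 25-1 can be checked with a quick simulation; a sketch using the 10,000-electron example above (the number of simulated wells is arbitrary):

```python
import numpy as np

# Poisson counting statistics: with an average of N particles per
# pixel, the standard deviation is sqrt(N) and the SNR is sqrt(N).
rng = np.random.default_rng(0)
N = 10_000
wells = rng.poisson(N, size=200_000)   # electrons in each CCD well

print(round(wells.mean()))             # close to N = 10,000
print(round(wells.std()))              # close to sqrt(N) = 100

# Crossover with constant amplitude preamplifier noise: a preamp SNR
# of 300 is matched when the quantum sink holds 300**2 = 90,000
# particles per pixel.
print(300**2)                          # 90000
```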
That is, the noise from the CCD preamplifier is 1/300th of the full scale signal. An equivalent noise would be produced if the quantum sink of the system contains 90,000 particles per pixel, since √90,000 = 300. If the quantum sink has a smaller number of particles, Poisson noise will dominate the system. If the quantum sink has a larger number of particles, the preamplifier noise will be predominant. Accordingly, most CCDs are designed with a full well capacity of 100,000 to 1,000,000 electrons, minimizing the Poisson noise.

Morphological Image Processing

The identification of objects within an image can be a very difficult task. One way to simplify the problem is to change the grayscale image into a binary image, in which each pixel is restricted to a value of either 0 or 1. The techniques used on these binary images go by such names as: blob analysis, connectivity analysis, and morphological image processing (from the Greek word morphe, meaning shape or form). The foundation of morphological processing is in the mathematically rigorous field of set theory; however, this level of sophistication is seldom needed. Most morphological algorithms are simple logic operations and very ad hoc. In

[FIGURE 25-10 Morphological operations. Four basic morphological operations are used in the processing of binary images: erosion, dilation, opening, and closing. (a) Original; (b) Erosion; (c) Dilation; (d) Opening; (e) Closing. Figure (a) shows an example binary image. Figures (b) to (e) show the result of applying these operations to the image in (a).]

other words, each application requires a custom solution developed by trial-and-error. This is usually more of an art than a science. A bag of tricks is used rather than standard algorithms and formal mathematical properties. Here are some examples.

Figure 25-10a shows an example binary image. This might represent an enemy tank in an infrared image, an asteroid in a space photograph, or a suspected tumor in a medical x-ray. Each pixel in the background is displayed as white, while each pixel in the object is displayed as black. Frequently, binary images are formed by thresholding a grayscale image; pixels with a value greater than a threshold are set to 1, while pixels with a value below the threshold are set to 0. It is common for the grayscale image to be processed with linear techniques before the thresholding. For instance, illumination flattening (described in Chapter 24) can often improve the quality of the initial binary image.

Figures 25-10 (b) and (c) show how the image is changed by the two most common morphological operations, erosion and dilation. In erosion, every object pixel that is touching a background pixel is changed into a background pixel. In dilation, every background pixel that is touching an object pixel is changed into an object pixel. Erosion makes the objects smaller, and can break a single object into multiple objects. Dilation makes the objects larger, and can merge multiple objects into one. As shown in (d), opening is defined as an erosion followed by a dilation. Figure (e) shows the opposite operation of closing, defined as a dilation followed by an erosion.
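The four operations can be sketched in a few lines of Python (my own sketch, not code from the book, which works in BASIC). Two assumptions are made here: "touching" means the four horizontal/vertical neighbors, and pixels outside the image are treated as background; 8-connectivity is an equally valid reading of the text.

```python
CLOSE = ((-1, 0), (1, 0), (0, -1), (0, 1))   # 4-connected neighbors (assumption)

def erode(img):
    """Erosion: every object (1) pixel touching a background (0) pixel
    becomes background. Pixels outside the image count as background."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(rows):
        for c in range(cols):
            if img[r][c] == 1:
                for dr, dc in CLOSE:
                    nr, nc = r + dr, c + dc
                    if not (0 <= nr < rows and 0 <= nc < cols) or img[nr][nc] == 0:
                        out[r][c] = 0
                        break
    return out

def dilate(img):
    """Dilation: every background pixel touching an object pixel
    becomes an object pixel."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(rows):
        for c in range(cols):
            if img[r][c] == 0:
                for dr, dc in CLOSE:
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and img[nr][nc] == 1:
                        out[r][c] = 1
                        break
    return out

def opening(img):      # erosion followed by a dilation
    return dilate(erode(img))

def closing(img):      # dilation followed by an erosion
    return erode(dilate(img))
```

For instance, opening a single-pixel island erases it completely (the erosion removes the pixel, and the dilation has nothing left to grow back), while closing the same image leaves it unchanged.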
As illustrated by these examples, opening removes small islands and thin filaments of object pixels. Likewise, closing removes

islands and thin filaments of background pixels. These techniques are useful for handling noisy images where some pixels have the wrong binary value. For instance, it might be known that an object cannot contain a "hole", or that the object's border must be smooth.

Figure 25-11 shows an example of morphological processing. Figure 25-11a is the binary image of a fingerprint. Algorithms have been developed to analyze these patterns, allowing individual fingerprints to be matched with those in a database. A common step in these algorithms is shown in (b), an operation called skeletonization. This simplifies the image by removing redundant pixels; that is, changing appropriate pixels from black to white. This results in each ridge being turned into a line only a single pixel wide.

Tables 25-1 and 25-2 show the skeletonization program. Even though the fingerprint image is binary, it is held in an array where each pixel can run from 0 to 255. A black pixel is denoted by 0, while a white pixel is denoted by 255. As shown in Table 25-1, the algorithm is composed of six iterations that gradually erode the ridges into a thin line. The number of iterations is chosen by trial and error. An alternative would be to stop when an iteration makes no changes. During an iteration, each pixel in the image is evaluated for being removable; that is, whether the pixel meets a set of criteria for being changed from black to white. Lines 200 to 240 loop through each pixel in the image, while the subroutine in Table 25-2 makes the evaluation. If the pixel under consideration is not removable, the subroutine does nothing. If the pixel is removable, the subroutine changes its value from 0 to 1. This indicates that the pixel is still black, but will be changed to white at the end of the iteration. After all the pixels have been evaluated, lines 260 to 300 change the value of the marked pixels from 1 to 255.
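The mark-then-commit loop of Table 25-1 translates directly into Python. This is my own sketch, not the book's code: the stand-in test below applies only rules 1 and 2 of the removability criteria (the pixel is black and has at least one white close neighbor), so on its own it merely erodes; the full four-rule test of Table 25-2 is what turns this erosion into skeletonization.

```python
def has_white_close_neighbor(img, r, c):
    """Stand-in for the four-rule test of Table 25-2 (an assumption:
    only rules 1 and 2 are applied here, so this erodes rather than
    truly skeletonizes). 0 = black (object), 255 = white (background)."""
    if img[r][c] == 255:                       # rule 1: must be black
        return False
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if img[r + dr][c + dc] == 255:         # rule 2: a white close neighbor
            return True
    return False

def skeletonize(img, iterations=6, removable=has_white_close_neighbor):
    """Two-stage loop of Table 25-1. Each iteration first *marks*
    removable pixels with the value 1 (still treated as black), then
    changes every marked pixel to 255, so the ridges erode equally
    from all directions rather than in row/column scan order."""
    for _ in range(iterations):
        # Stage 1: mark removable pixels (borders excluded, as in the book).
        for r in range(1, len(img) - 1):
            for c in range(1, len(img[0]) - 1):
                if removable(img, r, c):
                    img[r][c] = 1
        # Stage 2: commit -- change every marked pixel to white.
        for r in range(len(img)):
            for c in range(len(img[0])):
                if img[r][c] == 1:
                    img[r][c] = 255
    return img
```

Note that a marked pixel holds the value 1, which is neither 0 nor 255, so during the marking pass it still reads as "not white" for its neighbors, exactly as in the BASIC version.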
This two-stage process results in the thick ridges being eroded equally from all directions, rather than a pattern based on how the rows and columns are scanned.

[FIGURE 25-11 Binary skeletonization. (a) Original fingerprint; (b) Skeletonized fingerprint. The binary image of a fingerprint, (a), contains ridges that are many pixels wide. The skeletonized version, (b), contains ridges only a single pixel wide.]

100 'SKELETONIZATION PROGRAM
110 'Object pixels have a value of 0 (displayed as black)
120 'Background pixels have a value of 255 (displayed as white)
130 '
140 DIM X%[149,149]        'X%[ , ] holds the image being processed
150 '
160 GOSUB XXXX             'Mythical subroutine to load X%[ , ]
170 '
180 FOR ITER% = 0 TO 5     'Run through six iteration loops
190 '
200 FOR R% = 1 TO 148      'Loop through each pixel in the image.
210 FOR C% = 1 TO 148      'Subroutine 5000 (Table 25-2) indicates which
220 GOSUB 5000             'pixels can be changed from black to white,
230 NEXT C%                'by marking the pixels with a value of 1.
240 NEXT R%
250 '
260 FOR R% = 0 TO 149      'Loop through each pixel in the image changing
270 FOR C% = 0 TO 149      'the marked pixels from black to white.
280 IF X%(R%,C%) = 1 THEN X%(R%,C%) = 255
290 NEXT C%
300 NEXT R%
310 '
320 NEXT ITER%
330 '
340 END

TABLE 25-1

The decision to remove a pixel is based on four rules, as contained in the subroutine shown in Table 25-2. All of these rules must be satisfied for a pixel to be changed from black to white. The first three rules are rather simple, while the fourth is quite complicated. As shown in Fig. 25-12a, a pixel at location [R,C] has eight neighbors. The four neighbors in the horizontal and vertical directions (labeled 2, 4, 6, 8) are frequently called the close neighbors. The diagonal pixels (labeled 1, 3, 5, 7) are correspondingly called the distant neighbors. The four rules are as follows:

Rule one: The pixel under consideration must presently be black. If the pixel is already white, no action needs to be taken.

Rule two: At least one of the pixel's close neighbors must be white. This insures that the erosion of the thick ridges takes place from the outside. In other words, if a pixel is black, and it is completely surrounded by black pixels, it is to be left alone on this iteration. Why use only the close neighbors, rather than all of the neighbors? The answer is simple: running the algorithm both ways shows that it works better.
Remember, this is very common in morphological image processing; trial and error is used to find if one technique performs better than another.

Rule three: The pixel must have more than one black neighbor. If it has only one, it must be the end of a line, and therefore shouldn't be removed.

Rule four: A pixel cannot be removed if it results in its neighbors being disconnected. This is so each ridge is changed into a continuous line, not a group of interrupted segments. As shown by the examples in Fig. 25-12,

[FIGURE 25-12 Neighboring pixels. A pixel at row and column [R,C] has eight neighbors, referred to by the numbers in (a). Figures (b) and (c) show examples where the neighboring pixels are connected and unconnected, respectively, with an asterisk placed by each black-to-white transition. This definition is used by rule number four of the skeletonization algorithm.]

connected means that all of the black neighbors touch each other. Likewise, unconnected means that the black neighbors form two or more groups. The algorithm for determining if the neighbors are connected or unconnected is based on counting the black-to-white transitions between adjacent neighboring pixels, in a clockwise direction. For example, if pixel 1 is black and pixel 2 is white, it is considered a black-to-white transition. Likewise, if pixel 2 is black and both pixels 3 and 4 are white, this is also a black-to-white transition. In total, there are eight locations where a black-to-white transition may occur. To illustrate this definition further, the examples in (b) and (c) of Fig. 25-12 have an asterisk placed by each black-to-white transition. The key to this algorithm is that there will be zero or one black-to-white transition if the neighbors are connected. More than one such transition indicates that the neighbors are unconnected.

As additional examples of binary image processing, consider the types of algorithms that might be useful after the fingerprint is skeletonized. A disadvantage of this particular skeletonization algorithm is that it leaves a considerable amount of fuzz, short offshoots that stick out from the sides of longer segments.
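Rule four's transition count, and the complete removability test, can be written out in Python (my own translation of the logic in Table 25-2, not code from the book; 0 is black and 255 is white, as in the tables):

```python
def transitions(img, r, c):
    """Count black-to-white transitions moving clockwise through the
    eight neighbors of [r, c]. A black *close* neighbor (2, 4, 6, 8)
    only begins a transition if both the following diagonal and the
    following close neighbor are white, because two close neighbors
    two steps apart still touch each other diagonally."""
    B = lambda dr, dc: img[r + dr][c + dc] == 0   # neighbor is black
    W = lambda dr, dc: img[r + dr][c + dc] > 0    # neighbor is not black
    count = 0
    if B(-1, -1) and W(-1, 0):               count += 1   # 1 -> 2
    if B(-1, 0) and W(-1, 1) and W(0, 1):    count += 1   # 2 -> 3, 4
    if B(-1, 1) and W(0, 1):                 count += 1   # 3 -> 4
    if B(0, 1) and W(1, 1) and W(1, 0):      count += 1   # 4 -> 5, 6
    if B(1, 1) and W(1, 0):                  count += 1   # 5 -> 6
    if B(1, 0) and W(1, -1) and W(0, -1):    count += 1   # 6 -> 7, 8
    if B(1, -1) and W(0, -1):                count += 1   # 7 -> 8
    if B(0, -1) and W(-1, -1) and W(-1, 0):  count += 1   # 8 -> 1, 2
    return count

def removable(img, r, c):
    """All four rules of the skeletonization test."""
    if img[r][c] == 255:                                    # rule 1: black
        return False
    close = (img[r-1][c], img[r][c+1], img[r+1][c], img[r][c-1])
    if 255 not in close:                                    # rule 2: a white close neighbor
        return False
    black = sum(1 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0) and img[r+dr][c+dc] == 0)
    if black == 1:                                          # rule 3: not a line end
        return False
    return transitions(img, r, c) <= 1                      # rule 4: stays connected
```

A pixel on the edge of a thick ridge passes all four rules, while an interior pixel of a one-pixel-wide line fails rule four: removing it would count two transitions and cut the line in two.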
There are several different approaches for eliminating these artifacts. For example, a program might loop through the image removing the pixel at the end of every line. These pixels are identified

5000 'Subroutine to determine if the pixel at X%[R%,C%] can be removed.
5010 'If all four of the rules are satisfied, then X%(R%,C%) is set to a value of 1,
5020 'indicating it should be removed at the end of the iteration.
5030 '
5040 'RULE #1: Do nothing if the pixel is already white
5050 IF X%(R%,C%) = 255 THEN RETURN
5060 '
5070 '
5080 'RULE #2: Do nothing if all of the close neighbors are black
5090 IF X%[R%-1,C%] <> 255 AND X%[R%,C%+1] <> 255 AND X%[R%+1,C%] <> 255 AND X%[R%,C%-1] <> 255 THEN RETURN
5100 '
5110 '
5120 'RULE #3: Do nothing if only a single neighbor pixel is black
5130 COUNT% = 0
5140 IF X%[R%-1,C%-1] = 0 THEN COUNT% = COUNT% + 1
5150 IF X%[R%-1,C%] = 0 THEN COUNT% = COUNT% + 1
5160 IF X%[R%-1,C%+1] = 0 THEN COUNT% = COUNT% + 1
5170 IF X%[R%,C%+1] = 0 THEN COUNT% = COUNT% + 1
5180 IF X%[R%+1,C%+1] = 0 THEN COUNT% = COUNT% + 1
5190 IF X%[R%+1,C%] = 0 THEN COUNT% = COUNT% + 1
5200 IF X%[R%+1,C%-1] = 0 THEN COUNT% = COUNT% + 1
5210 IF X%[R%,C%-1] = 0 THEN COUNT% = COUNT% + 1
5220 IF COUNT% = 1 THEN RETURN
5230 '
5240 '
5250 'RULE #4: Do nothing if the neighbors are unconnected.
5260 'Determine this by counting the black-to-white transitions
5270 'while moving clockwise through the 8 neighboring pixels.
5280 COUNT% = 0
5290 IF X%[R%-1,C%-1] = 0 AND X%[R%-1,C%] > 0 THEN COUNT% = COUNT% + 1
5300 IF X%[R%-1,C%] = 0 AND X%[R%-1,C%+1] > 0 AND X%[R%,C%+1] > 0 THEN COUNT% = COUNT% + 1
5310 IF X%[R%-1,C%+1] = 0 AND X%[R%,C%+1] > 0 THEN COUNT% = COUNT% + 1
5320 IF X%[R%,C%+1] = 0 AND X%[R%+1,C%+1] > 0 AND X%[R%+1,C%] > 0 THEN COUNT% = COUNT% + 1
5330 IF X%[R%+1,C%+1] = 0 AND X%[R%+1,C%] > 0 THEN COUNT% = COUNT% + 1
5340 IF X%[R%+1,C%] = 0 AND X%[R%+1,C%-1] > 0 AND X%[R%,C%-1] > 0 THEN COUNT% = COUNT% + 1
5350 IF X%[R%+1,C%-1] = 0 AND X%[R%,C%-1] > 0 THEN COUNT% = COUNT% + 1
5360 IF X%[R%,C%-1] = 0 AND X%[R%-1,C%-1] > 0 AND X%[R%-1,C%] > 0 THEN COUNT% = COUNT% + 1
5370 IF COUNT% > 1 THEN RETURN
5380 '
5390 '
5400 'If all rules are satisfied, mark the pixel to be set to white at the end of the iteration
5410 X%(R%,C%) = 1
5420 '
5430 RETURN

TABLE 25-2

by having only one black
neighbor. Do this several times and the fuzz is removed at the expense of making each of the correct lines shorter. A better method would loop through the image identifying branch pixels (pixels that have more than two neighbors). Starting with each branch pixel, count the number of pixels in each offshoot. If the number of pixels in an offshoot is less than some value (say, 5), declare it to be fuzz, and change the pixels in the branch from black to white.
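The first, naive approach (repeatedly deleting line-end pixels) might look like this in Python rather than the chapter's BASIC; the function name `prune_endpoints` is my own, and it is a deliberately crude sketch:

```python
def prune_endpoints(img, passes=3):
    """Naive fuzz removal: in each pass, delete every pixel that has
    exactly one black neighbor (a line end). Offshoots shorter than
    `passes` pixels vanish entirely -- but every legitimate line is
    also trimmed by one pixel at each end per pass, which is why the
    branch-pixel method described in the text is preferable.
    0 = black (object), 255 = white (background)."""
    for _ in range(passes):
        ends = []
        for r in range(1, len(img) - 1):
            for c in range(1, len(img[0]) - 1):
                if img[r][c] != 0:
                    continue
                black = sum(1 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                            if (dr, dc) != (0, 0) and img[r+dr][c+dc] == 0)
                if black == 1:
                    ends.append((r, c))
        # Delete all the line ends found in this pass at once.
        for r, c in ends:
            img[r][c] = 255
    return img
```

Two passes over a five-pixel line, for example, trim two pixels from each end and leave only the middle pixel, illustrating the cost of this method.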

Another algorithm might change the data from a bitmap to a vector mapped format. This involves creating a list of the ridges contained in the image and the pixels contained in each ridge. In the vector mapped form, each ridge in the fingerprint has an individual identity, as opposed to an image composed of many unrelated pixels. This can be accomplished by looping through the image looking for the endpoints of each line, the pixels that have only one black neighbor. Starting from the endpoint, each line is traced from pixel to connecting pixel. After the opposite end of the line is reached, all the traced pixels are declared to be a single object, and treated accordingly in future algorithms.

Computed Tomography

A basic problem in imaging with x-rays (or other penetrating radiation) is that a two-dimensional image is obtained of a three-dimensional object. This means that structures can overlap in the final image, even though they are completely separate in the object. This is particularly troublesome in medical diagnosis where there are many anatomic structures that can interfere with what the physician is trying to see. During the 1930s, this problem was attacked by moving the x-ray source and detector in a coordinated motion during image formation. From the geometry of this motion, a single plane within the patient remains in focus, while structures outside this plane become blurred. This is analogous to a camera being focused on an object at 5 feet, while objects at a distance of 1 and 50 feet are blurry. These related techniques based on motion blurring are now collectively called classical tomography. The word tomography means "a picture of a plane."

In spite of being well developed for more than 50 years, classical tomography is rarely used. This is because it has a significant limitation: the interfering objects are not removed from the image, only blurred.
The resulting image quality is usually too poor to be of practical use. The long-sought solution was a system that could create an image representing a 2D slice through a 3D object with no interference from other structures in the 3D object. This problem was solved in the early 1970s with the introduction of a technique called computed tomography (CT). CT revolutionized the medical x-ray field with its unprecedented ability to visualize the anatomic structure of the body. Figure 25-13 shows a typical medical CT image. Computed tomography was originally introduced to the marketplace under the names Computed Axial Tomography and CAT scanner. These terms are now frowned upon in the medical field, although you hear them used frequently by the general public.

Figure 25-14 illustrates a simple geometry for acquiring a CT slice through the center of the head. A narrow pencil beam of x-rays is passed from the x-ray source to the x-ray detector. This means that the measured value at the detector is related to the total amount of material placed anywhere

[FIGURE 25-13 Computed tomography image. This CT slice is of a human abdomen, at the level of the navel. Many organs are visible, such as the (L) Liver, (K) Kidney, (A) Aorta, (S) Spine, and (C) Cyst covering the right kidney. CT can visualize internal anatomy far better than conventional medical x-rays.]

along the beam's path. Materials such as bone and teeth block more of the x-rays, resulting in a lower signal compared to soft tissue and fat. As shown in the illustration, the source and detector assemblies are translated to acquire a view (CT jargon) at this particular angle. While this figure shows only a single view being acquired, a complete CT scan requires 300 to 1000 views taken at rotational increments of about 0.3° to 1.0°. This is accomplished by mounting the x-ray source and detector on a rotating gantry that surrounds the patient. A key feature of CT data acquisition is that x-rays pass only through the slice of the body being examined. This is unlike classical tomography where x-rays are passing through structures that you try to suppress in the final image. Computed tomography doesn't allow information from irrelevant locations to even enter the acquired data.

[FIGURE 25-14 CT data acquisition. A simple CT system passes a narrow beam of x-rays through the body from source to detector. The source and detector are then translated to obtain a complete view. The remaining views are obtained by rotating the source and detector in about 1° increments, and repeating the translation process.]

Several preprocessing steps are usually needed before the image reconstruction can take place. For instance, the logarithm must be taken of each x-ray measurement. This is because x-rays decrease in intensity exponentially as they pass through material. Taking the logarithm provides a signal that is linearly related to the characteristics of the material being measured. Other preprocessing steps are used to compensate for the use of polychromatic (more than one energy) x-rays, and multielement detectors (as opposed to the single element shown in Fig. 25-14). While these are key steps in the overall technique, they are not related to the reconstruction algorithms and we won't discuss them further.

Figure 25-15 illustrates the relationship between the measured views and the corresponding image. Each sample acquired in a CT system is equal to the sum of the image values along a ray pointing to that sample. For example, view 1 is found by adding all the pixels in each row. Likewise, view 3 is found by adding all the pixels in each column. The other views, such as view 2, sum the pixels along rays that are at an angle.

There are four main approaches to calculating the slice image given the set of its views. These are called CT reconstruction algorithms. The first method is totally impractical, but provides a better understanding of the problem. It is based on solving many simultaneous linear equations. One equation can be written for each measurement. That is, a particular sample in a particular profile is the sum of a particular group of pixels in the image. To calculate N² unknown variables (i.e., the image pixel values), there must be N² independent equations, and therefore N² measurements. Most CT scanners acquire about 50% more samples than rigidly required by this analysis. For example, to reconstruct a 512×512 image, a system might take 700 views with 600 samples in each view.
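Each view contributes one ray-sum equation per sample. For the two unrotated views of Fig. 25-15 the ray sums are just row and column totals, as this Python sketch shows (my own illustration, not the book's code; the views at intermediate angles, which require interpolating along slanted rays, are omitted):

```python
def row_view(img):
    """View 1 of Fig. 25-15: each sample is the sum of the image
    values along one row."""
    return [sum(row) for row in img]

def col_view(img):
    """View 3: each sample is the sum of the image values along
    one column."""
    return [sum(col) for col in zip(*img)]

# A small "pillbox surrounded by zeros", as in the figure.
img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
print(row_view(img))   # [0, 2, 2, 0]
print(col_view(img))   # [0, 2, 2, 0]
```

An N×N image has N² unknown pixel values, while these two views supply only 2N equations between them, which is why so many views at different angles must be acquired.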
By making the problem overdetermined in this manner, the final image has reduced noise and artifacts. The problem with this first method of CT reconstruction is computation time. Solving several hundred thousand simultaneous linear equations is a daunting task.

The second method of CT reconstruction uses iterative techniques to calculate the final image in small steps. There are several variations of this method: the Algebraic Reconstruction Technique (ART), Simultaneous Iterative Reconstruction Technique (SIRT), and Iterative Least Squares Technique (ILST). The difference between these methods is how the successive corrections are made: ray-by-ray, pixel-by-pixel, or simultaneously correcting the entire data set, respectively. As an example of these techniques, we will look at ART.

To start the ART algorithm, all the pixels in the image array are set to some arbitrary value. An iterative procedure is then used to gradually change the image array to correspond to the profiles. An iteration cycle consists of looping through each of the measured data points. For each measured value, the following question is asked: how can the pixel values in the array be changed to make them consistent with this particular measurement? In other words, the measured sample is compared with the

[FIGURE 25-15 CT views. Computed tomography acquires a set of views and then reconstructs the corresponding image. Each sample in a view is equal to the sum of the image values along the ray that points to that sample. In this example, the image is a small pillbox surrounded by zeros. While only three views are shown here, a typical CT scan uses hundreds of views at slightly different angles.]

sum of the image pixels along the ray pointing to the sample. If the ray sum is lower than the measured sample, all the pixels along the ray are increased in value. Likewise, if the ray sum is higher than the measured sample, all of the pixel values along the ray are decreased. After the first complete iteration cycle, there will still be an error between the ray sums and the measured values. This is because the changes made for any one measurement disrupt all the previous corrections made. The idea is that the errors become smaller with repeated iterations until the image converges to the proper solution.

Iterative techniques are generally slow, but they are useful when better algorithms are not available. In fact, ART was used in the first commercial medical CT scanner released in 1972, the EMI Mark I. We will revisit iterative techniques in the next chapter on neural networks. Development of the third and fourth methods has almost entirely replaced iterative techniques in commercial CT products.

The last two reconstruction algorithms are based on formal mathematical solutions to the problem. These are elegant examples of DSP. The third method is called filtered backprojection. It is a modification of an older

[FIGURE 25-16 Backprojection. Backprojection reconstructs an image by taking each view and smearing it along the path it was originally acquired. The resulting image is a blurry version of the correct image. (a) Using 3 views; (b) Using many views.]

technique, called backprojection or simple backprojection. Figure 25-16 shows that simple backprojection is a common sense approach, but very unsophisticated. An individual sample is backprojected by setting all the image pixels along the ray pointing to the sample to the same value. In less technical terms, a backprojection is formed by smearing each view back through the image in the direction it was originally acquired. The final backprojected image is then taken as the sum of all the backprojected views.

While backprojection is conceptually simple, it does not correctly solve the problem. As shown in (b), a backprojected image is very blurry. A single point in the true image is reconstructed as a circular region that decreases in intensity away from the center. In more formal terms, the point spread function of backprojection is circularly symmetric, and decreases as the reciprocal of its radius.

Filtered backprojection is a technique to correct the blurring encountered in simple backprojection. As illustrated in Fig. 25-17, each view is filtered before the backprojection to counteract the blurring PSF. That is, each of the one-dimensional views is convolved with a one-dimensional filter kernel to create a set of filtered views. These filtered views are then backprojected to provide the reconstructed image, a close approximation to the "correct" image. In fact, the image produced by filtered backprojection is identical

[FIGURE 25-17 Filtered backprojection. Filtered backprojection reconstructs an image by filtering each view before backprojection. This removes the blurring seen in simple backprojection, and results in a mathematically exact reconstruction of the image. Filtered backprojection is the most commonly used algorithm for computed tomography systems. (a) Using 3 views; (b) Using many views.]

to the "correct" image when there are an infinite number of views and an infinite number of points per view. The filter kernel used in this technique will be discussed shortly. For now, notice how the profiles have been changed by the filter. The image in this example is a uniform white circle surrounded by a black background (a pillbox). Each of the acquired views has a flat background with a rounded region representing the white circle. Filtering changes the views in two significant ways. First, the top of the pulse is made flat, resulting in the final backprojection creating a uniform signal level within the circle. Second, negative spikes have been introduced at the sides of the pulse. When backprojected, these negative regions counteract the blur.

The fourth method is called Fourier reconstruction. In the spatial domain, CT reconstruction involves the relationship between a two-dimensional image and its set of one-dimensional views. By taking the two-dimensional Fourier transform of the image and the one-dimensional Fourier transform of each of its views, the problem can be examined in the frequency domain. As it turns out, the relationship between an image and its views is far simpler in the frequency domain than in the spatial domain. The frequency domain analysis


More information

The Discrete Fourier Transform. Claudia Feregrino-Uribe, Alicia Morales-Reyes Original material: Dr. René Cumplido

The Discrete Fourier Transform. Claudia Feregrino-Uribe, Alicia Morales-Reyes Original material: Dr. René Cumplido The Discrete Fourier Transform Claudia Feregrino-Uribe, Alicia Morales-Reyes Original material: Dr. René Cumplido CCC-INAOE Autumn 2015 The Discrete Fourier Transform Fourier analysis is a family of mathematical

More information

Image Processing for feature extraction

Image Processing for feature extraction Image Processing for feature extraction 1 Outline Rationale for image pre-processing Gray-scale transformations Geometric transformations Local preprocessing Reading: Sonka et al 5.1, 5.2, 5.3 2 Image

More information

30 lesions. 30 lesions. false positive fraction

30 lesions. 30 lesions. false positive fraction Solutions to the exercises. 1.1 In a patient study for a new test for multiple sclerosis (MS), thirty-two of the one hundred patients studied actually have MS. For the data given below, complete the two-by-two

More information

TRANSFORMS / WAVELETS

TRANSFORMS / WAVELETS RANSFORMS / WAVELES ransform Analysis Signal processing using a transform analysis for calculations is a technique used to simplify or accelerate problem solution. For example, instead of dividing two

More information

Modulation Transfer Function

Modulation Transfer Function Modulation Transfer Function The resolution and performance of an optical microscope can be characterized by a quantity known as the modulation transfer function (MTF), which is a measurement of the microscope's

More information

FFT Convolution. The Overlap-Add Method

FFT Convolution. The Overlap-Add Method CHAPTER 18 FFT Convolution This chapter presents two important DSP techniques, the overlap-add method, and FFT convolution. The overlap-add method is used to break long signals into smaller segments for

More information

TDI2131 Digital Image Processing

TDI2131 Digital Image Processing TDI2131 Digital Image Processing Image Enhancement in Spatial Domain Lecture 3 John See Faculty of Information Technology Multimedia University Some portions of content adapted from Zhu Liu, AT&T Labs.

More information

CCD reductions techniques

CCD reductions techniques CCD reductions techniques Origin of noise Noise: whatever phenomena that increase the uncertainty or error of a signal Origin of noises: 1. Poisson fluctuation in counting photons (shot noise) 2. Pixel-pixel

More information

Image Deblurring. This chapter describes how to deblur an image using the toolbox deblurring functions.

Image Deblurring. This chapter describes how to deblur an image using the toolbox deblurring functions. 12 Image Deblurring This chapter describes how to deblur an image using the toolbox deblurring functions. Understanding Deblurring (p. 12-2) Using the Deblurring Functions (p. 12-5) Avoiding Ringing in

More information

ECC419 IMAGE PROCESSING

ECC419 IMAGE PROCESSING ECC419 IMAGE PROCESSING INTRODUCTION Image Processing Image processing is a subclass of signal processing concerned specifically with pictures. Digital Image Processing, process digital images by means

More information

Edge-Raggedness Evaluation Using Slanted-Edge Analysis

Edge-Raggedness Evaluation Using Slanted-Edge Analysis Edge-Raggedness Evaluation Using Slanted-Edge Analysis Peter D. Burns Eastman Kodak Company, Rochester, NY USA 14650-1925 ABSTRACT The standard ISO 12233 method for the measurement of spatial frequency

More information

LAB MANUAL SUBJECT: IMAGE PROCESSING BE (COMPUTER) SEM VII

LAB MANUAL SUBJECT: IMAGE PROCESSING BE (COMPUTER) SEM VII LAB MANUAL SUBJECT: IMAGE PROCESSING BE (COMPUTER) SEM VII IMAGE PROCESSING INDEX CLASS: B.E(COMPUTER) SR. NO SEMESTER:VII TITLE OF THE EXPERIMENT. 1 Point processing in spatial domain a. Negation of an

More information

Image Enhancement in spatial domain. Digital Image Processing GW Chapter 3 from Section (pag 110) Part 2: Filtering in spatial domain

Image Enhancement in spatial domain. Digital Image Processing GW Chapter 3 from Section (pag 110) Part 2: Filtering in spatial domain Image Enhancement in spatial domain Digital Image Processing GW Chapter 3 from Section 3.4.1 (pag 110) Part 2: Filtering in spatial domain Mask mode radiography Image subtraction in medical imaging 2 Range

More information

10/3/2012. Study Harder

10/3/2012. Study Harder This presentation is a professional collaboration of development time prepared by: Rex Christensen Terri Jurkiewicz and Diane Kawamura Study Harder CR detection is inefficient, inferior to film screen

More information

MATLAB 6.5 Image Processing Toolbox Tutorial

MATLAB 6.5 Image Processing Toolbox Tutorial MATLAB 6.5 Image Processing Toolbox Tutorial The purpose of this tutorial is to gain familiarity with MATLAB s Image Processing Toolbox. This tutorial does not contain all of the functions available in

More information

Radionuclide Imaging MII Single Photon Emission Computed Tomography (SPECT)

Radionuclide Imaging MII Single Photon Emission Computed Tomography (SPECT) Radionuclide Imaging MII 3073 Single Photon Emission Computed Tomography (SPECT) Single Photon Emission Computed Tomography (SPECT) The successful application of computer algorithms to x-ray imaging in

More information

Defense Technical Information Center Compilation Part Notice

Defense Technical Information Center Compilation Part Notice UNCLASSIFIED Defense Technical Information Center Compilation Part Notice ADPO 11345 TITLE: Measurement of the Spatial Frequency Response [SFR] of Digital Still-Picture Cameras Using a Modified Slanted

More information

10/26/2015. Study Harder

10/26/2015. Study Harder This presentation is a professional collaboration of development time prepared by: Rex Christensen Terri Jurkiewicz and Diane Kawamura Study Harder CR detection is inefficient, inferior to film screen

More information

Table of contents. Vision industrielle 2002/2003. Local and semi-local smoothing. Linear noise filtering: example. Convolution: introduction

Table of contents. Vision industrielle 2002/2003. Local and semi-local smoothing. Linear noise filtering: example. Convolution: introduction Table of contents Vision industrielle 2002/2003 Session - Image Processing Département Génie Productique INSA de Lyon Christian Wolf wolf@rfv.insa-lyon.fr Introduction Motivation, human vision, history,

More information

Carmen Alonso Montes 23rd-27th November 2015

Carmen Alonso Montes 23rd-27th November 2015 Practical Computer Vision: Theory & Applications calonso@bcamath.org 23rd-27th November 2015 Alternative Software Alternative software to matlab Octave Available for Linux, Mac and windows For Mac and

More information

Evaluating Commercial Scanners for Astronomical Images. The underlying technology of the scanners: Pixel sizes:

Evaluating Commercial Scanners for Astronomical Images. The underlying technology of the scanners: Pixel sizes: Evaluating Commercial Scanners for Astronomical Images Robert J. Simcoe Associate Harvard College Observatory rjsimcoe@cfa.harvard.edu Introduction: Many organizations have expressed interest in using

More information

Background. Computer Vision & Digital Image Processing. Improved Bartlane transmitted image. Example Bartlane transmitted image

Background. Computer Vision & Digital Image Processing. Improved Bartlane transmitted image. Example Bartlane transmitted image Background Computer Vision & Digital Image Processing Introduction to Digital Image Processing Interest comes from two primary backgrounds Improvement of pictorial information for human perception How

More information

MULTIMEDIA SYSTEMS

MULTIMEDIA SYSTEMS 1 Department of Computer Engineering, Faculty of Engineering King Mongkut s Institute of Technology Ladkrabang 01076531 MULTIMEDIA SYSTEMS Pk Pakorn Watanachaturaporn, Wt ht Ph.D. PhD pakorn@live.kmitl.ac.th,

More information

An Activity in Computed Tomography

An Activity in Computed Tomography Pre-lab Discussion An Activity in Computed Tomography X-rays X-rays are high energy electromagnetic radiation with wavelengths smaller than those in the visible spectrum (0.01-10nm and 4000-800nm respectively).

More information

Chapter 12 Image Processing

Chapter 12 Image Processing Chapter 12 Image Processing The distance sensor on your self-driving car detects an object 100 m in front of your car. Are you following the car in front of you at a safe distance or has a pedestrian jumped

More information

Image processing for gesture recognition: from theory to practice. Michela Goffredo University Roma TRE

Image processing for gesture recognition: from theory to practice. Michela Goffredo University Roma TRE Image processing for gesture recognition: from theory to practice 2 Michela Goffredo University Roma TRE goffredo@uniroma3.it Image processing At this point we have all of the basics at our disposal. We

More information

The Scientist and Engineer's Guide to Digital Signal Processing By Steven W. Smith, Ph.D.

The Scientist and Engineer's Guide to Digital Signal Processing By Steven W. Smith, Ph.D. The Scientist and Engineer's Guide to Digital Signal Processing By Steven W. Smith, Ph.D. Home The Book by Chapters About the Book Steven W. Smith Blog Contact Book Search Download this chapter in PDF

More information

An Activity in Computed Tomography

An Activity in Computed Tomography Pre-lab Discussion An Activity in Computed Tomography X-rays X-rays are high energy electromagnetic radiation with wavelengths smaller than those in the visible spectrum (0.01-10nm and 4000-800nm respectively).

More information

NON UNIFORM BACKGROUND REMOVAL FOR PARTICLE ANALYSIS BASED ON MORPHOLOGICAL STRUCTURING ELEMENT:

NON UNIFORM BACKGROUND REMOVAL FOR PARTICLE ANALYSIS BASED ON MORPHOLOGICAL STRUCTURING ELEMENT: IJCE January-June 2012, Volume 4, Number 1 pp. 59 67 NON UNIFORM BACKGROUND REMOVAL FOR PARTICLE ANALYSIS BASED ON MORPHOLOGICAL STRUCTURING ELEMENT: A COMPARATIVE STUDY Prabhdeep Singh1 & A. K. Garg2

More information

ROBOT VISION. Dr.M.Madhavi, MED, MVSREC

ROBOT VISION. Dr.M.Madhavi, MED, MVSREC ROBOT VISION Dr.M.Madhavi, MED, MVSREC Robotic vision may be defined as the process of acquiring and extracting information from images of 3-D world. Robotic vision is primarily targeted at manipulation

More information

Vision Review: Image Processing. Course web page:

Vision Review: Image Processing. Course web page: Vision Review: Image Processing Course web page: www.cis.udel.edu/~cer/arv September 7, Announcements Homework and paper presentation guidelines are up on web page Readings for next Tuesday: Chapters 6,.,

More information

Lesson 06: Pulse-echo Imaging and Display Modes. These lessons contain 26 slides plus 15 multiple-choice questions.

Lesson 06: Pulse-echo Imaging and Display Modes. These lessons contain 26 slides plus 15 multiple-choice questions. Lesson 06: Pulse-echo Imaging and Display Modes These lessons contain 26 slides plus 15 multiple-choice questions. These lesson were derived from pages 26 through 32 in the textbook: ULTRASOUND IMAGING

More information

Image Enhancement using Histogram Equalization and Spatial Filtering

Image Enhancement using Histogram Equalization and Spatial Filtering Image Enhancement using Histogram Equalization and Spatial Filtering Fari Muhammad Abubakar 1 1 Department of Electronics Engineering Tianjin University of Technology and Education (TUTE) Tianjin, P.R.

More information

Medical Imaging. X-rays, CT/CAT scans, Ultrasound, Magnetic Resonance Imaging

Medical Imaging. X-rays, CT/CAT scans, Ultrasound, Magnetic Resonance Imaging Medical Imaging X-rays, CT/CAT scans, Ultrasound, Magnetic Resonance Imaging From: Physics for the IB Diploma Coursebook 6th Edition by Tsokos, Hoeben and Headlee And Higher Level Physics 2 nd Edition

More information

Preparing Remote Sensing Data for Natural Resources Mapping (image enhancement, rectifications )

Preparing Remote Sensing Data for Natural Resources Mapping (image enhancement, rectifications ) Preparing Remote Sensing Data for Natural Resources Mapping (image enhancement, rectifications ) Why is this important What are the major approaches Examples of digital image enhancement Follow up exercises

More information

The popular conception of physics

The popular conception of physics 54 Teaching Physics: Inquiry and the Ray Model of Light Fernand Brunschwig, M.A.T. Program, Hudson Valley Center My thinking about these matters was stimulated by my participation on a panel devoted to

More information

IMAGE PROCESSING PAPER PRESENTATION ON IMAGE PROCESSING

IMAGE PROCESSING PAPER PRESENTATION ON IMAGE PROCESSING IMAGE PROCESSING PAPER PRESENTATION ON IMAGE PROCESSING PRESENTED BY S PRADEEP K SUNIL KUMAR III BTECH-II SEM, III BTECH-II SEM, C.S.E. C.S.E. pradeep585singana@gmail.com sunilkumar5b9@gmail.com CONTACT:

More information

Midterm Examination CS 534: Computational Photography

Midterm Examination CS 534: Computational Photography Midterm Examination CS 534: Computational Photography November 3, 2015 NAME: SOLUTIONS Problem Score Max Score 1 8 2 8 3 9 4 4 5 3 6 4 7 6 8 13 9 7 10 4 11 7 12 10 13 9 14 8 Total 100 1 1. [8] What are

More information

Fourier Transform Pairs

Fourier Transform Pairs CHAPTER Fourier Transform Pairs For every time domain waveform there is a corresponding frequency domain waveform, and vice versa. For example, a rectangular pulse in the time domain coincides with a sinc

More information

1.6 Beam Wander vs. Image Jitter

1.6 Beam Wander vs. Image Jitter 8 Chapter 1 1.6 Beam Wander vs. Image Jitter It is common at this point to look at beam wander and image jitter and ask what differentiates them. Consider a cooperative optical communication system that

More information

APPLICATION OF COMPUTER VISION FOR DETERMINATION OF SYMMETRICAL OBJECT POSITION IN THREE DIMENSIONAL SPACE

APPLICATION OF COMPUTER VISION FOR DETERMINATION OF SYMMETRICAL OBJECT POSITION IN THREE DIMENSIONAL SPACE APPLICATION OF COMPUTER VISION FOR DETERMINATION OF SYMMETRICAL OBJECT POSITION IN THREE DIMENSIONAL SPACE Najirah Umar 1 1 Jurusan Teknik Informatika, STMIK Handayani Makassar Email : najirah_stmikh@yahoo.com

More information

(Refer Slide Time: 01:45)

(Refer Slide Time: 01:45) Digital Communication Professor Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi Module 01 Lecture 21 Passband Modulations for Bandlimited Channels In our discussion

More information

IMAGE FORMATION. Light source properties. Sensor characteristics Surface. Surface reflectance properties. Optics

IMAGE FORMATION. Light source properties. Sensor characteristics Surface. Surface reflectance properties. Optics IMAGE FORMATION Light source properties Sensor characteristics Surface Exposure shape Optics Surface reflectance properties ANALOG IMAGES An image can be understood as a 2D light intensity function f(x,y)

More information

Note: These sample pages are from Chapter 1. The Zone System

Note: These sample pages are from Chapter 1. The Zone System Note: These sample pages are from Chapter 1 The Zone System Chapter 1 The Zones Revealed The images below show how you can visualize the zones in an image. This is NGC 1491, an HII region imaged through

More information

Artifacts. Artifacts. Causes. Imaging assumptions. Common terms used to describe US images. Common terms used to describe US images

Artifacts. Artifacts. Causes. Imaging assumptions. Common terms used to describe US images. Common terms used to describe US images Artifacts Artifacts Chapter 20 What are they? Simply put they are an error in imaging These artifacts include reflections that are: not real incorrect shape, size or position incorrect brightness displayed

More information

SUPER RESOLUTION INTRODUCTION

SUPER RESOLUTION INTRODUCTION SUPER RESOLUTION Jnanavardhini - Online MultiDisciplinary Research Journal Ms. Amalorpavam.G Assistant Professor, Department of Computer Sciences, Sambhram Academy of Management. Studies, Bangalore Abstract:-

More information

Teaching the Uncertainty Principle In Introductory Physics

Teaching the Uncertainty Principle In Introductory Physics Teaching the Uncertainty Principle In Introductory Physics Elisha Huggins, Dartmouth College, Hanover, NH Eliminating the artificial divide between classical and modern physics in introductory physics

More information

Fourier Transform. Any signal can be expressed as a linear combination of a bunch of sine gratings of different frequency Amplitude Phase

Fourier Transform. Any signal can be expressed as a linear combination of a bunch of sine gratings of different frequency Amplitude Phase Fourier Transform Fourier Transform Any signal can be expressed as a linear combination of a bunch of sine gratings of different frequency Amplitude Phase 2 1 3 3 3 1 sin 3 3 1 3 sin 3 1 sin 5 5 1 3 sin

More information

DIGITAL IMAGE PROCESSING (COM-3371) Week 2 - January 14, 2002

DIGITAL IMAGE PROCESSING (COM-3371) Week 2 - January 14, 2002 DIGITAL IMAGE PROCESSING (COM-3371) Week 2 - January 14, 22 Topics: Human eye Visual phenomena Simple image model Image enhancement Point processes Histogram Lookup tables Contrast compression and stretching

More information

WHITE PAPER. Methods for Measuring Flat Panel Display Defects and Mura as Correlated to Human Visual Perception

WHITE PAPER. Methods for Measuring Flat Panel Display Defects and Mura as Correlated to Human Visual Perception Methods for Measuring Flat Panel Display Defects and Mura as Correlated to Human Visual Perception Methods for Measuring Flat Panel Display Defects and Mura as Correlated to Human Visual Perception Abstract

More information

Fig Color spectrum seen by passing white light through a prism.

Fig Color spectrum seen by passing white light through a prism. 1. Explain about color fundamentals. Color of an object is determined by the nature of the light reflected from it. When a beam of sunlight passes through a glass prism, the emerging beam of light is not

More information

Color and More. Color basics

Color and More. Color basics Color and More In this lesson, you'll evaluate an image in terms of its overall tonal range (lightness, darkness, and contrast), its overall balance of color, and its overall appearance for areas that

More information

Imaging Particle Analysis: The Importance of Image Quality

Imaging Particle Analysis: The Importance of Image Quality Imaging Particle Analysis: The Importance of Image Quality Lew Brown Technical Director Fluid Imaging Technologies, Inc. Abstract: Imaging particle analysis systems can derive much more information about

More information

Applying Automated Optical Inspection Ben Dawson, DALSA Coreco Inc., ipd Group (987)

Applying Automated Optical Inspection Ben Dawson, DALSA Coreco Inc., ipd Group (987) Applying Automated Optical Inspection Ben Dawson, DALSA Coreco Inc., ipd Group bdawson@goipd.com (987) 670-2050 Introduction Automated Optical Inspection (AOI) uses lighting, cameras, and vision computers

More information

Robert B.Hallock Draft revised April 11, 2006 finalpaper2.doc

Robert B.Hallock Draft revised April 11, 2006 finalpaper2.doc How to Optimize the Sharpness of Your Photographic Prints: Part II - Practical Limits to Sharpness in Photography and a Useful Chart to Deteremine the Optimal f-stop. Robert B.Hallock hallock@physics.umass.edu

More information

Chapter 2 Fourier Integral Representation of an Optical Image

Chapter 2 Fourier Integral Representation of an Optical Image Chapter 2 Fourier Integral Representation of an Optical This chapter describes optical transfer functions. The concepts of linearity and shift invariance were introduced in Chapter 1. This chapter continues

More information

Visible Light Communication-based Indoor Positioning with Mobile Devices

Visible Light Communication-based Indoor Positioning with Mobile Devices Visible Light Communication-based Indoor Positioning with Mobile Devices Author: Zsolczai Viktor Introduction With the spreading of high power LED lighting fixtures, there is a growing interest in communication

More information

4K Resolution, Demystified!

4K Resolution, Demystified! 4K Resolution, Demystified! Presented by: Alan C. Brawn & Jonathan Brawn CTS, ISF, ISF-C, DSCE, DSDE, DSNE Principals of Brawn Consulting alan@brawnconsulting.com jonathan@brawnconsulting.com Sponsored

More information

Amorphous Selenium Direct Radiography for Industrial Imaging

Amorphous Selenium Direct Radiography for Industrial Imaging DGZfP Proceedings BB 67-CD Paper 22 Computerized Tomography for Industrial Applications and Image Processing in Radiology March 15-17, 1999, Berlin, Germany Amorphous Selenium Direct Radiography for Industrial

More information

Custom Filters. Arbitrary Frequency Response

Custom Filters. Arbitrary Frequency Response CHAPTER 7 Custom Filters Most filters have one of the four standard frequency responses: low-pass, high-pass, band-pass or band-reject. This chapter presents a general method of designing digital filters

More information

FRAUNHOFER AND FRESNEL DIFFRACTION IN ONE DIMENSION

FRAUNHOFER AND FRESNEL DIFFRACTION IN ONE DIMENSION FRAUNHOFER AND FRESNEL DIFFRACTION IN ONE DIMENSION Revised November 15, 2017 INTRODUCTION The simplest and most commonly described examples of diffraction and interference from two-dimensional apertures

More information

digital film technology Resolution Matters what's in a pattern white paper standing the test of time

digital film technology Resolution Matters what's in a pattern white paper standing the test of time digital film technology Resolution Matters what's in a pattern white paper standing the test of time standing the test of time An introduction >>> Film archives are of great historical importance as they

More information

Qäf) Newnes f-s^j^s. Digital Signal Processing. A Practical Guide for Engineers and Scientists. by Steven W. Smith

Qäf) Newnes f-s^j^s. Digital Signal Processing. A Practical Guide for Engineers and Scientists. by Steven W. Smith Digital Signal Processing A Practical Guide for Engineers and Scientists by Steven W. Smith Qäf) Newnes f-s^j^s / *" ^"P"'" of Elsevier Amsterdam Boston Heidelberg London New York Oxford Paris San Diego

More information

End-of-Chapter Exercises

End-of-Chapter Exercises End-of-Chapter Exercises Exercises 1 12 are conceptual questions designed to see whether you understand the main concepts in the chapter. 1. Red laser light shines on a double slit, creating a pattern

More information

Improving the Detection of Near Earth Objects for Ground Based Telescopes

Improving the Detection of Near Earth Objects for Ground Based Telescopes Improving the Detection of Near Earth Objects for Ground Based Telescopes Anthony O'Dell Captain, United States Air Force Air Force Research Laboratories ABSTRACT Congress has mandated the detection of

More information

Image analysis. CS/CME/BIOPHYS/BMI 279 Fall 2015 Ron Dror

Image analysis. CS/CME/BIOPHYS/BMI 279 Fall 2015 Ron Dror Image analysis CS/CME/BIOPHYS/BMI 279 Fall 2015 Ron Dror A two- dimensional image can be described as a function of two variables f(x,y). For a grayscale image, the value of f(x,y) specifies the brightness

More information

Signal Processing for Digitizers

Signal Processing for Digitizers Signal Processing for Digitizers Modular digitizers allow accurate, high resolution data acquisition that can be quickly transferred to a host computer. Signal processing functions, applied in the digitizer

More information

8.2 Common Forms of Noise

8.2 Common Forms of Noise 8.2 Common Forms of Noise Johnson or thermal noise shot or Poisson noise 1/f noise or drift interference noise impulse noise real noise 8.2 : 1/19 Johnson Noise Johnson noise characteristics produced by

More information

CS 548: Computer Vision REVIEW: Digital Image Basics. Spring 2016 Dr. Michael J. Reale

CS 548: Computer Vision REVIEW: Digital Image Basics. Spring 2016 Dr. Michael J. Reale CS 548: Computer Vision REVIEW: Digital Image Basics Spring 2016 Dr. Michael J. Reale Human Vision System: Cones and Rods Two types of receptors in eye: Cones Brightness and color Photopic vision = bright-light

More information

Receiver Performance and Comparison of Incoherent (bolometer) and Coherent (receiver) detection

Receiver Performance and Comparison of Incoherent (bolometer) and Coherent (receiver) detection At ev gap /h the photons have sufficient energy to break the Cooper pairs and the SIS performance degrades. Receiver Performance and Comparison of Incoherent (bolometer) and Coherent (receiver) detection

More information

Finger print Recognization. By M R Rahul Raj K Muralidhar A Papi Reddy

Finger print Recognization. By M R Rahul Raj K Muralidhar A Papi Reddy Finger print Recognization By M R Rahul Raj K Muralidhar A Papi Reddy Introduction Finger print recognization system is under biometric application used to increase the user security. Generally the biometric

More information

Image acquisition. In both cases, the digital sensing element is one of the following: Line array Area array. Single sensor

Image acquisition. In both cases, the digital sensing element is one of the following: Line array Area array. Single sensor Image acquisition Digital images are acquired by direct digital acquisition (digital still/video cameras), or scanning material acquired as analog signals (slides, photographs, etc.). In both cases, the

More information

Digital Image Processing

Digital Image Processing Digital Image Processing Part 2: Image Enhancement Digital Image Processing Course Introduction in the Spatial Domain Lecture AASS Learning Systems Lab, Teknik Room T26 achim.lilienthal@tech.oru.se Course

More information

Introduction. Chapter Time-Varying Signals

Introduction. Chapter Time-Varying Signals Chapter 1 1.1 Time-Varying Signals Time-varying signals are commonly observed in the laboratory as well as many other applied settings. Consider, for example, the voltage level that is present at a specific

More information

arxiv:physics/ v1 [physics.optics] 12 May 2006

arxiv:physics/ v1 [physics.optics] 12 May 2006 Quantitative and Qualitative Study of Gaussian Beam Visualization Techniques J. Magnes, D. Odera, J. Hartke, M. Fountain, L. Florence, and V. Davis Department of Physics, U.S. Military Academy, West Point,

More information