
SENSOR HARDENING THROUGH TRANSLATION OF THE DETECTOR FROM THE FOCAL PLANE

Thesis

Submitted to
The School of Engineering of the
UNIVERSITY OF DAYTON

In Partial Fulfillment of the Requirements for
The Degree
Master of Science in Electro-Optics

By
Marc A. Finet

UNIVERSITY OF DAYTON
Dayton, Ohio
August 2012

SENSOR HARDENING THROUGH TRANSLATION OF THE DETECTOR FROM THE FOCAL PLANE

Name: Finet, Marc Alain

APPROVED BY:

Russell Hardie, Ph.D.
Advisory Committee Chairman
Professor, Electrical Engineering & Electro-Optics

Christopher D. Brewer, Ph.D.
Committee Member
Technical Advisor, AFRL/MLPJE, WPAFB, OH

Peter Powers, Ph.D.
Committee Member
Professor, Department of Physics

John G. Weber, Ph.D.
Associate Dean, School of Engineering

Tony E. Saliba, Ph.D.
Dean, School of Engineering & Wilke Distinguished Professor

ABSTRACT

SENSOR HARDENING THROUGH TRANSLATION OF THE DETECTOR FROM THE FOCAL PLANE

Name: Finet, Marc Alain
University of Dayton

Advisor: Dr. Russell Hardie

The defense industry has numerous detectors that provide critical imaging capability on tactical and reconnaissance platforms, and these have been shown to be susceptible to permanent damage from high energy pulsed lasers in both laboratory and field testing. Much of the materials research into this problem involves two methods of providing pulsed laser damage protection: extrinsic limiter implementation and intrinsic detector hardening. This thesis focused on what gains could be made using another method: system defocus and detector redundancy. The work of this thesis revolved around hardening a camera system by defocusing the focal plane array (FPA) and then using image restoration algorithms to regain the image quality of the degraded images. The system used, a three channel image splitting prism with lens mount, provided a unique opportunity to test multiple images of an identical scene with slight spatial misalignments, varying sensor defocus, and precisely measured optical degradation as characterized by the Point Spread Function. These defocused images were then restored using filters that utilized information from only a single channel (the Wiener Filter, Regularized Least Squares Filter, and Constrained Least Squares Filter) and across multiple channels (the Multichannel Regularized Least Squares Filter). Results from the single channel filters were excellent and allowed significant sensor hardening without image degradation when compared to the unfiltered image. Results from the multichannel RLS filter as tested, however, were disappointing when compared to those from the single channel filters, and could be expanded upon in future work.

TABLE OF CONTENTS

ABSTRACT
TABLE OF CONTENTS
LIST OF FIGURES
CHAPTER 1: INTRODUCTION
CHAPTER 2: PROOF OF IMAGE HARDENING THROUGH TRANSLATION OF THE FOCAL ARRAY FROM FOCUS
    2.1 Experimental Setup
    2.2 Experimental Results
CHAPTER 3: DESCRIPTION OF OPTICAL IMAGE PROCESSING FILTERS FOR USE IN IMAGE RESTORATION
    3.1 Wiener Filter Image Restoration
    3.2 Constrained Least Squares Image Restoration
    3.3 Regularized Least Squares Image Restoration
    3.4 Multichannel Regularized Least Squares Image Restoration
CHAPTER 4: EXPERIMENTAL SETUP FOR DEFOCUS RESTORATION MEASUREMENT
    4.1 Image Capture and Defocus Measurement System Description
        4.1.1 Multichannel Prism Description
        4.1.2 Imaging Targets
    4.2 Defocus Hardening
CHAPTER 5: SIMULATIONS
    5.1 Input Images
    5.2 Blur - How It Works
    5.3 Add Noise - How Noise Is Selected And How It Works
    5.4 Calculate Wiener SNR, RLS Alphas, CLS Gammas
    5.5 Apply Values To Simulated Images - Results
    5.6 Multi-Channel Restoration - All Three Channels In Focus
CHAPTER 6: ACTUAL IMAGES - SINGLE CHANNEL
    6.1 Show Of The Measured Images
CHAPTER 7: MULTICHANNEL IMAGES
    7.1 Predictive Defocus Restoration Through Simulations
CHAPTER 8: CONCLUSIONS
REFERENCES

LIST OF FIGURES

Figure 1: Laser damaged CCD array
Figure 2: Gaussian beam profile through focus
Figure 3: Experimental Setup proving defocused sensors harden them to high energy lasers
Figure 4: Profile of Continuum beam used in experiment
Figure 5: Damage grid on VPC-790B camera
Figure 6: Damage threshold of camera vs fluence
Figure 7: Sample cross channel modulation transfer functions
Figure 8: Experiment setup
Figure 9: Pacific Corp PC-370A
Figure 10: Prism system with attached cameras
Figure 11: Image splitting prism
Figure 12: Diagram showing dimensions and optical paths of Optec prisms
Figure 13: Registration pattern
Figure 14: Sample PSF
Figure 15: Slanted edge target
Figure 16: Sample image scene [10]
Figure 17: Gaussian beam propagation [12]
Figure 18: Beam radius vs translation from focal plane
Figure 19: Distance between sensor and focal point as image is translated
Figure 20: Distance between sensor and focal point as image is translated
Figure 21: Peak energy change vs image target position
Figure 22: Hardening change vs distance from focus
Figure 23: Cameraman.tif
Figure 24: Westconcordorthophoto.png
Figure 25: Scaled red PSFs
Figure 26: Scaled blue PSFs
Figure 27: Scaled green PSFs
Figure 28: Radial cross section of red channel PSF
Figure 29: Radial cross section of blue channel PSF
Figure 30: Radial cross section of green channel PSF
Figure 31: Blurring using PSF convolution (Red channel at 9cm)
Figure 32: Simulated image blur at different distances
Figure 33: Simulating the noise of the camera
Figure 34: Cameraman.tif with 4 levels of noise
Figure 35: Zoomed in simulated noise levels with different variance
Figure 36: Optimal Wiener filter SNR values for all three channels (not at same scale)
Figure 37: Optimal RLS filter alpha values for all three channels (not at same scale)
Figure 38: Optimal CLS filter gamma values for all three channels
Figure 39: Comparison of RLS filter optimal alpha value scale (red channel)
Figure 40: Comparison of Wiener filter optimal SNR value scale (red channel)
Figure 41: Comparison of CLS filter optimal gamma scale (red channel)
Figure 42: Red channel image restoration accuracy (no noise) Cameraman.tif
Figure 43: Green channel image restoration accuracy (no noise) Cameraman.tif
Figure 44: Blue channel image restoration accuracy (no noise) Cameraman.tif
Figure 45: Optimal Cameraman filter restoration images (Red channel)
Figure 46: Red channel V = 0 maximum defocus restored Cameraman images
Figure 47: Optimal Cameraman filter restoration images (Green channel)
Figure 48: Green channel V = 0 maximum defocus restored Cameraman images
Figure 49: Optimal Cameraman filter restoration images (Blue channel)
Figure 50: Blue channel V = 0 maximum defocus restored Cameraman images
Figure 51: Red channel image restoration accuracy (no noise) Satellite image
Figure 52: Green channel image restoration accuracy (no noise) Satellite image
Figure 53: Blue channel image restoration accuracy (no noise) Satellite image
Figure 54: Optimal Satellite filter restoration images for V=0 (Red channel)
Figure 55: Red channel V = 0 maximum defocus restored Satellite images
Figure 56: Summary of filters restoration ability on simulated red channel for all noise levels - Satellite image
Figure 57: Red channel Wiener filter restoration error
Figure 58: Red channel RLS filter restoration error
Figure 59: Red channel CLS filter restoration error
Figure 60: Optimized Wiener filter restoration coefficient
Figure 61: Optimized RLS filter restoration coefficient
Figure 62: Optimized CLS filter restoration coefficient
Figure 63: Comparison of multi-channel vs single channel algorithm restoration quality
Figure 64: Red images multichannel RLS vs single channel RLS
Figure 65: Comparison of registered multi-channel vs single channel algorithm restoration quality
Figure 66: Showing effects of image registration via affine transforms
Figure 67: Unrestored prism images
Figure 68: Zoomed in unrestored images
Figure 69: Image #1 (0.5cm before focus) algorithm comparison
Figure 70: Zoomed in view of image 1 (0.5cm before focus)
Figure 71: Image #6 (at focus) algorithm comparison
Figure 72: Zoomed in image
Figure 73: Filter comparison of image
Figure 74: Zoomed in image
Figure 75: Image #16 filter comparison
Figure 76: Image
Figure 77: Comparison between single and multi channel RLS at location
Figure 78: Zoomed in comparison of single and multichannel RLS filters at location
Figure 79: Comparison between single and multi channel RLS at location
Figure 80: Zoomed in comparison of single and multi channel RLS filters at location
Figure 81: Comparison between single and multi channel RLS at locations 11, 16, and
Figure 82: Zoomed in comparison of single and multi channel RLS filters at locations 11, 16 and
Figure 83: Test of predictive defocus restoration abilities

CHAPTER 1

INTRODUCTION

Figure 1: Laser damaged CCD array

The Air Force has numerous detectors that provide critical imaging capability on tactical and reconnaissance platforms, and these have been shown to be susceptible to permanent damage (such as that shown in Figure 1) from high energy pulsed lasers in both laboratory and field testing. As such, the USAF is researching many methods of providing laser hardening to its imaging systems. Most materials research into this involves two different methods of providing pulsed laser damage protection: extrinsic limiter implementation and intrinsic detector hardening. This thesis examines a third option: system defocus and redundancy.

This method makes use of an imaging system containing multiple, redundant detectors all imaging the same source. Such systems divide the image either into several identical images, each with an equal number of photons, or into separate spectral bands, passing each band to a separate sensor. The images from the several imaging sensors are then recombined into a single image. Examining the imaging system that splits the imaged scene into several identical images, it becomes evident that one has the capability to image an identical scene many times simultaneously under several different optical conditions, such as image defocus. Since each image will contain different information on the same identical scene, it is then possible to use post-processing algorithms to extract that data and provide an enhanced, restored image of that scene. This thesis will cover just such a technique.

Previous work has shown that, under certain conditions, it may be possible to restore a defocused, blurred image to a better quality than if the image had been in focus, through the use of image processing algorithms [1]. It has also shown that the limit to the amount and quality of the image restoration depends upon the size of the wavelength band visible to the imaging sensor and the ability to accurately measure the Point Spread Function of the system. Additionally, this work theorized that measuring an image with multiple sensors, each with a varying amount of defocus and with differing locations of the zeroes on their respective MTFs, would aid in restoring defocused images.

This thesis builds upon that work by proving that image defocus does increase the laser damage threshold of the system, and by testing whether the use of a redundant imaging system allows for greater image restoration than a single sensor. A redundant imaging system that splits the image into three identical channels (each with 1/3 the image intensity) was selected, and then the sensors that read each of these channels were defocused by different amounts by moving them from the focal plane, such that an identical image was placed onto each sensor with varying amounts of blur. The goal was to show that, when the images were recombined after post-processing, the additional data from the out-of-focus channels would increase the quality of the image when compared to the image recorded by a single imaging channel.

CHAPTER 2

PROOF OF IMAGE HARDENING THROUGH TRANSLATION OF THE FOCAL ARRAY FROM FOCUS

Figure 2: Gaussian beam profile through focus (spot in focus, 2x spot size, 3x spot size)

A relatively simple way to reduce the threat of a high energy laser to a digital imaging sensor is to reduce the energy density at the sensor's imaging plane by moving the detector out of the focal plane of the imaging lens. This has the effect of increasing the effective spot size, which in turn reduces the peak laser intensity incident upon the detector, as shown in Figure 2. This section describes the experiment performed, which proves that moving an imaging system away from the focal plane of a lens increases the damage threshold of the lens/detector system.

2.1 Experimental Setup

Figure 3: Experimental Setup proving defocused sensors harden them to high energy lasers

The experimental setup shown in Figure 3 is designed to test a VPC-790B camera's damage threshold at several different spot sizes, each containing the same amount of energy. The beam used was a Continuum PowerLite Precision II 9010 operating at ~26uJ at a wavelength of 532nm. This laser proved to have a relatively good beam profile but contained high shot-to-shot energy fluctuations, making it necessary to monitor the beam's output energy with a reference diode calibrated to the laser's energy levels with a Joule meter. By calibrating the reference diode it was measured that, for the setup used, 5.6E-6 Joules of energy reached the target (before the insertion of N.D. filters) for every Volt the reference diode output. This allowed the tracking of the shot-to-shot energy fluctuations of the Continuum.

Figure 4: Profile of Continuum beam used in experiment (raw beam profile at lens aperture; focused beam with 250mm lens)

To cut the Continuum's output energy down to near the camera's measured damage threshold, it was necessary to use a stack of Neutral Density (N.D.) filters to reduce the energy of the incident beam. By inserting a total N.D. of 1.2 before the focusing lens it was possible to attenuate the beam's energy to a level just above the damage threshold of the camera when the beam was at focus. The attenuated beam was then steered into a lens with a focal length of 250mm. Beyond the lens the camera was moved through the Rayleigh range of the lens and then slowly out of focus. This allowed precise control of the spot size of the beam.

Figure 5: Damage grid on VPC-790B camera (rows from in focus, 0mm, to 5mm out of focus)

The camera under test was attached to three Newport linear actuators controlled by a Newport ESP300 control box, allowing precision movement in X, Y, and Z to an accuracy of better than ten microns. This allowed the creation of a damage grid pattern for easy analysis, with each row being a different distance from the focal point of the lens (with row one at the focal point), repeated several times in different columns.

2.2 Experimental Results

For this experiment 23 sets of data were taken, with each data set moving the camera system from in focus to 5mm out of focus over the course of eleven separate laser pulses. The results are shown in Figure 5. As can be seen in Figure 5, different results were recorded between data sets (columns), with some data sets damaging and some not at identical distances from the focal plane of the lens. This is due to the shot-to-shot noise of the beam, which was recorded through the reference diode used in the experiment.

Taking the energy levels of the laser pulses into account and plotting the results according to spot size gives Figure 6, with damage marked with circles.

Figure 6: Damage threshold of camera vs fluence

As shown, the laser energy level needed to damage the sensor increases as the camera moves away from the focal plane and the beam radius increases, shown here as a decrease in fluence. This is to be expected, and proves that the decrease in peak intensity overcomes any possible issues with heat dissipation.

These fluence levels show that an increase in beam radius from 0.023mm to 0.041mm increases the sensor damage threshold from approximately 0.52uJ to 0.81uJ, an increase in damage threshold of roughly 55%. This corresponds to moving the sensor from the focal plane by 2.5mm, or 1% of the focal length of a 250mm lens. This is a significant amount of protection gained by a slight shift from the lens focal plane, and it was worth investigating further.

CHAPTER 3

DESCRIPTION OF OPTICAL IMAGE PROCESSING FILTERS FOR USE IN IMAGE RESTORATION

Now that it has been proven that defocusing an imaging system hardens the optical sensor, this chapter describes how the blurring caused by defocusing the image was removed using image restoration filters. This was done using Wiener filters, Regularized Least Squares filters, and Constrained Least Squares filters.

3.1 Wiener Filter Image Restoration

When describing image blur, it is easiest to use a shift-invariant blur model with noise, described by Equation (3.1.1) [2]:

y(m,n) = h(m,n) * x(m,n) + u(m,n)    (3.1.1)

In this formula, y(m,n) represents the two-dimensional image as observed by our imaging system, while the original, perfect image itself is described as x(m,n). The Point Spread Function (PSF) of the imaging system is defined as h(m,n) and includes the blurring of all optical elements (the lens and the focal plane array of the imager). Finally, u(m,n) represents the noise introduced on the original image and seen in our observed image.

The Wiener filter is best described in the frequency domain and can be written as G(k,l) such that

\hat{X}(k,l) = G(k,l) Y(k,l)    (3.1.2)

where \hat{X}(k,l) is the desired restored image (in this case the Fourier Transform (FT) of the original image x), Y(k,l) is the FT of the image as observed by the imaging system, and G(k,l) is some as yet undetermined filter. G(k,l) can be chosen to minimize the expected value of the expression

E\left[ \left| X(k,l) - G(k,l) Y(k,l) \right|^2 \right]    (3.1.3)

This expression is minimized with

G(k,l) = \frac{H^*(k,l)}{|H(k,l)|^2 + S_u(k,l)/S_x(k,l)}    (3.1.4)

where H(k,l) is the Fourier Transform of the PSF, S_u(k,l) is the noise power spectrum, and S_x(k,l) is the signal power spectrum.

The noise power spectrum S_u for an M x N image is

S_u(k,l) = MN \sigma_u^2    (3.1.5)

with \sigma_u^2 the noise variance at each pixel. The signal power spectrum varies with each image, but most images have similar power spectra, and the Wiener filter is insensitive to small variations in the signal power spectrum. This enables a standard Wiener filter model to be used on each camera and still work despite differences in the images it is restoring.
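As a concrete illustration, the following is a minimal Matlab sketch of the Wiener restoration of Equation (3.1.4). The test image, the Gaussian stand-in PSF, and the constant noise-to-signal ratio nsr are illustrative assumptions, not the measured system quantities.

x   = im2double(imread('cameraman.tif'));   % stand-in "true" image
psf = fspecial('gaussian', 15, 2);          % stand-in PSF (not the measured one)
y   = imfilter(x, psf, 'circular');         % simulated observed image, Eq. (3.1.1)
nsr = 1e-3;                                 % S_u/S_x treated as a constant here

H    = psf2otf(psf, size(y));               % FT of the PSF, padded to image size
G    = conj(H) ./ (abs(H).^2 + nsr);        % Wiener filter, Eq. (3.1.4)
xhat = real(ifft2(G .* fft2(y)));           % restored estimate, Eq. (3.1.2)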

3.2 Constrained Least Squares Image Restoration

A major problem with the Wiener filter is that the power spectra of the undegraded image and noise must be known in advance. As discussed above, it is usually not possible to have an undegraded image to apply the filter to, so a general power spectrum is traditionally used. The problem is that this power spectrum is actually a statistical average and hence cannot be optimal for each image to which it is applied. An alternative method is the Constrained Least Squares (CLS) filter. It has the advantage that the only prior knowledge required about the imaging system is the mean and variance of the noise. Unfortunately, this also means that it is extremely sensitive to noise. When describing the constrained least squares filter, it is once again easiest to use the shift-invariant blur model with noise described by Equation (3.1.1).

A major problem with solving this type of equation is that it has multiple solutions, and the solutions do not depend continuously upon the data (i.e., small changes in the observed image lead to very different results when calculating the original image). In addition, H(k,l) is extremely sensitive to noise. The CLS filter alleviates these sensitivities by measuring the optimal solution against a measure of smoothness within the image, and assumes the original image will have strong spatial correlations. The CLS filter used here was developed by the author and Dr. Russell Hardie (Univ. Dayton). To measure these spatial correlations, the second derivative of the image (the Laplacian) was used to create a cost function C. The cost function shown in Equation (3.2.1) was used in this paper:

C = \sum_{m,n} \left\{ [y(m,n) - h(m,n) * x(m,n)]^2 + \gamma [l(m,n) * x(m,n)]^2 \right\}    (3.2.1)

Note that \gamma is a regularization parameter. This regularization parameter is what forces the image to be somewhat smooth by penalizing the high frequency content of the image; the larger its value, the smoother the restored image will be. The frequency domain solution to the optimization problem is given by the expression

\hat{F}(k,l) = \frac{H^*(k,l)}{|H(k,l)|^2 + \gamma |L(k,l)|^2} \, G(k,l)    (3.2.2)

where G(k,l) here denotes the Fourier transform of the observed image and L(k,l) is the Fourier transform of the discrete Laplacian function in Equation (3.2.3):

l(x,y) = \begin{bmatrix} 0 & -1/4 & 0 \\ -1/4 & 1 & -1/4 \\ 0 & -1/4 & 0 \end{bmatrix}    (3.2.3)

Note that \gamma must be adjusted such that the constraint function is optimized [3].
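A minimal Matlab sketch of Equation (3.2.2) follows, reusing the stand-in y, psf, and H from the Wiener sketch above; the value of gamma is an arbitrary placeholder rather than an optimized one (see Chapter 5).

lap   = [0 -1/4 0; -1/4 1 -1/4; 0 -1/4 0];  % discrete Laplacian, Eq. (3.2.3)
L     = psf2otf(lap, size(y));              % its Fourier transform L(k,l)
gamma = 1e-2;                               % assumed regularization weight
Fhat  = conj(H) ./ (abs(H).^2 + gamma*abs(L).^2) .* fft2(y);   % Eq. (3.2.2)
xcls  = real(ifft2(Fhat));                  % CLS restored image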

3.3 Regularized Least Squares Image Restoration

An additional restoration method is the Regularized Least Squares (RLS) filter. Like the Constrained Least Squares filter, it has the advantage that the only prior knowledge required about the imaging system is the mean and variance of the noise. It is in some respects very similar to the CLS filter, as discussed below. The RLS filter used here [4] was developed by Dr. Russell Hardie (University of Dayton). When describing the regularized least squares filter, it is once again easiest to use the shift-invariant blur model with noise described by Equation (3.1.1). Much like the CLS filter, the RLS filter has multiple solutions, and the solutions do not depend continuously upon the data (i.e., small changes in the observed image lead to very different results when calculating the original image). It too alleviates these sensitivities by measuring the optimal solution against a measure of smoothness within the image, and assumes the original image will have strong spatial correlations.

To measure these spatial correlations, the second derivative of the image (the Laplacian) was used to create a cost function C. For this paper the cost function used was

C = \sum_{m,n} \left\{ [y(m,n) - h(m,n) * f(m,n)]^2 + \alpha [l(m,n) * f(m,n)]^2 \right\}    (3.3.1)

where \alpha is a regularization parameter and f is the current estimate of the original image x. This regularization parameter is what forces the image to be somewhat smooth by penalizing the high frequency content of the image; the larger its value, the smoother the restored image will be. When working with the filter it is best to operate in the frequency domain. The gradient of the cost function C can then be written as

\nabla C(f) = 2 H^T (H f - y) + 2 \alpha L^T L f    (3.3.2)

where H and L are the matrix operators corresponding to convolution with h(m,n) and l(m,n); in the frequency domain, L(k,l) is the Fourier transform of the discrete Laplacian function described in Equation (3.2.3). Unlike the CLS filter, the RLS filter minimizes the cost function by iteratively guessing and improving estimates of the original image over a number of predefined iterations. Due to the iterative nature of the Regularized Least Squares filter, it is necessary to come up with an initial estimate of the original image, f, and then try to improve it. For the initial estimate the observed image, y, was used:

\hat{f} = y    (3.3.3)

This was then solved iteratively, using gradient descent optimization to choose the next estimated image value. This process is repeated for the desired number of iterations; if allowed to run for enough iterations it returns a close representation of the actual image observed by our system.
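The iteration can be sketched in Matlab as below, again reusing y, H, and L from the sketches above; the step size mu, iteration count, and alpha value are illustrative assumptions.

alpha = 5e-3;  mu = 0.4;  nIter = 200;       % assumed values for illustration
Y = fft2(y);
F = Y;                                       % initial estimate f = y, Eq. (3.3.3)
for k = 1:nIter
    gradC = 2*conj(H).*(H.*F - Y) + 2*alpha*(conj(L).*L).*F;   % Eq. (3.3.2)
    F = F - mu*gradC;                        % gradient descent step
end
xrls = real(ifft2(F));                       % RLS restored image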

3.4 Multichannel Regularized Least Squares Image Restoration

To use the Regularized Least Squares image restoration function on multiple frames of data, we simply change the cost function of the normal RLS filter to span several frames of data, as shown in [4]. This turns our cost function into

C = \sum_{a=1}^{b} \sum_{m,n} [y_a(m,n) - h_a(m,n) * f(m,n)]^2 + \alpha \sum_{m,n} [l(m,n) * f(m,n)]^2    (3.4.1)

where the multichannel RLS filtering is performed over channels a = 1, 2, 3, ..., b [4].

Figure 7: Sample cross channel modulation transfer functions

The multichannel RLS algorithm has a major advantage over its single channel version in that it can use cross-channel information to increase its restoration ability. Because each frame is slightly different, each frame actually contains different information regarding the same scene.

Thus, if we know the relationship between each frame, we can use a multichannel filter to combine the information and create a single reconstructed image. This is extremely useful for filling in the zeroes of a camera system's Modulation Transfer Function (MTF). Figure 7 shows how this can be done with three sample MTFs. At any spatial frequency where an MTF has a value of zero, that imaging system is not able to reproduce any information about the scene it is imaging. However, if all three of the sample imaging systems shown were imaging the same scene, we would be able to fill in any such gaps with information obtained from the other two channels. In this way a multichannel restoration is able to achieve better performance than a single channel version.
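As a sketch of how the single-channel iteration above extends to Equation (3.4.1), the per-channel data-fidelity gradients are simply summed. Here ys and psfs are assumed cell arrays holding the registered channel images and their measured PSFs; the step size and alpha remain illustrative.

Y  = cellfun(@fft2, ys, 'UniformOutput', false);           % channel spectra
Hs = cellfun(@(p) psf2otf(p, size(ys{1})), psfs, 'UniformOutput', false);
L  = psf2otf(lap, size(ys{1}));                            % Laplacian OTF
alpha = 5e-3;  mu = 0.4/numel(ys);  nIter = 200;           % assumed values
F = Y{1};                                                  % seed with one observed channel
for k = 1:nIter
    gradC = 2*alpha*(conj(L).*L).*F;                       % shared smoothness term
    for a = 1:numel(Y)                                     % add each channel's fidelity term
        gradC = gradC + 2*conj(Hs{a}).*(Hs{a}.*F - Y{a});
    end
    F = F - mu*gradC;                                      % gradient descent step
end
xmc = real(ifft2(F));                                      % multichannel restored image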

CHAPTER 4

EXPERIMENTAL SETUP FOR DEFOCUS RESTORATION MEASUREMENT

To test the restoration abilities of the image processing algorithms, it was necessary to build a setup that would allow the capture of images in a manner ensuring precise and repeatable measurements over several imaging targets. Most importantly, it had to allow fine, measured control of the amount of image blur being introduced into the imaging system. This image blur and camera alignment needed to be easily and precisely duplicated, so that one could image multiple imaging targets under identical conditions, enabling the correlation of the information obtained from multiple imaging targets with identical alignments and induced optical blur levels.

4.1 Image Capture and Defocus Measurement System Description

Figure 8: Experiment setup

Figure 9: Pacific Corp PC-370A

The experimental setup chosen for this paper is shown in Figure 8. It was chosen to enable imaging targets to be easily swapped out and moved through precisely measured amounts of defocus. The prism shown is an Optec prism designed specifically for this experiment.

This prism splits the incoming image into three identical output channels, each with an equal number of photons; it is described further later in this section. Permanently mounted to each output channel is a PC-370A monochromatic CCD camera manufactured by Pacific Corporation (Figure 9, though used here without the lens shown). An image of the prism with cameras permanently attached is shown in Figure 10. The imaging lens of the system was a Nikon AF DC-Nikkor 105mm f/2D lens, chosen for its high quality and the low amount of image aberrations it introduces.

Figure 10: Prism system with attached cameras

The image targets used for this experiment were mounted to a sliding rail system with precisely measured and labeled millimeter ruler markings, enabling them to be moved both into and out of focus along the camera system's optical axis in precise, repeatable increments.

At each location, images were taken of three different image targets: one to determine image registration parameters, one to measure defocus blurring (via the Point Spread Function), and one of a complex desert background scene (the sample image scene to restore). These images are discussed in more detail in Section 4.1.2.

4.1.1 Multichannel Prism Description

The prism selected was specially made by Optec Corporation. This prism takes a single input image and splits it into three identical output images (each receiving 1/3 of the photons), which are in turn imaged by monochromatic camera systems. Figure 11 shows the prism itself without lens mountings or the attached cameras.

Figure 11: Image splitting prism

It is important to specify that the Optec prism was manufactured such that each optical output channel has an identical optical path length through the prism and outputs an identically aligned image. This allows the three cameras mounted on the end of the system to all share a single focal point when a lens is mounted onto the system. A diagram showing the optical paths of the channels is shown in Figure 12. Note that each channel has an identical optical path length (30mm) through the prism, which is equivalent to 18.2mm in air.

Figure 12: Diagram showing dimensions and optical paths of Optec prisms

4.1.2 Imaging Targets

The first measurement needed with this prism system was to determine the affine transform parameters required to provide image registration across camera channels and between stage positions. The camera systems attached to the prism are closely aligned to show the same image on all three channels. In actuality they are not perfectly aligned, however, meaning that there are minor variations between the images that would cause differing amounts of image shift, blurring and other visual artifacts if not accounted for. To accommodate these alignment errors it was necessary to perform geometric transformations on the images to bring them into proper alignment. This includes performing a spatial transformation of coordinates and using intensity interpolation to assign intensity values to the spatially transformed pixels ([3] pg 87). The spatial transformation method used is known as the affine transform [5], which has the general form shown below:

\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = T \begin{bmatrix} \tilde{x} \\ \tilde{y} \\ 1 \end{bmatrix}, \qquad T = \begin{bmatrix} t_{11} & t_{12} & t_{13} \\ t_{21} & t_{22} & t_{23} \\ 0 & 0 & 1 \end{bmatrix}

This transformation can scale, rotate, translate, or shear a set of coordinate points, depending on the values within the transform matrix T. Affine transformations have several nice features and ensure the image is not overly warped, since they map lines to lines and preserve the ratios of distances along those lines [6].
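For illustration, the following is a minimal Matlab sketch of estimating and applying such a transform. The control-point coordinates and the images movingImg and fixedImg are hypothetical, standing in for matched features picked off the registration target in two channels.

movingPts = [112  95; 400 120; 240 380];            % hypothetical points in channel to warp
fixedPts  = [110  98; 398 125; 237 384];            % matching points in reference channel
tform = fitgeotrans(movingPts, fixedPts, 'affine'); % solve for the transform matrix T
registered = imwarp(movingImg, tform, ...
    'OutputView', imref2d(size(fixedImg)));         % spatial transform + intensity interpolation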

Figure 13: Registration pattern

To calculate these parameters, a custom imaging target, shown in Figure 13, was imaged by the system. This target has a good combination of line types and thicknesses, featuring many line intersections. It has several thin, fine lines to help with image registration when the camera is near focus, while also having large, bold lines for when the camera is severely defocused and the images contain a high amount of blur.

Once the affine transformation parameters for the system were measured, another target was inserted onto the rail system to measure the Modulation Transfer Functions (MTF) and Point Spread Functions (PSF) of the system. The MTF describes the ability of a CCD imaging system to reproduce the contrast or modulation present in a scene at any given spatial frequency [7]. The advantage of measuring the MTF is that it is a direct and quantitative measure of image quality, and it is widely used within the photonics industry. The Point Spread Function of an imaging system is another method of measuring its image quality. The PSF is a measurement of the optical blur of a system, obtained by imaging a point source and recording the corresponding blotch of light on the imaging plane. If an ideal camera system were to image a point source, the result would be a perfectly circular spot with a uniform gray level within the spot and zero light elsewhere. Since our camera system is not ideal, however, an image of a point source yields a gray level distribution that is high at the center of the point but decreases radially outward, reaching zero a certain distance away from the center [8]. Many image restoration algorithms depend upon having an accurate point spread function; for this thesis, however, the MTF was measured and used to estimate the PSF, because the PSF of a camera system is difficult to accurately measure directly.

Figure 14: Sample PSF

Figure 15: Slanted edge target

The target chosen to measure the MTF of the camera system was simply a slanted edge target printed at high resolution, shown in Figure 15. Traditionally, this method of MTF measurement is done using a back-illuminated razor blade [9], but the cameras in this system are of a low enough resolution that a printed slanted-edge target provides accurate results. From this target the Edge Spread Function (ESF) was measured, from which the MTF and PSF of the camera system were derived.

Figure 16: Sample image scene [10]

The third and final image target used was a view of a sample desert scene from a distance (Figure 16). This image is a subset of a larger image [10] and was chosen because it has a large amount of detail, allowing an easy visual demonstration of the image processing algorithms' restoration abilities.

4.2 Defocus Hardening

As proven in Chapter 2, the amount of defocus put into the imaging system has a direct effect on the system's protection from high energy laser systems. The experimental system used here was specifically designed to allow the easy addition of defocus to the camera system by mounting the image targets onto a rail system. Simple Gaussian propagation models were then used to determine the laser hardening allowed by the image restoration algorithms. When calculating the Gaussian beam intensity distribution it is necessary to use Equation (4.2.1) ([11] pg. 354):

I(r) = I_0 \exp\left( \frac{-2 r^2}{w_0^2} \right)    (4.2.1)

Note that r is the radius away from the beam's propagation axis and I_0 is the peak, on-axis intensity of the beam. w_0 is the Gaussian beam radius, defined as the beam width at which the intensity falls to 1/e^2 of its on-axis value.

Figure 17: Gaussian beam propagation [12]

Propagation of Gaussian beams through an optical system can be treated almost as simply as geometric optics. Because of the unique self-Fourier-transform characteristic of the Gaussian, an integral to describe the evolution of the intensity profile with distance is not needed. The transverse intensity distribution remains Gaussian at every point in the system; only the radius of the Gaussian and the radius of curvature of the wavefront change [12], as shown in Figure 17.

The change in beam width as the Gaussian beam propagates can be shown to be ([11] pg. 352):

w(x) = w_0 \sqrt{ 1 + \left( \frac{\lambda x}{\pi w_0^2} \right)^2 }    (4.2.2)

where w is the beam width, w_0 is the beam waist (the width at focus) and \lambda is the wavelength. To calculate the change in peak intensity as the focal plane array moves away from the focal point of the lens, some assumptions were made about the system. It was assumed that the beam comes in collimated along the optical axis of the lens system. It was also assumed that the beam width of the laser targeting the system is 1/2 the width of the entrance aperture of the lens, in order to ensure that diffraction could be ignored in the calculations. The lens had a focal length of 105mm and its f/# was set to 2.8. This gives an entrance aperture diameter of

D = \frac{\text{focal length}}{f_{num}} = \frac{105\,\text{mm}}{2.8} = 37.5\,\text{mm}    (4.2.3)

and a beam radius of 18.75mm. It was also assumed that the beam was collimated and operating at 531.5nm, a common laser wavelength. Given these assumptions it is possible to calculate the beam waist radius at focus to be [12]:

2 w_0 = \frac{4 \lambda f}{\pi D} = \frac{4 (531.5 \times 10^{-9})(105 \times 10^{-3})}{\pi (37.5 \times 10^{-3})}, \qquad w_0 \approx 947\,\text{nm}    (4.2.4)

Once the minimum beam waist for the system at focus was calculated, the beam waist expansion as the object image moves away from the focal point of the lens was plotted. Figure 18 shows how much the Gaussian beam in the system changes in radius as it moves away from its focal point. As shown, even 2.5mm past the focal point the beam radius has expanded from 947nm to 446um.

Figure 18: Beam radius vs translation from focal plane
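The curve of Figure 18 can be reproduced with a few lines of Matlab directly from Equations (4.2.2) and (4.2.4); the numerical values below are those stated in the text.

lambda = 531.5e-9;  f = 105e-3;  D = 37.5e-3;   % wavelength, focal length, aperture
w0 = 2*lambda*f/(pi*D);                         % waist radius, ~947nm, Eq. (4.2.4)
x  = linspace(0, 2.5e-3, 500);                  % translation from focus (m)
w  = w0*sqrt(1 + (lambda*x./(pi*w0^2)).^2);     % beam radius, Eq. (4.2.2)
plot(x, w), xlabel('Distance from focal point (m)'), ylabel('Beam radius (m)')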

To relate this to the system used in this thesis, it must be shown how moving the imaging target out of focus corresponds to moving the FPA away from the focal point of the lens. To do this, a simple thin lens calculation [13] was performed:

\frac{1}{f} = \frac{1}{s_1} + \frac{1}{s_2}    (4.2.5)

Taking the knowledge that the red channel is in focus when the variable distance is 9.0cm, it follows that s_1 = 90mm + 765.8mm + 45mm = 900.8mm. From this, and the knowledge that the lens has a focal length of 105mm, the distance between the lens and the FPA can be solved to be approximately 118.9mm. Knowing that this is system focus, it can be calculated how far moving the target out of focus separates the FPA from the focal point of the lens. This is shown in Figure 19.
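A short Matlab sketch of this mapping, using Equation (4.2.5) and the distances given above; the range of stage positions is an assumption matching the rail positions described earlier in this chapter.

f  = 105e-3;                                % lens focal length (m)
s2 = 1/(1/f - 1/0.9008);                    % lens-to-FPA distance at focus, ~118.9mm
stage = linspace(0.085, 0.22, 200);         % assumed stage positions (m)
s1 = stage + 0.7658 + 0.045;                % object distance at each stage position
defocus = 1./(1/f - 1./s1) - s2;            % shift of the focal point from the FPA
plot(stage, defocus*1e3), xlabel('Stage position (m)'), ylabel('defocus (mm)')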

Figure 19: Distance between sensor and focal point as image is translated

Now that this is done, it is possible to map how each of the stage positions for the setup changes the beam width on the focal plane array, and hence the maximum energy. This was done using Equations 4.2.1 and 4.2.2. The results are shown in Figure 21 and Figure 22, with the assumption of a 5mW laser input to the system. Figure 21 is a useful look at how the peak energy changes with respect to the defocus corresponding to the image target position. It is useful to track how the defocus amounts (which are labeled with regard to target position on the rail system, i.e. 9.5cm, 12cm, etc.) change the energy on the system. This data corresponds to Figure 22, which contains the same data but maps it according to the focal plane array's displacement from the focal point of the lens for a collimated input beam.

Figure 20: Distance between sensor and focal point as image is translated

Figure 21: Peak energy change vs image target position

Figure 22: Hardening change vs distance from focus

As shown in these figures, a small amount of translation of the FPA from the focal plane can make a huge difference in the peak intensity incident upon the array. Figure 22 shows this by assigning a Hardening Factor to these calculations, defined here as the ratio of the intensity the system can withstand at each translation to the intensity it can withstand with the FPA at the focal plane of the system. These figures give a quick method of checking how much sensor hardening is gained by the amounts of defocus shown in future sections.
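Since the peak intensity of a Gaussian beam of fixed power scales as 1/w^2, the hardening factor of Figure 22 can be sketched by combining Equation (4.2.2) with that ratio, reusing lambda and w0 from the earlier sketch.

x = linspace(0, 5e-4, 200);                     % FPA shift from focus (m)
w = w0*sqrt(1 + (lambda*x./(pi*w0^2)).^2);      % beam radius at each shift
hardening = (w./w0).^2;                         % peak-intensity reduction factor
semilogy(x, hardening), xlabel('fpa distance from focus (m)'), ylabel('Hardening factor')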

CHAPTER 5

SIMULATIONS

Before applying the filters to real images captured by the prism, it was necessary to simulate the images gathered by the prism. Performing these simulations served a very important purpose: it allowed the calculation of the optimal parameter values needed for the Regularized Least Squares, Constrained Least Squares and Wiener filters. Running these simulations involved taking an idealized input image, blurring it in a similar method to how the camera blurs an image, adding appropriate noise levels to the blurred image, and then running the RLS and CLS algorithms at several different alpha and gamma values. The results were then compared, and the alpha and gamma values that yielded the nearest result to the idealized input image were selected. The Wiener filter simulations made use of the fact that the Wiener filter is relatively insensitive to the power spectrum of the image.

Combined with the fact that most images have fairly similar power spectra, this means the signal-to-noise calculations found here should be applicable to Wiener filtering of the real prism-system images.

5.1 Input Images

The intensity prism simulations were performed with two different input images so the results could be compared. Ideally, the restoration parameters calculated from simulating these two images would achieve similar results when applied to actual images. This would imply that the algorithms can restore the images properly no matter what type of scene the cameras are imaging. These sample input images are shown below:

Figure 23: Cameraman.tif

Figure 24: Westconcordorthophoto.png

These images (both included in the Matlab Image Processing Toolbox) represent the ideal images that the algorithms were working towards. These ideal images had to be blurred, and then have noise added, to simulate the differences between the actual scene being imaged and the imaging system's representation of that scene.

5.2 Blur - How It Works

The blurring of an imaging system can be simulated using a simple two-dimensional convolution of the input target image with the system's measured Point Spread Function (PSF). This simulates the blurring introduced by an imperfect lens [14]. To measure the point spread function, a slanted edge target (Figure 15) was imaged for each channel, with the image defocused by precise amounts through its movement on the rail system (see Chapter 4).

From this image the Edge Spread Function (ESF) of the imaging system was measured using the sfrmat 2.0 Matlab tool developed by Peter Burns. This tool measures the ESF over several rows of CCD pixels and then averages them to determine the ESF of the system. The ESF returned was then converted to the Line Spread Function (LSF) of the system by taking the derivative of the ESF [9]. From the Line Spread Function the PSF of the optical system was derived by assuming that the PSF is circularly symmetric. This LSF to PSF conversion was performed using a Matlab script developed by Dr. Russell Hardie (University of Dayton), which creates the system PSFs by using the measured LSF values as the radial profile of a circular array.

The measured Point Spread Functions are shown in Figure 25 - Figure 27. The plotted PSFs were each taken from an image at one of the thirty-six defocus amounts (their titles give the location of the image on the rail system described in Chapter 4, and as such indirectly represent distance from the focal point of the lens). This was done for each of the three channels in the Optec prism (which, although color independent with this prism, are labeled Red, Blue and Green to tell them apart; this is simply a legacy label from another prism designed by Optec and has no actual correlation to color in this case). Each PSF is plotted in addition to plots of the cross sections of each PSF. This allows easy comparison of both the sharpness of the PSF between the channels (through their relative intensities) and the spreading of the PSF within each channel as defocus is increased.
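A rough Matlab sketch of this ESF-to-PSF pipeline follows; here esf is assumed to be the one-dimensional edge spread function returned by the sfrmat analysis, and the circular-symmetry assumption is implemented by using the LSF as a radial profile. This is only an approximation of the actual conversion script.

lsf = diff(esf(:));                        % LSF is the derivative of the ESF [9]
lsf = lsf / max(lsf);                      % normalize the profile
n   = numel(lsf);
[xg, yg] = meshgrid(1:2*n-1);              % square grid for the 2-D PSF
r   = sqrt((xg - n).^2 + (yg - n).^2);     % radius of each pixel from the center
psf = interp1(0:n-1, lsf, min(r, n-1));    % LSF values used as the radial profile
psf = psf / sum(psf(:));                   % unit volume so convolution preserves energy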

Figure 25: Scaled red PSFs (panels labeled by rail position, 8.5cm to 22cm)

Figure 26: Scaled blue PSFs (panels labeled by rail position, 8.5cm to 22cm)

Figure 27: Scaled green PSFs (panels labeled by rail position, 8.5cm to 22cm)

Figure 28: Radial cross section of red channel PSF

Figure 29: Radial cross section of blue channel PSF

Figure 30: Radial cross section of green channel PSF

Comparing the PSFs, the red channel has its best image quality when the target is mounted at 9cm, while the blue and green channels have their best images when the target is at 8.5cm. This proves that the optical paths of the individual channels of the imaging system are not identical, as they were designed to be.

Additionally, it is evident from the sharpness of the PSFs that the red channel has the best image quality, as it is nearest to a perfect point spread function. Now that the PSF of each channel at each target distance has been measured, these PSFs can be used to simulate the blur levels. To do this it was necessary to take each input image, pad its edges (to keep the convolution accurate along the image's actual edges) and then perform a two-dimensional convolution with the system's PSF. Example blur results are shown in Figure 31 and Figure 32.
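A minimal Matlab sketch of this blurring step is given below; the padding width assumes an odd-sized PSF kernel, and 'replicate' padding is one reasonable choice among several.

img  = im2double(imread('cameraman.tif'));      % input target image
p    = (size(psf,1)-1)/2;                       % half-width of the (odd) PSF
padded  = padarray(img, [p p], 'replicate');    % pad edges before convolving
blurred = conv2(padded, psf, 'valid');          % 2-D convolution, cropped to input size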

Figure 31: Blurring using PSF convolution (Red channel at 9cm)

Figure 32: Simulated image blur at different distances (Red channel, 9cm to 22cm)

5.3 Add Noise - How Noise Is Selected And How It Works

Now that the images have been accurately blurred, noise must be added to them in order to approximate the noise levels typically seen in camera systems. The assumption was made that the camera system has a Gaussian noise profile, which allows the use of Matlab's imnoise function to add Gaussian white noise to the images. These simulations were done at four different noise levels: noiseless, actual camera noise (low), medium, and high.

Figure 33: Simulating the noise of the camera (averaged image, single image, and simulated image)

To measure the amount of noise in the camera system, a set of 20 identical images was taken and averaged to remove the noise. It was then possible to take a single image and the averaged image and perform a pixel-to-pixel comparison to find the Mean Absolute Error (MAE) between them in Matlab.

Noise was then added to the averaged image using the imnoise.m function in Matlab, slowly increasing the variance of the noise until the MAE between the single image and the artificially noisy averaged image equaled the MAE between the single image and the averaged image. A zoomed-in comparison of the noiseless image, single image, and artificially noisy image is shown in Figure 33. As shown, adding Gaussian noise with a variance of 4.476e-5 closely simulates the noise inherent in the camera system; this noise level was found to give the closest approximation of the real-world camera system's actual noise levels. To simulate multiple levels of noise, simulations were done with four different levels of added noise variance (V=0, V=4.47e-5, V=1.2e-4, V=1e-3). These were selected to provide a broad range with which to test the image restoration algorithms, from noiseless (V=0) to more noise than would ever be expected in a real-world situation. Figure 34 shows the cameraman.tif image with these noise levels added. A zoomed-in, 50 x 50 pixel comparison of these noise levels is shown in Figure 35 for better detail.
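This calibration can be sketched in Matlab as below; frames is an assumed M x N x 20 stack of identical captures, and the 5% growth step for the variance search is an arbitrary choice.

avgImg    = mean(frames, 3);                    % averaging removes the noise
targetMAE = mean2(abs(frames(:,:,1) - avgImg)); % noise level of one raw frame
V = 1e-6;                                       % starting variance guess
while mean2(abs(imnoise(avgImg,'gaussian',0,V) - avgImg)) < targetMAE
    V = V*1.05;                                 % raise variance until the MAE matches
end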

Figure 34: Cameraman.tif with 4 levels of noise (V = 0, 4.47e-5, 1.2e-4, 1e-3)

Figure 35: Zoomed in simulated noise levels with different variance

5.4 Calculate Wiener SNR, RLS Alphas, CLS Gammas

Once the blurring coefficients had been calculated and the noise levels added, it was possible to calculate optimal restoration coefficients for the selected image processing algorithms. For the Wiener filter this means finding the SNR, for the Regularized Least Squares the alpha values, and for the Constrained Least Squares the gammas. These coefficients determine how each algorithm balances its preference for a smooth image (due to an image's probable strong spatial correlations) against the need to create images with sharp edges where appropriate. To efficiently find the optimal filter values, Matlab's fminbnd.m function was chosen; it finds the minimum of a function of one variable within a specified search interval (Equation 5.4.1):

\min_x f(x) \quad \text{such that} \quad x_1 < x < x_2    (5.4.1)

It does this through the use of golden section search and parabolic interpolation [15], as derived in Computer Methods for Mathematical Computations. After the search routine had been chosen, it was necessary to determine the function to be minimized.

To do this, three Matlab functions were created (one for each filter) that report the MSE (Mean Squared Error) between the target (unblurred and noiseless) reference image and a simulated blurred image restored by the filter with a specific restoration coefficient (called SNR, alpha, or gamma depending upon the filter). These functions were then put into Matlab's fminbnd.m search routine to minimize the MSE by varying the restoration coefficient. The searches were run for each recorded amount of blur, at the four noise levels, and on the two input images. The optimal results for the filter restoration coefficients are shown below in Figure 36 - Figure 38. Note that the amount of image defocus present in the image increases with image number, and the image number can be used to find the stage position from the PSF figures given in Figure 25 - Figure 27.
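As an example of this search, the sketch below finds the optimal RLS alpha for one blur level. Here restoreRLS is a hypothetical wrapper around the filter of Section 3.3, and the search bounds are assumptions.

mseOfAlpha = @(a) mean2((restoreRLS(blurred, psf, a) - ideal).^2);   % MSE objective
[alphaOpt, mseOpt] = fminbnd(mseOfAlpha, 1e-6, 1e-1);                % Eq. (5.4.1)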

Figure 36: Optimal Wiener filter SNR values for all three channels (not at same scale)

Figure 37: Optimal RLS filter alpha values for all three channels (not at same scale)

Figure 38: Optimal CLS filter gamma values for all three channels

Figure 39: Comparison of RLS filter optimal alpha value scale (red channel)

Figure 40: Comparison of Wiener filter optimal SNR value scale (red channel)

Figure 41: Comparison of CLS filter optimal gamma scale (red channel)

Looking at the results shown in Figure 36 - Figure 38, it is easy to note that the cameraman.tif image usually yields a larger optimal filter coefficient than that calculated using the satellite image.

This is because the cameraman.tif image has less spatial detail than the satellite image westconcordorthophoto.png, and as such can tolerate more smoothing without losing its detail. As can be seen in Figure 39 - Figure 41, larger noise values require a larger optimal restoration parameter to help smooth out the effects of noise. It is interesting to note, however, that the CLS filter's optimal gamma parameter is fairly noise insensitive.

5.5 Apply Values To Simulated Images - Results

Now that the optimal filter parameters had been calculated, it was possible to apply them to the simulated images and compare the results to the optimal images to check filter quality. To do this, the Wiener, Regularized Least Squares and Constrained Least Squares filters were applied to the blurred images using the parameters gathered in the previous section. The results of these, plus the artificially blurred image, were then compared pixel by pixel to the optimal unblurred images by computing the Mean Squared Error (MSE). Since the MSE becomes lower as the restored image gets closer to the original optimal image, this allowed a direct numerical comparison of the image restoration ability of each algorithm. The filters were first applied to the noiseless images (V = 0), because these images should be the easiest to restore. The MSE results of applying the image restoration algorithms to the three image channels are shown in Figure 42 - Figure 44.
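A sketch of this evaluation loop in Matlab, producing one curve of Figures 42 - 44; psfs and alphas are assumed per-defocus-level arrays of measured PSFs and optimal coefficients, and restoreRLS is the hypothetical wrapper used earlier.

mseRLS = zeros(1, numel(psfs));
for i = 1:numel(psfs)
    p = (size(psfs{i},1)-1)/2;                                        % assumes odd PSFs
    b = conv2(padarray(ideal, [p p], 'replicate'), psfs{i}, 'valid'); % blur for level i
    mseRLS(i) = mean2((restoreRLS(b, psfs{i}, alphas(i)) - ideal).^2);
end
plot(mseRLS), xlabel('image #'), ylabel('MSE error')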

Figure 42: Red channel image restoration accuracy (no noise) Cameraman.tif

Figure 43: Green channel image restoration accuracy (no noise) Cameraman.tif

Figure 44: Blue channel image restoration accuracy (no noise) Cameraman.tif

As shown in Figure 42, all of the image restoration algorithms improve upon the unrestored blurred image with the same amount of blur applied. The best unrestored image for the Red channel is image six (which corresponds to the imaging targets being at the 9cm rail position), consistent with the point spread functions shown in Figure 25: the red channel's PSF is optimal when the target is at the 9cm position, meaning that is the location of the system's focus. The MSE value of the unrestored image at that position sets the benchmark; any restored image with an MSE below it is statistically closer to the actual ideal image than the best unrestored image at focus. The optimal restored images for each filter also all occur at image location six. The optimal RLS restoration MSE value is 122 and the optimal CLS MSE value is 34.48. The red channel restored images corresponding to these algorithms are shown in Figure 45.

Figure 45: Optimal Cameraman filter restoration images (Red channel)

Next, the calculated limits of the image defocus restoration abilities were tested. Looking at Figure 42, at image 16 (target at stage position 12cm) the Regularized Least Squares MSE value for the red channel is 424.5, meaning that it is a closer representation of the actual ideal image than the unrestored image at focus. Image 16 is likewise the most defocused image whose CLS restoration error is still less than that of the best unrestored image. For the Wiener filter, the last image with an MSE below this benchmark is image 18 (MSE = 446.6). These images, shown in Figure 46, demonstrate that this method of determining the quality of the image restoration algorithms (through comparison of MSE errors) is a valid one.

Note that the restored Wiener, RLS, and CLS images are comparable to the least blurred unrestored image at imparting the information contained in the optimal image. This analysis was then performed on all three image channels (Red, Green, and Blue) to further show that this holds true for different imaging system parameters.

Figure 46: Red channel V = 0 maximum defocus restored Cameraman images

Performing the same steps for the green channel camera image shows that the best image quality occurs with image 9 (corresponding to the stage being at 9.3cm).

At this position the unrestored image comes closest to the optimal image. Unlike the Red channel, the optimally restored images do not all occur with the same amount of defocus: the RLS restoration achieves its best result with image three (MSE of 228.9), while the Wiener and CLS restorations both work best with image one (the CLS with an MSE of 81.61, the Wiener with an MSE of 115.8). The images corresponding to these results are shown in Figure 47.

Figure 47: Optimal Cameraman filter restoration images (Green channel)

The filters' restoration parameters were then tested on the green channel imagery using the same methodology used on the red channel. These results are shown in Figure 48.

Figure 48: Green channel V = 0 maximum defocus restored Cameraman images: least blurred unrestored (image #9), Wiener (image #15), RLS (image #14), CLS (image #14), and the optimal image

Finally, the optimal coefficient results were tested on the blue channel. The blue channel camera image that yields the best image quality is image 6 (corresponding to the stage being at 9.0 cm), and its MSE relative to the optimal image again sets the benchmark.

Unlike the red channel, the optimally restored images do not occur with the same amount of defocus. The RLS restoration achieves its best result with image one (MSE of 279.7), while the Wiener and CLS restorations both work best with image ten (CLS has an MSE of 130, Wiener an MSE of 202.4). The images corresponding to these results are shown in Figure 49.

Figure 49: Optimal Cameraman filter restoration images (Blue channel): least blurred unrestored (image #6), Wiener (image #10), RLS (image #1), CLS (image #10), and the optimal image

Figure 50: Blue channel V = 0 maximum defocus restored Cameraman images: least blurred unrestored (image #6), Wiener (image #14), RLS (image #13), CLS (image #14), and the optimal image

The same image restoration simulations were then performed with the satellite image parameters. Looking at Figure 51, the best unrestored image (that with the lowest MSE relative to the optimal image) for the red channel is image six. The MSE of the best RLS restoration is 163.4 and that of the best CLS restoration is 48.58; the Wiener filter also achieves its optimum at image six (see Figure 51). For the green channel (shown in Figure 52), the best unrestored image is image nine. The best results for the RLS filter come with image 3 (MSE of 317.4), while the best results for the CLS filter

come with image one (MSE of 92.06) and the best results for the Wiener filter also come with image one (MSE of 153.2). For the blue channel (Figure 53), the best unrestored image is image 6. The best results for the RLS filter come with image 1 (MSE of 395.2), the best results for the CLS filter come with image 10 (MSE of 166.2), and the best results for the Wiener filter come with image 10 (MSE of 279.5).

Figure 51: Red channel image restoration accuracy (no noise), Satellite image (MSE error vs. image # for the RLS, Wiener, CLS, and unrestored images)

Figure 52: Green channel image restoration accuracy (no noise), Satellite image (MSE error vs. image # for the RLS, Wiener, CLS, and unrestored images)

Figure 53: Blue channel image restoration accuracy (no noise), Satellite image (MSE error vs. image # for the RLS, Wiener, CLS, and unrestored images)

The optimally restored blurred images are shown in Figure 54. These are the most in-focus images (those with the lowest MSE errors) restored with the various filters. Figure 55 shows the results of restoring the maximum defocus that was calculated as still being better than the original unrestored satellite image (as was done for the cameraman image in the previous section).

For the RLS and CLS filters, the calculations show that blur levels up to image sixteen can be restored and still be better than the unrestored image; the Wiener filter can go up to image eighteen. These figures show that using the MSE error to determine a filter's restoring power is a valid method, as the restored images look at least as good as the least blurred unrestored image.

Figure 54: Optimal Satellite filter restoration images for V = 0 (Red channel): least blurred unrestored (image #6), Wiener (image #6), RLS (image #6), CLS (image #6), and the optimal image

Figure 55: Red channel V = 0 maximum defocus restored Satellite images: least blurred unrestored (image #6), Wiener (image #18), RLS (image #16), CLS (image #16), and the optimal image

Similar results were also seen on the blue and green channels, where the amount of blur is nearly identical to that of the red channel's satellite image and the trends match those seen in the cameraman calculations. With the filters' effects demonstrated on both sets of images, the remaining algorithm calculations were limited to the satellite images. This is because the satellite images have greater high-frequency spatial content, so the parameters calculated for the filters (the RLS alpha, CLS gamma, and Wiener SNR) result in less smoothing and, hopefully, a higher level of detail when applied to real-world images.
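Since the following sections refer repeatedly to these smoothing parameters, a hedged sketch of the textbook single channel constrained least squares form shows where a parameter such as gamma enters; the thesis' exact implementation may differ, and the variable names are illustrative:

    % Hedged sketch of the textbook single channel constrained least squares
    % restoration, showing where a smoothing parameter such as gamma enters.
    % The thesis' exact implementation may differ; variable names (blurred,
    % psf, gamma) are illustrative.
    H = psf2otf(psf, size(blurred));                       % OTF of measured PSF
    P = psf2otf(fspecial('laplacian', 0), size(blurred));  % smoothness operator
    X = conj(H) .* fft2(blurred) ./ (abs(H).^2 + gamma * abs(P).^2);
    restored = real(ifft2(X));                 % larger gamma -> smoother result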

The simulations were then performed on the images with noise added to better recreate real-world results. As discussed previously, noise was added using the imnoise command with variances of 4.47e-5, 1.2e-4, and 1e-3 on the satellite images (a minimal sketch of this step is given after Figure 56). Testing showed the V = 4.47e-5 noise level to be equivalent to that of the real-world cameras described earlier. Once noise was added to the images, the same tests were performed as with the noiseless images. The results for the red channel are shown in Figure 56.

Figure 56: Summary of the filters' restoration ability on the simulated red channel for all noise levels (V = 0, 4.47e-5, 0.00012, 0.001), Satellite image (MSE error vs. image # for the RLS, Wiener, CLS, and unrestored images)
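A minimal sketch of the noise-addition step follows. Only the imnoise call and the three variances come from the text; the file name follows the later figure titles, and the loop and variable names are illustrative:

    % Minimal sketch of the noise-addition step described above. Only the
    % imnoise call and the three variances come from the text; the file name
    % follows the later figure titles, and the loop is illustrative.
    I = im2double(imread('westconcordorthophoto.png'));  % satellite test image
    variances = [0 4.47e-5 1.2e-4 1e-3];
    noisy = cell(size(variances));
    for k = 1:numel(variances)
        if variances(k) == 0
            noisy{k} = I;  % V = 0: noiseless reference case
        else
            noisy{k} = imnoise(I, 'gaussian', 0, variances(k));  % zero-mean Gaussian
        end
    end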

One thing revealed by Figure 56 is that the CLS filter is better than the RLS filter for low amounts of defocus but not for high amounts. It is also interesting to note how well the Wiener filter performs. The Wiener filter is the simplest filter and should be the worst performer, yet here it performs on par with the other restoration methods. This is simply a side effect of how the simulations were performed: to calculate the optimal Wiener SNR, the algorithm was fed the actual optimal image as well as the blurred image, and it then searched for the signal-to-noise ratio that best restores the blurred image to the optimal image. The other algorithms do not get to use the original image in calculating their optimal parameters and hence operate at a handicap. The Wiener filter was therefore expected to perform significantly worse when applied to actual images, where it uses a generic power spectrum rather than one optimized for each image it is attempting to restore. A sketch of this SNR search is given below. To better compare how noise affects each filter, each was plotted separately across the four noise levels in Figure 57 - Figure 59. These plots show the MSE error between the restoration of the blurred, noisy image and the optimal noiseless image.
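A hedged sketch of that oracle search, assuming MATLAB's deconvwnr and the measured PSF for the blur level in question; the sweep grid and variable names (ideal, blurred, psf) are illustrative:

    % Hedged sketch of the "oracle" Wiener SNR search described above: sweep
    % the noise-to-signal ratio fed to deconvwnr and keep the value whose
    % restoration lands closest (in MSE) to the known optimal image.
    mse = @(a, b) mean((a(:) - b(:)).^2);
    nsr_grid = logspace(-6, 0, 100);   % candidate noise-to-signal ratios
    best_err = inf;
    for nsr = nsr_grid
        restored = deconvwnr(blurred, psf, nsr);  % Wiener deconvolution
        err = mse(ideal, restored);               % distance to the ideal image
        if err < best_err
            best_err = err;
            best_nsr = nsr;  % "optimal" SNR term for this blur level
        end
    end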

Figure 57: Red channel Wiener filter restoration error for westconcordorthophoto.png (MSE error vs. image # at V = 0, 4.47e-5, 0.00012, and 0.001)

Figure 58: Red channel RLS filter restoration error for westconcordorthophoto.png (MSE error vs. image # at V = 0, 4.47e-5, 0.00012, and 0.001)

Figure 59: Red channel CLS filter restoration error for westconcordorthophoto.png (MSE error vs. image # at V = 0, 4.47e-5, 0.00012, and 0.001)

It is now worth examining how noise affects the filters' restoration parameters (the Wiener SNR term, RLS alpha term, and CLS gamma term), shown in Figure 60 - Figure 62. Note that as the noise level increases, the optimal restoration parameter increases. This is to be expected, as the restoration parameters control the amount of smoothing performed on the image, and higher noise levels require more smoothing to restore the image. One very interesting discovery is just how insensitive the CLS gamma term is to noise (Figure 62) when compared to the Wiener and RLS parameters.

Figure 60: Optimized Wiener filter restoration coefficient (SNR vs. image # at V = 0, 4.47e-5, 0.00012, and 0.001)

Figure 61: Optimized RLS filter restoration coefficient (alpha vs. image # at V = 0, 4.47e-5, 0.00012, and 0.001)

Figure 62: Optimized CLS filter restoration coefficient (gamma vs. image # at V = 0, 4.47e-5, 0.00012, and 0.001)

5.6 Multi-Channel Restoration - All Three Channels In Focus

Now that single channel restoration has been examined, we turn to the multi-channel RLS restoration algorithm. This is a modified version of the RLS function that takes three input images of the same scene and uses cross-plane similarities within the images to aid in restoration [17] (a hedged sketch of one such formulation is given after Figure 63). Unfortunately, when using simulated images there are no sub-pixel differences between the images, and hence the results here are of limited correlation to the real-world optical system. It was still worth simulating to see whether any increase in restoration ability is gained by using three identical images with different amounts of blur compared to simply restoring the best of the three images. To begin, three identical images were blurred with three different amounts of blur taken from the real-world prism system's measured point spread functions. To ensure the amount of blurring added was similar but not

identical, the PSFs for each of the three channels were selected and fed into the algorithm at a specified defocus amount (i.e., the red, blue, and green channels' point spread functions measured with the PSF target at 9 cm on the optical rail system). The results were compared to simply restoring the single best-focused image from the three imaging channels with a single channel RLS filter. This was done for each of the four added noise levels, represented by the variances (V) of 0, 4.47e-5, 0.00012, and 0.001. These results are shown in Figure 63.

Figure 63: Comparison of multi-channel vs. single channel algorithm restoration quality (MSE error vs. image # at V = 0, 4.47e-5, 0.00012, and 0.001)
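To make the cross-channel mechanism concrete, here is a hedged frequency-domain sketch of one standard multichannel regularized least squares formulation, generalizing the single channel form sketched earlier; the exact algorithm of [17] may differ, and the Laplacian regularizer and variable names are assumptions:

    % Hedged sketch: a standard closed-form multichannel regularized least
    % squares (Tikhonov) solution in the frequency domain, minimizing
    %   sum_k || h_k * x - y_k ||^2 + alpha * || c * x ||^2 .
    % y is an M-by-N-by-3 stack of blurred channels, psf a cell array of the
    % three measured PSFs, and alpha the shared regularization weight.
    [M, N, K] = size(y);
    C   = psf2otf(fspecial('laplacian', 0), [M N]);  % high-pass regularizer
    num = zeros(M, N);
    den = alpha * abs(C).^2;
    for k = 1:K
        Hk  = psf2otf(psf{k}, [M N]);            % channel OTF from its PSF
        num = num + conj(Hk) .* fft2(y(:,:,k));  % matched-filter accumulation
        den = den + abs(Hk).^2;                  % combined spectral support
    end
    x_hat = real(ifft2(num ./ den));             % single shared estimate

The denominator makes the claimed benefit visible: spatial frequencies where one channel's OTF magnitude is near zero can still be supported by the other channels, which is the MTF "filling in" discussed below.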

Note that the multichannel algorithm actually starts out performing worse than the single channel algorithm but improves comparatively as the noise levels increase. By the final, high noise level (V = 0.001), the multichannel algorithm actually performed better than the single channel algorithm. This is very interesting, as one would expect the results to be at least as good as the best single channel restoration, since the other images add information to the algorithm. The algorithm used here takes only a single alpha value as input, which it applies to all three input channels. However, because the input channels had slightly different levels of blur (the blue and green channels are not as sharp as the red channel), one alpha value cannot be optimal for all three channels. The multichannel RLS algorithm weights all three images equally, meaning that the two images with increased blur actually cause the algorithm to perform less effectively because it treats them as being of equal quality. To test whether this is the case, three new sets of blurred, noisy images were created. Each set had identical blur applied (the measured blur of the red channel), to which noise was added as described previously. Because the noise was applied separately to each set, this produced three different images with identical blur levels to feed into the multichannel RLS filter, ensuring that a single alpha value would be optimal for all three channels. The results are shown in Figure 64. For a noiseless image, the multichannel RLS algorithm performed identically to the single channel algorithm. This is logical because, without any noise, we are feeding three identical images into the multichannel RLS algorithm and the same image into the single

channel RLS algorithm. This means the algorithms have exactly the same image information to work with and so produce identical results. As soon as noise is added to the input images, the multichannel algorithm begins to perform better than the single channel algorithm, because it now has three different realizations of the imaged scene (differing only in their noise) that it can use to reconstruct the input image.

Figure 64: Red images, multichannel RLS vs. single channel RLS (MSE error vs. image # at V = 0, 4.47e-5, 0.00012, and 0.001)

Having shown that the multichannel RLS algorithm outperforms the single channel version when it can perfectly model the blur of all input channels with a single alpha, it was possible to conclude that the multichannel algorithm's poor

performance relative to the single channel at low noise levels (Figure 63) was due to the differing blur levels of the channels not providing enough MTF compensation to overcome the suboptimal alphas applied to the additional input channels. A large benefit of the multichannel algorithm is its ability to combine all three MTFs into one single MTF for the system: the algorithm uses each channel's MTF data to fill in the portions of the imaging system's MTF that are at or near zero for a single channel. If the additional channels' MTFs do not add enough new data to the best single channel MTF (i.e., filling in the frequencies where it is zero) to counteract the fact that two of the three channels have sub-optimal alphas applied to them, the multichannel RLS algorithm used here performs worse than the single channel version. The multichannel RLS algorithm did, however, gain performance relative to the single channel as sensor noise levels increased (Figure 64), owing to the additional information it receives on the input scene from the extra channels. For comparison's sake, an affine registration was then performed on the images before the multichannel filter restored them. This resulted in slightly blurred, imperfect images, as the affine registration process did not perfectly restore the image to what it was. The results are shown in Figure 65.

Figure 65: Comparison of registered multi-channel vs. single channel algorithm restoration quality (MSE error vs. image # at V = 0, 4.47e-5, 0.00012, and 0.001)

Comparing Figure 63 to Figure 65, we find that the algorithm actually performed worse with the registered images. This was an important finding, as image registration needed to be performed when testing the filters on the real images. This worsening of performance is due to the apparent inability of our registration methodology to perfectly reverse an affine transformation: the process introduces a slight blur, which is reflected in the MSE error. This is shown in Figure 66, where an affine transformation registration is performed on an image (a zoomed-in view of our satellite image). An inverse affine

transformation was then performed to unregister the image, but as shown in Figure 66 the reference image is not perfectly replicated; it is slightly blurred. A minimal sketch of this round-trip test is given below.

Figure 66: Effects of image registration via affine transforms (reference, affine, affine + inverse affine)

The importance of this discovery was seen when performing multichannel RLS restoration on actual images in the next section. Since image registration needed to be performed to bring the blue and green channels in line with the red, we expected those images to carry additional blur from the affine transformation.
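The round-trip test can be sketched as follows, assuming MATLAB's geometric transformation tools; the transform coefficients are hypothetical, and only the image name comes from the thesis figures:

    % Hedged sketch of the affine round-trip test behind Figure 66: warp an
    % image, warp it back with the inverse transform, and measure the blur
    % that interpolation alone introduces.
    I = im2double(imread('westconcordorthophoto.png'));
    tform = affine2d([1.02 0.01 0; -0.01 0.98 0; 2.5 -1.5 1]); % made-up misalignment
    ref = imref2d(size(I));
    warped = imwarp(I, tform, 'OutputView', ref);              % "register"
    back = imwarp(warped, invert(tform), 'OutputView', ref);   % "unregister"
    mse_roundtrip = mean((I(:) - back(:)).^2)   % nonzero: interpolation blur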

CHAPTER 6: ACTUAL IMAGES - SINGLE CHANNEL

With the simulations completed, it was possible to apply the optimal image processing coefficients calculated therein to real images taken by the prism system. These images were taken, registered, and then the single channel and multichannel Regularized Least Squares, Constrained Least Squares, and Wiener filter algorithms were applied to the registered images. This was done with the filter coefficients (alpha, gamma, SNR) calculated as being optimal for the corresponding blur level in the simulated images.

6.1 The Measured Images

Five sample images (out of 36 total) were taken by the prism system with varying amounts of defocus and are shown in Figure 67. These images provide a good breadth of defocus to restore and will be the focus of the rest of this

paper. In this figure the images progress from slightly out of focus (Unrestored Image 1 is taken with the image target ½ cm before system focus), to in focus (Unrestored Image 6), to slightly out of focus (Unrestored Image 11, at ½ cm past focus), to extremely out of focus (Unrestored Images 16 and 21, at 3 cm and 5.5 cm past focus respectively). For reference, Unrestored Image 6 is 82 cm from the system's entrance aperture.

Figure 67: Unrestored prism images (Unrestored Images 1, 6, 11, 16, and 21)

To aid in showing the progression of defocus, Figure 68 is included, which is a zoomed-in view of the ladder in the bottom portion of the images in Figure 67. Examining the figure, Unrestored Images 1 and 11

are equally blurred when compared to Unrestored Image 6, which is to be expected as they are, in actuality, equally out of focus.

Figure 68: Zoomed-in unrestored images (Unrestored Images 1, 6, 11, 16, and 21)

The restoration abilities of each image processing filter on real images at these levels of defocus were then compared. Unlike the simulated examples, there is no numerical reference against which to compare image quality, so we must rely on qualitative judgments to compare restoration ability. This was done by visually examining the images at each of the selected defocus amounts and comparing them to the unrestored image, to quickly verify that the restoration algorithms were working as intended.

Figure 69: Image #1 (0.5 cm before focus) algorithm comparison (unrestored, Wiener, RLS, CLS)

As seen in Figure 69, all three algorithms look good when compared to the original unrestored image. The restored images appear to have sharper edges and greater detail in the brickwork of the buildings. Additionally, the window panes are much more visible in the restored images than in the unrestored image. Beyond that, however, it is difficult to compare the filters' performance to one another at this scale. Zooming in on the ladder within the image gives Figure 70. Comparing the four images, it is evident that while all three single channel filters sharpen the details of the ladder, the Wiener and CLS algorithms produce an image with what appears to be almost a Gaussian noise pattern. The RLS image clearly gives the best results in terms of image quality.

Figure 70: Zoomed-in view of image 1 (0.5 cm before focus) (unrestored, Wiener, RLS, CLS)

Next, the image filters were compared where the image was at focus (Image #6), shown in Figure 71. As expected, all three filters again restored the image to a sharper, clearer state than the unrestored raw image. Once more, a small subset of the image was examined to better measure relative performance; this is shown in Figure 72. As shown there,

the single channel RLS filter once again has the best restored image quality of the examined filters.

Figure 71: Image #6 (at focus) algorithm comparison (unrestored, Wiener, RLS, CLS)

Figure 72: Zoomed-in image 6 (unrestored, Wiener, RLS, CLS)

Moving on to Image 11 (lens focused ½ cm in front of the image target), the restoration results are shown in Figure 73. This is the same amount of defocus as Image 1, so the images should look comparable to Figure 69, which they in fact do. Zooming in on the image to better compare the filters' restoration abilities is done in Figure 74. Note that once again the RLS filter performs better than the others.

Figure 73: Filter comparison of image 11 (unrestored, Wiener, RLS, CLS)

Figure 74: Zoomed-in image 11 (unrestored, Wiener, RLS, CLS)

The performance of the image processing filters on Image 16 is shown in Figure 75. As evident in the pictures, the Wiener and CLS filters are beginning to perform very poorly when compared to the RLS filter. Although sharper in detail than the unrestored image, the excessive ringing introduced by the filters' discrete Fourier transforms actually makes their outputs harder to view in practice. The Wiener filter not performing well is no surprise: it is the simplest filter and was expected to perform poorly, and it only performed so well in the simulated results because the original image was supplied to the algorithm as an input to obtain the signal-to-noise ratios. The big surprise was the CLS filter. The CLS filter is a non-iterative version of the RLS filter with a slightly different optimization goal, so it was expected to perform similarly to the RLS, as was even predicted in the simulated image results. This is obviously not the case; its excessive ringing actually puts it more in line with the Wiener filter's performance. Because

the RLS filter clearly had the best image restoration ability, there was no need to examine the images in greater detail.

Figure 75: Image #16 filter comparison (unrestored, Wiener, RLS, CLS)

Finally, the most defocused set of images (out of the small subset chosen for study in this section), belonging to Image 21, was examined. Figure 76 shows the unrestored image compared to the three single channel filtered images. While all three restored images are sharper than the unrestored image, the single channel RLS filter again proves to be the best because the Wiener and CLS algorithms exhibit excessive ringing. There is again no point in zooming in on the images, as it is evident that the RLS filter is the clear winner: it provides the sharpest image with the least ringing. Note that the CLS image here actually appears to have worse ringing than the Wiener image.

Figure 76: Image 21 (unrestored, Wiener, RLS, CLS)

CHAPTER 7: MULTICHANNEL IMAGES

After examining the performance of the three types of single channel filters, it was possible to compare them to the multichannel RLS filter's performance. More specifically, the multichannel filter was compared to the single channel RLS filter, since it was the best performing of the single channel filters.

Figure 77: Comparison between single and multichannel RLS at location 1 (unrestored, single RLS, multi-RLS)

As can be seen in Figure 77, the single and multichannel RLS filters look comparable in their ability to reduce the blurring of the image. To get a better sense of which actually performs better, a detailed feature was examined up close; once again the ladder in the foreground was chosen. This zoomed-in view is shown in Figure 78. Upon inspecting this figure, it is evident that the single channel filter slightly outperforms the multichannel filter.

Figure 78: Zoomed-in comparison of single and multichannel RLS filters at location 1 (unrestored, single RLS, multi-RLS)

Figure 79: Comparison between single and multichannel RLS at location 6 (unrestored, single RLS, multi-RLS)

Figure 80: Zoomed-in comparison of single and multichannel RLS filters at location 6 (unrestored, single RLS, multi-RLS)

Figure 79 and Figure 80 show location 6, where the image was in best focus. Examining these figures closely, it is evident that once again the single channel RLS filter slightly outperformed its multichannel counterpart.

Since in both previous cases (locations 1 and 6) the single channel RLS filter performed a slightly better image restoration than the multichannel version, this trend would be expected to continue with the other three levels of defocus examined. Looking at Figure 81 and Figure 82, it is evident that this was the case.

Figure 81: Comparison between single and multichannel RLS at locations 11, 16, and 21 (unrestored, single RLS, multi-RLS)

Figure 82: Zoomed-in comparison of single and multichannel RLS filters at locations 11, 16, and 21 (unrestored, single RLS, multi-RLS)

Although it is difficult to tell at large amounts of defocus, it appears that in all five chosen defocus cases the single channel algorithm

performed better than its multichannel neighbor. This corresponds to the simulated image results. As discussed previously, this could be due to the differing blur amounts confusing the algorithm without adding enough new spatial information to cancel the effect out. On top of this, in the real images (unlike the simulated images) it was necessary to perform image registration on the multichannel images to better align them before the algorithm could be applied; as previously discussed, that registration was not perfect and blurred the images slightly before the multichannel algorithm was even applied. Finally, there was the possibility that the system point spread function was measured improperly. A poorly measured point spread function would severely degrade the ability of our filters to restore a blurred image taken by the imaging system.

7.1 Predictive Defocus Restoration Through Simulations

The ability to predict, through the use of simulated images, the amount of defocus we can adequately restore was then tested. When processing the simulated restoration results, the amount of defocus that each algorithm could restore while still returning the blurred image to a state of at least equal quality to the most in-focus unrestored image was recorded. This was predicted by calculating the Mean Squared Error (MSE) between the artificially blurred test image and the test image itself. It was proposed that any time an algorithm returned a restored image with an MSE value less than the MSE between the best unrestored (in-focus) image and the original unblurred image, the restored defocused image would be better than the blurred, unrestored, in-focus image,

since there was less of a pixel-to-pixel value differential. As such, it was postulated that it was possible to predict how much image defocus each algorithm could correct while still achieving a quality better than an in-focus, uncorrected image. These simulated results were then tested against actual images taken by the input system. Since each of the actual images was, in reality, an average of 20 images (in an effort to eliminate noise), the results obtained from the noiseless simulated images were used. These results predicted that it was possible to restore an image from target location #18 with the Wiener filter and an image from location #16 with the CLS and single channel RLS filters; these restored images should all be better than the best unrestored image, from location #6. These images are shown in Figure 83. As shown, all three of the restored images are nowhere near the quality of the best unrestored image. This is not entirely unexpected. The Wiener filter only performed well in the simulations because it had knowledge of the actual ideal scene under test; it performed very poorly in real-world situations. The CLS filter has the ringing issues noted previously. The RLS filter performs best, as expected: it maintains much of the image content of the unrestored image but is much blurrier. Thus the predictive, numeric method of calculating the amount of defocus each filter can restore was a failure. This is not surprising given the possibility of error in measuring the point spread function of our system. Measuring a system's point spread function is incredibly difficult, and many assumptions and automated methods were used in this thesis to obtain our measured PSF values. Even a slight error between the real and measured PSF

values would lead to the image filters studied in this thesis not performing to their optimal potential. In addition, better results may be achievable with a metric other than the Mean Squared Error (which performs poorly when a restored image contains ringing); this may be studied in future work. A sketch of the prediction rule itself is given after the figure.

Figure 83: Test of predictive defocus restoration abilities (Unrestored Image 6, Wiener Image 18, RLS Image 16, CLS Image 16)
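A minimal sketch of that prediction rule, assuming the per-position MSE vectors from the simulations have already been computed (all variable names are illustrative):

    % Minimal sketch of the prediction rule of section 7.1, assuming the
    % per-rail-position MSE vectors from the simulations are precomputed.
    % mse_unrestored(i): MSE of unrestored image i vs. the ideal image.
    % mse_restored(i):   MSE of the restored version of image i vs. the ideal.
    threshold  = min(mse_unrestored);             % best in-focus unrestored MSE
    restorable = find(mse_restored < threshold);  % positions that beat it
    max_index  = max(restorable)                  % e.g. 16 for RLS/CLS, 18 for Wiener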


More information

Multi aperture coherent imaging IMAGE testbed

Multi aperture coherent imaging IMAGE testbed Multi aperture coherent imaging IMAGE testbed Nick Miller, Joe Haus, Paul McManamon, and Dave Shemano University of Dayton LOCI Dayton OH 16 th CLRC Long Beach 20 June 2011 Aperture synthesis (part 1 of

More information

This experiment is under development and thus we appreciate any and all comments as we design an interesting and achievable set of goals.

This experiment is under development and thus we appreciate any and all comments as we design an interesting and achievable set of goals. Experiment 7 Geometrical Optics You will be introduced to ray optics and image formation in this experiment. We will use the optical rail, lenses, and the camera body to quantify image formation and magnification;

More information

CHAPTER 9 POSITION SENSITIVE PHOTOMULTIPLIER TUBES

CHAPTER 9 POSITION SENSITIVE PHOTOMULTIPLIER TUBES CHAPTER 9 POSITION SENSITIVE PHOTOMULTIPLIER TUBES The current multiplication mechanism offered by dynodes makes photomultiplier tubes ideal for low-light-level measurement. As explained earlier, there

More information

On spatial resolution

On spatial resolution On spatial resolution Introduction How is spatial resolution defined? There are two main approaches in defining local spatial resolution. One method follows distinction criteria of pointlike objects (i.e.

More information

A Laser-Based Thin-Film Growth Monitor

A Laser-Based Thin-Film Growth Monitor TECHNOLOGY by Charles Taylor, Darryl Barlett, Eric Chason, and Jerry Floro A Laser-Based Thin-Film Growth Monitor The Multi-beam Optical Sensor (MOS) was developed jointly by k-space Associates (Ann Arbor,

More information

Modeling and Synthesis of Aperture Effects in Cameras

Modeling and Synthesis of Aperture Effects in Cameras Modeling and Synthesis of Aperture Effects in Cameras Douglas Lanman, Ramesh Raskar, and Gabriel Taubin Computational Aesthetics 2008 20 June, 2008 1 Outline Introduction and Related Work Modeling Vignetting

More information

Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing

Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing Ashok Veeraraghavan, Ramesh Raskar, Ankit Mohan & Jack Tumblin Amit Agrawal, Mitsubishi Electric Research

More information

Imaging Fourier transform spectrometer

Imaging Fourier transform spectrometer Rochester Institute of Technology RIT Scholar Works Theses Thesis/Dissertation Collections 2001 Imaging Fourier transform spectrometer Eric Sztanko Follow this and additional works at: http://scholarworks.rit.edu/theses

More information

Southern African Large Telescope. RSS CCD Geometry

Southern African Large Telescope. RSS CCD Geometry Southern African Large Telescope RSS CCD Geometry Kenneth Nordsieck University of Wisconsin Document Number: SALT-30AM0011 v 1.0 9 May, 2012 Change History Rev Date Description 1.0 9 May, 2012 Original

More information

Design Description Document

Design Description Document UNIVERSITY OF ROCHESTER Design Description Document Flat Output Backlit Strobe Dare Bodington, Changchen Chen, Nick Cirucci Customer: Engineers: Advisor committee: Sydor Instruments Dare Bodington, Changchen

More information