
SENSOR HARDENING THROUGH TRANSLATION OF THE DETECTOR FROM THE FOCAL PLANE

Thesis

Submitted to

The School of Engineering of the
UNIVERSITY OF DAYTON

In Partial Fulfillment of the Requirements for
The Degree
Master of Science in Electro-Optics

By
Marc A. Finet

UNIVERSITY OF DAYTON
Dayton, Ohio
August 2012

SENSOR HARDENING THROUGH TRANSLATION OF THE DETECTOR FROM THE FOCAL PLANE

Name: Finet, Marc Alain

APPROVED BY:

Russell Hardie, Ph.D.
Advisory Committee Chairman
Professor, Electrical Engineering & Electro-Optics

Christopher D. Brewer, Ph.D.
Committee Member
Technical Advisor, AFRL/MLPJE, WPAFB, OH

Peter Powers, Ph.D.
Committee Member
Professor, Department of Physics

John G. Weber, Ph.D.
Associate Dean, School of Engineering

Tony E. Saliba, Ph.D.
Dean, School of Engineering & Wilke Distinguished Professor

ABSTRACT

SENSOR HARDENING THROUGH TRANSLATION OF THE DETECTOR FROM THE FOCAL PLANE

Name: Finet, Marc Alain
University of Dayton
Advisor: Dr. Russell Hardie

The defense industry fields numerous detectors that provide critical imaging capability on tactical and reconnaissance platforms, and these have been shown, in both laboratory and field testing, to be susceptible to permanent damage from high energy pulsed lasers. Much of the materials research in this area involves two methods of providing pulsed-laser damage protection: extrinsic limiter implementation and intrinsic detector hardening. This thesis focused on the gains that could be made using another method: system defocus and detector redundancy. The work revolved around hardening a camera system by defocusing the focal plane array (FPA) and then using image restoration algorithms to regain the image quality of the degraded images. The system, a three channel image splitting prism with lens mount, provided a unique opportunity to test multiple images of an identical scene with slight spatial misalignments, varying sensor defocus, and precisely measured optical degradation as characterized by the Point Spread Function. The defocused images were then restored using filters that utilized information from only a single channel (the Wiener Filter, Regularized Least Squares Filter, and Constrained Least Squares Filter) and across multiple channels (Multichannel Regularized Least Squares Filter). Results from the single channel filters were excellent and allowed significant sensor hardening without image degradation when compared to the unfiltered image. Results from the multichannel RLS filter as tested, however, were disappointing when compared to those from the single channel filters, and could be expanded upon in future work.

TABLE OF CONTENTS

ABSTRACT ... iii
TABLE OF CONTENTS ... v
LIST OF FIGURES ... vii
CHAPTER 1: INTRODUCTION ... 1
CHAPTER 2: PROOF OF IMAGE HARDENING THROUGH TRANSLATION OF THE FOCAL ARRAY FROM FOCUS ... 4
2.1 Experimental Setup ... 6
2.2 Experimental Results ... 8
CHAPTER 3: DESCRIPTION OF OPTICAL IMAGE PROCESSING FILTERS FOR USE IN IMAGE RESTORATION ... 11
3.1 Wiener Filter Image Restoration ... 11
3.2 Constrained Least Squares Image Restoration ... 13
3.3 Regularized Least Squares Image Restoration ... 15
3.4 Multichannel Regularized Least Squares Image Restoration ... 17
CHAPTER 4: EXPERIMENTAL SETUP FOR DEFOCUS RESTORATION MEASUREMENT ... 19
4.1 Image Capture and Defocus Measurement System Description ... 20
4.1.1 Multichannel Prism Description ... 22
4.1.2 Imaging Targets ... 24
4.2 Defocus Hardening ... 28
CHAPTER 5: SIMULATIONS ... 37
5.1 Input Images ... 38
5.2 Blur - How It Works ... 39
5.3 Add Noise - How Noise Is Selected And How It Works ... 45
5.4 Calculate Wiener SNR, RLS Alphas, CLS Gammas ... 49
5.5 Apply Values To Simulated Images - Results ... 54
5.6 Multi-Channel Restoration - All Three Channels In Focus ... 72
CHAPTER 6: ACTUAL IMAGES - SINGLE CHANNEL ... 79
6.1 Show Of The Measured Images ... 79
CHAPTER 7: MULTICHANNEL IMAGES ... 90
7.1 Predictive Defocus Restoration Through Simulations ... 95
CHAPTER 8: CONCLUSIONS ... 98
REFERENCES ... 105

LIST OF FIGURES

Figure 1: Laser damaged CCD array ... 1
Figure 2: Gaussian beam profile through focus ... 5
Figure 3: Experimental setup proving defocused sensors harden them to high energy lasers ... 6
Figure 4: Profile of Continuum beam used in experiment ... 7
Figure 5: Damage grid on VPC-790B camera ... 8
Figure 6: Damage threshold of camera vs fluence ... 9
Figure 7: Sample cross channel modulation transfer functions ... 17
Figure 8: Experiment setup ... 20
Figure 9: Pacific Corp PC-370A ... 20
Figure 10: Prism system with attached cameras ... 21
Figure 11: Image splitting prism ... 22
Figure 12: Diagram showing dimensions and optical paths of Optec prisms ... 23
Figure 13: Registration pattern ... 25
Figure 14: Sample PSF ... 26
Figure 15: Slanted edge target ... 27
Figure 16: Sample image scene [10] ... 28
Figure 17: Gaussian beam propagation [12] ... 29
Figure 18: Beam radius vs translation from focal plane ... 31
Figure 19: Distance between sensor and focal point as image is translated ... 33
Figure 20: Distance between sensor and focal point as image is translated ... 34
Figure 21: Peak energy change vs image target position ... 35
Figure 22: Hardening change vs distance from focus ... 35
Figure 23: Cameraman.tif ... 38
Figure 24: Westconcordorthophoto.png ... 39
Figure 25: Scaled red PSFs ... 41
Figure 26: Scaled blue PSFs ... 41
Figure 27: Scaled green PSFs ... 42
Figure 28: Radial cross section of red channel PSF ... 42
Figure 29: Radial cross section of blue channel PSF ... 43
Figure 30: Radial cross section of green channel PSF ... 43
Figure 31: Blurring using PSF convolution (Red channel at 9 cm) ... 44
Figure 32: Simulated image blur at different distances ... 45
Figure 33: Simulating the noise of the camera ... 46
Figure 34: Cameraman.tif with 4 levels of noise ... 48
Figure 35: Zoomed in simulated noise levels with different variance ... 48
Figure 36: Optimal Wiener filter SNR values for all three channels (not at same scale) ... 50
Figure 37: Optimal RLS filter alpha values for all three channels (not at same scale) ... 51
Figure 38: Optimal CLS filter gamma values for all three channels ... 52
Figure 39: Comparison of RLS filter optimal alpha value scale (red channel) ... 52
Figure 40: Comparison of Wiener filter optimal SNR value scale (red channel) ... 53
Figure 41: Comparison of CLS filter optimal gamma scale (red channel) ... 53
Figure 42: Red channel image restoration accuracy (no noise) - Cameraman.tif ... 55
Figure 43: Green channel image restoration accuracy (no noise) - Cameraman.tif ... 55
Figure 44: Blue channel image restoration accuracy (no noise) - Cameraman.tif ... 56
Figure 45: Optimal Cameraman filter restoration images (Red channel) ... 57
Figure 46: Red channel V = 0 maximum defocus restored Cameraman images ... 58
Figure 47: Optimal Cameraman filter restoration images (Green channel) ... 59
Figure 48: Green channel V = 0 maximum defocus restored Cameraman images ... 60
Figure 49: Optimal Cameraman filter restoration images (Blue channel) ... 61
Figure 50: Blue channel V = 0 maximum defocus restored Cameraman images ... 62
Figure 51: Red channel image restoration accuracy (no noise) - Satellite image ... 63
Figure 52: Green channel image restoration accuracy (no noise) - Satellite image ... 64
Figure 53: Blue channel image restoration accuracy (no noise) - Satellite image ... 64
Figure 54: Optimal Satellite filter restoration images for V=0 (Red channel) ... 65
Figure 55: Red channel V = 0 maximum defocus restored Satellite images ... 66
Figure 56: Summary of filters restoration ability on simulated red channel for all noise levels - Satellite image ... 67
Figure 57: Red channel Wiener filter restoration error ... 69
Figure 58: Red channel RLS filter restoration error ... 69
Figure 59: Red channel CLS filter restoration error ... 70
Figure 60: Optimized Wiener filter restoration coefficient ... 71
Figure 61: Optimized RLS filter restoration coefficient ... 71
Figure 62: Optimized CLS filter restoration coefficient ... 72
Figure 63: Comparison of multi-channel vs single channel algorithm restoration quality ... 73
Figure 64: Red images multichannel RLS vs single channel RLS ... 75
Figure 65: Comparison of registered multi-channel vs single channel algorithm restoration quality ... 77
Figure 66: Showing effects of image registration via affine transforms ... 78
Figure 67: Unrestored prism images ... 80
Figure 68: Zoomed in unrestored images ... 81
Figure 69: Image #1 (0.5 cm before focus) algorithm comparison ... 82
Figure 70: Zoomed in view of image 1 (0.5 cm before focus) ... 83
Figure 71: Image #6 (at focus) algorithm comparison ... 84
Figure 72: Zoomed in image 6 ... 85
Figure 73: Filter comparison of image 11 ... 86
Figure 74: Zoomed in image 11 ... 87
Figure 75: Image #16 filter comparison ... 88
Figure 76: Image 21 ... 89
Figure 77: Comparison between single and multi channel RLS at location 1 ... 90
Figure 78: Zoomed in comparison of single and multichannel RLS filters at location 1 ... 91
Figure 79: Comparison between single and multi channel RLS at location 6 ... 92
Figure 80: Zoomed in comparison of single and multi channel RLS filters at location 6 ... 92
Figure 81: Comparison between single and multi channel RLS at locations 11, 16, and 21 ... 93
Figure 82: Zoomed in comparison of single and multi channel RLS filters at locations 11, 16 and 21 ... 94
Figure 83: Test of predictive defocus restoration abilities ... 97

CHAPTER 1

INTRODUCTION

Figure 1: Laser damaged CCD array

The Air Force fields numerous detectors that provide critical imaging capability on tactical and reconnaissance platforms, and these have been shown, in both laboratory and field testing, to be susceptible to permanent damage (such as that shown in Figure 1) from high energy pulsed lasers. As such, the USAF is researching many methods of providing laser hardening to its imaging systems. Most materials research in this area involves two different methods of providing

pulsed laser damage protection: extrinsic limiter implementation and intrinsic detector hardening. This thesis examines a third option: system defocus and redundancy. This method uses an imaging system containing multiple, redundant detectors all imaging the same source. Such systems divide the image either into several identical images, each with an equal number of photons, or into separate spectral bands, passing each band to a separate sensor. The images from the several imaging sensors are then recombined into a single image. Examining the imaging system that splits the imaged scene into several identical images, it becomes evident that one has the capability to image an identical scene many times simultaneously under several different optical conditions, such as image defocus. Since each image will contain different information on the same scene, it is then possible to use post-processing algorithms to extract that data and provide an enhanced restored image of that scene. This thesis covers just such a technique. Previous work has shown that, under certain conditions, it may be possible to restore a defocused, blurred image to a better quality than if the image had been in focus, through the use of image processing algorithms [1]. It has also shown that the limit to the amount and quality of the image restoration depends upon the size of the wavelength band visible to the imaging sensor and upon the ability to accurately measure the Point Spread Function of the system. Additionally, this work theorized that measuring

an image with multiple sensors, each with a varying amount of defocus and with differing locations of the zeroes in their respective MTFs, would aid in restoring defocused images. This thesis builds upon that work by proving that image defocus does increase the laser damage threshold of the system, and by testing whether the use of a redundant imaging system allows for greater image restoration than using a single sensor. A redundant imaging system that splits the image into three identical channels (each with 1/3 the image intensity) was selected, and the sensors that read each of these channels were defocused by different amounts by moving them from the focal plane, such that an identical image was placed onto each sensor with varying amounts of blur. The goal was to show that, when the images were recombined after post-processing, the additional data from the out of focus channels would increase the quality of the image when compared to the image recorded by a single imaging channel.

CHAPTER 2

PROOF OF IMAGE HARDENING THROUGH TRANSLATION OF THE FOCAL ARRAY FROM FOCUS

Figure 2: Gaussian beam profile through focus (normalized intensity across the beam cross section for the in-focus spot, 2x spot size, and 3x spot size)

A relatively simple way to reduce the threat of a high energy laser to a digital imaging sensor is to reduce the energy density at the sensor's imaging plane by moving the detector out of the focal plane of the imaging lens. This has the effect of increasing the effective spot size, which in turn reduces the peak laser intensity incident upon the detector, as shown in Figure 2. This section describes the experiment performed to prove that moving an imaging sensor away from the focal plane of a lens increases the damage threshold of the lens/detector system.

2.1 Experimental Setup

Figure 3: Experimental setup proving defocused sensors harden them to high energy lasers (Continuum laser, ~26 µJ @ 532 nm, through an N.D. 1.2 filter and f = 250 mm lens to the camera system under test, monitored by a reference diode and Joule meter)

The experimental setup shown in Figure 3 is designed to test a VPC-790B camera's damage threshold at several different spot sizes, each containing the same amount of energy. The beam used was from a Continuum PowerLite Precision II 9010 operating at ~26 µJ at a wavelength of 532 nm. This beam proved to have a relatively good beam profile but contained high shot-to-shot energy fluctuations, making it necessary to monitor the beam's output energy with a reference diode calibrated against the laser's energy levels with a Joule meter. By calibrating the reference diode it was measured that, for the setup used, 5.6E-6 Joules of energy arrived at the target (before the insertion of N.D. filters) for every Volt the reference diode output. This allowed the tracking of the shot-to-shot energy fluctuations of the Continuum.

Figure 4: Profile of Continuum beam used in experiment (raw beam profile at the lens aperture and focused beam with the 250 mm lens)

To bring the Continuum's output energy down near the camera's measured damage threshold, it was necessary to use a stack of Neutral Density (N.D.) filters to reduce the energy of the incident beam. By inserting a total N.D. of 1.2 before the focusing lens, it was possible to attenuate the beam's energy to a level just above the damage threshold of the camera when the beam was at focus. The attenuated beam was then steered into a lens with a focal length of 250 mm. Beyond the lens, the camera was moved through the Rayleigh range of the lens and then slowly out of focus. This allowed precise control of the spot size of the beam.

Figure 5: Damage grid on VPC-790B camera (rows run from in focus (0 mm) to 5 mm out of focus along Z; columns 1-23 are repeated data sets)

The camera under test was attached to three Newport linear actuators controlled by a Newport ESP300 control box, allowing precision movement in X, Y, and Z to an accuracy of better than ten microns. This allowed the creation of a damage grid pattern for easy analysis, with each row being a different distance from the focal point of the lens (with row one at the focal point), repeated several times in different columns.

2.2 Experimental Results

For this experiment 23 sets of data were taken, with each data set moving the camera system from in focus to 5 mm out of focus over the course of eleven separate laser pulses. The results are shown in Figure 5. As can be seen from Figure 5, different results were recorded between data sets (columns), with some data sets showing damage and some not at identical distances from the focal plane of the lens. This is due to the shot-to-shot noise of the beam, which was recorded

through the reference diode used in the experiment. Taking the energy levels of the laser pulses into account and plotting the results according to spot size gives Figure 6, with damage marked with circles.

Figure 6: Damage threshold of camera vs fluence

As shown, the laser energy needed to damage the sensor increases as the camera moves away from the focal plane and the beam radius increases, shown here as a decrease in fluence. This is to be expected and proves that the decrease in peak intensity overcomes any possible issues with heat dissipation. These fluence levels show that an increase in beam radius

from 0.023 mm to 0.041 mm increases the sensor damage threshold from approximately 0.52 µJ to 0.81 µJ, an increase in damage threshold of roughly 56%. This corresponds to moving the sensor from the focal plane by 2.5 mm, or 1% of the focal length of a 250 mm lens. This is a significant amount of protection gained by a slight shift from the lens focal plane, and it was worth investigating further.
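As a quick numerical check of these figures, the implied fluences follow directly from the quoted pulse energies and beam radii. The MATLAB sketch below (not the analysis code used in the experiment) simply divides pulse energy by the spot area, using the values measured above:

    % Fluence check for the quoted damage thresholds (sketch only).
    E1 = 0.52e-6;  r1 = 0.023e-3;   % threshold (J) and beam radius (m) at focus
    E2 = 0.81e-6;  r2 = 0.041e-3;   % threshold and radius 2.5 mm from focus
    F1 = E1 / (pi * r1^2);          % fluence = pulse energy / spot area (J/m^2)
    F2 = E2 / (pi * r2^2);
    fprintf('Fluence at focus:    %.0f J/m^2\n', F1);   % roughly 313 J/m^2
    fprintf('Fluence at 2.5 mm:   %.0f J/m^2\n', F2);   % roughly 153 J/m^2

The drop in threshold fluence (even as the threshold pulse energy rises) is exactly the decrease visible in Figure 6.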

CHAPTER 3

DESCRIPTION OF OPTICAL IMAGE PROCESSING FILTERS FOR USE IN IMAGE RESTORATION

Now that it has been proven that defocusing an imaging system hardens the optical sensor, we turn to how the blurring caused by defocusing the image was removed using image restoration filters. This was done using Wiener filters, Regularized Least Squares filters and Constrained Least Squares filters.

3.1 Wiener Filter Image Restoration

When describing image blur, it is easiest to use a shift-invariant blur model with noise, described by formula (3.1.1) [2]:

y(m,n) = h(m,n) * x(m,n) + u(m,n)    (3.1.1)

In this formula, y(m,n) represents the two-dimensional image as observed by our imaging system, while the original, perfect image is described as x(m,n). The Point Spread Function (PSF) of the imaging system is defined as h(m,n) and includes the blurring of all optical elements (the lens and the focal plane array of the imager). Finally, u(m,n) represents the noise introduced on the original image and seen in our observed image. The Wiener filter is best described in the frequency domain and can be written as G(k,l) such that

X̂(k,l) = G(k,l) Y(k,l)    (3.1.2)

where X(k,l) is the desired restored image (in this case the Fourier Transform (FT) of the original image x), Y(k,l) is the FT of the image as observed by the imaging system, and G(k,l) is some as yet undetermined filter. G(k,l) can be chosen to minimize the expected value of the expression

|X(k,l) - G(k,l) Y(k,l)|^2    (3.1.3)

This expression is minimized with

G(k,l) = H*(k,l) / ( |H(k,l)|^2 + S_u(k,l) / S_x(k,l) )    (3.1.4)

where H(k,l) is the Fourier transform of the PSF, S_u(k,l) is the noise power spectrum, and S_x(k,l) is the signal power spectrum.

The noise power spectrum S_u for an M x N image is

S_u(k,l) = M N σ_u^2    (3.1.5)

with σ_u^2 the noise variance at each pixel. The signal power spectrum varies with each image, but most images have similar power spectra, and the Wiener filter is insensitive to small variations in the signal power spectrum. This enables the use of a standard Wiener filter model on each camera that works even though the images it restores differ.
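Exploiting exactly this insensitivity, the Wiener filter of Equation 3.1.4 reduces to a few lines of MATLAB once the ratio S_u/S_x is approximated by a single constant. The sketch below makes that assumption, and further assumes the PSF h has been zero-padded to the image size with its peak wrapped to pixel (1,1); it is a minimal illustration, not the thesis code itself.

    function xhat = wiener_restore(y, h, nsr)
    % Frequency-domain Wiener restoration (Eqs. 3.1.2 and 3.1.4), sketch.
    % y   - observed (blurred, noisy) image, double
    % h   - PSF, zero-padded to size(y), centered at pixel (1,1)
    % nsr - scalar noise-to-signal power ratio standing in for S_u/S_x
        H = fft2(h);                          % H(k,l), transform of the PSF
        G = conj(H) ./ (abs(H).^2 + nsr);     % Wiener filter, Eq. 3.1.4
        xhat = real(ifft2(G .* fft2(y)));     % Xhat = G.*Y, back to image domain
    end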

3.2 Constrained Least Squares Image Restoration

A major problem with the Wiener filter is that the power spectra of the undegraded image and noise must be known in advance. As discussed above, it is usually not possible to have the undegraded image, so a general power spectrum is traditionally used. The problem is that this power spectrum is a statistical average and hence cannot be optimal for each image to which the filter is applied. An alternative method is the Constrained Least Squares (CLS) filter. It has the advantage that the only prior knowledge required about the imaging system is the mean and variance of the noise. Unfortunately, this also means that it is extremely sensitive to noise. When describing the constrained least squares filter, it is once again easiest to use the shift-invariant blur model with noise of Equation 3.1.1.

A major problem with solving this type of equation is that it has multiple solutions, and the solutions do not depend continuously on the data (i.e., small changes in the observed image lead to very different results when calculating the original image). In addition, H(k,l) is extremely sensitive to noise. The CLS filter alleviates these sensitivities by weighing the optimal solution against a measure of smoothness within the image, assuming the original image will have strong spatial correlations. The CLS filter used here was developed by the author and Dr. Russell Hardie (University of Dayton). To measure these spatial correlations, the second derivative of the image (the Laplacian) was used to create a cost function C. The cost function shown in Equation (3.2.1) was used in this paper:

C = Σ_{m,n} { [y(m,n) - h(m,n)*x(m,n)]^2 + γ [l(m,n)*x(m,n)]^2 }    (3.2.1)

Note that γ is a regularization parameter. It is what forces the image to be somewhat smooth by penalizing the high-frequency content of the image; the larger its value, the smoother the restored image will be. The frequency domain solution to the optimization problem is given by the expression

F̂(k,l) = [ H*(k,l) / ( |H(k,l)|^2 + γ |L(k,l)|^2 ) ] Y(k,l)    (3.2.2)

where L(k,l) is the Fourier transform of the discrete Laplacian function in Equation 3.2.3:

l(x,y) = [  0    1/4   0
           1/4   -1   1/4
            0    1/4   0 ]    (3.2.3)

Note that γ must be adjusted such that the constraint function is optimized ([3], pp. 266-268).
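The closed-form CLS solution of Equation 3.2.2 differs from the Wiener filter only in the regularization term; a minimal MATLAB sketch under the same PSF assumptions as before (γ would still need to be adjusted to satisfy the constraint):

    function xhat = cls_restore(y, h, gamma)
    % Constrained Least Squares restoration (Eq. 3.2.2), sketch.
        lap = [0 1/4 0; 1/4 -1 1/4; 0 1/4 0];          % Laplacian of Eq. 3.2.3
        L = fft2(lap, size(y,1), size(y,2));           % zero-padded transform
        H = fft2(h,   size(y,1), size(y,2));
        G = conj(H) ./ (abs(H).^2 + gamma*abs(L).^2);  % CLS filter
        xhat = real(ifft2(G .* fft2(y)));
    end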

3.3 Regularized Least Squares Image Restoration

An additional restoration method is the Regularized Least Squares (RLS) filter. Like the Constrained Least Squares filter, it has the advantage that the only prior knowledge required about the imaging system is the mean and variance of the noise. It is in some respects very similar to the CLS filter, as discussed below. The RLS filter used here [4] was developed by Dr. Russell Hardie (University of Dayton). When describing the regularized least squares filter, it is once again easiest to use the shift-invariant blur model with noise of Equation 3.1.1. Much like the CLS filter, the RLS filter has multiple solutions, and the solutions do not depend continuously on the data (i.e., small changes in the observed image lead to very different results when calculating the original image). It too alleviates these sensitivities by weighing the optimal solution against a measure of smoothness within the image, assuming the original image will have strong spatial correlations. To measure these spatial correlations, the second derivative of the image (the Laplacian) was used to create a cost function C. For this paper the cost function used was

C = Σ_{m,n} { [y(m,n) - h(m,n)*f(m,n)]^2 + α [l(m,n)*f(m,n)]^2 }    (3.3.1)

where α is a regularization parameter and f is the current estimate of the original image x. This regularization parameter is what forces the image to be somewhat smooth by penalizing the high-frequency content of the image; the larger the value of α, the smoother the restored image will be. When working with the filter it is best to operate in the frequency domain. In matrix-vector form, the gradient of the cost function can be written as

∇C = 2 Hᵀ (H f - y) + 2 α Lᵀ L f    (3.3.2)

where L(k,l) is the Fourier transform of the discrete Laplacian function described in Equation 3.2.3. Unlike the CLS filter, the RLS filter minimizes the cost function by iteratively improving an estimate of the observed image over a number of predefined iterations. Due to the iterative nature of the Regularized Least Squares filter, it is necessary to choose an initial estimate of the original image, f, and then improve it. For the initial estimate the observed image y was used:

f̂_0 = y    (3.3.3)

This estimate was then improved iteratively, using gradient descent optimization to choose each successive image estimate. The process is repeated for the desired number of iterations and, if allowed to run long enough, returns a close representation of the actual image observed by the system.

3.4 Multichannel Regularized Least Squares Image Restoration

To use the Regularized Least Squares image restoration function on multiple frames of data, we simply change the cost function of the normal RLS filter to span several frames of data, as shown in [4]. This turns the cost function into

C = Σ_{a=1}^{b} Σ_{m,n} [y_a(m,n) - h_a(m,n)*f(m,n)]^2 + α Σ_{m,n} [l(m,n)*f(m,n)]^2    (3.4.1)

where the multichannel RLS filtering is performed over channels a = 1, 2, 3, ..., b [4].

Figure 7: Sample cross channel modulation transfer functions

The multichannel RLS algorithm has a major advantage over its single channel version in that it can use cross-channel information to increase its restoration ability. Because each frame is slightly different, each frame actually

contains different information regarding the same scene. Thus, if we know the relationship between the frames, we can use a multichannel filter to combine the information and create a single reconstructed image. This is extremely useful for filling in the zeroes of a camera system's Modulation Transfer Function (MTF). Figure 7 illustrates this with three sample MTFs. At a spatial frequency where an MTF has a value of zero, the imaging system is not able to reproduce any information about the scene it is imaging. However, if all three of the sample imaging systems shown were imaging the same scene, we would be able to fill in any gaps in our image of the scene with information obtained from the other two channels. In this way a multichannel filter is able to achieve better performance than a single channel version.
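To make the descriptions above concrete, the sketch below implements the single-channel iterative RLS scheme of Section 3.3 in the frequency domain, where each gradient-descent step is cheap. The step size mu and iteration count are illustrative choices, not values from the thesis; the multichannel version of Section 3.4 would simply sum the data-fidelity gradient term over the channels.

    function f = rls_restore(y, h, alpha, mu, niter)
    % Iterative RLS deblurring by gradient descent on the cost of Eq. 3.3.1.
        lap = [0 1/4 0; 1/4 -1 1/4; 0 1/4 0];
        H = fft2(h,   size(y,1), size(y,2));
        L = fft2(lap, size(y,1), size(y,2));
        Y = fft2(y);
        F = Y;                                  % initial estimate f = y (Eq. 3.3.3)
        for k = 1:niter
            % gradient of the cost in the frequency domain (cf. Eq. 3.3.2)
            gradF = 2*conj(H).*(H.*F - Y) + 2*alpha*(abs(L).^2).*F;
            F = F - mu*gradF;                   % gradient descent step
        end
        f = real(ifft2(F));
    end

With a normalized PSF, a call such as f = rls_restore(y, h, 0.01, 0.1, 200) converges toward the CLS solution, since the quadratic cost has the same minimizer when alpha equals gamma.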

CHAPTER 4

EXPERIMENTAL SETUP FOR DEFOCUS RESTORATION MEASUREMENT

To test the restoration abilities of the image processing algorithms, it was necessary to build a setup that allowed images to be captured with precise and repeatable measurements over several imaging targets. Most importantly, it had to allow fine, measured control of the amount of image blur being introduced into the imaging system. This image blur and camera alignment needed to be easily and precisely duplicated so that multiple imaging targets could be imaged under identical conditions, enabling the correlation of information obtained from multiple imaging targets with identical alignments and induced optical blur levels.

4.1 Image Capture and Defocus Measurement System Description

Figure 8: Experiment setup (image target on a rail; labeled distances of 72.4 mm, 46.5 mm, 765.8 mm, a variable distance, and 45 mm between the target, rail, Nikkor lens, and prism with mounted cameras)

Figure 9: Pacific Corp PC-370A

The experimental setup chosen for this paper is shown in Figure 8. It was chosen to enable easy swapping of imaging targets and to move them through precisely measured amounts of defocus. The prism shown is an Optec prism designed specifically for this experiment. This prism splits the incoming image

into three identical output channels, each with an equal number of photons. This prism is described further later in this section. Permanently mounted to each output channel is a PC-370A monochromatic CCD camera manufactured by Pacific Corporation (Figure 9, used here without the lens shown), and an image of the prism with cameras permanently attached is shown in Figure 10. The imaging lens of the system was a Nikkor 105 mm f/2D DC AF lens produced by Nikon, chosen for its high quality and the low amount of image aberration it introduces.

Figure 10: Prism system with attached cameras

The image targets used for this experiment were mounted to a sliding rail system with precisely measured and labeled millimeter ruler markings, enabling them to be moved both into and out of focus along the camera system's optical axis in precise, repeatable increments. At each location images were taken of three

different image targets: one to determine image registration parameters, one to measure defocus blurring (via the Point Spread Function), and one of a complex desert background scene (the sample image scene to restore). These images are discussed in more detail in Section 4.1.2.

4.1.1 Multichannel Prism Description

The prism selected was specially made by Optec Corporation. It takes a single input image and splits it into three identical output images (each receiving 1/3 of the photons), which are in turn imaged by a monochromatic camera system.

Figure 11: Image splitting prism

Figure 11 shows the prism itself without lens mountings or the attached cameras. It is important to specify that the Optec prism was manufactured such that each optical output channel has an identical optical path length through the prism and outputs an identically aligned image. This allows the three cameras mounted on the end of the system to share a single focal point when a lens is mounted onto the system. A diagram showing the optical paths of the channels is shown in Figure 12. Note that each channel has an identical optical path length (30 mm) through the prism, which is equivalent to 18.2 mm in air.

Figure 12: Diagram showing dimensions and optical paths of Optec prisms

4.1.2 Imaging Targets

The first measurement needed with this prism system was to determine the affine transform parameters required to provide image registration across camera channels and between stage positions. The camera systems attached to the prism are closely aligned to show the same image on all three channels. In actuality they are not perfectly aligned, however, meaning that there were minor variations between the images that would cause different amounts of image shift, blurring and other visual artifacts if not accounted for. To accommodate each of these alignment errors it was necessary to perform geometric transformations on the images to bring them into proper alignment. This includes performing a spatial transformation of coordinates and using intensity interpolation to assign intensity values to the spatially transformed pixels ([3], p. 87). The spatial transformation method used is known as the affine transform [5], which has the general form shown in Equation 4.1.2.1:

[x y 1] = [x̃ ỹ 1] T,   T = [ t11  t21  0
                              t12  t22  0
                              t13  t23  1 ]    (4.1.2.1)

This transformation can scale, rotate, translate, or shear a set of coordinate points, depending on the values within the transform matrix T. Affine transformations have several nice features and ensure the image is not overly warped, since they map lines to lines and preserve the ratios of distances along the lines [6].
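In MATLAB, such a transform can be estimated by least squares from matched control points on the two images and then applied with interpolating resampling. The sketch below uses Image Processing Toolbox routines; the control-point arrays cp_moving and cp_fixed are hypothetical stand-ins for points picked on the registration target (e.g. with cpselect), not data from the thesis.

    % Estimate and apply an affine registration between two channel images.
    tform = fitgeotrans(cp_moving, cp_fixed, 'affine');  % least-squares fit of T
    registered = imwarp(moving, tform, 'linear', ...     % spatial transform with
        'OutputView', imref2d(size(fixed)));             % intensity interpolation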

Figure 13: Registration pattern

To calculate these parameters, a custom imaging target (shown in Figure 13) was imaged by the system. This target has a good combination of line types and thicknesses, featuring many line intersections. It has several thin, fine lines to help with image registration when the camera is near focus, while also having large, bold lines for when the camera is severely defocused and the images contain a large amount of blur. Once the affine transformation parameters for the system were measured, another target was inserted onto the rail system to measure the Modulation Transfer Functions (MTF) and Point Spread Functions (PSF) of the system. The MTF describes the ability of a CCD imaging system to reproduce the contrast or modulation present in a scene at any given spatial frequency [7]. The advantage of measuring the MTF is that it is a direct and quantitative measure of image quality, and it is widely used within the photonics industry. The Point

Spread Function of an imaging system is another method of measuring its image quality. The PSF is a measurement of the optical blur of a system, obtained by imaging a point source and recording the corresponding blotch of light on the imaging plane. If an ideal camera system were to image a point source, we would get a perfectly circular spot in the image with a uniform gray level within the spot and zero light elsewhere. Since our camera system is not ideal, however, an image of a point source yields a gray level distribution that is high at the center of the point but decreases radially outward, reaching zero a certain distance away from the center [8]. Many image restoration algorithms depend upon having an accurate point spread function; for this thesis, however, the MTF was measured and used to estimate the PSF, because the PSF of a camera system is difficult to measure accurately in a direct fashion.

Figure 14: Sample PSF

Figure 15: Slanted edge target

The target chosen to measure the MTF of the camera system was simply a slanted edge target printed at high resolution, shown in Figure 15. Traditionally, this method of MTF measurement is done using a back-illuminated razor blade [9], but the cameras in this system are of a low enough resolution that a printed slanted-edge target provides accurate results. From this target the Edge Spread Function (ESF) was measured, from which the MTF and PSF of the camera system were derived.

Figure 16: Sample image scene [10]

The third and final image target used was a view of a sample desert scene from a distance (Figure 16). This image is a subset of a larger image [10] and was chosen because its abundance of detail allows an easy visual assessment of the image processing algorithms' restoration abilities.

4.2 Defocus Hardening

As proven in Chapter 2, the amount of defocus put into the imaging system has a direct effect on the system's protection from high energy laser systems. The experimental system used here was specifically designed to allow the easy

addition of defocus to the camera system by mounting the image targets onto a rail system. Simple Gaussian propagation models were then used to determine the laser hardening allowed by the image restoration algorithms. The Gaussian beam intensity distribution is given by Equation 4.2.1 ([11], p. 354):

I(r) = I_0 exp( -2 r^2 / w_0^2 )    (4.2.1)

Note that r is the radius away from the beam's propagation axis and I_0 is the peak, on-axis intensity of the beam. w_0 is the Gaussian beam radius, defined as the beam width at which the intensity falls to 1/e^2 of its on-axis value.

Figure 17: Gaussian beam propagation [12]

Propagation of Gaussian beams through an optical system can be treated almost as simply as geometric optics. Because of the unique self-Fourier-transform characteristic of the Gaussian, an integral describing the evolution of the intensity profile with distance is not needed. The transverse intensity distribution remains Gaussian at every point in the system; only the radius of the Gaussian and the radius of curvature of the wavefront change [12], as shown in Figure 17.

The change in beam width as the Gaussian beam propagates can be shown to be ([11], p. 352)

w(x) = w_0 sqrt( 1 + ( λ x / (π w_0^2) )^2 )    (4.2.2)

where w is the beam width, w_0 is the beam waist (the width at focus) and λ is the wavelength. To calculate the change in peak intensity as the focal plane array moves away from the focal point of the lens, some assumptions were made about the system. It was assumed that the beam comes in collimated along the optical axis of the lens system. It was also assumed that the beam width of the laser targeting the system is 1/2 the width of the entrance aperture of the lens, in order to ensure that diffraction could be ignored in the calculations. The lens had a focal length of 105 mm, and its f/# was set to 2.8. This gives an entrance aperture diameter of

D = focal length / f# = 105 mm / 2.8 = 37.5 mm    (4.2.3)

and a beam radius of 18.75 mm. It was also assumed that the beam was collimated and operating at 531.5 nm, a common laser wavelength. Given these assumptions it is possible to calculate the beam waist radius at focus to be [12]:

2 w_0 = 4 λ f / (π D) = 4 (531.5e-9)(105e-3) / (π · 37.5e-3),  so  w_0 = 947.42 nm    (4.2.4)

Once the minimum beam waist for the system at focus was calculated, the expansion of the beam as the focal plane array moves away from the focal point of the lens was plotted. Figure 18 shows how much the Gaussian beam in the system changes in radius as it moves away from its focal point. As shown, even 2.5 mm past the focal point the beam radius has expanded from 947 nm to 446 µm.

Figure 18: Beam radius vs translation from focal plane

To relate this to the system used in this thesis, it must be shown how moving the imaging target out of focus corresponds to moving the FPA away from the focal point of the lens. To do this, a simple thin lens calculation [13] was performed:

1/f = 1/s_1 + 1/s_2    (4.2.5)

Taking the knowledge that the red channel is in focus when the variable distance is 9.0 cm, we have s_1 = 90 mm + 765.8 mm + 45 mm = 900.8 mm. From this, and knowing the lens has a focal length of 105 mm, the distance between lens and FPA at focus works out to s_2 = 118.85 mm. Knowing that this is system focus, it can be calculated how far moving the target out of focus moves the FPA away from the focal point of the lens. This is shown in Figure 19.

Figure 19: Distance between sensor and focal point as image is translated.

Now it is possible to map how each of the stage positions for the setup changes the beam width on the focal plane array, and hence the maximum energy. This was done using Equations 4.2.1 and 4.2.2. The results are shown in both Figure 21 and Figure 22 with the assumption of a 5 mW laser input to the system. Figure 21 is a useful look at how the peak energy changes with respect to the defocus corresponding to the image target position. It is useful to track how the defocus amounts (which are labeled with regard to target position on the rail system, i.e. 9.5 cm, 12 cm, etc.) change the energy on the system. This data corresponds to Figure 22, which contains the same data but maps it according to the focal plane array's displacement from the focal point of the lens for a collimated input beam.

Figure 20: Distance between sensor and focal point as image is translated.

Figure 21: Peak energy change vs image target position

Figure 22: Hardening change vs distance from focus

As shown in these figures, a small translation of the FPA from the focal plane can produce a large reduction in the peak intensity incident upon the array. Figure 22 shows this by assigning a Hardening Factor to these calculations, defined here as the ratio of the intensity the system can withstand at each translation to the intensity it can withstand with the FPA at the focal plane of the system. These figures give a quick method of checking how much sensor hardening is gained by the amounts of defocus shown in future sections.
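The whole calculation of this section fits in a few lines; the MATLAB sketch below reproduces it under the stated assumptions (531.5 nm collimated input, 105 mm focal length at f/2.8) and plots the hardening factor, here taken as the peak-intensity ratio (w/w_0)^2, since the on-axis intensity of Eq. 4.2.1 falls as 1/w^2 at fixed power.

    % Gaussian-beam hardening estimate (sketch of the Section 4.2 calculation).
    lambda = 531.5e-9;                    % wavelength (m)
    f      = 105e-3;                      % focal length (m)
    D      = f / 2.8;                     % entrance aperture, Eq. 4.2.3 (37.5 mm)
    w0     = 2*lambda*f / (pi*D);         % waist radius at focus, Eq. 4.2.4
    x      = linspace(1e-6, 2.5e-3, 500); % FPA translation from focus (m)
    w      = w0 * sqrt(1 + (lambda*x/(pi*w0^2)).^2);   % Eq. 4.2.2
    hardening = (w/w0).^2;                % peak intensity drops as 1/w^2
    semilogy(x*1e3, hardening), grid on
    xlabel('FPA distance from focus (mm)')
    ylabel('Hardening factor')

Running this reproduces the numbers quoted above: w0 of about 947 nm at focus, expanding to about 446 µm at 2.5 mm of translation.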

CHAPTER 5

SIMULATIONS

Before applying the restoration algorithms to real images captured by the prism, it was necessary to simulate the images gathered by the prism. Performing these simulations served a very important purpose: it allowed the calculation of the optimal coefficient values for the Regularized Least Squares, Constrained Least Squares and Wiener filters. Running these simulations involved taking an idealized input image, blurring it in a manner similar to how the camera blurs an image, adding appropriate noise levels to the blurred image, and then running the RLS and CLS algorithms at several different alpha and gamma values. The results were then compared, and the alpha and gamma values that yielded the result nearest to the idealized input image were selected. The Wiener filter simulations made use of the fact that the Wiener filter is relatively insensitive to the power spectrum of the image. Combined with the fact that most images have fairly similar power spectra, the Signal to Noise

calculations found should be applicable to the Wiener filter restorations of the real prism-system images.

5.1 Input Images

The intensity prism simulations were performed with two different input images to compare the results. Ideally, the restoration-algorithm parameters calculated from simulating these two images would achieve similar results when applied to actual images. This would imply that the algorithms can restore images properly no matter what type of scene the cameras are imaging. The sample input images are shown below:

Figure 23: Cameraman.tif

Figure 24: Westconcordorthophoto.png

These images (both included in the MATLAB Image Processing Toolbox) represented the ideal images that the algorithms were working towards. These ideal images had to be blurred and then have noise added to simulate the differences between the actual scene being imaged and the imaging system's representation of that scene.

5.2 Blur - How It Works

The blurring of an imaging system can be simulated using a simple two-dimensional convolution of the input target image with the system's measured Point Spread Function (PSF). This simulates the blurring introduced by an imperfect lens [14]. To measure the point spread function, a slanted edge target (Figure 15) was imaged for each channel with the image defocused by precise amounts through its movement on the rail system (see Chapter 4). From this

image, the Edge Spread Function (ESF) of the imaging system was measured using the sfrmat 2.0 MATLAB tool developed by Peter Burns. This tool measures the ESF over several rows of CCD pixels and then averages them to determine the ESF of the system. The returned ESF was then converted to the Line Spread Function (LSF) of the system by taking the derivative of the ESF [9]. From the Line Spread Function the PSF of the optical system was derived by assuming that the PSF is circularly symmetric. This ESF-to-PSF conversion was performed using a MATLAB script developed by Dr. Russell Hardie (University of Dayton), creating the system PSFs by taking the measured LSF and using its values as radius values in a circular array. The Point Spread Functions that were measured are shown in Figure 25 - Figure 27. The plotted PSFs were each taken from an image at one of the thirty-six defocus amounts (each panel title is the location of the image on the rail system described in Chapter 4, and as such indirectly represents its distance from the focal point of the lens). This was done for each of the three channels in the Optec prism (which, although color independent with this prism, are labeled Red, Blue and Green to tell them apart; this is simply a legacy labeling from another prism designed by Optec and has no actual correlation to colors in this case). Each PSF is plotted alongside plots of the cross sections of each PSF. This allows easy comparison of both the sharpness of the PSF between the channels (through their relative intensities) and of the spreading of the PSF within each channel as defocus is increased.
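A sketch of that ESF-to-PSF pipeline is shown below. The vector esf is assumed to be the edge profile obtained from the slanted-edge tool, and the rotation of the LSF into a circular array is a minimal stand-in for Dr. Hardie's script, mirroring the thesis's circular-symmetry assumption.

    % Derive a PSF from a measured edge spread function (sketch).
    lsf = diff(esf(:)');                     % LSF = derivative of the ESF [9]
    lsf = lsf / sum(lsf);                    % normalize to unit area
    c   = find(lsf == max(lsf), 1);          % center of the line spread
    N   = 31;                                % PSF support in pixels (illustrative)
    [xg, yg] = meshgrid(-(N-1)/2:(N-1)/2);
    r   = sqrt(xg.^2 + yg.^2);               % radial distance from the center
    % use LSF values as radius values (an oversampled ESF would need its
    % sample spacing rescaled here)
    psf = interp1(0:(numel(lsf)-c), lsf(c:end), r, 'linear', 0);
    psf = psf / sum(psf(:));                 % unit-volume PSF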

Figure 25: Scaled red PSFs (stage positions from 8.5 cm to 22 cm)

Figure 26: Scaled blue PSFs (stage positions from 8.5 cm to 22 cm)

Figure 27: Scaled green PSFs (stage positions from 8.5 cm to 22 cm)

Figure 28: Radial cross section of red channel PSF

Figure 29: Radial cross section of blue channel PSF

Figure 30: Radial cross section of green channel PSF

Comparing the PSFs, the red channel has the best image quality when the target is mounted at 9 cm, while the blue and green channels have the best image when the target is at 8.5 cm. This shows that, contrary to the design intent, the optical paths of the individual channels of the imaging system are not perfectly identical.

Additionally, it is evident from the sharpness of the PSFs that the red channel has the best image quality, its PSF being nearest to a perfect point spread function. Now that the PSF of each channel at each target distance has been measured, these PSFs can be used to simulate the blur level. To do this it was necessary to take each input image, pad its edges (so that the convolution remains accurate along the image's actual edges) and then perform a two-dimensional convolution with the system's PSF. Example blur results are shown in Figure 31 and Figure 32.

Figure 31: Blurring using PSF convolution (Red channel at 9 cm): unblurred image * PSF = blurred image
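A sketch of this blur step, with padarray's 'replicate' option as one reasonable choice of edge padding (psf here being one of the measured kernels from the pipeline above):

    % Simulate defocus blur by convolving an ideal image with a measured PSF.
    x  = im2double(imread('cameraman.tif'));
    p  = (size(psf,1) - 1) / 2;               % pad by half the PSF support
    xp = padarray(x, [p p], 'replicate');     % padding avoids edge artifacts
    yb = conv2(xp, psf, 'same');              % two-dimensional convolution
    yb = yb(p+1:end-p, p+1:end-p);            % crop back to the original size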

Figure 32: Simulated image blur at different distances (red channel at 9, 10, 12, 14, 18 and 22 cm)

5.3 Add Noise - How Noise Is Selected And How It Works

Now that the images have been accurately blurred, noise must be added to them in order to approximate the noise levels typically seen in camera systems. To do this, the assumption was made that the camera system has a Gaussian noise profile. This allows the use of MATLAB's imnoise function to add Gaussian white noise to the images. These simulations were done at four different noise levels: noiseless, actual camera noise (low), medium, and high.

Figure 33: Simulating the noise of the camera (averaged image, single image, and simulated image with V = 4.4746e-5)

To measure the amount of noise in the camera system, a set of 20 identical images was taken and averaged to remove the noise. It was then possible to perform a pixel-to-pixel comparison of a single image against the averaged image in MATLAB to find the Mean Absolute Error (MAE) of their pixels. Noise was then added to the averaged image using the imnoise.m function in MATLAB, slowly increasing the variance of the noise until the MAE between the single image and the artificially noisy averaged image equaled the MAE between the single image and the

averaged image. A zoomed-in comparison of the noiseless image, single image, and artificially noisy image is shown in Figure 33. As shown, adding Gaussian noise with a variance of 4.476e-5 closely simulates the noise inherent in the camera system; this noise level was found to give the closest approximation of the real-world camera system's actual noise level. To simulate multiple levels of noise, simulations were done with four different levels of added noise variance (V = 0, V = 4.47e-5, V = 1.2e-4, V = 1e-3). These noise levels were selected to provide a broad range for testing the image restoration algorithms, from noiseless (V = 0) to more noise than would ever be expected in a real-world situation. Figure 34 shows the cameraman.tif image with these noise levels added. A zoomed-in, 50 x 50 pixel comparison of these noise levels is shown in Figure 35 for better detail.
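The calibration just described can be sketched as below. The stack imgs is assumed to be an H x W x 20 array of repeated captures scaled to [0,1], and the loop is a simple stand-in for the manual variance adjustment, here reading the matching condition as the synthetically noised average reproducing the measured single-frame MAE.

    % Match simulated Gaussian noise to the camera's measured noise (sketch).
    avg    = mean(imgs, 3);                       % 20-frame average: low noise
    single = imgs(:,:,1);                         % one raw frame
    target = mean(abs(single(:) - avg(:)));       % measured single-frame MAE
    v = 1e-6;                                     % starting variance guess
    noisy = imnoise(avg, 'gaussian', 0, v);
    while mean(abs(noisy(:) - avg(:))) < target
        v = 1.05 * v;                             % raise variance until MAE matches
        noisy = imnoise(avg, 'gaussian', 0, v);
    end
    fprintf('Calibrated noise variance: %.4g\n', v);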

Figure 34: Cameraman.tif with 4 levels of noise (V = 0, 4.47e-5, 1.2e-4, 1e-3)

Figure 35: Zoomed in simulated noise levels with different variance

5.4 Calculate Wiener SNR, RLS Alphas, CLS Gammas

Once the blurring PSFs had been measured and the noise levels added, it was possible to calculate optimal restoration coefficients for the selected image processing algorithms. For the Wiener filter this means finding the SNR, for the Regularized Least Squares filter the alpha values, and for the Constrained Least Squares algorithm the gammas. These coefficients determine how each algorithm balances its preference for a smooth image (due to the image's probable strong spatial correlations) against the need to also produce images with sharp edges where appropriate. To efficiently find the optimal filter values, MATLAB's fminbnd.m function was chosen, which finds the minimum of a function of one variable within a specified search interval (Equation 5.4.1):

min_x f(x)  such that  x_1 < x < x_2    (5.4.1)

It does this through the use of golden-section search and parabolic interpolation [15], as derived in Computer Methods for Mathematical Computations. After the search routine had been chosen, it was necessary to determine the function to be minimized. To do this, three MATLAB functions were created (one for each filter) that reported the MSE (Mean Squared Error) between the target (unblurred and noiseless) reference image and a simulated blurred image restored by the filter with a specific

restoration coefficient (called SNR, alpha, or gamma depending upon the filter). These functions were then fed into Matlab's fminbnd.m search routine, which minimized the MSE over the restoration coefficient. This search was run for each recorded amount of blur, at four different noise levels, and on two different images. The optimal filter restoration coefficients are shown below in Figure 36 - Figure 38. Note that the amount of image defocus increases with image number, and the image number can be used to find the stage position from the PSF figures given in Figure 25 - Figure 27.
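As a sketch of how such a search might look in Matlab: restoreAndScore below is a hypothetical helper (not from the thesis) that restores the blurred image with a candidate coefficient and returns its MSE against the unblurred reference.

    % Minimal sketch of the coefficient search. restoreAndScore is a
    % hypothetical helper that restores 'blurred' with the candidate
    % coefficient and returns the MSE against the reference image.
    score = @(coeff) restoreAndScore(blurred, psf, coeff, reference);

    % fminbnd minimizes a function of one variable over a bounded
    % interval using golden section search and parabolic interpolation.
    [coeffOpt, mseOpt] = fminbnd(score, 0, 1);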

Figure 36: Optimal Wiener filter SNR values for all three channels (not at same scale)

Figure 37: Optimal RLS filter alpha values for all three channels (not at same scale)

Figure 38: Optimal CLS filter gamma values for all three channels

Figure 39: Comparison of RLS filter optimal alpha value scale (red channel)

Figure 40: Comparison of Wiener filter optimal SNR value scale (red channel)

Figure 41: Comparison of CLS filter optimal gamma scale (red channel)

Looking at the results shown in Figure 36 - Figure 38, the cameraman.tif image usually yields a larger optimal filter coefficient than the satellite image. This is because the cameraman.tif image has less spatial detail than the satellite image westconcordorthophoto.png and

as such can tolerate more smoothing without losing its detail. As can be seen in Figure 39 - Figure 41, larger noise values require a larger optimal restoration parameter to help smooth out the effects of noise. However, it is interesting to note that the CLS filter's optimal gamma parameter is fairly noise insensitive.

5.5 Apply Values To Simulated Images - Results

With the optimal filter parameters calculated, it was possible to apply them to the simulated images and compare the results against the optimal images to check filter quality. To do this, the Wiener, Regularized Least Squares, and Constrained Least Squares filters were applied to the blurred images using the parameters gathered in the previous section. The restored images, along with the artificially blurred image itself, were then compared pixel by pixel to the optimal unblurred images, and the Mean Squared Error (MSE) between the two was computed. Since the MSE becomes lower as the restored image gets closer to the original optimal image, this allowed a direct numerical comparison of the image restoration ability of each algorithm. The filters were first applied to the noiseless images (V = 0) because these images should be the easiest to restore. The MSE results after applying the image restoration algorithms to the three image channels are shown in Figure 42 - Figure 44.
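One step of this pipeline might look as follows in Matlab. This is a minimal sketch that uses the Image Processing Toolbox routines deconvwnr and deconvreg as stand-ins for the thesis filter implementations; psf, nsr, and alpha are assumed to come from the preceding sections.

    % Minimal sketch of one simulation step. deconvwnr and deconvreg are
    % Image Processing Toolbox stand-ins for the thesis filters; psf,
    % nsr, and alpha come from the measured PSFs and the fminbnd search.
    ref     = im2double(imread('cameraman.tif'));       % optimal reference
    blurred = imfilter(ref, psf, 'conv', 'circular');   % apply measured PSF
    noisy   = imnoise(blurred, 'gaussian', 0, 4.47e-5); % camera-level noise

    wien = deconvwnr(noisy, psf, nsr);        % Wiener restoration
    rls  = deconvreg(noisy, psf, [], alpha);  % regularized restoration

    % MSE between each image and the reference; lower means a closer match.
    mse = @(a,b) mean((a(:) - b(:)).^2);
    fprintf('unrestored %.4g   Wiener %.4g   RLS %.4g\n', ...
            mse(noisy, ref), mse(wien, ref), mse(rls, ref));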

Figure 42: Red channel image restoration accuracy (no noise) Cameraman.tif

Figure 43: Green channel image restoration accuracy (no noise) Cameraman.tif

Figure 44: Blue channel image restoration accuracy (no noise) Cameraman.tif

As shown in Figure 42, all of the image restoration algorithms improve upon the unrestored blurred image with the same amount of blur applied. The best unrestored image for the red channel is image six (which corresponds to the imaging targets being at the 9cm rail position), which is consistent with the point spread functions shown in Figure 25. Figure 25 shows that the red channel's PSF is optimal when the target is at the 9cm position, meaning that is the location of the system's focus. The mean squared error (MSE) value for the unrestored image at that position is 471.1. This means that any restored image with an MSE value less than 471.1 is statistically closer to the actual ideal image than the best unrestored image at focus. The optimal restored images calculated for each filter all also occur at image location six. The optimal RLS restoration MSE value is 122, the optimal CLS MSE value is 34.48, and the optimal Wiener MSE is 38.01. The red channel restored images corresponding to these algorithms are shown in Figure 45.
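The maximum restorable defocus reported below follows directly from this threshold. As a short sketch, assuming per-image MSE vectors produced by a loop over the defocus positions:

    % Last image index whose restored MSE still beats the best
    % unrestored image (471.1 for the red channel). mseRestored and
    % mseUnrestored are assumed to be indexed by image number.
    bestUnrestored = min(mseUnrestored);
    maxRestorable  = find(mseRestored < bestUnrestored, 1, 'last');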

Figure 45: Optimal Cameraman filter restoration images (Red channel)

Now the calculated limits of the image defocus restoration abilities were tested. Looking at Figure 42, it is evident that at image 16 (target at stage position 12cm) the Regularized Least Squares MSE value for the red channel is 424.5, meaning that it is a closer representation of the actual ideal image than the unrestored image at focus. For the CLS filter, image 16 (MSE = 415.8) is the most defocused image with an error less than that of the best unrestored image. For the Wiener filter, the last image with an MSE less than 471.1 is image 18 (MSE = 446.6). These images, shown in Figure 46, show that our method for

determining the quality of the image restoration algorithm (through comparison of MSE values) is a valid one. Note that the restored Wiener, RLS, and CLS images are comparable to the least blurred unrestored image at imparting the information contained in the optimal image. This analysis was then performed on all three image channels (red, green, and blue) to further show that this holds true for different imaging system parameters.

Figure 46: Red channel V = 0 maximum defocus restored Cameraman images

Performing the same steps on the green channel shows that the best unrestored image quality occurs at image 9 (corresponding to the stage being at

9.3 cm). At this point the unrestored image has an MSE of 551.4 when compared to the optimal image. Unlike the red channel, the optimally restored images do not all occur with the same amount of defocus. The RLS restoration achieves its best result with image three (MSE of 228.9). The Wiener and CLS restorations both work best with image one (CLS has an MSE of 81.61, Wiener has an MSE of 115.8). The images corresponding to these results are shown in Figure 47.

Figure 47: Optimal Cameraman filter restoration images (Green channel)

The filters' restoration limits were then tested on the green channel imagery using the same methodology as for the red channel. These results are shown in Figure 48.

Figure 48: Green channel V = 0 maximum defocus restored Cameraman images

Finally, the optimal coefficients were tested on the blue channel. The blue channel camera image with the best image quality is image 6 (corresponding to the stage being at 9.0 cm). At this point the unrestored image has an MSE of 593.2 when compared to the optimal image.

Unlike the red channel, the optimally restored images do not all occur with the same amount of defocus. The RLS restoration achieves its best result with image one (MSE of 279.7). The Wiener and CLS restorations both work best with image ten (CLS has an MSE of 130, Wiener has an MSE of 202.4). The images corresponding to these results are shown in Figure 49.

Figure 49: Optimal Cameraman filter restoration images (Blue channel)

Figure 50: Blue channel V = 0 maximum defocus restored Cameraman images

The same image restoration simulations were then performed with the satellite image parameters. Looking at Figure 51, the best unrestored image (that with the lowest MSE differential with the optimal image) for the red channel is image six, with an MSE of 695.1. The MSE of the best RLS restoration is 163.4, the MSE of the best CLS restoration is 48.58, and the MSE of the best Wiener restoration is 52.47 (all at image number six as well). For the green channel (shown in Figure 52) the best unrestored image is image nine, with an MSE of 818.7. The best results for the RLS filter come with image 3 (MSE of 317.4). The best results for the CLS filter

come with image one (MSE of 92.06), and the best results for the Wiener filter also come with image one (MSE of 153.2). For the blue channel (Figure 53) the best unrestored image is image 6, with an MSE of 885.8. The best results for the RLS filter come with image 1 (MSE of 395.2). The best results for the CLS filter come with image 10 (MSE of 166.2), and the best results for the Wiener filter come with image 10 (MSE of 279.5).

Figure 51: Red channel image restoration accuracy (no noise) Satellite image

Figure 52: Green channel image restoration accuracy (no noise) Satellite image

Figure 53: Blue channel image restoration accuracy (no noise) Satellite image

These optimally restored blurred images are shown in Figure 54. These are the most in-focus images (determined by having the lowest MSE) restored with the various filters. Figure 55 shows the restorations at the maximum defocus calculated to still be better than the original unrestored satellite image (similar to the previous section).

For the RLS and CLS filters, the calculations show that blur levels up to image sixteen can be restored and still be better than the unrestored image. The Wiener filter can go up to image eighteen. This figure shows that using MSE to determine an algorithm's restoring power is valid, as the restored images look at least as good as the least blurred unrestored image.

Figure 54: Optimal Satellite filter restoration images for V=0 (Red channel)

Figure 55: Red channel V = 0 maximum defocus restored Satellite images

Similar results were also seen on the blue and green channels; the amount of blur there is nearly identical to that of the red channel satellite image, and the trends match those seen in the calculations performed on the cameraman image. With the filters' effects on both sets of images demonstrated, it was decided to limit the remaining algorithm calculations to the satellite images. This is because the satellite images have greater high frequency spatial content, so the parameters calculated for the filters (the RLS alpha, CLS gamma, and Wiener SNR) result in less smoothing and, hopefully, a higher level of detail when applied to real world images.

The simulations were then performed on the images with noise added to better recreate real world results. As discussed previously, noise was added by using the imnoise command with variances of 4.47e-5, 1.2e-4, and 1e-3 on the satellite images. The V = 4.47e-5 level is the one that testing showed to be equivalent to the real world camera used in section 4.1.3. Once noise was added to the images, the same tests were performed as previously done with the noiseless images. The results for the red channel are shown below in Figure 56.

Figure 56: Summary of the filters' restoration ability on the simulated red channel for all noise levels - Satellite image

One thing revealed by Figure 56 is that the CLS filter is better than the RLS filter for low amounts of defocus but not for high amounts of defocus. It is also interesting to note how well the Wiener filter performs. The Wiener filter is the simplest filter and should be the worst performer, but here it performs on par with the other restoration methods. This is simply a side effect of how the simulations were performed. To calculate the optimal Wiener SNR, the algorithm required that the actual optimal image be fed in along with the blurred image; it then searches for the Signal to Noise Ratio that best restores the blurred image to the optimal image. The other algorithms do not get to use the original image in calculating their optimal parameters and hence perform at a handicap. As such, the Wiener filter was expected to perform significantly worse when applied to actual images, where it must use a generic power spectrum rather than one optimized for each image it is attempting to restore. To better compare how noise affects each filter, each was plotted separately across the four noise levels in Figure 57 - Figure 59. These plots show the MSE between the restoration of the blurred, noisy image and the optimal noiseless image.

Figure 57: Red channel Wiener filter restoration error

Figure 58: Red channel RLS filter restoration error

Figure 59: Red channel CLS filter restoration error

It is now worth examining how noise affects the filters' restoration parameters (the Wiener SNR term, RLS alpha term, and CLS gamma term), shown in Figure 60 - Figure 62. Note that as the noise levels increase, the optimal restoration parameter increases. This is to be expected, as the restoration parameters control the amount of smoothing performed on the image, and higher noise levels require more smoothing to restore the image. One very interesting discovery is just how insensitive the CLS filter's gamma is to noise (Figure 62) when compared to the Wiener and RLS filters.

Figure 60: Optimized Wiener filter restoration coefficient

Figure 61: Optimized RLS filter restoration coefficient

Figure 62: Optimized CLS filter restoration coefficient

5.6 Multi-Channel Restoration - All Three Channels In Focus

With single channel restoration examined, the multi-channel RLS restoration algorithm was investigated next. This is a modified version of the RLS function that takes three input images of the same scene and uses cross-plane similarities within the images to aid in restoration [17]. Unfortunately, when using simulated images there were no sub-pixel differences between the images, and hence the results here are of limited correlation to the real-world optical system. It was worth simulating, though, to see whether any increase in restoration ability is gained when using three identical images with different amounts of blur compared to simply restoring the best of the three images. To begin, three identical images were taken and blurred with three different amounts of blur from the real world prism system's measured point spread functions. To ensure the amount of blurring added was similar but not

identical, the PSFs for each of the three channels were selected and fed into the algorithm at a specified defocus amount (i.e. the red, blue, and green channels' point spread functions measured with the PSF target at 9cm on the optical rail system). The results were compared to simply restoring the single best focused image from the three imaging channels with a single channel RLS filter. This was done for each of the four added noise levels, represented by the variances (V) of 0, 4.47e-5, 1.2e-4, and 1e-3. These results are shown in Figure 63.

Figure 63: Comparison of multi-channel vs single channel algorithm restoration quality
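The multichannel combination can be sketched in the frequency domain. The following is a minimal Tikhonov-style (RLS-like) closed form with a single alpha shared by all three channels and a Laplacian smoothness term; it illustrates the idea but is not the exact algorithm of [17].

    % Minimal frequency-domain sketch of a multichannel RLS-style
    % restoration (shared alpha, Laplacian regularizer) -- not the
    % exact algorithm of [17]. y1..y3 are the three blurred
    % observations; psf1..psf3 are their point spread functions.
    sz = size(y1);
    H1 = psf2otf(psf1, sz); H2 = psf2otf(psf2, sz); H3 = psf2otf(psf3, sz);
    L  = psf2otf([0 1 0; 1 -4 1; 0 1 0], sz);   % Laplacian smoothness term

    num  = conj(H1).*fft2(y1) + conj(H2).*fft2(y2) + conj(H3).*fft2(y3);
    den  = abs(H1).^2 + abs(H2).^2 + abs(H3).^2 + alpha*abs(L).^2;
    xhat = real(ifft2(num ./ den));             % combined restored image

The denominator shows directly how frequencies at which one channel's MTF is near zero can be filled in by the other channels, a point revisited below.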

Note that the multichannel algorithm actually starts out performing worse than the single channel algorithm but improves comparatively as the noise levels increase. By the final, high noise level (V = 1e-3) the multichannel algorithm actually performed better than the single channel algorithm. This is very interesting, as one would expect the results to be at least as good as the best single channel image restoration, since the other images add additional information to the algorithm. The algorithm used here has only a single alpha value as input, which it uses on all three input channels. However, because the input channels had slightly different levels of blur (the blue and green channels are not as sharp as the red channel), one single alpha value cannot be optimal for all three channels. The multichannel RLS algorithm weights all three images equally, meaning that the two images with increased blur actually cause the algorithm to perform less effectively because it treats them as being of equal quality. To test whether this is the case, three new sets of blurred, noisy images were created. Each set had identical blur applied (the measured blur of the red channel), to which noise was added as described previously. Because noise was applied separately to each set of images, this produced three different images with identical blur levels to feed into the multichannel RLS filter. This ensured that a single alpha value would be optimal for all three channels. The results are shown in Figure 64. For a noiseless image, the multichannel RLS algorithm performed identically to the single channel algorithm. This is logical because, without any noise, we are feeding three identical images into the multichannel RLS algorithm and then that same image into the single

channel RLS algorithm. This means that the algorithms have the exact same image information to work with and so produce identical results. As soon as noise is added to the input images, the multichannel algorithm begins to perform better than the single channel algorithm, because it now has three different realizations of the imaged scene (differing only in their noise) that it can use to reconstruct the image.

Figure 64: Red images multichannel RLS vs single channel RLS

Having shown that the multichannel RLS algorithm outperforms the single channel when it can perfectly model the blur of all input channels with a single alpha, it was then possible to conclude that the multichannel algorithm's poor

performance compared to the single channel shown in Figure 63 at low noise levels was due to the differing blur levels of each channel not providing enough MTF compensation to overcome the suboptimal alphas for the additional input channels. A large benefit of the multichannel algorithm is the ability to combine all three MTF functions into one single MTF for the system. The algorithm uses each channel's MTF data to fill in the portions of the imaging system's MTF that are at or near zero for the single channel. If the additional channels' MTFs do not add enough new data to the best single channel MTF (i.e. filling in the frequencies where it is zero) to counteract the fact that two of the three channels have sub-optimal alphas applied to them, the multichannel RLS algorithm used here performs worse than the single channel. The multichannel RLS algorithm did, however, gain a performance boost over the single channel as sensor noise levels increased (Figure 64), due to the additional information on the input scene it receives from the extra channels. For comparison's sake, an affine registration was then performed on the images before the multichannel filter restored them. This resulted in slightly blurred, imperfect images, as the affine registration process did not perfectly restore the image to what it was. The results are shown in Figure 65.

Figure 65: Comparison of registered multi-channel vs single channel algorithm restoration quality

When comparing Figure 63 to Figure 65, we find that the algorithm actually performed worse with the registered images. This was an important finding, as image registration needed to be performed when testing the filters on the real images. This worsening of performance is due to the apparent inability of our registration methodology to perfectly reverse an affine transformation. The process introduces a slight blur, which is reflected in the MSE. This is shown in Figure 66, where an affine transformation registration is performed on an image (a zoomed in view of our satellite image). An inverse affine

transformation was then performed to unregister the image, but as shown in Figure 66, the reference image is not perfectly replicated; it is slightly blurred.

Figure 66: Showing effects of image registration via affine transforms

The importance of this discovery was seen when performing multichannel RLS restoration on actual images in the next section. Since image registration had to be performed to bring the blue and green channels in line with the red, those images were expected to have an additional blur due to the affine transformation.
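This round trip is easy to reproduce in Matlab. The sketch below uses imwarp with an illustrative affine transform (the transform values are made up for demonstration) and shows the residual blur as a nonzero round-trip MSE.

    % Sketch of the registration round trip: warp with an affine
    % transform, warp back with its inverse, and compare against the
    % reference. The transform values are illustrative only.
    ref   = im2double(imread('westconcordorthophoto.png'));
    tform = affine2d([1 0.01 0; 0.02 1 0; 2.5 1.5 1]); % small shear + shift
    view  = imref2d(size(ref));                        % fix the output size

    warped = imwarp(ref, tform, 'OutputView', view);
    back   = imwarp(warped, invert(tform), 'OutputView', view);

    % Bilinear resampling (the imwarp default) is applied twice, so
    % 'back' is a slightly blurred copy of 'ref', inflating the MSE.
    fprintf('round-trip MSE: %g\n', mean((back(:) - ref(:)).^2));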

CHAPTER 6

ACTUAL IMAGES - SINGLE CHANNEL

With the simulations completed, it was possible to apply the optimal image processing coefficients calculated therein to real images taken by the prism system. These images were taken and registered, and then the single channel and multichannel Regularized Least Squares, the Constrained Least Squares, and the Wiener filter algorithms were applied to the registered images. This was done with the filter coefficients (alpha, gamma, SNR) calculated as being optimal for the corresponding blur level in the simulated images.

6.1 The Measured Images

Five sample images (out of 36 total) were taken by the prism system with varying amounts of defocus and are shown in Figure 67. These images provide a good breadth of defocus to restore and will be the focus of the rest of this

paper. In this figure the images progress from slightly out of focus (Unrestored Image 1 is taken with the image target ½ cm before system focus), to in focus (Unrestored Image 6), to slightly out of focus (Unrestored Image 11 is ½ cm past focus), to extremely out of focus (Unrestored Images 16 and 21 are 3cm and 5.5cm past focus respectively). For reference, Unrestored Image 6 is 82cm from the system's entrance aperture.

Figure 67: Unrestored prism images

To aid in showing the progression of defocus, Figure 68 is included: a zoomed in view of the ladder in the bottom portion of the images in Figure 67. Examining the figure, Unrestored Images 1 and 11

are equally blurred when compared to Unrestored Image 6, which is to be expected as they are, in actuality, equally out of focus.

Figure 68: Zoomed in unrestored images

The restoration abilities of each image processing filter were then compared on real images at these levels of defocus. Unlike the simulated examples, there is no numerical method to compare the quality of the images, and hence qualitative judgments must be relied upon to compare their restoration ability. This was done by visually examining the restorations at each of the selected defocus amounts and comparing them to the unrestored image, as a quick test that the restoration algorithms were working as intended.
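Mechanically, restoring one of these registered images with its simulation-derived coefficient might look like the following sketch. The file name, the alphaTable lookup, and the use of deconvreg as a stand-in for the thesis RLS implementation are all illustrative assumptions.

    % Sketch: restore a real registered image with the alpha found to be
    % optimal for the matching simulated blur level. The file name and
    % alphaTable are illustrative, not the exact thesis code.
    img   = im2double(imread('prism_image_16.png')); % hypothetical file
    alpha = alphaTable(16);                          % coefficient for image #16
    restored = deconvreg(img, psfMeasured, [], alpha);
    imshowpair(img, restored, 'montage');            % side-by-side comparison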

Figure 69: Image #1 (0.5cm before focus) algorithm comparison

As seen in Figure 69, all three algorithms look good when compared to the original unrestored image. The restored images appear to have sharper edges and greater detail in the brickwork of the buildings. Additionally, the window panes are much more visible in the restored images than in the unrestored image. Beyond that, however, it is difficult to compare the filters' performance to one another from this distance. Zooming in on the ladder within the image gives Figure 70. Comparing the four images, it is evident that while all three single channel filters sharpen the details of the ladder, the Wiener and CLS algorithms produce an image with what appears to be almost a Gaussian noise pattern. The RLS image clearly gives the best results in terms of image quality.

Figure 70: Zoomed in view of image 1 (0.5cm before focus)

Next, the image filters were compared with the image at focus (Image #6). This is shown in Figure 71. As expected, once again all three filters restored the image to a sharper, clearer state than the unrestored raw image. Once more a small subset of the image was examined to better measure their relative performance; this is shown in Figure 72. As shown there,

the single channel RLS filter once again has the best restored image quality of the examined filters.

Figure 71: Image #6 (at focus) algorithm comparison

Figure 72: Zoomed in image 6

Moving on to Image 11 (the lens is focused ½ cm in front of the image target), the restoration results are shown in Figure 73. This is once again the same amount of defocus as Image 1, so the images should look comparable to Figure 69, which they in fact do. Figure 74 zooms in on the image to better compare the restoration abilities of the filters. Note that once again the RLS filter performs better than the others.

Figure 73: Filter comparison of image 11

Figure 74: Zoomed in image 11

The performance of the image processing filters on Image 16 is shown in Figure 75. As evident in the pictures, the Wiener filter and the CLS filter are beginning to perform quite poorly when compared to the RLS filter. Although their results are sharper in detail than the unrestored image, excessive ringing introduced by the filters' discrete Fourier transforms actually makes them harder to practically view. The Wiener filter not performing well is not a surprise, as it is the simplest filter and was expected to perform poorly. It only performed so well in the simulated results because the original image was supplied to the algorithm as an input when calculating the Signal to Noise Ratios. The big surprise was the CLS filter. The CLS filter is a noniterative version of the RLS filter with a slightly different optimization goal. As such, it was expected to perform similarly to the RLS, as was even predicted in the simulated image results. This is obviously not the case. Its excessive ringing actually puts it more in line with the Wiener filter's performance. Because

the RLS filter clearly had the best image restoration ability, there was no need to examine the images in greater detail.

Figure 75: Image #16 filter comparison

Finally, the most defocused set of images (out of the small subset chosen for study in this section), belonging to Image 21, was examined. Figure 76 shows the unrestored image compared to the three single channel filtered images. While all three of the restored images are sharper than the unrestored image, once again the single channel RLS filter proves to be the best, because the Wiener and CLS algorithms have excessive ringing. There is once again no point in zooming in on the images, as it is evident that the RLS filter is the clear winner: it provides the sharpest image with the least ringing. Note that the CLS image here actually appears to have worse ringing than the Wiener filter.

Figure 76: Image 21

CHAPTER 7

MULTICHANNEL IMAGES

After examining the performance of the three different types of single channel filters, it was then possible to compare them to the multichannel RLS filter's performance. More specifically, it was compared to the single channel RLS filter, since that was the best performing of the single channel filters.

Figure 77: Comparison between single and multi channel RLS at location 1

As can be seen in Figure 77, the single and multichannel RLS filters looked comparable in their abilities to reduce the blurring of the image. To get a better sense of which actually performs better, a detailed feature was zoomed in on and compared. Once again the ladder in the foreground was chosen. This zoomed in view is shown in Figure 78. Upon inspecting this figure, it is evident that the single channel filter slightly outperforms the multichannel filter.

Figure 78: Zoomed in comparison of single and multichannel RLS filters at location 1

Figure 79: Comparison between single and multi channel RLS at location 6

Figure 80: Zoomed in comparison of single and multi channel RLS filters at location 6

Figure 79 and Figure 80 show location 6, where the image was in best focus. Examining these figures closely, it is evident that once again the single channel RLS filter slightly outperformed its multichannel counterpart.

Since in both previous cases (location 1 and location 6) the single channel RLS filter performed slightly better image restoration than the multichannel version, this trend would be expected to continue with the other three levels of defocus that were examined. Looking at Figure 81 and Figure 82, it is evident that this was the case.

Figure 81: Comparison between single and multi channel RLS at locations 11, 16, and 21

Figure 82: Zoomed in comparison of single and multi channel RLS filters at locations 11, 16 and 21

Although it is difficult to tell with large amounts of defocus, it appears that in all five defocus cases that were chosen, the single channel algorithm

performed better than its multichannel counterpart. This corresponded to the simulated image results. As discussed in section 5.6, this could have been due to the different blur amounts confusing the algorithm and not adding enough new spatial information to cancel this out. On top of this, in these real images (unlike in the simulated images) it was necessary to perform image registration on the multichannel images to better align them before the algorithm could be applied. As previously discussed, image registration was not perfect, and this caused the images to blur slightly before the multichannel algorithm could even be applied. Finally, there was the possibility that the system point spread function was measured improperly. A poorly measured point spread function would severely degrade the ability of the filters to restore a blurred image taken by the imaging system.

7.1 Predictive Defocus Restoration Through Simulations

The ability to predict the amount of defocus that can be adequately restored through the use of simulated images was then tested. When processing the simulated restoration results, the amount of defocus that each algorithm could restore while still returning the blurred image to a state of at least equal quality to the most in focus unrestored image was calculated. This was predicted by calculating the Mean Squared Error (MSE) between the artificially blurred test image and the test image itself. It was proposed that any time an algorithm returned a restored image with an MSE value less than the MSE between the best-focused unrestored image and the original unblurred image, the restored defocused image would be better than the blurred, unrestored, in focus image

since there was less of a pixel-to-pixel value differential. As such, it was postulated that it was possible to predict how much image defocus each algorithm could correct while still achieving a quality better than an in focus, uncorrected image. These simulated results were then tested with actual images taken by the input system. Since each of the actual images was, in reality, an average of 20 images (in an effort to eliminate the noise), the results obtained from the noiseless simulated images were used. These results predicted that it was possible to restore an image from target location #18 with a Wiener filter, and the image from location #16 with the CLS and single channel RLS filters. These restored images should all be better than the best unrestored image, from location #6. These images are shown in Figure 83. As shown, all three of the restored images are nowhere near the image quality of the best unrestored image. This is not unexpected. The Wiener filter only performed so well in the simulations because it had knowledge of the actual ideal scene under test; it performed very poorly in real world situations. The CLS filter has the ringing issues noted previously. The RLS filter performs best, as expected: it maintained much of the image content of the unrestored image, but is much blurrier. Thus the predictive, numeric method of calculating the amount of defocus each filter can restore was a failure. This was actually not surprising, due to the possibility of error in measuring the point spread function of the system. Measuring a system's point spread function is incredibly difficult, and many assumptions and automated methods were used in this thesis to get the measured PSF values. Even a slight error between the real and measured PSF

values would lead to the image filters studied in this thesis not performing to their optimal potential. In addition, better results may be achievable with a different metric than the Mean Squared Error (which performs poorly when a restored image contains ringing); this may be studied in the future.

Figure 83: Test of predictive defocus restoration abilities