Compressive Optical MONTAGE Photography


Invited Paper

David J. Brady a, Michael Feldman b, Nikos Pitsianis a, J. P. Guo a, Andrew Portnoy a, Michael Fiddy c

a Fitzpatrick Center, Box 90291, Pratt School of Engineering, Duke University, Durham, NC 27708
b Digital Optics Corporation, 9815 David Taylor Drive, Charlotte, NC 28262
c Center for Optoelectronics and Optical Communications, University of North Carolina Charlotte, 9201 University City Blvd., Charlotte, NC 28223

ABSTRACT

The Compressive Optical MONTAGE Photography Initiative (COMP-I) is an initiative under DARPA's MONTAGE program. The goals of COMP-I are to produce 1 mm thick visible imaging systems and 5 mm thick IR systems without compromising pixel-limited resolution. Innovations of COMP-I include focal-plane coding, block-wise focal plane codes, birefringent, holographic and 3D optical elements for focal plane remapping, and embedded algorithms for image formation. In addition to meeting MONTAGE specifications for sensor thickness, focal plane coding enables a reduction in transverse aperture size, physical-layer compression of multispectral and hyperspectral data cubes, joint optical and electronic optimization for 3D sensing, tracking, feature-specific imaging, and conformal array deployment.

Keywords: Focal plane, multiple aperture, transmission masks, image sampling

1. INTRODUCTION

Focal plane coding, the structured arrangement of pixel sampling geometry, enables non-degenerate sampling in multiple-aperture imaging systems. The COMP-I program uses transmission masks for focal plane coding and has shown that focal plane coding enables compressive image sampling. Core aspects of COMP-I systems include:

1. Focal plane coding is the core COMP-I innovation. The fundamental vision of MONTAGE is to use integrated sensing and processing to break the conventional isomorphism between image pixels and digital samples.
Focal plane coding consists of intelligent sampling and remapping of optical pixels to enable efficient digital reconstruction. COMP-I has explored four approaches to focal plane coding: focal plane pixel masks, birefringent and refractive remapping elements, holographic lenslet arrays, and photonic crystal remapping elements. In first-generation systems, focal plane pixel masks are the preferred implementation strategy.

2. Block-wise and multiscale focal plane codes have been developed by the COMP-I team. Block-wise and wavelet coding form the basis of current image compression schemes. The COMP-I team has developed and simulated codes, based on realistic optical design rules, that improve the SNR of image estimation from generalized sampling by orders of magnitude and that enable data-efficient transverse sampling.

3. Compressive imaging is an extension of generalized sampling to include sampling on image-aware bases. Compressive sampling measures non-local bases using focal plane coding to enable sampling below naive Nyquist limits. The COMP-I team has demonstrated the feasibility of compressive imaging in simulations of focal-plane-coded systems.

4. Image fusion, inversion and registration algorithms are enabled by block-wise and multi-resolution approaches. Because individual sub-apertures or clusters of apertures may be designed to reconstruct the full-resolution optical image, image fusion for higher-level estimation may rely on Bayesian or other nonlinear estimation algorithms applied to the full-resolution image. This approach dramatically relaxes registration criteria.

5. Thin imaging consistent with the MONTAGE specification is enabled by innovations 1-4. COMP-I achieves a 5-10x reduction in system thickness through focal plane codes that enable sub-Nyquist transverse sampling of the optical intensity distribution on the focal plane, and a further 6-9x reduction by joint optimization of the physical design.
Photonic Devices and Algorithms for Computing VII, edited by Khan M. Iftekharuddin, Abdul A. Awwal, Proc. of SPIE Vol. 5907, 590708 (2005), doi: 10.1117/12.613213

Imaging system design begins with the focal plane. Suppose that the focal plane consists of pixels of size δ and that the image on the focal plane has size D, so that the number of pixels is N = D/δ. In a conventional imaging system, a lens of focal length F = D/NA is used to form an image. The diffraction-limited resolution of the field distribution on the focal plane is λ/NA. Typically, the diffraction-limited resolution is much less than δ. The angular field of view is approximately sin θ = D/F = NA. The angular resolution is θ_δ = δ/F due to the focal plane and θ_λ = λ/D due to the diffraction limit. Since F and D are related by the numerical aperture, θ_δ/θ_λ = NA δ/λ. Thus, for a given focal plane, the angular resolution is inversely proportional to F, meaning that thicker systems have better angular resolution. If we could reduce δ by an order of magnitude, we could reduce F by an order of magnitude while still maintaining the angular resolution θ_δ = δ/F. Unfortunately, δ cannot reasonably be reduced in the focal plane itself. It is possible, however, to effectively reduce δ through compressive coding.

Coding consists of nonlocally remapping image fields with wavelength-scale 3D focal plane optics. In a conventional imaging system, the focal plane averages wavelength-scale features within each pixel. The reference structure layer in the proposed system remaps the field in the focal plane such that wavelength-scale features from disjoint pixels are measured by each pixel. A pixel measurement in a conventional system may be modeled as

m = ∫_A I(r) dr

where A is the area of the pixel. With compressive coding, the ith pixel measurement is

m_i = ∫ h_i(r) I(r) dr

where h_i(r) is a non-local map of the focal intensity onto the ith pixel; h_i(r) is a non-convex distribution. Focal plane coding consists of selecting h_i(r). Focal plane coding may be used to improve resolution or to produce thin imaging systems for the visible and infrared spectral ranges.
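The scaling relations above can be checked with representative numbers (illustrative values only, not COMP-I design parameters):

```python
# Illustrative check of the focal-plane scaling relations
# (example numbers, not actual COMP-I design values).
wavelength = 0.5e-6      # 500 nm illumination
delta = 5.0e-6           # pixel size (m)
D = 5.0e-3               # image size on the focal plane (m)
NA = 0.2                 # numerical aperture

N = D / delta            # number of pixels across the array
F = D / NA               # focal length (m)

theta_pixel = delta / F          # focal-plane-limited angular resolution (rad)
theta_diff = wavelength / D      # diffraction-limited angular resolution (rad)

# The ratio reduces to NA * delta / wavelength, independent of D and F;
# for these numbers the pixel limit is twice the diffraction limit.
ratio = theta_pixel / theta_diff
print(N, F, ratio)
```

Reducing δ (or, equivalently, its effective value through coding) lowers θ_δ toward the diffraction limit while permitting a proportionally shorter F.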
2. COMPRESSIVE SAMPLING

We consider a multiple-aperture imaging system in which each aperture includes an imaging lens, a focal plane coding element and an electronic focal plane. Image synthesis from multiple-aperture systems was pioneered in the TOMBO system [1, 2]. The lens-focal plane distance is adjusted such that an array of images is formed on the focal plane. We assume for simplicity that the redundant images are identical, although aperture-to-aperture variations in sampling may be corrected algorithmically and may be deliberately designed into conformal aperture arrays. The focal plane intensity distribution formed by an aperture is

I(r) = ∫ I_o(r') h(r - r') dr',

where I_o(r) is the true intensity distribution of the object in its native space. The focal plane intensity distribution is integrated on the ith pixel of the jth aperture to obtain the measurement value

m_ij = ∫ p_ij(r) I(r) dr = ∫∫ I_o(r') p_ij(r) h(r - r') dr dr'    (1)

where p_ij(r) is the focal plane code for the ith pixel in the jth aperture. If we expand the object intensity on a basis, for example a sinc or wavelet basis, as I_o(r) = Σ_n s_n ψ_n(r), then Eqn. (1) becomes

m = H s    (2)

where m and s are measurement and object state vectors, respectively, and

H_(ij),n = ∫∫ ψ_n(r') p_ij(r) h(r - r') dr dr'    (3)

As a first approximation, one may assume a local object basis such that the coefficients s_n correspond to high-resolution pixel states in the image. If we break the image into sub-blocks, as in the original JPEG standard, we may consider the linear transformation m = H s as implemented on each sub-block. For a 4x4 sub-block, the object state vector s corresponds to optical resolution cells arranged as

s1  s2  s3  s4
s5  s6  s7  s8
s9  s10 s11 s12
s13 s14 s15 s16

Figure 1: Mask patterns for 4x4 Hadamard blocks (spatial code association is not unique).

Figure 2: 8x8 transformation code for the quantized cosine transform:

 1  1  1  1  1  1  1  1
 1  1  1  0  0 -1 -1 -1
 1  0  0 -1 -1  0  0  1
 1  0 -1 -1  1  1  0 -1
 1 -1 -1  1  1 -1 -1  1
 1 -1  0  1 -1  0  1 -1
 0 -1  1  0  0  1 -1  0
 0 -1  1 -1  1 -1  1  0

A mapping is implemented on s by masking the block in each subaperture with a different focal plane code. The entire block is integrated on a single pixel in each subaperture. In the case of 4x4 sub-blocks, H is a 16x16 matrix and m is a 16x1 vector. For example, if H is a Hadamard-S matrix (a 0-1 matrix obtained by shifting the Hadamard matrix elementwise up by 1 and scaling it by 1/2), the masks or codes for the 16 subapertures are as shown in Fig. 1. The figure shows the optical transmission pattern over each square pixel on the focal plane (white = 1, black = 0). Each pixel is segmented into a 4x4 grid of optical resolution elements. In the first subaperture, each electronic pixel integrates all incident optical power according to the code (all 1s) in the upper left corner block. In the second subaperture, the second and fourth columns of the source distribution s are blocked according to the code in the (1, 2) block of the transmission pattern. A complete image is acquired using 16 sub-apertures, each following a specific code as described.

We also consider the case where the elements of H are drawn from the set (-1, 0, 1). In this case, H = H_1 - H_2, where both H_1 and H_2 draw their elements from the binary set (0, 1). Coding schemes based on such matrices can be implemented easily. We illustrate next the compressive design of the transformation matrices and image reconstruction. The noncompressive design may be viewed as an extreme case in which all measurements are used.

Figure 3: Reconstructions with the PHT, DCT and QCT using 4.69%, 15.63% and 32.81% of transformed components/available measurements.

In compressive system design, we use transforms that enable measurement of the principal components of the source image in some representation, together with high-fidelity source estimation by numerical decompression. We introduce a couple of such transforms, which are new to our knowledge. Partition the image source into blocks of, for example, 8x8 pixels. Consider the two-dimensional transformation of each 8x8 block S, C = Q S Q^T, where the transform matrix Q is defined as in Fig. 2. The transform matrix has the following properties. Its elements are from the set (0, 1, -1), implying that the transform can be easily implemented as a mask. The rows of the matrix are orthogonal. The row vectors are quite even in Euclidean length, with a ratio of 2 between the largest and smallest squared length. When the source image is spatially continuous within block S, the transformed block C exhibits the compressible property that its elements decay along the diagonals. We may therefore truncate the elements on the lower anti-diagonals and measure only the remaining elements with fewer sensors. Denote by C' the truncated block matrix. We then obtain an estimate of the source block S from Q^-1 C' Q^-T (decompression). The same transform matrix is used for all blocks of the image.

The above ideas are similar to image compression with the discrete cosine transform, as used in the JPEG protocol. In fact, the specific matrix Q can be obtained by rounding the discrete cosine transform (DCT) of the second kind into the set (0, 1, -1). We therefore refer to Q as the quantized cosine transform (QCT). The structure of the QCT matrix itself, however, can also be used to explain the compression. Simulation results for QCT sampling and reconstruction are provided in Fig. 3. Visually, the effectiveness of compression with the QCT is surprisingly close to that with the DCT. We also use a permuted Hadamard transform (PHT) with row ordering [1, 5, 7, 3, 4, 8, 2, 6]. We skip here the quantitative comparisons among these transforms. Based on the basic 8x8 QCT and PHT matrices, we can also construct larger transform matrices of hierarchical structure for multiple-resolution analysis.
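The QCT construction and block compression described above can be reproduced in a few lines (a sketch of the mathematics, not the COMP-I code): Q is the 8x8 DCT-II basis rounded into {-1, 0, 1}, a block is transformed as C = Q S Q^T, the lower anti-diagonals are truncated, and the block is estimated as Q^-1 C' Q^-T.

```python
import numpy as np

n = 8
k = np.arange(n)[:, None]    # row (frequency) index
x = np.arange(n)[None, :]    # column (sample) index

# Quantized cosine transform: round the DCT-II basis into {-1, 0, 1}.
Q = np.round(np.cos((2 * x + 1) * k * np.pi / (2 * n))).astype(int)

# Rows are pairwise orthogonal, with squared lengths in {4, 6, 8}.
G = Q @ Q.T
assert np.allclose(G, np.diag(np.diag(G)))

# Transform a smooth 8x8 block and truncate the lower anti-diagonals,
# keeping only the 36 coefficients with row + column < 8.
u = np.linspace(0.0, 1.0, n)
S = np.outer(u, u) + 0.5
C = Q @ S @ Q.T
C_trunc = np.where(k + x < n, C, 0.0)

# Decompression from the truncated coefficients.
Qi = np.linalg.inv(Q)
S_hat = Qi @ C_trunc @ Qi.T
err = np.abs(S_hat - S).max()    # small for spatially continuous blocks
```

For this smooth test block the maximum reconstruction error stays below 0.1 even though only 36 of the 64 coefficients are retained, illustrating the diagonal decay that the truncation exploits.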
3. FOCAL PLANE CODING MASKS

We have previously considered transmission masks in coherent imaging and interferometry as field sampling elements displaced above the focal plane [3]. In the present context we consider sampling masks directly in contact with the focal plane. As a first step toward demonstrating focal plane coding with transmission masks, we have experimentally shown that sub-pixel apertures can create sub-pixel responses on a CCD sensor. We have also used these masks to characterize the point spread function of the imaging system with a sub-pixel aperture scanning technique.

Most imaging systems are linear. In a linear imaging system, the image i(x, y) and the object s(x', y', λ) are related as

i(x, y) = ∫ dλ ∫∫ s(x', y', λ) h(x, y; x', y'; λ) dx' dy',    (4)

where h(x, y; x', y'; λ) is the impulse response function, also called the point spread function (PSF), at wavelength λ. In a digital electronic imaging system, the image is sampled by the photodetector array. The signal from pixel (m, n) is

i(m, n) = ∫∫ i(x, y) p(x - ma, y - na) dx dy,    (5)

where p(x, y) is the pixel response function

p(x, y) = 1 for |x| ≤ a/2 and |y| ≤ a/2, and 0 otherwise.    (6)

Rewriting Eqn. (5), the signal from the (m, n) pixel is

i(m, n) = ∫∫ p(x - ma, y - na) ∫ dλ ∫∫ s(x', y', λ) h(x, y; x', y'; λ) dx' dy' dx dy.    (7)

The pixel size of the CCD is 5.6 micron square, with an individual micro-lens on top of each pixel. The CCD has a total of 650 x 490 pixels. The sub-pixel mask is a 120 nm chrome mask on a glass substrate. One mask pattern

is shown in Fig. 4. There are four sub-pixel line apertures of one, two, three, and four micron width in the mask pattern. The pitch (center-to-center distance) of the sub-pixel apertures on the mask matches the pitch of the CCD pixels, and we align the sub-pixel apertures to the pixels of the CCD. To show that this sub-pixel mask can create a localized sub-pixel response, we used a high-NA (NA = 0.5) objective lens to focus light from a far-field point source onto the mask. The far-field point source was created by focusing a HeNe laser (632.8 nm wavelength) onto a 15 micron pinhole. The polarization of the light is parallel to the line apertures (vertical in Fig. 4). Theoretically, the Airy radius of the focused spot is 0.78 micron.

Fig. 4. The sub-pixel mask pattern, with line apertures of 4, 3, 2 and 1 micron width.

In the experiment, the mask-coded CCD was mounted on a high-resolution translation stage with a movement resolution of about 50 nm. We translated the CCD to scan the focused spot across the sub-pixel apertures in increments of 0.2 micron. Figure 5 shows the signals from the four mask-coded pixels versus the scanning distance, with the pixels identified by the size of their line apertures. Figure 6 plots the signals of the four pixels versus scanning distance. The signals from the pixels with three and four micron apertures have flat tops, indicating that the spot size is smaller than those apertures; the pixels with three and four micron apertures therefore capture all the light. When the focused spot (PSF) overlapped the two micron and one micron apertures, only part of the light was transmitted.

Fig. 5. Signals from coded aperture pixels versus the scanning distance.
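The flat-top behavior in Figs. 5-6 can be reproduced with a one-dimensional sketch (our illustration: the focused Airy spot is approximated by a Gaussian of comparable ~0.8 micron width, and diffraction at the slit edges is ignored):

```python
import numpy as np

sigma = 0.4e-6                     # Gaussian stand-in for the ~0.78 um Airy radius
dx = 1e-8                          # 10 nm integration grid
x = np.arange(-10e-6, 10e-6, dx)
shifts = np.arange(-5e-6, 5.01e-6, 0.2e-6)   # 0.2 um scan steps, as in the experiment
widths = [1e-6, 2e-6, 3e-6, 4e-6]  # slit aperture widths

signals = {}
for w in widths:
    slit = np.abs(x) <= w / 2      # binary slit transmission
    signals[w] = np.array([
        (np.exp(-(x - s) ** 2 / (2 * sigma ** 2)) * slit).sum() * dx
        for s in shifts
    ])

# Wide slits transmit the whole spot over a range of shifts (flat top);
# the 1 um slit always clips the spot and peaks lower.
flat_top_points = (signals[4e-6] > 0.99 * signals[4e-6].max()).sum()
```

The three and four micron responses plateau once the slit fully contains the spot, matching the flat tops reported above, while the narrower slits produce peaked, lower-amplitude traces.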

Fig. 6. Signals from the four coded aperture pixels (one to four micron apertures), intensity (a.u.) versus shift distance (micron).

In summary, we have shown that the sub-pixel mask can create a localized response in each individual pixel of the CCD camera.

ACKNOWLEDGEMENT

This work was supported by DARPA's MONTAGE program, contract N01-AA-23103.

REFERENCES

1. Tanida, J., et al., "Color imaging with an integrated compound imaging system," Optics Express 11(18), 2109-2117 (2003).
2. Tanida, J., et al., "Thin observation module by bound optics (TOMBO): concept and experimental verification," Applied Optics 40(11), 1806-1813 (2001).
3. Tumbar, R. and D. J. Brady, "Sampling field sensor with anisotropic fan-out," Applied Optics 41(31), 6621-6636 (2002).