Digital Photographic Imaging Using MOEMS


Vasileios T. Nasis (a), R. Andrew Hicks (b), and Timothy P. Kurzweg (a)
(a) Department of Electrical and Computer Engineering, Drexel University, Philadelphia, USA
(b) Department of Mathematics, Drexel University, Philadelphia, USA

ABSTRACT

In previous work, we proposed a method for imaging using micro-opto-electro-mechanical (MOEM) mirrors. Our solution was to introduce a 2-D micro-mirror array into the imaging-sensor optics. This device provides high-resolution images with a wide field of view. In this paper, we provide further simulations that validate the functionality of our system design. In addition, we present our first system prototype, which can produce an image with higher resolution and a wider field of view than the image sensor employed in the system.

Keywords: MEMS, MOEMS, Micro-Mirror Array, Digital Imaging

1. INTRODUCTION

Unlike traditional cameras that use film to capture and store images, digital cameras use electronic solid-state devices called image sensors (e.g., CCD, CMOS) [1, 2]. These image sensors are pixelated metal-oxide semiconductors that contain a large number of photosensitive diodes that capture light intensity when the shutter opens. The design goal of these systems is to maximize the resolution of the captured image. This can be achieved by reducing the size of the photo-sensors in order to fit more of them into the same space. A smaller photo-sensor, however, covers less area and therefore captures fewer photons. This is a fundamental optics problem that all image sensors incur, and it is the bottleneck for creating ultra-high-resolution image sensors.

Another approach to creating images of higher resolution than the native resolution of the employed camera is image mosaicing. The simplest mosaics are created by panning the camera around its optical center, in which case the panoramic image can be created on a cylindrical or a spherical manifold. The original images, which are formed by perspective projection onto a plane, are warped so that they are perspectively projected onto an appropriate cylinder, where they can be combined into a full 360-degree panorama. In [3], such a system is described that is able to create spherical mosaics using a zoom lens and a pan-tilt mechanism mounted on a robot. The biggest disadvantage of such a method is that image acquisition is slow, since the camera must be repositioned many times to capture a complete image. More importantly, this technique does not increase the resolution of the image or the field of view.

Our solution to these limitations is to introduce a micro-mirror array into the optics of the image sensor. Conventional cameras image the object that is directly in the field of view of the image sensor; in our case, the camera images the object indirectly through a micro-mirror array. This system can image a given surface much faster and, with a proper reconstruction algorithm, produce a higher-resolution image than any of the other methods. In this paper, we present a description of our approach to digital imaging using micro-mirrors, and we provide supporting simulations of our model as well as the first proof-of-concept image that we have produced with our experimental prototype.

Further author information: (Send correspondence to R.A.H.)
R.A.H.: E-mail: ahicks@math.drexel.edu, Telephone: +1 215 895 2681
T.P.K.: E-mail: kurzweg@ece.drexel.edu, Telephone: +1 215 895 0549

Figure 1. A correspondence between the pixels in an image and a collection of points on a surface can be achieved by using a single small mirror for each pixel.

Figure 2. A schematic diagram of our concept of image acquisition.

2. THEORETICAL APPROACH

Assume a simple camera model in which each individual sensing element of a CCD chip corresponds to a single light ray, e.g., a pinhole camera model. The goal is to image a given surface. A correspondence between the surface and the CCD chip is given, determining the geometric distortion of the image. Different choices of this correspondence could mimic the geometric distortion of real camera lenses. For example, the correspondence could be the one induced by a fisheye lens.

To realize a given correspondence, one can imagine placing a small flat mirror along a given ray and tilting it appropriately to map the ray from a point on the surface to the corresponding CCD element, as in Figure 1. It is clear that the depth along the ray at which the mirror is placed is not important; that is, from any point on the ray, some appropriate tilt can be chosen to achieve the desired correspondence. Thus, the entire correspondence could be achieved by a family of small flat mirrors, one for each ray. Since the depths are arbitrary, we may choose the center of each small mirror to lie in a fixed plane, like the mirrors of a micro-mirror array. Therefore, by properly orienting each individual mirror of a micro-mirror array, it should be possible to realize any correspondence.

An important implication of this fact is that, because micro-mirror arrays can change state very rapidly, one can use a video camera to rapidly record the array in many different states, i.e., one state for each frame. A sensor configured with a micro-mirror array could thus image a different portion of space in each frame (see Figure 2). The sensor can progressively scan through space, taking images at video frame rate. The color values for each individual micro-mirror can then be extracted from each frame and assembled into a single image offline. The resolution of an image acquired in this manner is limited by the number of mirrors that are observable times the number of distinct states achievable by an individual mirror.
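As an illustration of this construction, the following Python sketch computes the mirror orientation needed to redirect the ray from a given CCD element toward a chosen surface point, together with the resolution bound stated above. It is not part of the paper's own tooling (which used POV-Ray and Matlab), and the coordinates and array dimensions are hypothetical placeholders.

```python
import numpy as np

def mirror_normal(mirror_center, ccd_point, surface_point):
    """Unit normal of a flat mirror at mirror_center that reflects the ray
    arriving from ccd_point into the ray leaving toward surface_point."""
    d_in = np.asarray(mirror_center, float) - np.asarray(ccd_point, float)
    d_out = np.asarray(surface_point, float) - np.asarray(mirror_center, float)
    d_in /= np.linalg.norm(d_in)
    d_out /= np.linalg.norm(d_out)
    n = d_in - d_out                  # bisector direction: satisfies d_out = d_in - 2(d_in.n)n
    return n / np.linalg.norm(n)

# Resolution bound: observable mirrors times distinct states per mirror.
n_mirrors = 16 * 16                   # e.g. the 256-mirror array used in the simulations
n_states = 100 * 100                  # 100 assumed repeatable states per tilt axis
print("maximum pixels:", n_mirrors * n_states)   # 2,560,000 = 2.56 Mpixels

# Example: a mirror at the origin mapping a hypothetical CCD element to a surface point.
n = mirror_normal([0.0, 0.0, 0.0], [0.0, -1.0, -1.0], [1.0, 0.0, -1.0])
print("mirror normal:", n)
```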

Figure 3. Simulation of the system setup. In the top left corner of the image is a camera snapshot of the mirrors.

3. SIMULATED RESULTS

In [4], we presented simulated results supporting the validity of our technique. It was shown that the images captured with the proposed method had a resolution much higher than the number of micro-mirrors used for imaging. All the simulations were performed in the freeware tool POV-Ray, which was interfaced to Matlab for the pixel-positioning computation and the image reconstruction.

In this paper, we present new simulation results based on a micro-mirror array mimicking Lucent's Wavestar LambdaRouter. This particular array contains a total of 256 micro-mirrors arranged in a square matrix, where each mirror has a diameter of 650 µm and can tilt in two degrees of freedom at angles of approximately ±8°. In addition, we assume that each mirror has 100 controllable and repeatable states in both the x and y directions, although, in reality, the number of states can be much greater.

In this simulation, we created an object in POV-Ray consisting of a red and blue square checkerboard, on top of which a few objects were randomly placed. Above this checkerboard plane, we placed a model of the LambdaRouter micro-mirror array at an angle of 45° with respect to the normal of the object plane. On the same plane as the mirror array is a camera, which is focused on the mirrors. The system, including the objects, the mirror array, and the camera, can be seen in Figure 3.

As the mirror array passes through each of its possible states, the camera captures a snapshot of the mirror array. For every snapshot, the center pixel is extracted from each mirror in the array. Given the size of the array, we extract a total of 256 pixels from every snapshot. These pixels are used to create the reconstructed image by determining the correspondence to the original object image. Since the mirrors tilt in two directions, each with 100 states per axis of rotation, a total of 10,000 states is achieved. If we extract 256 pixels from each snapshot (as seen in Figure 3), the resulting image will have a total of 2.56 Mpixels, as seen in Figure 4. In other words, with a camera of only 256 pixels we were able to capture an image of 2.56 Mpixels.

Looking at the distribution of the pixels in the reconstructed image, as seen in Figure 5, we see that the density of the pixels toward the middle of the image plane is much greater than toward the edges. This is explained geometrically: as a mirror tilts by an angle θ, the displacement of the corresponding point from the middle of the image plane grows proportionally to tan(θ). It is within our immediate plans to develop an algorithm, or optical setup, that will allow an equal distribution of the pixels in the image plane.
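The reconstruction loop described above can be summarized by the following Python sketch. The paper's actual pipeline was implemented in Matlab driven by POV-Ray; the snapshot source, mirror-center coordinates, and target-pixel mapping here are hypothetical placeholders standing in for that pipeline.

```python
import numpy as np

N_MIRRORS = 256            # 16 x 16 LambdaRouter-style array
N_STATES = 100             # assumed repeatable states per tilt axis

def reconstruct(get_snapshot, mirror_centers, target_pixel):
    """Assemble a 2.56-Mpixel image from a 256-pixel-per-frame scan.

    get_snapshot(i, j)    -> HxWx3 frame with the array in tilt state (i, j)
    mirror_centers        -> 256 (row, col) image coordinates, one per mirror
    target_pixel(m, i, j) -> (row, col) in the reconstructed image for mirror m
                             in state (i, j), given by the chosen correspondence
    """
    recon = np.zeros((N_STATES * 16, N_STATES * 16, 3), dtype=np.uint8)
    for i in range(N_STATES):                 # tilt states about the x axis
        for j in range(N_STATES):             # tilt states about the y axis
            frame = get_snapshot(i, j)
            for m, (r, c) in enumerate(mirror_centers):
                recon[target_pixel(m, i, j)] = frame[r, c]   # center pixel of mirror m
    return recon                              # 1600 x 1600 = 2.56 Mpixels
```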

(a) Original Image. (b) Reconstructed Image.
Figure 4. The original image is 360 kpixels and the reconstructed one is 2.5 Mpixels after being captured with our MOEMS imaging method.

Figure 5. Pixel distribution for the states that the 256 mirrors have taken. The pixels are not equally distributed due to the nature of the mirrors' rotation.

Figure 6. The experimental setup of our system design. The camera, the mirror, and the object plane form a right angle. The mirror sits on a rotational stage to scan the image.

4. EXPERIMENTAL RESULTS

To this point, we have demonstrated our system through simulations. In this section, we extend the validity of our system design by building the first prototype in the lab and capturing the first wide-view, high-resolution image. Lucent Technologies has generously provided us with the micro-mirror chip of the LambdaRouter [5]. Even though in theory we should be able to use the chip in our system design, it is challenging to use without modifications, as discussed later in this paper. Therefore, our first prototype uses a single-mirror system to prove the validity of our technique. Our main goal in this experiment is to demonstrate that it is possible to create a wide-field, high-resolution image using micro-mirrors.

In Figure 6, we show the prototype system that we designed for our proof-of-concept experiments. The experimental setup is similar to the one used for the simulations. Our image sensor is a Sony DFW-V300 CCD camera with a 75 mm focusable Double Gauss Macro Imaging Lens from Edmund Optics (Part No. 54691), which has a narrow field of view. The camera was connected to a PC running Matlab and the Image Acquisition Toolbox in order to capture snapshots and import them into the Matlab environment for processing. The scanning micro-mirror was purchased from Fraunhofer IPMS and has a diameter of 1 mm. Since the micro-mirror is a scanning mirror, there is no control to stop the mirror plate at designated angles. Therefore, the mirror was placed on a rotational stage and rotated manually while snapshots were captured with the camera. Across from the mirror, at an angle of 45° with respect to the mirror normal, was the object plane, which was a white sheet of paper with the typed word "Drexel".

In Figure 7, a snapshot of the camera image as it is focused on the object plane through the micro-mirror is shown. In this case, we collect a large number of pixels from each snapshot. After we collect the appropriate pixels from each snapshot, we arrange them appropriately on the image plane. At that point, we have reconstructed an image of the object plane that has higher resolution and a wider field of view than a single snapshot through the micro-mirror. The reconstructed image appears in Figure 8.
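The single-mirror reconstruction can be sketched as follows. The paper does not give the exact placement rule, so this Python example assumes a simple horizontal mosaic in which the patch cropped from each snapshot is placed at an offset proportional to the tangent of the stage angle, in line with the tan(θ) relation from Section 3; the crop window, angles, and scale factor are hypothetical.

```python
import numpy as np

def stitch_scan(snapshots, angles_deg, crop, px_per_unit):
    """Arrange the mirror-viewed patch from each snapshot onto one wide image.

    snapshots   -> list of HxWx3 frames, one per rotational-stage position
    angles_deg  -> stage angle for each snapshot (hypothetical values)
    crop        -> (r0, r1, c0, c1) window covering the mirror in every frame
    px_per_unit -> scale from tan(angle) to horizontal pixels in the mosaic
    """
    r0, r1, c0, c1 = crop
    h, w = r1 - r0, c1 - c0
    offsets = [int(round(np.tan(np.radians(a)) * px_per_unit)) for a in angles_deg]
    shift = -min(offsets)                        # make all offsets non-negative
    mosaic = np.zeros((h, max(offsets) + shift + w, 3), dtype=np.uint8)
    for frame, off in zip(snapshots, offsets):
        x = off + shift
        mosaic[:, x:x + w] = frame[r0:r1, c0:c1]  # paste the mirror patch
    return mosaic
```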

Figure 7. (a) The Fraunhofer mirror. (b) The Fraunhofer mirror focused on the object plane.

Figure 8. After capturing many snapshots, as shown in Figure 7(b), we collect the appropriate pixels and reconstruct the image. The final image has much higher resolution and a wider field of view than any single snapshot that can be captured with the image sensor.

5. FUTURE WORK

In order to create a high-resolution image with a wide field of view, we use a 2-D micro-mirror array. Such a mirror array has been provided by Lucent Technologies. Even though its specifications fit our needs, this device was built for optical switching and routing of network traffic [5]. The micro-mirror array is packaged with an overglass that protects it from damage, corrosion, and stray light reflection. Since the chip was designed for telecommunication applications, the antireflection coating on the overglass was designed to operate for wavelengths in the range of 1300-1550 nm. This coating causes high reflections in the visible range, as can be seen in Figure 9. As a result, we are unable to focus on the object plane through the mirror array, as the reflection is dominant and suppresses the light coming off the mirrors. We are currently working on resolving this issue by applying a new index-matching coating in order to eliminate reflection from the overglass in the visible range.

In addition to the reflection problem, we are also working on characterizing the mirror array. To estimate the correspondence between the mirrors, the object plane, and the image plane, accurate tilting of the mirrors is necessary. We have begun characterizing the mirrors by reflecting a laser beam off them and calculating the voltage-versus-angle correspondence (see Figure 10).

In the simulations, we saw that the distribution of the pixels over the image plane is not even. That is due to the geometric nature of the design. We are currently investigating new models, both in software and as physical optical solutions, that will ultimately allow an equal distribution of pixels over the entire image plane.

In the course of this research, we have seen that many factors need to be taken into account in order to perform successful imaging with MOEMS. Such factors include the choice of the CCD, lenses, illumination, and reflection. As we proceed with this research, we will evaluate all these factors and study their effects on the results.
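The voltage-versus-angle characterization reduces to a small calculation: if the reflected laser spot moves a distance x on a screen at distance L from the mirror, the optical deflection is twice the mechanical tilt, so the tilt is 0.5*atan(x/L). A Python sketch under that assumption follows; the drive voltages, screen distance, and spot displacements are hypothetical values, not measured data.

```python
import numpy as np

def mechanical_tilt_deg(spot_displacement, screen_distance):
    """Mirror tilt inferred from the laser-spot displacement on a screen.

    The reflected beam deflects by twice the mechanical tilt,
    so theta_mech = 0.5 * atan(x / L).
    """
    return 0.5 * np.degrees(np.arctan2(spot_displacement, screen_distance))

# Hypothetical sweep: drive voltage vs. spot displacement (mm) at L = 500 mm.
voltages = np.array([0, 20, 40, 60, 80, 100])
spots_mm = np.array([0.0, 12.0, 25.0, 39.0, 55.0, 72.0])
for v, x in zip(voltages, spots_mm):
    print(f"{v:5.1f} V -> {mechanical_tilt_deg(x, 500.0):5.2f} deg mechanical tilt")
```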

Figure 9. Lucent micro-mirror array chip from the LambdaRouter. The reflection caused by the overglass is easily identified in the photo.

(a) Micro-mirror behavior over a large range of voltages. (b) Setup of the micro-mirror characterization process.
Figure 10. LambdaRouter characterization.

6. CONCLUSION

Our imaging work using MOEMS is in its beginning stages. We have presented simulation examples showing high-resolution imaging with micro-mirrors, as well as our first experimental prototype. We are currently working on the open issues described in the previous section, and we expect to develop a new experimental prototype, based on Lucent's LambdaRouter, that will allow us to perform higher-resolution imaging in the near future.

7. ACKNOWLEDGMENTS

This work is supported by the National Science Foundation (NSF) under grant ISS-0413012 from the Computer Vision Division. In addition, Lucent Technologies has provided us with the Wavestar LambdaRouter micro-mirror array system and high-speed, high-voltage drivers.

REFERENCES

1. E. Fossum, "CMOS image sensors: electronic camera-on-a-chip," IEEE Transactions on Electron Devices 44, pp. 1689-1698, 1997.
2. A. Theuwissen, "CCD or CMOS image sensors for consumer digital still photography?," International Symposium on VLSI Technology, Systems, and Applications, pp. 168-171, 2001.
3. A. Kropp, N. Master, and S. Teller, "Acquiring and rendering high-resolution spherical mosaics," in IEEE Workshop on Omnidirectional Vision, pp. 47-53, 2000.
4. R. A. Hicks, V. T. Nasis, and T. P. Kurzweg, "Micromirror array theory for imaging sensors," in MOEMS Display and Imaging Systems III, H. Urey and D. L. Dickensheets, eds., Proc. SPIE 5721, 2005.
5. D. J. Bishop, C. R. Giles, and G. P. Austin, "The Lucent LambdaRouter: MEMS technology of the future here today," IEEE Communications Magazine 40, pp. 75-79, 2002.