Fourier Optics v2.4

Ray tracing is limited in its ability to describe optics because it ignores the wave properties of light. Diffraction is needed to explain image spatial resolution and contrast, and it is the foundation of spatial filtering and image processing. It can be conveniently modeled using a two-dimensional Fourier transform, in which the orthogonal spatial coordinates x and y map into spatial frequencies kx and ky. The coordinate z refers to the light propagation axis. At sufficiently large z, the diffraction far field is reached. In the Fraunhofer approximation, the far-field image is well described by the 2D Fourier transform of the object. The far field also exists at the focus of a positive lens, where it can be conveniently measured and manipulated. Excellent references on Fourier optics are the texts of Hecht (Chapter 11), Saleh and Teich (Chapter 4), and the classic textbook by Goodman, including Chapter 5 on image processing. The Wikipedia page for Fourier optics is http://en.wikipedia.org/wiki/fourier_optics. Be aware that there is no universal notation for the various quantities.

Background

Consider a 2D object produced by light illuminating a transmitting film. Define the electric field transmitted through the film as f(x,y); this function contains amplitude and phase information. The Fourier transform F of this field is also two-dimensional, rendered in reciprocal units known as spatial frequencies (kx, ky):

$$F(k_x,k_y) = \iint f(x,y)\, e^{-i(k_x x + k_y y)}\, dx\, dy \qquad (1)$$

Unlike the familiar Fourier transform of a single variable, this is a double integral over the 2D spatial coordinates x and y. An equivalent 2D Fourier transform exists in polar coordinates; the polar form is useful when there is circular symmetry.

Geometric optical imaging by a single lens of focal length f (Fig. 1) is described by:

$$\frac{1}{P} + \frac{1}{S} = \frac{1}{f} \qquad (2)$$

where the object distance P and the image distance S are both positive. The magnification that occurs in the image plane is:

$$M = -\frac{S}{P} \qquad (3)$$

where the minus sign indicates the image is inverted.

Fig. 1 Imaging with a single lens

The spatial Fourier transform of an object occurs in the far field. This is attained in the Fraunhofer limit at large z and also at the focus of a lens. The electric field g(x,y) at the focal plane of a lens of focal length f, assuming monochromatic light of wavelength λ and electric field amplitude E0, is:

$$g(x,y) = \frac{E_0}{i\lambda f}\, \exp\!\left[\frac{ik}{2f}\left(1 - \frac{P}{f}\right)\left(x^2 + y^2\right)\right] F\!\left(k_x = \frac{2\pi x}{\lambda f},\, k_y = \frac{2\pi y}{\lambda f}\right) \qquad (4)$$

where k = 2π/λ. Note that the coordinates x and y are now in the focal plane. In the special case where the object distance P = f, the phase curvature disappears, leaving an exact Fourier transform of the object electric field at the focus. This can be implemented using the two-lens setup shown in Fig. 2.

Fig. 2 Setup for studying image processing using Fourier techniques
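The content of Eqs. (1) and (4) can be previewed numerically: the intensity in the focal plane is the squared modulus of the 2D Fourier transform of the aperture field. A minimal NumPy sketch — the grid, aperture dimensions, wavelength, and focal length below are illustrative values, not those of the lab setup:

```python
import numpy as np

# Sample the object field f(x, y): a uniformly illuminated rectangular aperture.
N = 512                      # grid points per axis (illustrative)
L = 10e-3                    # physical grid width, m (illustrative)
x = (np.arange(N) - N // 2) * (L / N)
X, Y = np.meshgrid(x, x)
a, b = 1e-3, 2e-3            # aperture widths, m (illustrative)
f_xy = ((np.abs(X) < a / 2) & (np.abs(Y) < b / 2)).astype(float)

# Discrete version of Eq. (1): F(kx, ky), centered with fftshift.
F = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(f_xy)))
intensity = np.abs(F) ** 2   # far-field (Fraunhofer) intensity pattern

# Per Eq. (4), FFT frequencies map to focal-plane coordinates x' = lam*f*kx/(2*pi).
lam, focal = 633e-9, 0.2     # HeNe wavelength, m; focal length, m (illustrative)
kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=L / N))
x_focal = lam * focal * kx / (2 * np.pi)

# The pattern is a 2D sinc^2; the brightest point sits at the optical axis.
print("peak at center:", intensity.argmax() == (N // 2) * N + N // 2)
```

The widest sinc lobes appear along the narrow aperture dimension — the same reciprocal relationship you will observe on the CCD.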

Referring to Fig. 2, an object is located at a distance P = f in front of the first lens. This places the image at S = ∞; the Fourier transform of the object occurs in the plane at a distance f after the lens. To take the inverse Fourier transform, a second lens of focal length f is placed as shown. Analysis using Eqs. (2) and (3) shows that the image (i.e., the inverse Fourier transform) will be located at a distance S = f after the second lens with a magnification M = −1. If there is no filter present in the Fourier plane, the image will be the same size as the object, but inverted.

Setup

You will need to build the setup shown in Fig. 2, illustrated in more detail in Fig. 3. The two lenses, object holder, and filter holder are set in sliding stages on the long optical rail, which allows their relative positions to be easily adjusted. Begin by aligning the HeNe laser beam parallel to the rail. Insert a polarizer (not shown) between the two turning mirrors; the polarizer controls the laser power. Place an adjustable iris on a movable stage and use the two mirrors to precisely position the beam in the center of the iris when it is at the extreme ends of the rail. Mark the position of the beam on the wall, as it will be used to help align other components. You should also keep the beam-alignment iris in its stage.

Fig. 3 Fourier imaging with spatial filter and collimating telescope

The laser beam should be filtered to remove high spatial frequencies that can distort the images. This is accomplished with a 25–50 µm diameter pinhole located at the focus of a microscope objective. Recall that high spatial frequencies are diffracted far from the optical axis; a pinhole acts as a low-pass filter to remove them and thereby clean up the beam. The very short focal length provided by a microscope objective makes it very effective for low-pass filtering. Why is this?

Position a beam locator slightly past the location of the microscope objective; this can be a simple screen or an iris that will be used to center the objective on the beam axis. Insert the objective into the beam path and center the rapidly diverging laser beam on your reference point. If the spot on the reference mark is too big, reduce the distance to compensate. Insert the pinhole on the x-y-z stage; it is held in place by small permanent magnets. The focal distance is very short, so the pinhole holder may need to be repositioned to get sufficient clearance. You will need to adjust its x, y, and z positions to maximize the optical power transmitted through the pinhole; a power meter may be helpful in this process. You are trying to achieve a uniform beam profile at the output.

Because the beam is rapidly diverging, it must be collected and collimated with a positive lens. Adjust the horizontal and vertical position of this lens to center it on the reference mark you made on the wall. Change the separation between it and the pinhole to collimate the beam. The combination of the microscope objective and collimating lens forms a beam-magnifying telescope.

Place the object mount on the near end of the rail, closest to the collimating lens. Imaging is done with two high-quality (and expensive) lenses manufactured by SORL. Use the expanded beam to measure their focal lengths (f) and then build the setup shown in Fig. 2. The iris that was used to align the laser along the rail can help to accurately center Lens 1 and Lens 2 and to measure their common focal length. You will need stage mounts with both horizontal and vertical adjustment. A CCD camera displays the image as real-time video.
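The conclusion from the Fig. 2 analysis — with no filter in the Fourier plane, the two-lens system returns the object at unit magnification but inverted — can be checked numerically, since transforming twice gives F{F{f}}(x, y) ∝ f(−x, −y). A short sketch with a synthetic object (grid size is illustrative):

```python
import numpy as np

# Synthetic asymmetric object so the inversion is visible.
N = 256
obj = np.zeros((N, N))
obj[20:60, 30:90] = 1.0      # off-center rectangle

# Lens 1 produces the Fourier transform in its focal plane (Eq. 4 with P = f);
# Lens 2 transforms again. Two forward transforms give a coordinate-inverted copy.
ft_once = np.fft.fft2(obj)
ft_twice = np.fft.fft2(ft_once) / obj.size   # normalize the double transform
image = np.real(ft_twice)

# f(-x, -y) on a periodic grid: flip both axes about the origin sample.
inverted = np.roll(obj[::-1, ::-1], (1, 1), axis=(0, 1))
print("image is the inverted object:", np.allclose(image, inverted))
```

The division by `obj.size` plays the role of the normalization absorbed by the optics; the image is the object rotated 180° about the axis, exactly the M = −1 result.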
Images are much larger than the camera active area, however, and should be demagnified (|M| < 1). Using Eqs. (2) and (3), design an imaging system for the camera. This can be done with a single positive lens of much shorter focal length than Lenses 1 and 2. It is OK if the camera is located off the rail; you should not need to demagnify the image by more than 2x. Choose a simple object to fine-tune the alignment of the imaging system; a good choice is periodic vertical lines (a binary amplitude grating) or a round hole or stop. To view video on the computer, you must set up and run the LabView program described in the Appendix.

1. Abbe-Porter experiment

Insert an object with orthogonal vertical and horizontal lines. This is available as a single slide or can be made by overlapping two binary grating slides. Be sure that the spatial period is very small; this will make filtering easier. Why is this? Record the image as a .png file. Using LabView, design a spatial filter that leaves only vertical lines in the image. Record the Fourier transform as calculated by the software and save the filtered image, again as two .png files. Using this design, perform spatial filtering in the Fourier plane, save the image, and compare it to the image produced by the program. Describe any differences and their causes. Repeat this procedure to leave only horizontal lines in the image.

2. Low- and high-pass spatial filters

Select an object that displays a variety of geometric shapes containing periodic lines (gratings). Record the image, then set up a low-pass filter in software that removes the grating structure from the shapes; save it. Implement the same low-pass filtering in the Fourier plane by placing an adjustable iris at the appropriate position. Repeat this for a high-pass filter; the grating features should remain, but the boundaries defining the shapes should blur. First do this with LabView, then in the Fourier plane. Describe any difficulties you encounter.

3. Image correction

Select an object that displays periodic vertical lines running through a simple scene. Record the image, then use an appropriate spatial filter in software to remove these lines. There will likely be some distortion; why is this? Try to do the same with a spatial filter in the Fourier plane and save the filtered image.

4. Convert a binary grating to a sinusoidal grating

Select an object with closely spaced vertical lines (a square amplitude grating). Design a spatial filter that produces a sinusoidal grating image. Implement this in software and again with a filter in the Fourier plane. Save all the relevant images as .png files. Use the LabView static image program (Appendix I) to show the rectangular modulation of the unfiltered image and the sinusoidal amplitude modulation of the filtered image. This is done by taking a line cross section of the saved images.
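Parts 1–4 all follow one computational pattern: Fourier transform, mask in the Fourier plane, inverse transform. A sketch of the part-1 filter that keeps only vertical lines, using a synthetic crossed grating (the grid size and grating period are illustrative):

```python
import numpy as np

# Synthetic crossed binary grating: vertical lines (varying along x)
# multiplied by horizontal lines (varying along y). Period is illustrative.
N = 256
idx = np.arange(N)
X, Y = np.meshgrid(idx, idx)              # X varies along columns, Y along rows
crossed = ((X // 8) % 2) * ((Y // 8) % 2) # overlap of two binary gratings

# Fourier plane: diffraction orders of the vertical lines lie on the ky = 0 axis.
F = np.fft.fftshift(np.fft.fft2(crossed))
mask = np.zeros((N, N))
mask[N // 2, :] = 1.0                     # pass only the ky = 0 row
filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# The filtered image varies only along x: every row is (nearly) identical.
row_spread = np.ptp(filtered, axis=0).max()
print("rows identical after filtering:", bool(row_spread < 1e-9))
```

Swapping the mask to the kx = 0 column (`mask[:, N // 2] = 1.0`) keeps only the horizontal lines instead, mirroring the second half of part 1.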
Design a spatial filter to produce a spatial frequency that is 2x larger than that of the first grating (i.e., a spatial period that is 2x smaller). Determine the spatial period (pixels/cycle) of the grating images.

5. Modulation transfer function

Place a straight beam stop (e.g., a razor blade) in the object plane to block half the light (Fig. 4a). In the absence of diffraction, the amplitude cross section in the image plane would look like the black curve in Fig. 4b. A real imaging system, however, displays a finite-slope edge response function (ERF), illustrated by the red curve in Fig. 4b. Capture an image frame from the video program and then use the line profile feature of FFTimage13.vi to record its ERF to a data file. Since there will likely be spatial noise in the bright portion of the image, it can be smoothed using the area profile tab of the program; this function essentially averages adjacent line responses. Calibrate the imaging system by recording an object of known size, such as a clear aperture or stop with an independently measured diameter. Convert pixels to length using the line profile function. Perform an analysis of the data as described below.
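The analysis chain for this part — differentiate the ERF to get the LSF, then Fourier transform the LSF to get the MTF — can be prototyped on a synthetic blurred edge before applying it to recorded data (the blur width below is illustrative):

```python
import numpy as np

# Synthetic edge response function (ERF): ideal step blurred by a Gaussian.
N, sigma = 512, 4.0                       # pixels; blur width is illustrative
x = np.arange(N)
lsf_true = np.exp(-0.5 * ((x - N / 2) / sigma) ** 2)
lsf_true /= lsf_true.sum()
erf_data = np.cumsum(lsf_true)            # ERF = integral of the LSF

# Step 1: the LSF is the derivative of the ERF.
lsf = np.gradient(erf_data)

# Step 2: the MTF is the magnitude of the Fourier transform of the LSF,
# normalized to 1 at zero spatial frequency.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(N)                # cycles/pixel; calibrate to cycles/mm

# A Gaussian LSF gives a Gaussian MTF: monotone roll-off from 1 toward 0.
print("MTF falls off:", bool(mtf[0] > mtf[len(mtf) // 4] > mtf[len(mtf) // 2]))
```

With real data, `erf_data` would come from the saved line-profile file, and the length calibration converts `freqs` from cycles/pixel to cycles per unit length.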

Fig. 4 Object mask (a) used to produce the edge response function shown by the red curve in (b)

The derivative of the ERF as a function of the pixel position coordinate is called the line spread function (LSF). For the step function shown as the black curve in Fig. 4b, a delta-function LSF is produced. The LSF of the ERF shown in red is a curve with nonzero width, i.e., the delta function gets broadened. The LSF gives a direct measure of the spatial resolution of your imaging system: it essentially reveals the minimum feature size that will blur in the image plane. The point spread function performs the equivalent characterization in two dimensions. Using your saved ERF, write a program to obtain the LSF and plot it. Next, take the Fourier transform of the LSF to produce the modulation transfer function (MTF). Note that the units of the x axis are reciprocal space, i.e., 1/pixel; your length calibration allows conversion of this axis to spatial frequency. An ideal system has a uniform modulation transfer function, resulting from the Fourier transform of a delta function. As the function deviates from ideal and narrows, the spatial resolution becomes poorer.

Appendix I. LabView programs for image display and processing

Two LabView programs (VIs) are available for this experiment. They are located on the Windows partition of the PC. One program performs image processing on saved files; the second displays and processes streaming video from a CCD camera. The programs cannot run simultaneously. It is important to always stop them with the button on the front panel, not with the abort button on the menu bar.

Program 1: FFTimage14.vi

This LabView VI displays, filters, and processes saved image files. Start the VI by clicking the run arrow on the menu bar. Use the File path control to locate and open a saved image. There are four tabs that enable various functions, described as follows:

Phase Mask - Displays the input image in the left window. The right window shows the image after the sequence of Fourier transform followed by inverse Fourier transform (F⁻¹F). Phase can be adjusted in the Fourier plane. Images can be saved to .png files by right-clicking anywhere on the image.

Spatial Mask - The Fourier transform of the input image is viewed in the left window. A versatile spatial filter mask tool is available: multiple regions of custom size and shape can be defined. The mask can be inverted, which is useful for studying the difference between low- and high-pass filtering. View the filtered image in the Phase Mask tab.

Grating - Takes the Fourier transform (i.e., diffraction pattern) of the input image after passing it through one of four selectable gratings: i) binary amplitude, ii) binary phase, iii) sinusoidal amplitude, iv) sinusoidal phase. The modulation depth is adjustable from 0–100% or 0–180°, depending on whether an amplitude or phase grating, respectively, is selected. The grating period can also be specified.

Line Profile - Obtains the amplitude profile of an image at a specified cross section. The cross section is defined by a straight line that is positioned with a simple click and drag. The source can be selected as: i) input image, ii) Fourier transform of input, iii) filtered Fourier transform. The profile data can be saved to a two-column spreadsheet file of amplitude vs. pixel number.

Area Profile - Performs the same operation as the Line Profile tab, except that it averages over a rectangular area in a specified direction (horizontal or vertical). This is useful for determining the edge response function when there is significant spatial noise; by averaging over a large enough area, the noise can be smoothed out.

Program 2: FFTcamera5.vi

This VI interfaces to a CCD video camera via a Sensoray 2253S encoder. Connect the encoder to a USB terminal on the PC. Analog video from the camera is supplied to the BNC input port. The encoder is powered by the USB, but the camera requires an external DC power supply. A red LED on the encoder indicates that it is receiving power from the USB. Sensoray driver software must be installed on the PC. Use the Select session button on the Front Panel to locate the encoder device. The VI has three tabs that function very similarly to those of the first program. The Image Display tab has three functions available: i) modify phase in the Fourier plane, ii) spatial filter, iii) apply grating. The latter two functions are configured in their corresponding tabs (see details in the first program). No line or area profile is available in this VI. You can save a single video frame and open it with Program 1.

Appendix II. Diffraction examples

When light encounters an obstacle, it is diffracted. The diffracted field is generally a complicated distribution of amplitude and phase. In special cases, however, it can be rendered with analytical expressions.

1) A rectangular aperture of dimensions −a/2 < x < a/2 and −b/2 < y < b/2 displays a far-field intensity profile given by:

$$I(x,y) = I_0\, \mathrm{sinc}^2\!\left(\frac{a x}{\lambda z}\right) \mathrm{sinc}^2\!\left(\frac{b y}{\lambda z}\right), \qquad \mathrm{sinc}(u) \equiv \frac{\sin(\pi u)}{\pi u}$$

2) A circular aperture of radius a displays the Airy pattern:

$$I(\theta) = I_0 \left[\frac{2 J_1(k a \sin\theta)}{k a \sin\theta}\right]^2, \qquad k = \frac{2\pi}{\lambda}$$

where J1 is the first-order Bessel function.
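Both expressions are easy to evaluate numerically. The sketch below locates the first zero of the Airy pattern, which falls at ka sin θ ≈ 3.83, i.e. sin θ ≈ 0.61 λ/a; the wavelength and aperture radius are illustrative, and SciPy supplies J1:

```python
import numpy as np
from scipy.special import j1    # first-order Bessel function J1

lam = 633e-9        # HeNe wavelength, m (illustrative)
a = 0.5e-3          # aperture radius, m (illustrative)
k = 2 * np.pi / lam

theta = np.linspace(1e-9, 2e-3, 4000)   # start just off zero to avoid 0/0
u = k * a * np.sin(theta)
airy = (2 * j1(u) / u) ** 2             # Airy pattern, normalized so I(0) = 1

# First zero of J1 is at u = 3.8317..., i.e. sin(theta) = 0.61 lam / a.
min_idx = np.argmin(np.where(u < 5.0, airy, np.inf))
theta_zero = theta[min_idx]
print("first dark ring at theta =", round(theta_zero * a / lam, 2), "x lam/a")
```

The radius of this first dark ring is the usual diffraction-limited resolution criterion; the rectangular-aperture pattern can be checked the same way against the zeros of sinc².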