IMAGE PROCESSING (RRY025): USE OF FT IN IMAGE PROCESSING


IMAGING, SAMPLING: 2D DISCRETE FOURIER TRANSFORM, PART I

USE OF FT IN IMAGE PROCESSING

Optics - the origin of imperfections in imaging systems (limited resolution/blurring, related to 2D FTs); understanding these requires the continuous FT.

Sampling - capture of a continuous image onto a set of discrete pixel values; this can be arranged without loss of information if the image is Nyquist sampled, which again requires understanding FTs.

Filtering - having captured a digital image, we can Discrete-FT (DFT) it to the Fourier domain and then keep only the low spatial frequencies (hence smoothing the image) or only the high spatial frequencies (to sharpen the image).

Fast Convolution - if we take a sampled image to the Fourier domain, multiply it by a function and inverse FT, we achieve a fast convolution. Using the DFT, the smoothing/sharpening/edge-detection operations described in previous lectures can be done much faster than doing everything in the image domain.

Image Restoration - the optical distortions described under optics above can be removed by DFT, filter multiplication and inverse DFT (see future lectures on image restoration).

Image Compression - DFT to the Fourier domain, then delete and do not transmit the high spatial frequencies that are not visible to the eye. The DFT can be used for this, but other transforms (i.e. the cosine and wavelet transforms) are better.

EXAMPLE OF CONTINUOUS FT: DIFFRACTION-LIMITED TELESCOPE/CAMERA IMAGING

Consider a telescope or camera looking at a distant object at a single wavelength, and consider a point in the object, say a star or a point on a person's belt buckle, s(x,y) = δ(x - x_o, y - y_o). From optics, the electric field E(u,v) at the aperture is related to the continuous Fourier transform of the source amplitude as a function of angular coordinates, that is E(u,v) = FT(s(x,y)).

Figure 1: The object s(x,y) gives the field E(u,v) at the aperture A(u,v); after the aperture the field is E'(u,v); the lens performs a further continuous FT, and the image s(-x,-y) falls on the CCD.

After being blocked by the aperture function (A(u,v) = 1 over a circular aperture and A(u,v) = 0 elsewhere), the field is

E'(u,v) = E(u,v) A(u,v)

The lens does another FT, such that the electric field at the CCD is the forward Fourier transform of E'(u,v), so

FT(E'(u,v)) = s(-x,-y) * FT(A(u,v))

[where * denotes convolution, and we use the fact that two forward Fourier transforms invert the image: FT(FT(s(x,y))) = s(-x,-y)].

In fact the CCD detects the incident power (the square of the electric field), so the effective convolving function (Point Spread Function, or PSF) is |FT(A(u,v))|^2. All points in the image are independent in their electric field properties, so it can be shown that when viewing a complicated image s(x,y), the image formed at the CCD is

s(-x,-y) * |FT(A(u,v))|^2

The final image formed by the lens is thus the true image convolved with a Point Spread Function (PSF). For a circular aperture this is an Airy function (a Bessel function, squared). A cluster of stars as imaged by a telescope is then not seen as a set of points; instead each point is a little disk surrounded by weak ripples. The disk gets smaller as the lens gets bigger, and the resolving power of the telescope or camera gets larger.

LENS AS SPATIAL LOW-PASS FILTER

Consider a camera viewing corrugated-roof functions of different spatial frequency (cycles/radian). For a low spatial frequency (a large angular spacing θ between peaks), the light just passes through the lens and produces a ripple of wavelength Lθ on the CCD (where L is the lens focal length). But there is a maximum spatial frequency (a minimum θ_min) that passes through; beyond this spatial frequency, the FT of the input image (the electric field at the aperture) lies outside the range passed by the lens. This minimum is θ_min = λ/d, where d is the lens diameter and λ the wavelength, corresponding to a maximum spatial frequency of 1/θ_min = d/λ. Hence the minimum spacing between ripple peaks on the CCD is x_min = Lλ/d.

The convolution in the image domain by |FT(A(u,v))|^2 is equivalent to multiplying the FT of the image by the autocorrelation of the aperture A(u,v) with itself (see the autocorrelation theorem in the last lecture). The radius within which this function is non-zero is equal to the lens diameter.
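As a sketch of these two facts (not course code; numpy is used here, and the grid size N and aperture diameter d are illustrative assumptions), we can build a circular aperture, form the PSF |FT(A)|^2, and check that its inverse FT, the autocorrelation of A, vanishes beyond an offset equal to the aperture diameter:

```python
import numpy as np

# Illustrative sketch: circular aperture A(u,v), its PSF |FT(A)|^2, and the
# equivalent frequency-domain filter (the autocorrelation of A with itself).
# N and d are assumed values chosen only for the demonstration.
N = 256
idx = np.fft.fftfreq(N) * N                 # indices 0..N/2-1, -N/2..-1 in fft order
U, V = np.meshgrid(idx, idx, indexing="ij")

d = 16                                       # aperture diameter in grid units
A = ((U**2 + V**2) <= (d / 2) ** 2).astype(float)   # A = 1 inside the disk, 0 outside

psf = np.abs(np.fft.fft2(A)) ** 2            # Airy-pattern PSF (unnormalised)

# Autocorrelation theorem: the inverse FT of |FT(A)|^2 is the autocorrelation
# of A -- the filter that multiplies the image spectrum.
otf = np.real(np.fft.ifft2(psf))

# The autocorrelation of a disk of diameter d is zero beyond offset d,
# so no spatial frequency above the cut-off is passed.
print(otf[0, 0], otf[d + 2, 0])
```

The second printed value is numerically zero: offsets larger than the aperture diameter carry no power, which is exactly the low-pass cut-off discussed above.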
Hence the aperture/lens acts as a low-pass spatial-frequency filter, passing spatial frequencies smaller than d/λ.

NYQUIST SAMPLING

After passing through the lens, the image that falls on the CCD is a bandlimited function, i.e. it contains a maximum spatial frequency. If we have a CCD pixel spacing of x_min/2 (Nyquist sampling), then we can recover all the information in the continuous image.

To see this, consider the sampled image and its FT. If f(x,y) is the bandlimited image falling on the CCD, then f(x,y) III_Δx(x,y) is the sampled image, where the second function is the bed-of-nails function introduced in the last lecture. The FT of this sampled image is (using the convolution theorem) F(u,v) * III_{1/Δx}(u,v).

Figures 2 and 3: Top, the bed-of-nails sampling function, with δ functions separated by Δx. Bottom, the spectrum of the sampled image: the FT of the unsampled image, F(u,v), copied multiple times on a grid of separation 1/Δx.

If the light passed through an elliptically shaped lens before falling on the CCD, the FT of the image before sampling, F(u,v), is only non-zero within grey ellipses. In the special case of a circular lens, the ellipses become circles of radius d/(Lλ), which can be enclosed within squares of size 2W_u = 2W_v = 2d/(Lλ). If this is smaller than the grid spacing 1/Δx, there is no overlap between the circles. This is the Nyquist sampling criterion:

Δx < Lλ/(2d) = x_min/2

If the Nyquist criterion holds and there is no overlap between copies of F(u,v), then we can recover F(u,v) from the FT of the sampled image by multiplying by a top hat of width 2W_u = 2W_v. Once we know F(u,v) we can inverse FT to get f(x,y), and hence recover the original pre-sampled image from the sampled data. If the sampling is less dense than Nyquist, the grid in the FT of the sampled image has a smaller separation, the copies of F(u,v) overlap, we get aliased frequencies, and we cannot recover the original unsampled image.

2D DISCRETE FOURIER TRANSFORMS

So far we have discussed the continuous Fourier transform, important for understanding the optics of telescopes/cameras etc. The course deals mostly with the Discrete Fourier Transform (DFT), calculated by computer on a discretely sampled image. If this image is bandlimited and Nyquist sampled, then the DFT of the image will be very close to the continuous image FT. We can use the DFT on the sampled image to filter, remove optical distortions, compress, etc.

For N x N sampled pixel arrays, the 2D Discrete Fourier Transform (DFT) and inverse DFT are defined to be (the Gonzalez and Woods definition):

F(k,l) = (1/N) Σ_{m=0}^{N-1} Σ_{n=0}^{N-1} f(m,n) exp(-2πi(km + ln)/N)

f(m,n) = (1/N) Σ_{k=0}^{N-1} Σ_{l=0}^{N-1} F(k,l) exp(+2πi(km + ln)/N)

where normally m, n, k, l are all assumed to run from 0 to N-1. Note that the definition used by MATLAB instead has the factor unity in front of the summation for the forward transform and the factor 1/N^2 for the inverse transform. Also, in MATLAB, indices run from 1 to N, not 0 to N-1.

Some similarities and differences to the continuous FT:
- The DFT uses a sampled version f(m,n) of the continuous image f(x,y).
- It uses sums, not integrals.
- The limits are 0 and N-1, not ±infinity.
- The DFT has a scale factor (1/N).

NON-CENTRED AND CENTRED DFTs

Using the normal definition of the DFT, we get the largest amplitudes in the corners. We can obtain a plot which is more consistent with continuous Fourier transforms if we centre the DFT (implemented with the fftshift command in MATLAB).

The DFT is evaluated in two steps: first a 1D transform of each row, then 1D transforms of each column of the result. It can be evaluated with a fast transform which takes of order 2N^2 log2 N operations for an N x N image.

The DFT gives an accurate version of the continuous FT of the image falling on the CCD provided that this is a bandlimited image that has been Nyquist sampled. If this is not true, the fact that the DFT works on a sampled image gives rise to aliased spatial frequencies in the FT.

The DFT can be evaluated at any value of the spatial frequencies k, l. The uncentred DFT plots k, l from 0 to N-1; we can instead evaluate and plot from -N/2 to N/2 - 1. This result uses the periodicity property of the DFT, i.e. that F(k + jN, l + hN) = F(k,l), where j and h are integers.
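A minimal sketch of the definition above (in numpy rather than MATLAB, and not part of the course material): a direct N x N DFT with the 1/N scale factor, checked against numpy's FFT, which, like MATLAB, puts the factor unity on the forward transform, so the two forward transforms differ by exactly 1/N. The numpy fftshift plays the role of MATLAB's fftshift, moving the zero-frequency term from the corner to the centre.

```python
import numpy as np

# Direct (slow) 2-D DFT with the 1/N factor used in the notes.
def dft2(f):
    N = f.shape[0]                                  # assumes an N x N image
    m = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(m, m) / N)    # twiddle-factor matrix
    # F(k,l) = (1/N) sum_m sum_n f(m,n) exp(-2pi i (km + ln)/N)
    return (W @ f @ W) / N

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 8))
F = dft2(f)

# Centring: move the zero-frequency (corner) term to the middle, so k, l
# effectively run from -N/2 to N/2 - 1 instead of 0 to N-1.
Fc = np.fft.fftshift(F)

print(np.allclose(F * 8, np.fft.fft2(f)))   # conventions differ by a factor 1/N
```

The double sum is implemented as the matrix product W f W, which is exactly the "1D transform of each row, then each column" evaluation described above.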

Given that

F(k,l) = Σ_{m=0}^{N-1} Σ_{n=0}^{N-1} f(m,n) exp(-2πi(km + ln)/N)

what is the DFT at spatial frequencies k + hN, l + jN, where h and j are integers?

F(k + hN, l + jN) = Σ_{m=0}^{N-1} Σ_{n=0}^{N-1} f(m,n) exp(-2πihm) exp(-2πijn) exp(-2πi(km + ln)/N)

h, m, j, n are all integers, so exp(-2πihm) = 1 and exp(-2πijn) = 1. Hence

F(k + hN, l + jN) = F(k,l)

Figure 4: The non-centred DFT F(k,l) of the object f(m,n) has k, l running from 0 to N-1; the extended DFT can be calculated at all k, l. The centred DFT is the contents of the central square: it can be considered the DFT calculated in the range k = -N/2 to N/2 - 1, l = -N/2 to N/2 - 1.

DFTs OF TYPICAL IMAGES

In general, transforms give a complex number at each sampled spatial frequency k, l. We can plot the real and imaginary parts of the transform, or more commonly the amplitude A(k,l) and phase φ(k,l). The amplitude normally has a very large range, so we often plot log(1 + A(k,l)) instead. It is the phase that contains most of the information about the positions of edges in the image; the amplitude tells us mainly how sharp these edges are (see last lecture).

Many images have spikes along the k, l axes. These can be explained in terms of the approximations used in doing DFTs. Other sources of spikes are regions of sharp edges or narrow rectangles within the image (e.g. the camera legs in the cameraman image).

Figure 5: The log amplitude and phase of the centred DFT of the cameraman image.
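The claim that the phase carries the position information can be checked with a small numpy sketch (illustrative only, using a random test image rather than the cameraman image): circularly shifting an image changes every DFT phase but leaves every DFT amplitude untouched.

```python
import numpy as np

# Shifting an image multiplies each DFT coefficient by a pure phase factor
# exp(-2pi i (k*dm + l*dn)/N), so amplitudes are unchanged while phases move.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
shifted = np.roll(img, shift=(5, 9), axis=(0, 1))   # move the image contents

F1, F2 = np.fft.fft2(img), np.fft.fft2(shifted)

amp_same = np.allclose(np.abs(F1), np.abs(F2))        # amplitudes identical
phase_same = np.allclose(np.angle(F1), np.angle(F2))  # phases are not
print(amp_same, phase_same)
```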

RELATIONSHIP BETWEEN CONTINUOUS FT AND DFT

We can consider implementing the DFT via continuous FTs. First take the sampled version of the image f(x,y), then repeat this periodically an infinite number of times, then do a continuous FT. The result in the centre is the DFT of f(x,y) (see the following figure).

Mathematically, form

III_{NΔx, NΔy}(x,y) * [III_{Δx, Δy}(x,y) f(x,y)]

then take the continuous FT, which gives (after applying the convolution theorem twice)

III_{1/(NΔx), 1/(NΔy)}(u,v) [III_{1/Δx, 1/Δy}(u,v) * F(u,v)]

Figure 6: How to implement the 2D DFT via the 2D continuous FT, illustrating the relationship between the two types of transform. Take the original N x N sampled image (the tire image), repeat it an infinite number of times, then take the 2D continuous transform. The N x N DFT of the tire is found within the central square.

Even if there are no sharp edges within the input image, there can be sharp discontinuities between the top and bottom (or left and right sides) of the repeated image, as shown by the 1D slice at the top right of the figure. When we take the FT, these discontinuities can give strong vertical (and horizontal) spikes along the u and v axes.

EDGE EFFECTS/ORIGIN OF SPIKES

We can get spikes in the FT because of sharp-edged objects in the image; the spike is perpendicular to the direction of the edge. But we also get a large vertical spike when there is a large difference in brightness between the top and bottom of the picture, and a large horizontal spike when there is a difference between the left and right.

If we consider the DFT in terms of doing a continuous FT of a sampled, repeated version of the input image, we can understand these spikes in terms of requiring high spatial frequencies to represent the discontinuities. This is one reason why the DFT is not optimum for image compression: it puts power at high spatial frequencies. JPEG, for instance, uses the cosine transform, which avoids this problem.
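A small numpy sketch of the top-to-bottom effect (an illustrative example, not from the notes): an image that is a smooth vertical ramp has no internal edges at all, yet because its top and bottom differ, the periodically repeated image has a horizontal discontinuity, and all of the AC power of its DFT lands on the vertical-frequency axis.

```python
import numpy as np

# Smooth top-to-bottom brightness ramp, constant along each row.
N = 32
img = np.tile(np.linspace(0.0, 1.0, N).reshape(-1, 1), (1, N))

F = np.abs(np.fft.fft2(img))
F[0, 0] = 0.0                       # ignore the DC term

axis_power = F[:, 0].sum()          # power on the (k, 0) axis: the "vertical spike"
off_axis_power = F[:, 1:].sum()     # power everywhere else: essentially zero
print(axis_power, off_axis_power)
```

The spike exists even though the image itself is perfectly smooth; it is created entirely by the implied wrap-around discontinuity of the repeated image.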
Edge effects are much more significant on images than in 1D signal processing (where there is also an effect if the starting and ending samples differ). The reason is the much shorter length of each row/column compared to the length of a typical 1D signal (often 4096 samples or longer).
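The advantage of the cosine transform mentioned above can be shown in 1D (an illustrative sketch with a hand-rolled, unnormalised DCT-II, so no library beyond numpy is assumed): for a ramp, whose ends differ but which contains no internal edges, the DFT must spend a substantial fraction of its energy on high frequencies to build the wrap-around discontinuity, while the DCT concentrates almost all energy at low frequencies.

```python
import numpy as np

N = 32
x = np.arange(N, dtype=float)
x -= x.mean()                                    # drop DC, compare AC energy only

# Unnormalised DCT-II: C(k) = sum_n x(n) cos(pi k (n + 1/2) / N)
k = np.arange(N).reshape(-1, 1)
n = np.arange(N).reshape(1, -1)
dct = (np.cos(np.pi * (n + 0.5) * k / N) * x).sum(axis=1)

dft = np.fft.rfft(x)                             # one-sided DFT of the real signal

def high_band_fraction(coeffs):
    """Fraction of AC energy above half the frequency band."""
    e = np.abs(coeffs) ** 2
    return e[len(e) // 2:].sum() / e[1:].sum()

frac_dct = high_band_fraction(dct)   # DCT bin k corresponds to frequency k/(2N)
frac_dft = high_band_fraction(dft)   # rfft bin k corresponds to frequency k/N
print(frac_dct, frac_dft)
```

The DCT's high-band fraction comes out orders of magnitude smaller than the DFT's, which is exactly why truncating high-frequency cosine coefficients (as JPEG does) loses so little.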