
ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW

Filter Design Example target: a circularly symmetric 2-D low-pass filter. Pass-band radial frequency: ω_p. Stop-band radial frequency: ω_s. Pass-band tolerance: δ_p. Stop-band tolerance: δ_s. (Figure: radial frequency response tolerance scheme showing ω_p, ω_s, δ_p and δ_s.) How do we design this low-pass filter? Windowing?

Windowing Design a 2-D FIR filter as follows. Start with a desired frequency response h_d(ω) that satisfies the pass-band and stop-band requirements, perhaps an ideal filter. Then take the inverse DSFT to obtain h_d[n]. In general h_d[n] has infinite support, so apply a spatial window w[n] to limit the support to a finite region.

Windowing Windowing in the spatial domain affects the filter characteristics in the frequency domain. Does h(ω), the DSFT of h[n] = h_d[n] w[n], still satisfy the original constraints? Multiplication of spatial signals corresponds to convolution of their DSFTs, so h(ω) is h_d(ω) smeared by w(ω). Ideally the window should have as narrow a bandwidth as possible; that is, w(ω) should decay rapidly with increasing ‖ω‖. However, narrow-band windows have large spatial support, and we are constrained by the desired support size.

Windowing Remember: we want a filter with zero phase. Since h(ω) is (up to a scale factor) the convolution of h_d(ω) with w(ω), both w(ω) and h_d(ω) need to be real-valued; then h(ω) is also real-valued (zero phase). This follows directly from the convolution equation. It is convenient to express the window as a spatially continuous function w(x), with w[n] = w(x)|_{x=n}. We assume that the window is sufficiently narrow band, essentially bandlimited to the Nyquist region ω ∈ [−π, π]², so that the Fourier transform of w(x) is identical to the DSFT of w[n]. Therefore we need only be concerned with the properties of the spatially continuous function w(x).

Windowing Simplest case: the window is a separable product of 1-D functions, w(x) = w_1(x_1) w_1(x_2), where the support of w_1 determines the extent of the window. The trivial choice, a rectangular window, is rarely used in practice because its frequency response decays only slowly.

Windowing A smooth window function avoids high-frequency components, minimising the effective bandwidth of w(ω). A common choice is one period of a cosine, raised so that its negative peaks just touch zero (a "raised cosine", or Hann, window). Multiplying by such a window restricts an impulse response that would otherwise have infinite support.
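To make the procedure concrete, here is a minimal Python sketch of the windowing design (numpy and scipy assumed available; the support size N and the cut-off frequency are illustrative choices, not values from the slides). An ideal circularly symmetric low-pass response is inverted to obtain h_d[n], which is then multiplied by a separable raised-cosine window.

    import numpy as np
    from scipy.special import j1

    N = 7                 # assumed support: taps on [-N, N]^2
    wc = 0.4 * np.pi      # assumed radial cut-off frequency (rad/sample)

    n1, n2 = np.meshgrid(np.arange(-N, N + 1), np.arange(-N, N + 1), indexing='ij')
    r = np.hypot(n1, n2)

    # Ideal circularly symmetric low-pass impulse response h_d[n] (infinite support
    # in principle; evaluated here only on the finite design grid).
    r_safe = np.where(r == 0, 1.0, r)
    h_d = wc * j1(wc * r_safe) / (2 * np.pi * r_safe)
    h_d[r == 0] = wc ** 2 / (4 * np.pi)

    # Separable raised-cosine (Hann) window w[n] = w1[n1] * w1[n2].
    w1 = 0.5 * (1 + np.cos(np.pi * np.arange(-N, N + 1) / (N + 1)))
    w = np.outer(w1, w1)

    h = h_d * w           # windowed, finite-support FIR filter
    h /= h.sum()          # renormalise so the DC gain is exactly 1

    # Inspect the achieved frequency response on a dense grid.
    H = np.fft.fftshift(np.abs(np.fft.fft2(h, (256, 256))))

Because the window is real and symmetric, the windowed taps h[n] remain symmetric, so the design keeps zero phase.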

Windowing

Windowing For some applications, superior results can be obtained by using non-separable windows. When designing a circularly symmetric filter, a circularly symmetric window function is generally preferable: w(s) = w_1(t)|_{t=‖s‖}, where w_1(t) can be any of the 1-D window functions discussed earlier.
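A brief sketch of such a rotated window (again with illustrative parameters), using the raised-cosine profile from the previous example as the 1-D function:

    import numpy as np

    N = 7
    n1, n2 = np.meshgrid(np.arange(-N, N + 1), np.arange(-N, N + 1), indexing='ij')
    r = np.hypot(n1, n2)                 # ||s|| evaluated on the sample grid

    # w(s) = w1(t) evaluated at t = ||s||, zero outside the support radius N.
    w_circ = np.where(r <= N, 0.5 * (1 + np.cos(np.pi * r / (N + 1))), 0.0)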

Filter Design Frequency Sampling Find the finite set of filter tap values which minimizes a weighted combination of squared differences between the desired and actual frequency responses at a set of sample frequencies ω_k. The ω_k can be distributed uniformly over the region ω ∈ [−π, π]²; alternatively, more samples can be placed in the neighbourhood of critical frequency regions such as the pass- and stop-band edges.

Frequency Sampling We want to ensure that the designed filter h[n] is zero phase. Remember that zero phase implies a symmetric filter, h[n] = h[−n], so in the DSFT the sin terms cancel out and h(ω) reduces to a sum of cosine terms; note that the summation need only run over half of the filter's support.

Frequency Sampling We wish to minimize the weighted squared-error expression Σ_{k=1..K} W_k |h(ω_k) − h_d(ω_k)|², where the weight W_k reflects the relative importance of matching the desired response at frequency ω_k.

Frequency Sampling

Frequency Sampling Minimization of the weighted squared-error expression is a classic linear least-squares problem whose solution is well known. We can adjust the weights and the number of samples K; for example, rather than densely sampling the critical frequency regions, we can use fewer samples there and increase the weights associated with them.
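As an illustration, the following Python sketch sets up and solves that least-squares problem for a small zero-phase design (all parameters, the desired response and the weighting scheme are assumptions chosen for the example):

    import numpy as np

    N = 5                                   # assumed filter support [-N, N]^2
    wc = 0.4 * np.pi                        # assumed radial cut-off of the desired response
    K = 17                                  # frequency samples per axis

    taps = np.array([(m1, m2) for m1 in range(-N, N + 1) for m2 in range(-N, N + 1)])
    wk = np.linspace(-np.pi, np.pi, K, endpoint=False)
    W1, W2 = np.meshgrid(wk, wk, indexing='ij')
    omega = np.column_stack([W1.ravel(), W2.ravel()])         # sample frequencies w_k

    rad = np.hypot(omega[:, 0], omega[:, 1])
    Hd = (rad <= wc).astype(float)                            # desired (ideal) response h_d(w_k)
    Wk = np.where(np.abs(rad - wc) < 0.2 * np.pi, 5.0, 1.0)   # heavier weights near the band edge

    # For a symmetric (zero-phase) filter, h(w) = sum_n h[n] cos(w . n); the columns
    # for n and -n are identical, and the minimum-norm lstsq solution splits them
    # equally, so the designed filter comes out symmetric.
    A = np.cos(omega @ taps.T)                                # (K*K) x (2N+1)^2 design matrix

    sw = np.sqrt(Wk)
    h, *_ = np.linalg.lstsq(A * sw[:, None], Hd * sw, rcond=None)
    h = h.reshape(2 * N + 1, 2 * N + 1)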

Filter Design Transformation Method (McClellan Transformation) The 1-D filter design problem is much easier: methods exist for designing optimal 1-D zero-phase FIR filters subject to constraints on the pass- and stop-band tolerances. The idea is to map the 2-D design problem into a 1-D design problem without losing the finite impulse response property. The method is relevant only for zero-phase designs. An approximately circularly symmetric function specifies the 2-D to 1-D mapping.

Transformation Method Frequency response of a 1-D zero-phase filter with 2N + 1 taps: taking the DTFT of the zero-phase filter h[n], defined for n = −N to N, and remembering that zero phase implies a symmetric filter, h[n] = h[−n], the sin terms cancel out and h(ω) = h[0] + 2 Σ_{n=1..N} h[n] cos(nω); note the summation from 1 to N.

Transformation Method Frequency response of a 1-D zero-phase filter with 2N + 1 taps: by application of Chebyshev polynomials, the same response can be rewritten as a polynomial in cos ω, that is h(ω) = Σ_{n=0..N} b[n] (cos ω)^n.

Transformation Method

Transformation Method Let x = cos ω and C_n(x) = cos(nω). Then
C_0 = cos(0·ω) = 1
C_1 = cos ω
C_2 = cos 2ω = 2 cos ω · cos ω − 1 = 2 cos²ω − 1
C_3 = cos 3ω = 2 cos ω · cos 2ω − cos ω = 4 cos³ω − 3 cos ω
In general, cos(nω) can be expressed as an nth-degree polynomial in cos ω, via the recurrence cos(nω) = 2 cos ω · cos((n−1)ω) − cos((n−2)ω).

Transformation Method Frequency response of a 1-D zero-phase filter with 2N + 1 taps a[n]: h(ω) = a[0] + 2 Σ_{n=1..N} a[n] cos(nω) = Σ_{n=0..N} b[n] (cos ω)^n, by application of Chebyshev polynomials. The coefficients b[n] can be obtained from the a[n]. Next: the frequency mapping from (ω_1, ω_2) to cos(ω).
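A small sketch of the a[n] to b[n] conversion using numpy's Chebyshev utilities (the example taps are made up for illustration): since cos(nω) = T_n(cos ω), the coefficients of the cosine series are Chebyshev-series coefficients, which can be converted directly to ordinary polynomial coefficients.

    import numpy as np
    from numpy.polynomial import chebyshev as C

    # Assumed example: a symmetric 7-tap 1-D filter (N = 3), taps for n = -3..3.
    h = np.array([0.05, 0.12, 0.20, 0.26, 0.20, 0.12, 0.05])
    N = (len(h) - 1) // 2
    a = h[N:]                                              # a[0], a[1], ..., a[N]

    # h(w) = a[0] + 2 sum_n a[n] cos(n w) is a Chebyshev series in x = cos(w),
    # so convert its coefficients to ordinary polynomial coefficients b[n].
    cheb_coeffs = np.concatenate(([a[0]], 2 * a[1:]))
    b = C.cheb2poly(cheb_coeffs)                           # b[0], ..., b[N]

    # Quick check: both forms of the frequency response agree.
    w = np.linspace(0, np.pi, 5)
    direct = a[0] + 2 * np.sum(a[1:, None] * np.cos(np.outer(np.arange(1, N + 1), w)), axis=0)
    poly = np.polyval(b[::-1], np.cos(w))
    assert np.allclose(direct, poly)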

Transformation Method In the 2-D filter, cos ω is replaced by μ(ω), the DSFT of a finite-support sequence μ[n]; essentially μ[n] is the kernel that controls the mapping. The resulting 2-D filter h[n] then has finite support, since it is a weighted summation of n-fold convolutions of the finite-support sequence μ[n].

Transformation Method Given μ[n], what can we say about the region of support of the final 2-D filter h[n]? And what is the DSFT of μ[n], which governs the mapping (ω_1, ω_2) → cos(ω)?

Transformation Method Example: for a 7-tap 1-D filter, N = 3 (taps from −3 to 3), so the highest power of μ(ω) is μ(ω)³, i.e. μ[n] convolved with itself three times. Since μ[n] has a region of support [−1, 1]², the support of h[n] will be [−3, 3]².

Transformation Method DSFT of μ[n]: for the original McClellan kernel, μ(ω_1, ω_2) = −1/2 + (1/2) cos ω_1 + (1/2) cos ω_2 + (1/2) cos ω_1 cos ω_2. The mapping represents quite a good approximation to a circularly symmetric function (plot on next slide).

Transformation Method Steps to follow for the frequency transformation method: 1. Select a transformation kernel μ[n]. It is possible to design your own; the original McClellan kernel is a common choice. 2. Map the 2-D frequency-domain specifications on h(ω) back to specifications on the 1-D prototype filter. For example, a stop-band requirement for the 2-D filter over ‖ω‖ ≥ ω_s maps to a 1-D stop-band edge: find the largest ω'_s such that μ(ω) ≤ cos(ω'_s) for all ‖ω‖ ≥ ω_s. The pass- and stop-band tolerances map directly from 2-D to 1-D.

Transformation Method Steps to follow for the frequency transformation method (continued): 1. Select a transformation kernel μ[n]. 2. Map the 2-D frequency-domain specifications on h(ω) back to specifications on the 1-D prototype filter. 3. Design a suitable 1-D filter using well-established methods, and thereby determine the b[n] coefficients.

Transformation Method Implementation structure (example: N = 3). (Figure: block diagram in which the input image x[n] passes through a cascade of convolutions with μ[n]; the input and each stage output are scaled by b[0], b[1], b[2], b[3] and summed.) The filter is a weighted summation of repeated convolutions with the mapping kernel, with the weights b[n] determined from the 1-D filter design.
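The structure above can be sketched directly in Python (the b[n] values and the test image are illustrative assumptions; the 3 x 3 kernel is the original McClellan kernel, whose DSFT approximates cos ||w||):

    import numpy as np
    from scipy.signal import convolve2d

    # Original McClellan kernel mu[n]; its DSFT approximates cos(||w||).
    mu = np.array([[0.125, 0.25, 0.125],
                   [0.25, -0.5,  0.25],
                   [0.125, 0.25, 0.125]])
    b = [0.1, 0.4, 0.3, 0.2]                  # b[0..N] from a 1-D design, N = 3 (illustrative)

    def mcclellan_filter(x, mu, b):
        """y = sum_n b[n] * (x convolved n times with mu)."""
        y = b[0] * x
        u = x
        for bn in b[1:]:
            u = convolve2d(u, mu, mode='same', boundary='symm')
            y = y + bn * u
        return y

    x = np.random.rand(64, 64)                # test image
    y = mcclellan_filter(x, mu, b)

Each stage reuses the previous convolution output, so the cost is N small 2-D convolutions plus the weighted sum, matching the cascade in the block diagram.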

Filtering Examples Gaussian Filters The spatially continuous impulse response is h(s) = (1 / (2π |Σ|^{1/2})) exp(−(1/2) sᵀ Σ⁻¹ s), with covariance matrix Σ built from σ_1², σ_2² and the off-diagonal term σ_1,2. The normalisation is chosen so that the spatially continuous impulse response has unit DC gain (it integrates to 1).

Gaussian Filters In most cases the off-diagonal term σ_1,2 is zero, so that h(s) is a separable Gaussian function whose contours have an elliptical cross-section (circular if σ_1 = σ_2). (Figure: elliptical contours in the (s_1, s_2) plane.)

Gaussian Filters A non-zero σ_1,2 term may be used to design arbitrarily rotated elliptical cross-sections. (Figure: rotated elliptical contours in the (s_1, s_2) plane.)

Gaussian Filters Gaussian filters have the property that their Fourier transform is also a Gaussian function

Gaussian Filters Provided the Gaussian is broad enough that its Fourier transform is essentially confined to the Nyquist region, the DSFT of the sampled impulse response h[n] will be almost identical to the spatially continuous Fourier transform of h(s). It follows that the DSFT of the sampled impulse response is Gaussian, and also that Σ_n h[n] = 1. Since the spatial impulse response decays rapidly as ‖n‖ becomes large, we can simply truncate the impulse response (i.e. apply a rectangular window) to some suitably large region of support.

Gaussian Filters Important properties of Gaussian filters: their Fourier transform is also a Gaussian function, and both the spatial and frequency domain representations are real and positive. For simplicity, ignoring the possibility that σ_1,2 might be non-zero: the horizontal and vertical spreads of the Gaussian function are σ_1 and σ_2, while the horizontal and vertical spreads in the frequency domain are 1/σ_1 and 1/σ_2, so the product of the spatial and frequency spreads is always exactly 1.
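A minimal sketch of building and applying a truncated, sampled Gaussian filter (the σ values and the truncation radius are illustrative assumptions):

    import numpy as np
    from scipy.signal import convolve2d

    sigma1, sigma2 = 2.0, 2.0                            # assumed spreads
    N = int(np.ceil(4 * max(sigma1, sigma2)))            # truncate at about 4 sigma
    n1, n2 = np.meshgrid(np.arange(-N, N + 1), np.arange(-N, N + 1), indexing='ij')

    h = np.exp(-0.5 * ((n1 / sigma1) ** 2 + (n2 / sigma2) ** 2))
    h /= h.sum()                                         # enforce sum_n h[n] = 1 (unit DC gain)

    x = np.random.rand(128, 128)                         # test image
    y = convolve2d(x, h, mode='same', boundary='symm')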

Moving Average Filters Low complexity; low-pass, high-pass and band-pass implementations. For a support region [A1, B1] × [A2, B2], a direct 2-D implementation requires (B1 − A1 + 1)(B2 − A2 + 1) multiplications and additions for each pixel, whereas a separable implementation requires only (B1 − A1 + 1) + (B2 − A2 + 1) multiplications and additions per output pixel.

Moving Average Filters The moving average is a low-pass filter whose frequency response is a (periodic) sinc function. The filter is separable into row and column filters. Can it be implemented with a cost that is independent of the filter's region of support?

Moving Average Filters (Figure: grid of image samples x(0,0) … x(4,7) with a sliding window.) Using a running sum, each subsequent scaled output value requires only two additions: add the sample entering the window and subtract the sample leaving it.
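A sketch of the running-sum idea in Python (the window half-width H is an assumed example value; edge samples are simply dropped to keep the sketch short):

    import numpy as np

    def moving_average_1d(v, H):
        """Width-(2H+1) moving average with a running sum ('valid' outputs only)."""
        W = 2 * H + 1
        out = np.empty(len(v) - W + 1)
        s = v[:W].sum()                      # first window: W - 1 additions
        out[0] = s
        for i in range(1, len(out)):
            s += v[i + W - 1] - v[i - 1]     # one addition and one subtraction
            out[i] = s
        return out / W                       # scale for unit DC gain

    # Separable 2-D moving average: rows first, then columns.
    x = np.random.rand(64, 64)
    rows = np.apply_along_axis(moving_average_1d, 1, x, 2)
    y = np.apply_along_axis(moving_average_1d, 0, rows, 2)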

Moving Average Filters

Unsharp Masking One of the oldest image processing operations. An ad-hoc technique for artificially increasing the sharpness of an image x[n]: a low-pass (blurred) copy of the image is subtracted from the original to isolate detail, and a scaled version of that detail is added back.
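A minimal unsharp-masking sketch (the choice of a Gaussian low-pass filter and the gain value alpha are assumptions for illustration, not prescribed by the slides):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(x, sigma=2.0, alpha=0.7):
        blurred = gaussian_filter(x, sigma)   # low-pass (blurred) copy of the image
        return x + alpha * (x - blurred)      # add back amplified high-frequency detail

    x = np.random.rand(128, 128)              # stand-in image with values in [0, 1]
    y = np.clip(unsharp_mask(x), 0.0, 1.0)    # clip back to the valid intensity range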

Image Acquisition Models The scene illuminant reflects off scene surfaces; the resulting radiant flux passes through an optical system, is integrated over the sensor surfaces, converted to an electrical signal, and quantized.

Image Acquisition Models The model involves: the scene surface reflectivities as a function of wavelength λ and position s = (s_1, s_2); the spectral power density of the illuminant as a function of position and wavelength; and their product, the radiant power spectral density ("scene radiance") at the surface of the sensor. Note the variation in both s and λ. Example of variation in λ: outdoor versus indoor images.

Image Acquisition Models We ignore surface orientation and its impact upon scene radiance. Also, we ignore fluorescence.

Optics We now add the impact of optics to our simple model. The resulting scene radiance at the surface of the sensor is the ideal radiance convolved with the point spread function (PSF) of the optical system. In general the optical PSF, h^o_{z,λ}(s), depends on both the wavelength λ and the spatial position z (i.e. it varies across the sensor); we ignore the dependence of h^o_{z,λ}(s) on z. The optics are not ideal, but they are at least linear.

Optics This is the convolution integral: the radiance reaching a point s on the sensor is the integral, over all source points u, of the ideal radiance at u weighted by h^o_λ(s − u). Linear systems which are modelled by convolution are known as linear shift-invariant (LSI) systems. The LSI property holds only because we ignore the dependence of h^o_{z,λ}(s) on z; the LSI model is sufficiently accurate in the so-called centre field.

Optics Away from the centre field a number of aberrations appear. Example: chromatic aberration, where h^o_{z,λ}(s) introduces shifts which depend upon both the position z and the wavelength λ. Different wavelengths (colours) experience different geometric distortions, so they do not properly line up, and the result is blurring.

Optics Simplifications: ignore spatial variations of the optical system PSF and restrict attention to the centre field. Even with these simplifications the PSF is not simply an impulse; the optics remain non-ideal. The causes of these non-idealities are refractive and diffractive limitations. Refractive: lens imperfections. Diffractive: the finite lens size; the effect is wavelength dependent and is most pronounced for a narrow aperture, one comparable in size to the wavelength of the incident light.

Sensor Integration The sensor integrates the incident radiance across all wavelengths, weighted by its spectral sensitivity, and over the spatial aperture of each sensor element, producing a 2-D sequence of pixel values. Integration over the aperture is what distinguishes a practical sensor from an ideal (impulsively sampling) one.

Sensor Integration

Sensing Devices CCD: photons are converted into electrons and accumulated in charge wells; the device is a collection of 1-D analog shift registers, so the entire array of pixels must be shifted out, making it difficult to retrieve a subset of the pixels; clocking out the pixels consumes a large amount of power.

Sensing Devices CMOS: now the most popular technology, even at the high-quality end. The sensing element is a CMOS transistor with a transparent gate, or a photodiode connected to a CMOS amplifier. At each photoreceptor is a CMOS circuit which amplifies the signal and provides addressing capability, so pixels in CMOS sensors are addressed in a manner similar to random access memory. Low power and inexpensive to manufacture; enables system-on-a-chip designs with A-D conversion and image processing on the same chip.

Sensing Devices CCD and CMOS sensors typically respond to infra-red light, which is outside the visible spectrum; they therefore require infra-red absorption filters in the optical path.

Combining Optical and Aperture Effects We combine the effects of the optics and the sensor integration into a single PSF, h^v_λ(s). The model involves an integral over wavelength, a convolution with the optical PSF, and a multiplication with the aperture (integration over the sensor surface).

Combining Optical and Aperture Effects

Combining Optical and Aperture Effects

Combining Optical and Aperture Effects The system PSF is obtained by convolving the optical PSF with a flipped version of the sensor integration aperture. Sensor samples are then obtained by convolving the radiant scene intensity at the sensor surface with the system PSF, impulsively sampling the result spatially, and integrating over the spectral response characteristic of the sensor.
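A small simulation sketch of this model (the oversampling factor, the Gaussian stand-in for the optical PSF, and the uniform pixel aperture are all assumptions for illustration):

    import numpy as np
    from scipy.signal import convolve2d

    # Work on a grid that oversamples the pixel pitch by a factor of 4.
    pitch = 4

    # Optical PSF: a small Gaussian blur standing in for the true diffraction/aberration PSF.
    g = np.arange(-8, 9)
    optical_psf = np.exp(-0.5 * (g[:, None] ** 2 + g[None, :] ** 2) / 2.0 ** 2)
    optical_psf /= optical_psf.sum()

    # Sensor aperture: uniform integration over one pixel area.
    aperture = np.ones((pitch, pitch)) / pitch ** 2

    # System PSF = optical PSF convolved with the flipped aperture.
    system_psf = convolve2d(optical_psf, aperture[::-1, ::-1], mode='full')

    # Sensor samples: convolve the fine-grid scene radiance with the system PSF,
    # then sample impulsively at the pixel pitch.
    scene = np.random.rand(256, 256)          # stand-in for radiant intensity at the sensor
    blurred = convolve2d(scene, system_psf, mode='same', boundary='symm')
    pixels = blurred[::pitch, ::pitch]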

Combining Optical and Aperture Effects What's wrong if the system PSF is an impulse? What's wrong if the system PSF is excessively smooth, with an extent larger than the separation between pixels (an excessively smooth disc)?