Solution Set #2


05-78-0 Solution Set #2

1. For the sampling function shown, analyze to determine its characteristics, e.g., the associated Nyquist sampling frequency (if any), whether a function sampled with s[x; Δx] may be recovered from the samples, the corresponding interpolation function, etc.

This function may be written as the difference of two COMB functions with the same separation parameter equal to 2Δx:

s[x; Δx] = Σ_{n = -∞}^{+∞} δ[x - 2nΔx] - Σ_{n = -∞}^{+∞} δ[x - (2n + 1)Δx]
         = (1/(2Δx))·COMB[x/(2Δx)] - (1/(2Δx))·COMB[(x - Δx)/(2Δx)]

The spectrum of the sampling function is:

S[ξ; Δx] = F{s[x; Δx]} = COMB[2Δx·ξ] - COMB[2Δx·ξ]·exp[-2πiξΔx] = COMB[2Δx·ξ]·(1 - exp[-2πiξΔx])

The COMB places Dirac deltas at the frequencies ξ_k = k/(2Δx), where the weighting factor is:

1 - exp[-2πiξ_k·Δx] = (1 - cos[πk]) + i·sin[πk] = 1 - (-1)^k = 2 if k is odd, 0 if k is even

This means that the spectrum of the input object function will be translated to be centered at half-integer multiples of the sampling frequency 1/Δx:

F{f[x]·s[x; Δx]} = F[ξ] * S[ξ; Δx] ∝ Σ_{k = -∞}^{+∞} F[ξ - (2k + 1)/(2Δx)]

In other words, the central frequency of each replica of the object spectrum will be placed at a half-integer multiple of 1/Δx. The Nyquist frequency remains the same, but the spectrum of the interpolation function must be offset by half of the sampling frequency to select one replica:

(F[ξ] * S[ξ; Δx])·H[ξ], where H[ξ] = RECT[(ξ - 1/(2Δx))·Δx]

F̂[ξ] = ((F[ξ] * S[ξ; Δx])·H[ξ]) evaluated at ξ + 1/(2Δx), which recenters the recovered spectrum at the origin.
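This behavior is easy to confirm numerically. The following Python/NumPy sketch (not part of the original solution; the test frequency of 0.01 cycles per sample is an assumed example) shows that multiplying a signal by the alternating-sign sampling function moves its spectral peak to half the sampling frequency:

import numpy as np

N = 1024
x = np.arange(N)
f = np.cos(2 * np.pi * 0.01 * x)        # slowly varying test object
s = np.where(x % 2 == 0, 1.0, -1.0)     # +1, -1, +1, -1, ... sampling function
F = np.abs(np.fft.rfft(f))
G = np.abs(np.fft.rfft(f * s))
print(np.argmax(F) / N)                 # ~0.01 cycles/sample: original spectral peak
print(np.argmax(G) / N)                 # ~0.49 cycles/sample: replica centered at 0.5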

2. A square CCD sensor with linear dimension of 7 mm and pixel count along each axis of 1024 pixels is used in a camera with a lens of focal length f = 35 mm that is focused on a flat scene at a distance of z1 = 500 mm. The schematic of the imaging system is shown below. Assuming ray optics (no diffraction), determine the resolution of the camera in line pairs per mm (analogous to cycles per mm for nonsinusoidal signals).

This problem is actually pretty easy if you know the basics of optical imaging. The image formed by a lens with focal length f of an object located at distance z1 will be located at a distance z2 from the lens that satisfies the imaging equation:

1/z1 + 1/z2 = 1/f

z2 = z1·f/(z1 - f) = (500 mm × 35 mm)/(500 mm - 35 mm) = 3500/93 mm ≈ 37.634 mm

The magnification of the image is the (negative) ratio of the distances:

M_T = -z2/z1 = -(3500/93 mm)/(500 mm) = -7/93 ≈ -0.075

which means that the image is smaller than the object and upside down (which has no consequence). The fact that the image is smaller suggests that the action of the lens is a minification rather than a magnification.

The linear dimension of the sensor with 1024 pixels is 7 mm, which means that the linear dimension of the pixel (assuming 100% fill factor) is:

7 mm / 1024 ≈ 6.84 μm

So the projected size of the pixel on the object is:

(7 mm / 1024) × (93/7) ≈ 90.8 μm

A line pair consists of two adjacent pixels (one each white and black), which has a period of:

2 × 90.8 μm ≈ 181.6 μm ≈ 0.18 mm

So the spatial resolution in line pairs per millimeter is:

1 line pair / 0.18 mm ≈ 5.6 line pairs per mm
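The chain of calculations can be reproduced with a few lines of Python (a sketch; the variable names are not from the original):

f_mm, z1_mm = 35.0, 500.0
z2_mm = z1_mm * f_mm / (z1_mm - f_mm)      # imaging equation: ~37.634 mm
M = -z2_mm / z1_mm                         # transverse magnification: ~-0.075
pixel_mm = 7.0 / 1024                      # pixel pitch on the sensor: ~6.84 um
pixel_obj_mm = pixel_mm / abs(M)           # pixel footprint on the object: ~90.8 um
lp_per_mm = 1.0 / (2 * pixel_obj_mm)       # one line pair spans two pixels: ~5.5 lp/mm
print(z2_mm, M, pixel_obj_mm * 1000, lp_per_mm)
# The solution quotes 5.6 lp/mm after rounding the line-pair period to 0.18 mm.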

3. A flat object with uniform reflectance with value ρ is illuminated by a light source with intensity distribution

i[x, y] = 255·exp{-([x - x0]² + [y - y0]²)}

The resulting image is quantized to k bits of intensity resolution. Assume that the eye can detect an abrupt change in gray scale of eight shades of intensity over the dynamic range. Determine the value of k where false contouring is visible.

The profile of the object along the x-axis is a Gaussian function:

i[x, 0] = 255·exp{-([x - x0]² + [0 - y0]²)}

[Plot of i[x, 0] for x0 = y0 = 0: a Gaussian profile that peaks at 255 at x = 0 and decays toward 0 for |x| approaching 3.]

False contouring is visible when the quantization step Δi spans at least eight gray levels:

Δi = 256/2^k ≥ 8  ⟹  2^k ≤ 256/8 = 32  ⟹  k ≤ 5
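The step-size argument is easy to check numerically (a Python sketch; the eight-level visibility threshold is the one given in the problem statement):

for k in range(1, 9):
    step = 256 / 2**k             # size of one quantization step in gray levels
    print(k, step, step >= 8)     # True (contouring visible) for k <= 5, False for k >= 6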

4. Sketch the 1-D image that would be obtained for the previous problem if k = 2.

If k = 2, there are 4 gray values i = 0, 1, 2, 3, and the result is obtained by evaluating the greatest integer (floor) of i[x, 0]/64:

quantized profile = ⌊(255/64)·exp{-([x - x0]² + [0 - y0]²)}⌋

[Plot of the quantized profile for x0 = y0 = 0: a staircase that takes the value 3 near x = 0 and steps down through 2 and 1 to 0 as |x| increases.]
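The quantized staircase can be generated directly (a Python/NumPy sketch, assuming x0 = y0 = 0 as in the plotted profile):

import numpy as np

x = np.linspace(-3, 3, 601)
i = 255 * np.exp(-x**2)              # continuous-tone profile i[x, 0]
q = np.floor(i / 64).astype(int)     # 2-bit quantization: gray values 0, 1, 2, 3
print(np.unique(q))                  # [0 1 2 3]
print(q[np.abs(x) <= 0.25])          # central plateau at the maximum value 3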

5. High-definition TV generates images with a vertical resolution of 1125 scan lines and a width-to-height aspect ratio of 16:9. The system transmits 8 bits of data for each of the three additive primary colors (red, green, blue). Determine the number of bits per second necessary to store an HDTV program assuming no compression.

First, find the number of pixels in the image. The number of pixels in each row is:

1125 × (16/9) = 2000

The number of pixels per frame therefore is:

1125 × 2000 = 2,250,000

and the number of bits per frame is the number of pixels multiplied by the number of bits per pixel:

1125 × 2000 × (3 × 8) = 54 × 10^6

In US television, there are 30 frames per second, so the number of bits per second is:

54 × 10^6 × 30 ≈ 1.6 × 10^9 bits per second

1.6 × 10^9 / (1024 × 1024 × 1024 × 8) ≈ 0.19 gigabytes per second

The remaining missing piece of information is the length of the program in seconds; for a one-hour program, the number of bits is:

1.6 × 10^9 × 3600 ≈ 5.83 × 10^12 bits per hour ≈ 679 gigabytes per hour

So an uncompressed program will fill up your hard disk fairly fast, hence one reason to find useful video compression routines.
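The same bookkeeping in a few lines of Python (a sketch; 30 frames per second and a one-hour program, as assumed in the solution above):

lines = 1125
pixels_per_line = lines * 16 // 9                    # 2000 pixels per row
bits_per_frame = lines * pixels_per_line * 3 * 8     # 54 x 10^6 bits
bits_per_second = bits_per_frame * 30                # ~1.6 x 10^9 bits per second
gbytes_per_second = bits_per_second / (8 * 1024**3)  # ~0.19 gigabytes per second
gbytes_per_hour = gbytes_per_second * 3600           # ~679 gigabytes per hour
print(bits_per_second, gbytes_per_second, gbytes_per_hour)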

6. Use the program of your choice (Matlab, MathCAD, Excel, etc.) to do the following: (1) construct and graph the following 1-D functions, (2) quantize to one bit by thresholding, and (3) quantize to one bit by error diffusion.

(a) f[n] = cos[2π·n/8], -256 ≤ n ≤ 255

[Figure: (a) f[n] before and after independent quantization to one bit (two levels); the graph of the thresholded function is scaled to have the same extrema as f[n]; (b) f[n] before and after error-diffused quantization to one bit; (c) detail view of (b), showing oscillations between the two output levels in the vicinity of the most rapid change in amplitude.]

(b) f[n] = cos[2π·n/16], -256 ≤ n ≤ 255

[Figure: (a) f[n] before and after independent quantization to one bit (two levels); (b) magnified detail view of (a); (c) f[n] before and after error-diffused quantization to one bit; (d) detail view of (c), showing a single oscillation between the two output levels near the most rapid change in amplitude.]
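A minimal sketch of the three steps, written in Python/NumPy rather than the Matlab/MathCAD/Excel named in the problem; the helper names and the simple one-dimensional error-diffusion rule, in which each sample's full quantization error is carried to the next sample, are assumptions of the sketch rather than details taken from the original figures:

import numpy as np

def threshold_1bit(f):
    # independent quantization to one bit: +1 where f >= 0, -1 elsewhere
    return np.where(f >= 0, 1.0, -1.0)

def error_diffuse_1bit(f):
    # 1-D error diffusion (assumed rule): quantize each sample after adding
    # the error carried from the previous sample, then carry the new error forward
    out = np.empty_like(f)
    carry = 0.0
    for idx, value in enumerate(f):
        value = value + carry
        out[idx] = 1.0 if value >= 0 else -1.0
        carry = value - out[idx]
    return out

n = np.arange(-256, 256)
for period in (8, 16):                      # part (a): period 8; part (b): period 16
    f = np.cos(2 * np.pi * n / period)
    q_thresh = threshold_1bit(f)            # quantized by thresholding
    q_diff = error_diffuse_1bit(f)          # quantized by error diffusion
    print(period, np.unique(q_thresh), np.unique(q_diff))
# Plotting n against f, q_thresh, and q_diff (e.g., with matplotlib) reproduces the
# before/after graphs and detail views described in the figure captions above.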