Deconvolution http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 17

Course announcements Homework 4 is out. - Due October 26th. - There was another typo in HW4; download the new version. - Drop by Yannis' office to pick up cameras any time. Homework 5 will be out on Thursday. - You will need cameras for that one as well, so keep the ones you picked up for HW4. Project ideas were due on Piazza on Friday the 20th. - Responded to most of you. - Some still need to post their ideas. Project proposals are due on Monday the 30th.

Overview of today's lecture Telecentric lenses. Sources of blur. Deconvolution. Blind deconvolution.

Slide credits Most of these slides were adapted from: Frédo Durand (MIT). Gordon Wetzstein (Stanford).

Why are our images blurry?

Why are our images blurry? Lens imperfections. Camera shake. Scene motion. Depth defocus.

Lens imperfections Ideal lens: A point maps to a point at a certain plane. object distance D focus distance D

Lens imperfections Ideal lens: A point maps to a point at a certain plane. Real lens: A point maps to a circle that has non-zero minimum radius among all planes. object distance D focus distance D What is the effect of this on the images we capture?

Lens imperfections Ideal lens: A point maps to a point at a certain plane. Real lens: A point maps to a circle that has non-zero minimum radius among all planes. blur kernel object distance D focus distance D Shift-invariant blur.

Lens imperfections What causes lens imperfections?

Lens imperfections What causes lens imperfections? Aberrations. Diffraction. small aperture large aperture

Lens as an optical low-pass filter Point spread function (PSF): The blur kernel of a lens. Diffraction-limited PSF: No aberrations, only diffraction. Determined by aperture shape. blur kernel object distance D focus distance D diffraction-limited PSF of a circular aperture

Lens as an optical low-pass filter Point spread function (PSF): The blur kernel of a lens. Diffraction-limited PSF: No aberrations, only diffraction. Determined by aperture shape. blur kernel diffraction-limited OTF of a circular aperture object distance D focus distance D diffraction-limited PSF of a circular aperture Optical transfer function (OTF): The Fourier transform of the PSF. Equal to aperture shape.
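
As a quick aside on how the PSF and OTF relate in practice, here is a minimal numpy sketch (not from the lecture) that computes an OTF by taking the 2D Fourier transform of a normalized, centered PSF; the Gaussian blob used as the PSF is just a hypothetical stand-in for a measured one.

```python
import numpy as np

def otf_from_psf(psf):
    """Optical transfer function (OTF) as the Fourier transform of a
    centered, normalized point spread function (PSF)."""
    psf = psf / psf.sum()                      # unit energy so the DC gain is 1
    return np.fft.fft2(np.fft.ifftshift(psf))  # move the PSF center to the origin first

# Hypothetical example: a Gaussian blob standing in for a measured PSF.
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x**2 + y**2) / (2.0 * 3.0**2))
otf = otf_from_psf(psf)
print(np.abs(otf[0, 0]))  # DC value ~1; the magnitude falls off at higher frequencies
```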

Lens as an optical low-pass filter image from a perfect lens (x) * imperfect lens PSF (c) = image from imperfect lens (b)

Lens as an optical low-pass filter If we know c and b, can we recover x? image from a perfect lens (x) * imperfect lens PSF (c) = image from imperfect lens (b)

If we know c and b, can we recover x? Deconvolution x * c = b

Deconvolution x * c = b Reminder: convolution is multiplication in the Fourier domain: F(x) · F(c) = F(b) If we know c and b, can we recover x?

Deconvolution x * c = b Reminder: convolution is multiplication in the Fourier domain: F(x) · F(c) = F(b) Deconvolution is division in the Fourier domain: F(x_est) = F(b) / F(c) After division, just do an inverse Fourier transform: x_est = F⁻¹( F(b) / F(c) )
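
To make the Fourier-domain recipe above concrete, here is a minimal numpy sketch of naïve deconvolution (not from the lecture); it assumes the PSF c is known and that the blur is a circular convolution.

```python
import numpy as np

def naive_deconvolve(b, c):
    """Naive inverse filtering: divide in the Fourier domain.
    b: blurry image, c: blur kernel (PSF). Assumes circular convolution;
    the kernel is zero-padded to the image size."""
    C = np.fft.fft2(c, s=b.shape)   # F(c), padded to the image size
    B = np.fft.fft2(b)              # F(b)
    X_est = B / C                   # blows up wherever F(c) is (close to) zero
    return np.real(np.fft.ifft2(X_est))
```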

Any problems with this approach? Deconvolution

Deconvolution The OTF (Fourier transform of the PSF) is a low-pass filter, with zeros at high frequencies. The measured signal includes noise: b = c * x + n (noise term) Any problems with this approach?

Deconvolution The OTF (Fourier transform of the PSF) is a low-pass filter, with zeros at high frequencies. The measured signal includes noise: b = c * x + n (noise term) When we divide by values at or near zero, we amplify the high-frequency noise.

Naïve deconvolution Even tiny noise can make the results awful. Example for Gaussian noise of σ = 0.05: b * c⁻¹ = x_est

Wiener Deconvolution Apply the inverse kernel and do not divide by zero: x_est = F⁻¹[ ( |F(c)|² / ( |F(c)|² + 1/SNR(ω) ) ) · F(b) / F(c) ] The term |F(c)|² / ( |F(c)|² + 1/SNR(ω) ) is an amplitude-dependent damping factor. Derived as the solution to a maximum-likelihood problem under a Gaussian noise assumption. Requires knowledge of the signal-to-noise ratio at each frequency: SNR(ω) = (mean signal at ω) / (noise std at ω)
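
A minimal numpy sketch of this Wiener filter (not from the lecture) is below; it assumes circular convolution and takes the per-frequency SNR as an input. Writing the filter with conj(F(c)) in the numerator is algebraically the same as the expression above but avoids an explicit division by F(c).

```python
import numpy as np

def wiener_deconvolve(b, c, snr):
    """Wiener deconvolution sketch (circular convolution assumed).
    b: blurry image, c: PSF, snr: per-frequency SNR (scalar or array of b's shape).
    Equivalent to |F(c)|^2 / (|F(c)|^2 + 1/SNR) * F(b)/F(c), written with
    conj(F(c)) in the numerator so no explicit division by F(c) is needed."""
    C = np.fft.fft2(c, s=b.shape)
    B = np.fft.fft2(b)
    X_est = np.conj(C) * B / (np.abs(C)**2 + 1.0 / snr)
    return np.real(np.fft.ifft2(X_est))
```

As snr grows very large the damping factor approaches 1 and this reduces to the naïve inverse filter above.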

Deconvolution comparisons naïve deconvolution Wiener deconvolution

Deconvolution comparisons σ = 0.01 σ = 0.05 σ = 0.01

Wiener Deconvolution Apply the inverse kernel and do not divide by zero: x_est = F⁻¹[ ( |F(c)|² / ( |F(c)|² + 1/SNR(ω) ) ) · F(b) / F(c) ] The term |F(c)|² / ( |F(c)|² + 1/SNR(ω) ) is an amplitude-dependent damping factor. Derived as the solution to a maximum-likelihood problem under a Gaussian noise assumption. Requires knowledge of the signal-to-noise ratio at each frequency: SNR(ω) = (mean signal at ω) / (noise std at ω)

Natural image and noise spectra Natural images tend to have a spectrum that scales as 1/ω². This is a natural image statistic.

Natural image and noise spectra Natural images tend to have a spectrum that scales as 1/ω². This is a natural image statistic. Noise tends to have a flat spectrum, σ(ω) = constant. We call this white noise. What is the SNR?

Natural image and noise spectra Natural images tend to have a spectrum that scales as 1/ω². This is a natural image statistic. Noise tends to have a flat spectrum, σ(ω) = constant. We call this white noise. Therefore, we have that: SNR(ω) = 1/ω²
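
Under the stated assumptions (1/ω² image spectrum, white noise), a per-frequency SNR map for the Wiener filter sketched earlier could be built as follows; this is not from the lecture, and the overall scale constant is a hand-picked assumption that would normally be fit to the data.

```python
import numpy as np

def natural_image_snr(shape, scale=1.0):
    """SNR(w) = scale / w^2 on an FFT frequency grid, matching the
    natural-image statistic above (scale is a hand-picked constant)."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    w2 = fx**2 + fy**2
    w2[0, 0] = np.min(w2[w2 > 0])   # avoid division by zero at the DC term
    return scale / w2
```

A typical call would then be wiener_deconvolve(b, c, natural_image_snr(b.shape)).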

Wiener Deconvolution Apply the inverse kernel and do not divide by zero: x_est = F⁻¹[ ( |F(c)|² / ( |F(c)|² + 1/SNR(ω) ) ) · F(b) / F(c) ] The term |F(c)|² / ( |F(c)|² + 1/SNR(ω) ) is an amplitude-dependent damping factor. Derived as the solution to a maximum-likelihood problem under a Gaussian noise assumption. Requires knowledge of the signal-to-noise ratio at each frequency: SNR(ω) = 1/ω²

Wiener Deconvolution For natural images and white noise, it can be re-written as the minimization problem: min_x ‖b − c*x‖² + ‖∇x‖² (gradient regularization) What does this look like? How can it be solved?
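
One answer to "how can it be solved": because both terms are quadratic, the minimizer has a closed form in the Fourier domain. A minimal numpy sketch (not from the lecture) is below, using finite-difference filters for the gradient and a hand-picked weight lam on the regularizer.

```python
import numpy as np

def deconvolve_l2_gradient(b, c, lam=1.0):
    """Closed-form Fourier-domain solution of
    min_x ||b - c*x||^2 + lam*||grad x||^2  (a sketch; lam is hand-picked,
    and lam = 1 corresponds to the un-weighted objective on the slide)."""
    C  = np.fft.fft2(c, s=b.shape)
    B  = np.fft.fft2(b)
    # OTFs of the horizontal and vertical finite-difference filters
    Dx = np.fft.fft2(np.array([[1.0, -1.0]]), s=b.shape)
    Dy = np.fft.fft2(np.array([[1.0], [-1.0]]), s=b.shape)
    denom = np.abs(C)**2 + lam * (np.abs(Dx)**2 + np.abs(Dy)**2)
    X_est = np.conj(C) * B / denom
    return np.real(np.fft.ifft2(X_est))
```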

Deconvolution comparisons blurry input naive deconvolution gradient regularization original

Deconvolution comparisons blurry input naive deconvolution gradient regularization original

and a proof-of-concept demonstration noisy input naive deconvolution gradient regularization

Can we do better than that?

Can we do better than that? Use different gradient regularizations: L2 gradient regularization (Tikhonov regularization, same as Wiener deconvolution): min_x ‖b − c*x‖² + ‖∇x‖² L1 gradient regularization (sparsity regularization, same as total variation): min_x ‖b − c*x‖² + ‖∇x‖₁ L_n, n < 1 gradient regularization (fractional regularization): min_x ‖b − c*x‖² + ‖∇x‖^0.8 How do we solve these? All of these are motivated by natural image statistics. Active research area.
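
For the L1 (total-variation) case, one common solver is half-quadratic splitting, in the spirit of the hyper-Laplacian-prior papers listed in the references; the fractional case replaces the soft-thresholding step with a different shrinkage rule. The sketch below is not from the lecture, and the regularization weight, splitting penalty, and iteration count are hand-picked values.

```python
import numpy as np

def deconvolve_l1_gradient(b, c, lam=0.01, beta=50.0, iters=30):
    """Half-quadratic-splitting sketch for min_x ||b - c*x||^2 + lam*||grad x||_1.
    Alternates elementwise soft-thresholding of auxiliary gradient variables
    with a closed-form Fourier-domain quadratic solve for x."""
    C  = np.fft.fft2(c, s=b.shape)
    B  = np.fft.fft2(b)
    Dx = np.fft.fft2(np.array([[1.0, -1.0]]), s=b.shape)
    Dy = np.fft.fft2(np.array([[1.0], [-1.0]]), s=b.shape)
    denom = np.abs(C)**2 + beta * (np.abs(Dx)**2 + np.abs(Dy)**2)

    x = b.astype(float).copy()
    for _ in range(iters):
        # gradients of the current estimate (circular differences)
        X  = np.fft.fft2(x)
        gx = np.real(np.fft.ifft2(Dx * X))
        gy = np.real(np.fft.ifft2(Dy * X))
        # z-step: soft-threshold the gradients (prox of the L1 term)
        zx = np.sign(gx) * np.maximum(np.abs(gx) - lam / (2 * beta), 0.0)
        zy = np.sign(gy) * np.maximum(np.abs(gy) - lam / (2 * beta), 0.0)
        # x-step: quadratic subproblem solved in closed form in the Fourier domain
        numer = (np.conj(C) * B
                 + beta * (np.conj(Dx) * np.fft.fft2(zx)
                           + np.conj(Dy) * np.fft.fft2(zy)))
        x = np.real(np.fft.ifft2(numer / denom))
    return x
```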

Comparison of gradient regularizations input squared gradient regularization fractional gradient regularization

High quality images using cheap lenses [Heide et al., High-Quality Computational Imaging Through Simple Lenses, TOG 2013]

Deconvolution If we know b and c, can we recover x? x * c = b But how do we measure the PSF c?

PSF calibration Take a photo of a point source Image of PSF Image with sharp lens Image with cheap lens
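
A rough numpy sketch of this calibration step (not from the lecture): crop a patch around the imaged point source, subtract the background, clip negatives, and normalize. The center coordinates and crop radius are hypothetical parameters.

```python
import numpy as np

def psf_from_point_source(img, center, radius=15):
    """Estimate a PSF from a photo of a point source (a rough sketch).
    img: grayscale calibration image, center: (row, col) of the point source,
    radius: half-size of the crop."""
    r, c = center
    patch = img[r - radius:r + radius + 1, c - radius:c + radius + 1].astype(float)
    patch -= np.median(patch)          # crude background subtraction
    patch = np.clip(patch, 0.0, None)  # a PSF is non-negative
    return patch / patch.sum()         # normalize to unit energy
```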

If we know b and c, can we recover x? Deconvolution? * = x * c = b

Blind deconvolution If we know only b, can we recover both x and c? ? * ? = b (that is, x * c = b with both x and c unknown)

Camera shake

Camera shake as a filter If we know b, can we recover x and c? image from static camera (x) * PSF from camera motion (c) = image from shaky camera (b)

Multiple possible solutions. How do we pick the correct one?

Use prior information Among all the possible pairs of images and blur kernels, select the ones where: The image looks like a natural image. The kernel looks like a motion PSF.

Use prior information Among all the possible pairs of images and blur kernels, select the ones where: The image looks like a natural image. The kernel looks like a motion PSF.

Shake kernel statistics Gradients in natural images follow a characteristic heavy-tail distribution. sharp natural image blurry natural image

Shake kernel statistics Gradients in natural images follow a characteristic heavy-tail distribution. sharp natural image blurry natural image Can be approximated by ‖∇x‖^0.8
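
To see this statistic on real data, one could histogram the image gradients and look at the log-counts; a small numpy sketch (not from the lecture) follows. The heavy tails show up as a slow decay of the log-histogram away from zero for sharp images, and a much narrower peak for blurred ones.

```python
import numpy as np

def gradient_log_histogram(img, bins=101):
    """Log-histogram of horizontal image gradients, the statistic behind the
    heavy-tailed prior (a sketch)."""
    gx = np.diff(img.astype(float), axis=1).ravel()
    hist, edges = np.histogram(gx, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, np.log(hist + 1e-12)
```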

Use prior information Among all the possible pairs of images and blur kernels, select the ones where: The image looks like a natural image. Gradients in natural images follow a characteristic heavy-tail distribution. The kernel looks like a motion PSF. Shake kernels are very sparse, have continuous contours, and are always positive How do we use this information for blind deconvolution?

Regularized blind deconvolution Solve the regularized least-squares optimization: min_{x,c} ‖b − c*x‖² + ‖∇x‖^0.8 + ‖c‖₁ What does each term in this summation correspond to?

Regularized blind deconvolution Solve the regularized least-squares optimization: min_{x,c} ‖b − c*x‖² + ‖∇x‖^0.8 + ‖c‖₁ data term + natural image prior + shake kernel prior Note: Solving such optimization problems is complicated (no longer linear least squares).
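
For illustration only, here is a very naive alternating-minimization sketch of this objective (not the method from Fergus et al., which the following slides motivate). It reuses the deconvolve_l1_gradient sketch from earlier for the image step and uses an L2 stand-in for the ‖c‖₁ kernel prior, followed by a crude projection; lam, gamma, and the iteration count are hand-picked, and without coarse-to-fine estimation this kind of alternation often drifts toward the trivial no-blur solution.

```python
import numpy as np

def blind_deconvolve_naive(b, iters=10, lam=0.01, gamma=1e-2):
    """Naive alternating minimization for blind deconvolution (a sketch).
    Alternates a non-blind image step with a regularized kernel step, then
    crudely projects the kernel to be non-negative, sparse, and sum to one."""
    B = np.fft.fft2(b)
    # initialize the kernel as a tiny cross-shaped blur
    # (stored at full image size with its origin at pixel (0, 0))
    c = np.zeros_like(b, dtype=float)
    c[0, 0] = c[0, 1] = c[1, 0] = c[0, -1] = c[-1, 0] = 0.2
    for _ in range(iters):
        # x-step: non-blind deconvolution with the current kernel estimate
        x = deconvolve_l1_gradient(b, c, lam=lam)
        # c-step: regularized least-squares estimate of the kernel given x
        X = np.fft.fft2(x)
        c = np.real(np.fft.ifft2(np.conj(X) * B / (np.abs(X)**2 + gamma)))
        # crude shake-kernel prior: non-negative, sparse, unit sum
        c = np.clip(c, 0.0, None)
        c[c < 0.05 * c.max()] = 0.0
        c /= max(c.sum(), 1e-12)
    x = deconvolve_l1_gradient(b, c, lam=lam)  # final image for the final kernel
    return x, c
```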

A demonstration input deconvolved image and kernel

A demonstration input deconvolved image and kernel This image looks worse than the original This doesn't look like a plausible shake kernel

Regularized blind deconvolution Solve the regularized least-squares optimization: min_{x,c} ‖b − c*x‖² + ‖∇x‖^0.8 + ‖c‖₁ loss function

Regularized blind deconvolution Solve the regularized least-squares optimization: min_{x,c} ‖b − c*x‖² + ‖∇x‖^0.8 + ‖c‖₁ [Graph: loss function (and its inverse) plotted against pixel intensity.] Where in this graph is the solution we find?

Regularized blind deconvolution Solve the regularized least-squares optimization: min_{x,c} ‖b − c*x‖² + ‖∇x‖^0.8 + ‖c‖₁ [Graph: inverse loss over pixel intensity; many plausible solutions cluster away from the single optimal solution.] Rather than keeping just the maximum, do a weighted average of all solutions.

A demonstration input maximum-only average The maximum-only image looks worse than the original

More examples

Results on real shaky images

Results on real shaky images

Results on real shaky images

Results on real shaky images

More advanced motion deblurring [Shan et al., High-quality Motion Deblurring from a Single Image, SIGGRAPH 2008]

Why are our images blurry? Lens imperfections. Camera shake. Can we solve all of these problems in the same way? Scene motion. Depth defocus.

Why are our images blurry? Lens imperfections. Camera shake. Scene motion. Can we solve all of these problems in the same way? No, because blur is not always shift invariant. See next lecture. Depth defocus.

References Basic reading: Szeliski textbook, Sections 3.4.3, 3.4.4, 10.1.4, 10.3. Fergus et al., Removing camera shake from a single image, SIGGRAPH 2006. the main motion deblurring and blind deconvolution paper we covered in this lecture. Additional reading: Heide et al., High-Quality Computational Imaging Through Simple Lenses, TOG 2013. the paper on high-quality imaging using cheap lenses, which also has a great discussion of all matters relating to blurring from lens aberrations and modern deconvolution algorithms. Levin, Blind Motion Deblurring Using Image Statistics, NIPS 2006. Levin et al., Image and depth from a conventional camera with a coded aperture, SIGGRAPH 2007. Levin et al., Understanding and evaluating blind deconvolution algorithms, CVPR 2009 and PAMI 2011. Krishnan and Fergus, Fast Image Deconvolution using Hyper-Laplacian Priors, NIPS 2009. Levin et al., Efficient Marginal Likelihood Optimization in Blind Deconvolution, CVPR 2011. a sequence of papers developing the state of the art in blind deconvolution of natural images, including the use of Laplacian (sparsity) and hyper-Laplacian priors on gradients, analysis of different loss functions and maximum a posteriori versus Bayesian estimates, the use of variational inference, and efficient optimization algorithms. Miskin and MacKay, Ensemble Learning for Blind Image Separation and Deconvolution, Advances in Independent Component Analysis, 2000. the paper explaining the mathematics of how to compute Bayesian estimators using variational inference. Shan et al., High-quality Motion Deblurring from a Single Image, SIGGRAPH 2008. a more recent paper on motion deblurring.