Deblurring Basics: Problem Definition and Variants
Kinds of blur: hand-shake, defocus, motion. (Image credit: Kenneth Josephson)
Kinds of blur Spatially invariant vs. Spatially varying
Kinds of blur 1D vs 2D blur kernels
Problem definition: Let's assume a spatially invariant blurring model.
LTI modeling: blurring by a kernel h(x).
LTI modeling: can we get back the sharp image?
Challenges: 1. What if the kernel's spectrum H(f) has zeros? 2. Do we know h(x)? 3. Periodicity (a peril of the DFT)!
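To make the first challenge concrete, here is a small numerical sketch (our own toy setup: a 1-D signal and a box kernel; all names and parameters are illustrative) of LTI blurring and why the naive inverse filter fails when the kernel's spectrum has near-zeros:

```python
# Toy LTI blurring model: y = h * x (circular convolution), so Y = H X
# in the DFT domain. All parameters here are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)               # "sharp" 1-D signal
h = np.zeros(256); h[:7] = 1.0 / 7         # 7-tap box blur (spatially invariant)

H = np.fft.fft(h)
y = np.real(np.fft.ifft(H * np.fft.fft(x)))   # blurred observation

# Challenge 1: H has near-zeros, so the inverse filter is ill-conditioned.
print((1.0 / np.abs(H)).max() > 100)       # True: huge amplification factors

# Noise-free, the inverse filter recovers x essentially exactly ...
x_hat = np.real(np.fft.ifft(np.fft.fft(y) / H))
print(np.allclose(x_hat, x))               # True

# ... but even tiny noise is blown up at the near-zero frequencies.
y_noisy = y + 1e-3 * rng.standard_normal(256)
x_bad = np.real(np.fft.ifft(np.fft.fft(y_noisy) / H))
print(np.linalg.norm(x_bad - x) > 5 * np.linalg.norm(y_noisy - y))  # True
```

The box length (7) is chosen so that its DFT has no exact zeros, only deep dips; with an 8-tap box over 256 samples the spectrum would contain exact zeros and the division would produce NaNs outright.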
Image Priors: suppose h(x) is known; then image deblurring is today considered close to solved. Objective = data term + regularizer. Commonly used regularizers: wavelet sparsity, sparse image gradients, Poisson gradient priors, and mixture-of-Gaussians models for image gradients.
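As a concrete (and much simplified) instance of "data term + regularizer", here is a 1-D deconvolution with a quadratic gradient prior. This is a stand-in for the sparse-gradient priors named on the slide, chosen only because it has a closed form in the DFT domain; the setup and parameters are our own:

```python
# Regularized deconvolution: x* = argmin_x ||h*x - y||^2 + lam ||d*x||^2,
# where d is a finite-difference (gradient) filter. In the DFT domain:
#   X* = conj(H) Y / (|H|^2 + lam |D|^2)
# (A quadratic prior, not the sparse priors on the slide; toy 1-D setup.)
import numpy as np

def deconv_l2(y, h, lam):
    H = np.fft.fft(h)
    d = np.zeros_like(h); d[0], d[1] = 1.0, -1.0    # gradient filter
    D = np.fft.fft(d)
    X = np.conj(H) * np.fft.fft(y) / (np.abs(H) ** 2 + lam * np.abs(D) ** 2)
    return np.real(np.fft.ifft(X))

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(256)) * 0.1       # smooth-ish signal
h = np.zeros(256); h[:7] = 1.0 / 7                  # known box blur kernel
y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))
y = y + 1e-2 * rng.standard_normal(256)             # noisy blurred observation

x_naive = np.real(np.fft.ifft(np.fft.fft(y) / np.fft.fft(h)))  # no prior
x_reg = deconv_l2(y, h, lam=1e-3)                   # with gradient prior
print(np.linalg.norm(x_reg - x) < np.linalg.norm(x_naive - x))  # True
```

The sparse priors on the slide (L1 penalties on wavelet coefficients or gradients) have no such closed form and require iterative solvers.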
BlurBurst: a multi-image deblurring technique
Telephoto imaging; low-light photography
Large blur: blur due to camera shake can be very large. Telephoto (zoom) imaging: a narrow field of view means small hand-shake is amplified. Low-light imaging: long exposure times lead to large blurs. Hand-held high-dynamic-range (HDR) imaging.
Deblurring on large blur (Shan et al, 2008)
Why is large blur hard? Extreme loss of high-frequency information
Use multiple images! Most cameras have a burst mode to capture images in rapid succession. Two key ideas: more images give greater resilience to noise, and camera shake is never the same across images, so different frequency components are attenuated in different images. Challenge: registration!
Why is deblurring easier with multiple images? (a) Single image, 5-pixel blur PSF; (b) single image, 25-pixel blur PSF; (c) two images, 25-pixel blur PSF; (d) six images, 25-pixel blur PSF. The images compensate for each other's lost high-frequency information along the perpendicular directions of their blur kernels, and SNR improves simply by virtue of noise suppression. (Figure: comparison of selected patches against the original.)
Algorithm: registration, blur kernel estimation, and latent image estimation.
Registration, feature extraction and matching: extract SIFT features and match them to a pre-selected reference image. Homography estimation: the feature correspondences are used to fit a homography transformation with RANSAC.
Deconvolution, regularization: L1 norm and TV norm under the noise model; solved by SPG-L1 / M-FISTA.
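The full pipeline (SIFT + RANSAC registration, then L1/TV-regularized deconvolution via SPG-L1 / M-FISTA) is involved; below is a much-simplified quadratic version of just the multi-image fusion step, assuming the inputs are already registered and the kernels known. Everything here (names, kernels, noise level) is our own illustration, not the paper's solver:

```python
# Least-squares multi-image deconvolution in the DFT domain, fusing K
# registered blurry observations y_k with known per-image kernels h_k:
#   X* = sum_k conj(H_k) Y_k / (sum_k |H_k|^2 + lam)
import numpy as np

def multi_image_deconv(ys, hs, lam=1e-3):
    """Fuse K registered blurry 1-D observations (rows of ys, kernels hs)."""
    Hs = np.fft.fft(hs, axis=1)
    Ys = np.fft.fft(ys, axis=1)
    num = np.sum(np.conj(Hs) * Ys, axis=0)
    den = np.sum(np.abs(Hs) ** 2, axis=0) + lam
    return np.real(np.fft.ifft(num / den))

rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(256)) * 0.1   # "sharp" 1-D signal
K, N = 6, 256
hs = np.zeros((K, N))
for k in range(K):                   # a different box blur per shot:
    L = 5 + 2 * k                    # camera shake differs across images
    hs[k, :L] = 1.0 / L
ys = np.real(np.fft.ifft(np.fft.fft(hs, axis=1) * np.fft.fft(x), axis=1))
ys = ys + 1e-2 * rng.standard_normal((K, N))

x_multi = multi_image_deconv(ys, hs)
x_single = multi_image_deconv(ys[-1:], hs[-1:])  # worst frame alone
print(np.linalg.norm(x_multi - x) < np.linalg.norm(x_single - x))  # True
```

Because the kernels differ, the frequencies attenuated in one frame survive in another, so the joint denominator stays well-conditioned; that is exactly the slide's "different frequency components are attenuated in different images".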
Deblurring on large blur (Shan et al, 2008)
(Shan et al., 2008) Comparison: input image, Shan et al., BlurBurst (us), tripod image.
Real Data Results, low-light scene. Exposure time: 1.6 s; 49×49 blur kernel; 6 input images at 512×512. Shown: input images, estimated blur kernels, Shan's single-image deblurring (2008), Cho's single-image deblurring (2009), Šroubek's multi-image deblurring (2012), and ours.
Real Data Results, low-light scene. Exposure time: 0.25 s; 39×39 blur kernel; 8 input images at 512×512. Same comparisons.
Real Data Results, telephoto and low light. Focal length: 300 mm; exposure time: 0.2 s; 49×49 blur kernel; 6 input images at 512×512. Same comparisons.
Coded Exposure Photography: Motion deblurring using fluttered shutter Raskar, Agrawal, and Tumblin SIGGRAPH 2006 Slide-deck credit: Raskar, Agrawal, Tumblin
Traditional camera: the shutter is OPEN for the entire exposure.
Our camera, flutter shutter: the shutter is OPENED and CLOSED during the exposure.
Comparison of Blurred Images
Implementation Completely Portable
Lab Setup
Blurring == convolution. Traditional camera: box filter, whose spectrum is a sinc function.
Flutter shutter: coded filter. Preserves high frequencies!
Comparison
Box filter: the inverse filter is unstable. Coded filter: the inverse filter is stable.
Motion Blur as Convolution
Comparison: short exposure, long exposure, coded exposure (our result), MATLAB Lucy-Richardson deconvolution, ground truth.
Are all codes good? Compare: our code, all ones, alternating, random.
We need to consider zero-padded codes!
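The difference between codes can be checked numerically. The sketch below compares the worst-case spectral magnitude (the minimum |H(f)| on a zero-padded DFT, as the slide suggests) of the all-ones "box" code against a binary code found by random search; this is an illustrative search, not Raskar et al.'s published code:

```python
# Why do some codes work and others not? A good code keeps min |H(f)|
# bounded away from zero; the box (all-ones) code has exact spectral nulls.
import numpy as np

def min_mtf(code, pad=1024):
    """Smallest |DFT| magnitude of the zero-padded code."""
    return np.abs(np.fft.fft(code, n=pad)).min()

N = 32
box = np.ones(N)                        # shutter open the whole time

rng = np.random.default_rng(3)
best_min = -1.0
for _ in range(2000):                   # random search over on/off codes
    c = rng.integers(0, 2, N).astype(float)
    best_min = max(best_min, min_mtf(c))

print(min_mtf(box) < 1e-9)              # True: the box code has exact nulls
print(best_min > 1e-2)                  # True: a good code avoids nulls
```

Zero-padding matters because it samples the code's continuous spectrum between the DFT bins; a code can look fine on an unpadded DFT while still having a null between bins.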
License Plate Retrieval
License Plate Retrieval
Input Image
Rectified Crop Deblurred Result
Varying Exposure Video: each photo's exposure time gives a box PSF whose Fourier transform has nulls, but different exposure times put the nulls at different frequencies, so there are no common nulls. Combined, the joint frequency spectrum preserves all frequencies: PSF null-filling. Slide courtesy Agrawal
Blurred Photos Deblurred Result Slide courtesy Agrawal
Key Idea: PSF null-filling. Individual non-invertible PSFs are combined into a jointly invertible PSF; information lost in any single photo is captured in some other photo. For motion deblurring, PSF null-filling is achieved by varying the exposure time of successive photos (photo 1, photo 2, photo 3). Slide courtesy Agrawal
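A minimal numerical illustration of null-filling, using our own toy numbers (two box PSFs whose lengths are coprime, so their spectral nulls fall at different frequencies; these are not the paper's actual exposure times):

```python
# Each exposure is a box PSF with exact nulls in its DFT, but boxes of
# coprime lengths place their nulls at different frequencies, so the joint
# spectrum sum_k |H_k|^2 never vanishes.
import numpy as np

N = 144
lengths = [9, 16]                                   # two exposure times
Hs = [np.fft.fft(np.ones(L) / L, n=N) for L in lengths]

individual = [np.abs(H).min() for H in Hs]          # per-photo worst case
joint = np.sqrt(np.abs(Hs[0]) ** 2 + np.abs(Hs[1]) ** 2)

print(max(individual) < 1e-9)   # True: each PSF alone is non-invertible
print(joint.min() > 1e-2)       # True: jointly, every frequency survives
```

Here the length-9 box has exact nulls at multiples of 16 frequency bins and the length-16 box at multiples of 9, and the two sets never coincide below bin 144.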
Single Image Deblurring (SID) vs. Multiple Image Deblurring (MID). Slide courtesy Agrawal
Motion Invariant Photography Levin, Sand, Cho, Durand, Freeman SIGGRAPH 2008 Slide-deck credit: Levin et al.
Motion blur: most of the scene is static; the can moves linearly from left to right.
Overcoming motion blur? Use a faster shutter, but that reduces the amount of light captured. When the shutter is as fast as physically possible and there is still blur, the computational solution is deconvolution.
Why is motion deblurring hard? We need to know the blur kernel (the motion velocity). With the correct kernel, the blurry input deconvolves to a sharp output; with the wrong kernel, the output remains degraded.
Why is motion deblurring hard? Need to know the blur kernel (motion velocity); need to segment the image. Here the entire image is deblurred with the kernel corresponding to the can's velocity.
Why is motion deblurring hard? Need to know the blur kernel (motion velocity); need to segment the image. Existing solutions: multiple input images or additional hardware (Bascle et al. 1996; Rav-Acha and Peleg 2005; Zheng 2005; Bar et al. 2007; Ben-Ezra and Nayar 2004; Yuan et al. 2007); single-image statistics with restricted assumptions (Fergus et al. 2006; Levin 2006; Shan et al. 2008).
Why is motion deblurring hard? Need to know the blur kernel (motion velocity); need to segment the image; information loss (reduced signal-to-noise ratio). Compare: blurred input, deblurred result, static input.
Why is motion deblurring hard? Need to know the blur kernel; need to segment the image; information loss. Existing approach: the flutter shutter (Raskar et al. 2006) closes and opens the shutter during exposure to achieve a broadband kernel, but it does not address kernel estimation or segmentation.
Counter-intuitive solution: to reduce motion blur, increase it by moving the camera as the picture is taken. This makes the blur invariant to motion, so it can be removed with spatially uniform deconvolution: the kernel is known (no need to estimate motion) and identical over the image (no need to segment). It also makes the blur easy to invert.
Inspiration: depth-invariant defocus. Wavefront coding, which manipulates an optical element (Cathey and Dowski 1994); varying the object/detector distance during integration (Hausler 1972; Nagahara, Kuthirummal, Zhou, and Nayar 2008).
Motion-invariant blur, disclaimers: assumes 1D motion (e.g. horizontal); degrades quality for static objects.
Controlling motion blur. Can we control motion blur?
Motion invariant blur
Parabolic sweep: sensor position x(t) = a t². Start by moving very fast to the right, continuously slow down until stopping, then continuously accelerate to the left. Intuition: for any object velocity, there is one instant where the sensor tracks it perfectly.
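The parabolic-sweep intuition can be checked numerically: with sensor motion x(t) = a t², the relative displacement of an object moving at constant velocity v is a t² − v t, which is the same parabola re-centered, so the resulting blur PSF is nearly identical for every v up to a spatial shift. The sketch below is a toy 1-D setup with our own parameter choices:

```python
# Compare how much the motion-blur PSF changes with object velocity for a
# parabolic-sweep camera vs. a static camera. The PSF is the histogram of
# relative object-sensor displacement over the exposure.
import numpy as np

a, T = 1.0, 2.0
t = np.linspace(-T / 2, T / 2, 200_000)   # exposure interval

def psf(rel_pos, bins=200, lo=0.0, hi=2.0):
    """Normalized histogram of relative displacement = blur PSF."""
    h, _ = np.histogram(rel_pos, bins=bins, range=(lo, hi))
    return h / h.sum()

def parabolic_psf(v):
    # a t^2 - v t + v^2/(4a) = a (t - v/(2a))^2: shifting by the vertex
    # offset v^2/(4a) aligns the PSFs for comparison.
    return psf(a * t**2 - v * t + v**2 / (4 * a))

def static_psf(v):
    return psf(-v * t + abs(v) * T / 2)   # centered box of width |v| T

d_parab = np.abs(parabolic_psf(0.0) - parabolic_psf(0.4)).sum()
d_static = np.abs(static_psf(0.0) - static_psf(0.4)).sum()
print(d_parab < d_static)   # parabolic blur is far less velocity-dependent
```

The invariance is only approximate: the re-centered parabola is clipped differently at the ends of the exposure, which is why the paper's claim is near-invariance for velocities well inside the swept range.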
Motion invariant blur
Static camera: unknown and variable blur kernels. Our parabolic input: the blur kernel is invariant to velocity. Our output after deblurring: NON-BLIND deconvolution.
Deblurring and information loss. Assume we could perfectly identify the blur kernel: which camera has motion blur that is easiest to invert? Static? Flutter shutter? Parabolic? We prove that parabolic motion achieves near-optimal information preservation.
Comparing camera reconstructions: blurred input and deblurred output for the static, flutter-shutter, and parabolic cameras. Note: synthetic rendering; the exact PSF is known.
Hardware construction: ideally, move the sensor (this requires the same hardware as existing stabilization systems). The prototype implementation instead rotates the camera using a variable-radius cam, a lever, and a rotating platform.
Linear rail: static camera input has unknown and variable blur; our parabolic input has blur that is invariant to velocity.
Linear rail: static camera input vs. our output after deblurring (NON-BLIND deconvolution).
Human motion (not perfectly linear): input from a static camera vs. deblurred output from our camera.
Violating the 1D motion assumption, forward motion: input from a static camera vs. deblurred output from our camera.
Violating the 1D motion assumption, stand-up motion: input from a static camera vs. deblurred output from our camera.
Violating the 1D motion assumption, rotation: input from a static camera vs. deblurred output from our camera.