Coded photography. 15-463, 15-663, 15-862 Computational Photography, Fall 2017, Lecture 18. http://graphics.cs.cmu.edu/courses/15-463

Course announcements Homework 5 is delayed until Tuesday. - You will need cameras for that one as well, so keep the ones you picked up for HW4. - It will be shorter than HW4. Project proposals are due on Tuesday the 31st. - Deadline extended by one day. One-to-one meetings this week. - Sign up for a slot using the spreadsheet posted on Piazza. - Make sure to read the instructions on the course website about the elevator pitch presentation.

Overview of today's lecture The coded photography paradigm. Dealing with depth blur: coded aperture. Dealing with depth blur: focal sweep. Dealing with depth blur: generalized optics. Dealing with motion blur: coded exposure. Dealing with motion blur: parabolic sweep.

Slide credits Most of these slides were adapted from: Frédo Durand (MIT). Anat Levin (Technion). Gordon Wetzstein (Stanford).

The coded photography paradigm

Conventional photography: real world → optics → captured image → computation → enhanced image. Optics capture something that is (close to) the final image. Computation mostly enhances the captured image (e.g., deblurring).

Coded photography: real world → generalized optics → coded representation of the real world → generalized computation → final image(s). Generalized optics encode the world into an intermediate representation. Generalized computation decodes that representation into one or more images. Can you think of any examples?

Early example: mosaicing (CFA demosaicing). real world → color filter array → coded representation (mosaic) → demosaicing → RGB image. The color filter array encodes color into a mosaic; demosaicing decodes the mosaic into an RGB image.
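
As a concrete illustration of the decode step, here is a minimal bilinear demosaicing sketch, assuming an RGGB Bayer layout (a toy, not the algorithm used in any particular camera):

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Bilinear demosaicing of an RGGB Bayer mosaic (toy sketch).
    raw: HxW sensor values, H and W even; returns an HxWx3 RGB estimate."""
    h, w = raw.shape
    # Masks marking which pixel sites carry which color (RGGB layout).
    r = np.zeros((h, w)); r[0::2, 0::2] = 1
    b = np.zeros((h, w)); b[1::2, 1::2] = 1
    g = 1.0 - r - b
    # Interpolation kernels: each missing value is the average of its
    # nearest recorded neighbors in that channel.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    rgb = np.zeros((h, w, 3))
    rgb[..., 0] = convolve(raw * r, k_rb)  # red: quarter-density grid
    rgb[..., 1] = convolve(raw * g, k_g)   # green: checkerboard
    rgb[..., 2] = convolve(raw * b, k_rb)  # blue: quarter-density grid
    return rgb
```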

Recent example: plenoptic camera (lightfield rendering). real world → plenoptic camera → lightfield → lightfield rendering → refocused or multi-viewpoint images. The plenoptic camera encodes the world into a lightfield; lightfield rendering decodes the lightfield into refocused or multi-viewpoint images.

Why are our images blurry?
- Lens imperfections → deconvolution (last lecture) [conventional photography]
- Camera shake → blind deconvolution (last lecture) [conventional photography]
- Scene motion → flutter shutter, motion-invariant photography [coded photography]
- Depth defocus → coded aperture, focal sweep, lattice lens [coded photography]

Dealing with depth blur: coded aperture

Defocus blur. Point spread function (PSF): the blur kernel of a (perfect) lens at some out-of-focus depth. [diagram: a point at the object distance, imaged while the lens is focused at a different distance, spreads into a blur kernel] What does the blur kernel depend on?

Defocus blur. Point spread function (PSF): the blur kernel of a (perfect) lens at some out-of-focus depth. The aperture determines the shape of the kernel; depth determines the scale of the blur kernel.
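
To make the depth dependence concrete (a standard thin-lens result, not shown on the slide): a lens of focal length $f$ and aperture diameter $A$, focused at distance $S_1$, images a point at distance $S_2$ into a blur circle of diameter

$$c \;=\; A\,\frac{|S_2 - S_1|}{S_2}\,\frac{f}{S_1 - f}.$$

The scale $c$ grows with the depth mismatch, while the kernel's shape is simply that of the aperture.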

Depth determines scale of blur kernel. [animation: as the object distance moves away from the focus distance, the PSF keeps the aperture's shape but grows in scale]

Aperture determines shape of blur kernel. [diagram: same setup; a differently shaped aperture yields a correspondingly shaped PSF]

Aperture determines shape of blur kernel. What causes these lines? [figure: photo of the aperture; shape of the aperture (optical transfer function, OTF); blur kernel (point spread function, PSF)] How do the OTF and PSF relate to each other?
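
Answering the slide's question with a standard Fourier-optics fact (not spelled out in the transcript): the OTF is the Fourier transform of the PSF,

$$\mathrm{OTF}(\nu_x, \nu_y) \;=\; \mathcal{F}\{\mathrm{PSF}\}(\nu_x, \nu_y),$$

so a PSF shaped like the aperture determines which spatial frequencies the blur preserves and which it kills.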

Removing depth defocus. [measured PSFs at different depths; input defocused image] How would you create an all-in-focus image given the above?

Removing depth defocus. Defocus is local convolution with a depth-dependent kernel: for each depth i, (blurred patch at depth i) = (sharp patch) ∗ (PSF at depth i). How would you create an all-in-focus image given the above? [measured PSFs at different depths; input defocused image]

Removing depth defocus. Deconvolve each image patch with all kernels, and select the right scale by evaluating the deconvolution results: for each depth i, (candidate patch i) = (blurred patch) ∗⁻¹ (PSF at depth i). How do we select the correct scale?

Removing depth defocus. Problem: with a standard aperture, deconvolution results at different scales look very similar, so the wrong scale is hard to tell apart from the correct one. [examples: deconvolution at the wrong scale vs. candidate correct scales]

Coded aperture. Solution: change the aperture so that it is easier to pick the correct scale: deconvolving with the wrong scale now produces visible artifacts, while the correct scale yields a clean result.
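
A minimal sketch of the scale-selection loop, assuming a bank of pre-measured PSFs at different depths and a Wiener deconvolver (the actual paper, Levin et al. 2007, uses a sparse-derivative prior and a more careful energy; this is only illustrative):

```python
import numpy as np
from numpy.fft import fft2, ifft2

def wiener_deconv(blurred, psf, snr=1e-2):
    """Wiener deconvolution of a patch with one candidate PSF."""
    H = fft2(psf, s=blurred.shape)
    return np.real(ifft2(np.conj(H) / (np.abs(H) ** 2 + snr) * fft2(blurred)))

def select_scale(blurred, psf_bank, lam=0.01):
    """Try every candidate kernel scale; keep the one whose deconvolution
    both re-explains the observed patch and has sparse gradients."""
    best_i, best_cost = None, np.inf
    for i, psf in enumerate(psf_bank):
        latent = wiener_deconv(blurred, psf)
        reblurred = np.real(ifft2(fft2(psf, s=blurred.shape) * fft2(latent)))
        data_term = np.sum((reblurred - blurred) ** 2)
        # Hyper-Laplacian gradient penalty: natural sharp images have
        # sparse gradients, ringing from a wrong scale does not.
        gy, gx = np.gradient(latent)
        prior_term = np.sum(np.abs(gx) ** 0.8 + np.abs(gy) ** 0.8)
        cost = data_term + lam * prior_term
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i  # index of the selected depth/scale
```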

Coded aperture changes shape of kernel. [diagram: a coded mask in the aperture; the PSF at each depth is a scaled version of the coded pattern]

Coded aperture changes shape of PSF

Coded aperture changes shape of PSF. The new PSF preserves high frequencies, so there is more content available to help us determine the correct depth.

Input

All-focused (deconvolved)

Comparison between standard and coded aperture Ringing due to wrong scale estimation

Comparison between standard and coded aperture

Refocusing

Refocusing

Refocusing

Depth estimation

Input

All-focused (deconvolved)

Refocusing

Refocusing

Refocusing

Depth estimation

Any problems with using a coded aperture?

Any problems with using a coded aperture? We lose a lot of light due to blocking. The deconvolution becomes harder due to more diffraction and more zeros in the frequency domain. We still need to select the correct scale.

Dealing with depth blur: focal sweep

The difficulty of dealing with depth defocus. [animation: varying the in-focus distance; PSFs for an object at depth 1 and for an object at depth 2] At every focus setting, objects at different depths are blurred by different PSFs. As we sweep through the focus settings, every scene point is blurred by all possible PSFs.

Focal sweep. Go through all focus settings during a single exposure. [varying in-focus distance; PSFs for objects at depth 1 and depth 2] What is the effective PSF in this case?

Focal sweep. Go through all focus settings during a single exposure. The effective PSF is the time integral of the instantaneous PSFs: $\bar{k}_1 = \int k_1(t)\,dt$ for an object at depth 1, and $\bar{k}_2 = \int k_2(t)\,dt$ for an object at depth 2. Anything special about these effective PSFs?

Focal sweep. The effective PSF is: 1. Depth-invariant: all points are blurred the same way regardless of depth. 2. Never sharp: all points will be blurry regardless of depth. What are the implications of this? 1. The image we capture will not be sharp anywhere; but 2. we can use simple (global) deconvolution to sharpen the parts we want. Can we estimate depth from this? Can we do refocusing from this? Depth-invariance of the PSF means that we have lost all depth information.
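
A small numerical check of the depth-invariance claim (a toy simulation with pillbox PSFs and a linear sweep; all parameters here are made up for illustration):

```python
import numpy as np

def disk_psf(radius, size=65):
    """Pillbox PSF with the given blur-circle radius, normalized to sum to 1."""
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    mask = (np.hypot(x, y) <= max(radius, 0.5)).astype(float)
    return mask / mask.sum()

def focal_sweep_psf(depth, sweep=10.0, steps=201, size=65):
    """Effective PSF for a point that comes into focus at time `depth` during
    a linear sweep over [-sweep/2, +sweep/2]; the instantaneous blur radius
    is |t - depth|. The effective PSF is the time-average of the disks."""
    acc = np.zeros((size, size))
    for t in np.linspace(-sweep / 2, sweep / 2, steps):
        acc += disk_psf(abs(t - depth), size)
    return acc / acc.sum()

# Effective PSFs for points at different depths are nearly identical:
near, far = focal_sweep_psf(0.0), focal_sweep_psf(2.0)
print(np.abs(near - far).sum())  # small L1 distance -> approximately depth-invariant
```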

How can you implement focal sweep?

How can you implement focal sweep? Use a translation stage to move the sensor relative to a fixed lens during the exposure, or rotate the focusing ring to move the lens relative to a fixed sensor during the exposure.

Comparison of different PSFs

Depth of field comparisons

Any problems with using focal sweep?

Any problems with using focal sweep? We have moving parts (vibrations, motion blur). Perfect depth invariance requires a very constant sweep speed. We lose depth information.

Dealing with depth blur: generalized optics

Change optics, not aperture. [diagram: lens, object distance, focus distance, PSF]

Wavefront coding. Replace the lens with a cubic phase plate. [diagram: cubic phase plate in place of the lens]

Wavefront coding. [comparison: ray diagrams for a standard lens vs. wavefront coding] Rays no longer converge; the PSF is approximately depth-invariant for a certain range of depths.
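
For reference, the cubic phase profile comes from Dowski and Cathey (1995), cited in the references below (the specific form is from that paper, not this transcript):

$$\phi(x, y) \;=\; \alpha\,(x^{3} + y^{3}),$$

a pupil-plane phase whose modulation transfer function stays nearly constant over a wide range of defocus, which is what makes a single deconvolution kernel usable at all depths.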

Lattice lens. Add a lenslet array with varying focal lengths in front of the lens. [diagram: object distance, focus distance]

Lattice lens Does this remind you of something?

Lattice lens. Effectively captures only the useful subset of the 4D lightfield: the lightfield spectrum is 4D, but an image spectrum is 2D and depth is 1D, so only the 3D manifold corresponding to physical focusing distances is useful; the rest is the "dimensionality gap" (Ng 2005). The PSF is not depth-invariant, so local deconvolution is required, as in the coded aperture approach. [PSFs at different depths]

Standard lens Results

Lattice lens Results

Standard lens Results

Lattice lens Results

Standard lens Results

Lattice lens Results

Refocusing example

Refocusing example

Refocusing example

Comparison of different techniques. Depth of field comparison, for an object at the in-focus depth and an object at an extreme depth; the slide ranks the techniques roughly as: standard lens < coded aperture << focal sweep < wavefront coding < lattice lens.

Diffusion coded photography. Can you think of any issues?

Dealing with motion blur

Why are our images blurry? (recap)
- Lens imperfections → deconvolution (last lecture) [conventional photography]
- Camera shake → blind deconvolution (last lecture) [conventional photography]
- Scene motion → flutter shutter, motion-invariant photography [coded photography]
- Depth defocus → coded aperture, focal sweep, lattice lens [coded photography]

Motion blur. Most of the scene is static; the can is moving linearly from left to right.

Motion blur: (blurry image of moving object) = (motion blur kernel) ∗ (sharp image of static object). What does the motion blur kernel depend on?

Motion blur: (blurry image of moving object) = (motion blur kernel) ∗ (sharp image of static object). What does the motion blur kernel depend on? Motion velocity determines the direction of the kernel; shutter speed determines the width of the kernel. Can we use deconvolution to remove motion blur?
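
Written out (a standard model consistent with the slide, notation mine): for an object translating at constant velocity $v$ during an exposure of length $T$, the kernel is a 1D box along the motion direction,

$$k(x) \;=\; \frac{1}{vT}\,\mathrm{rect}\!\left(\frac{x}{vT}\right), \qquad \hat{k}(\nu) \;=\; \mathrm{sinc}(vT\nu),$$

and the periodic zeros of the sinc spectrum are exactly what makes naive deconvolution unstable, which motivates the coded exposure below.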

Challenges of motion deblurring Blur kernel is not invertible. Blur kernel is unknown. Blur kernel is different for different objects.

Challenges of motion deblurring Blur kernel is not invertible. How would you deal with this? Blur kernel is unknown. Blur kernel is different for different objects.

Dealing with motion blur: coded exposure

Coded exposure, a.k.a. flutter shutter. Code the exposure (i.e., open and close the shutter during the exposure) to make the motion blur kernel better conditioned. Traditional camera: (blurry image of moving object) = (box motion blur kernel) ∗ (sharp image of static object). Flutter-shutter camera: (blurry image of moving object) = (coded motion blur kernel) ∗ (sharp image of static object).

How would you implement coded exposure?

How would you implement coded exposure? [hardware: electronics for external shutter control, and a very fast external shutter]

Coded exposure, a.k.a. flutter shutter. [motion blur kernel in the time domain and in the Fourier domain] Why is flutter shutter better?

Coded exposure, a.k.a. flutter shutter. Why is flutter shutter better? [motion blur kernels in the time domain and in the Fourier domain] For the box kernel, zeros in the spectrum make the inverse filter unstable; for the fluttered kernel, the inverse filter is stable.
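
A quick numerical illustration of that spectral claim (using a random on/off code for concreteness; Raskar et al. use a specifically optimized 52-chop code, which this is not):

```python
import numpy as np

n = 52                          # number of shutter "chops" during the exposure
box = np.ones(n)                # conventional exposure: shutter open the whole time
rng = np.random.default_rng(0)
code = rng.integers(0, 2, n).astype(float)  # illustrative random on/off code
code[0] = code[-1] = 1.0        # keep the exposure endpoints open

def min_spectrum_magnitude(c, pad=4096):
    """Smallest magnitude of the exposure code's spectrum; near-zeros here
    mean the deconvolution (inverse) filter blows up at those frequencies."""
    return np.abs(np.fft.rfft(c, pad)).min()

print(min_spectrum_magnitude(box))   # close to 0: unstable inverse filter
print(min_spectrum_magnitude(code))  # typically much larger: stable inverse
```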

Motion deblurring comparison. [blurry inputs and deconvolved outputs for conventional vs. flutter-shutter photography]

Challenges of motion deblurring Blur kernel is not invertible. Blur kernel is unknown. How would you deal with these two? Blur kernel is different for different objects.

Dealing with motion blur: parabolic sweep

Motion-invariant photography Introduce extra motion so that: Everything is blurry; and The blur kernel is motion invariant (same for all objects). How would you achieve this?

Parabolic sweep
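
Why a parabola? Summarizing the argument of Levin et al. (2008) from the references (the notation is mine): sweep the camera with constant acceleration,

$$x_{\text{cam}}(t) \;=\; \tfrac{1}{2}\,a\,t^{2}, \qquad t \in [-T/2,\, T/2],$$

so its velocity $a t$ momentarily matches any object velocity $v$ with $|v| \le aT/2$ at the instant $t = v/a$; integrating over the exposure then gives approximately the same blur kernel for every such object, which makes the deconvolution non-blind.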

Hardware implementation. Approximate a small translation by a small rotation. [mechanism: variable-radius cam, lever, rotating platform]

Some results static camera input - unknown and variable blur parabolic input - blur is invariant to velocity

Some results static camera input - unknown and variable blur output after deconvolution Is this blind or non-blind deconvolution?

Some results static camera input parabolic camera input deconvolution output

Some results static camera input output after deconvolution Why does it fail in this case?

References

Basic reading:
- Levin et al., "Image and depth from a conventional camera with a coded aperture," SIGGRAPH 2007.
- Veeraraghavan et al., "Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing," SIGGRAPH 2007. These are the two papers introducing the coded aperture for depth and refocusing; the first covers deblurring in more detail, whereas the second deals with optimal mask selection and includes a very interesting lightfield analysis.
- Nagahara et al., "Flexible depth of field photography," ECCV 2008 and PAMI 2010. The focal sweep paper.
- Dowski and Cathey, "Extended depth of field through wave-front coding," Applied Optics 1995. The wavefront coding paper.
- Levin et al., "4D frequency analysis of computational cameras for depth of field extension," SIGGRAPH 2009. The lattice focal lens paper, which also includes a discussion of wavefront coding.
- Cossairt et al., "Diffusion coded photography for extended depth of field," SIGGRAPH 2010. The diffusion coded photography paper.
- Raskar et al., "Coded exposure photography: motion deblurring using fluttered shutter," SIGGRAPH 2006. The flutter shutter paper.
- Levin et al., "Motion-invariant photography," SIGGRAPH 2008. The motion-invariant photography paper.

Additional reading:
- Zhang and Levoy, "Wigner distributions and how they relate to the light field," ICCP 2009. This paper has a nice discussion of wavefront coding, in addition to an analysis of lightfields and their relationship to wave optics concepts.
- Gehm et al., "Single-shot compressive spectral imaging with a dual-disperser architecture," Optics Express 2007. This paper introduces the use of coded apertures for hyperspectral imaging, instead of depth and refocusing.