Refocusing & Light Fields. Wavefront coding. Frédo Durand, Bill Freeman, MIT - EECS


6.098 Digital and Computational Photography / 6.882 Advanced Computational Photography
Refocusing & Light Fields. Wavefront coding.
Frédo Durand, Bill Freeman, MIT - EECS

Final projects
- Send your slides by noon on Thursday.
- Send your final report.

Is depth of field a blur?
- Depth of field is NOT a convolution of the image.
- The circle of confusion varies with depth.
- There are interesting occlusion effects.
- (If you really want a convolution, there is one, but in 4D space; more soon.)
- From Macro Photography

Wavefront coding
- CDM-Optics, U of Colorado, Boulder
- The worst title ever: "A New Paradigm for Imaging Systems", Cathey and Dowski, Appl. Optics, 2002
- Improve depth of field using weird optics & deconvolution
- http://www.cdm-optics.com/site/publications.php

Wavefront coding
- Idea: deconvolution to deblur out-of-focus regions.
- Convolution = filter (e.g. blur, sharpen).
- Sometimes we can cancel a convolution with another convolution, like applying sharpen after blur (kind of); this is called deconvolution.
- Best studied in the Fourier domain (of course!): convolution = multiplication of spectra, deconvolution = multiplication by the inverse spectrum (see the sketch below).
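A minimal NumPy sketch (not from the slides) of the relationship the last bullet states: blurring an image is equivalent to multiplying the spectra of image and kernel. The image size and the small Gaussian-like kernel are arbitrary choices for illustration.

```python
import numpy as np

# Illustration only: spatial (circular) convolution equals
# multiplication of spectra in the Fourier domain.
rng = np.random.default_rng(0)
image = rng.random((64, 64))

# A small blur kernel embedded in an array of the same size as the image.
kernel = np.zeros((64, 64))
kernel[:3, :3] = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 16.0

F_image = np.fft.fft2(image)
F_kernel = np.fft.fft2(kernel)

# Blur by multiplying spectra, then return to the spatial domain.
blurred = np.real(np.fft.ifft2(F_image * F_kernel))
```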

Deconvolution
- Assume we know the blurring kernel k: f' = f * k (convolution), so F' = F K in Fourier space.
- Invert by F = F' / K (in Fourier space).
- Well-known problems with deconvolution:
  - Impossible to invert for frequencies ω where K(ω) = 0.
  - Numerically unstable when K(ω) is small.

Wavefront coding
- Idea: deconvolution to deblur out-of-focus regions.
- Problem 1: depth of field blur is not shift-invariant; it depends on depth. If depth of field is not a convolution, it's harder to use deconvolution ;-(
- Problem 2: depth of field blur "kills information". The Fourier transform of the blurring kernel has lots of zeros, so deconvolution is ill-posed.

Wavefront coding (ray version)
- Idea: deconvolution to deblur out-of-focus regions.
- Problem 1: depth of field blur is not shift-invariant.
- Problem 2: depth of field blur "kills information".
- Solution: change the optical system so that
  - rays don't converge anymore,
  - the image blur is the same for all depths,
  - the blur spectrum does not have too many zeros.
- How it's done: a phase plate (a wave-optics effect, diffraction) that essentially bends light, with an effect similar to spherical aberration.
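Continuing the sketch above (again an illustration, not the slides' code): naive inversion F = F'/K fails where K(ω) is zero and is unstable where it is small. The eps term below is a crude Wiener-style regularizer; its value and the function name are assumptions.

```python
import numpy as np

def deconvolve(blurred, kernel, eps=1e-3):
    """Invert a known circular blur: F = F'/K, damped where K is small.

    Dividing by K directly is impossible where K(w) = 0 and unstable
    where |K| is tiny (the "kills information" problem); eps is a crude
    Wiener-style regularizer, not the slides' method.
    """
    K = np.fft.fft2(kernel, s=blurred.shape)
    F_blurred = np.fft.fft2(blurred)
    # Regularized inverse filter: conj(K) / (|K|^2 + eps)
    inv = np.conj(K) / (np.abs(K) ** 2 + eps)
    return np.real(np.fft.ifft2(F_blurred * inv))
```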

Other application: single-image depth sensing
- Blur depends A LOT on depth.
- Passive Ranging Through Wave-Front Coding: Information and Application. Johnson, Dowski, Cathey.
  http://graphics.stanford.edu/courses/cs448a-06-winter/johnson-ranging-optics00.pdf

Important take-home idea: coded imaging
- What the sensor records is not the image we want; it has been coded (kind of like in cryptography).
- Image processing decodes it.

Other forms of coded imaging
- Tomography, e.g. http://en.wikipedia.org/wiki/Computed_axial_tomography (lots of cool Fourier transforms there).
- X-ray telescopes & coded aperture, e.g. http://universe.gsfc.nasa.gov/cai/coded_intr.html
- Ramesh's motion blur and, to some extent, Bayer mosaics.
- See Berthold Horn's course.

Plenoptic camera refocusing

Plenoptic/light field cameras
- Lippmann 1908: "window to the world".
- Adelson and Wang, 1992: depth computation.
- Revisited by Ng et al. for refocusing.

The Plenoptic Function
- Back to the images that surround us: how do we describe (and capture) all the possible images around us?
- The plenoptic function [Adelson & Bergen 91]
  http://web.mit.edu/persci/people/adelson/pub_pdfs/elements91.pdf
- From the Greek for "total".
- See also http://www.everything2.com/index.pl?node_id=989303&lastnode_id=1102051

Plenoptic function
- 3D for viewpoint, 2D for ray direction, 1D for wavelength, 1D for time (7D total).
- Light fields can add polarization.
- From McMillan 95

Idea
- Restrict attention to outside the convex hull of a scene.
- For every line in space, store the RGB radiance.
- How many dimensions for 3D lines? 4: e.g. 2 for direction, 2 for intersection with a plane.
- Then rendering is just a lookup (see the sketch below).
- Two major publications in 1996:
  - Light field rendering [Levoy & Hanrahan] http://graphics.stanford.edu/papers/light/
  - The Lumigraph [Gortler et al.], which adds some depth information http://cs.harvard.edu/~sjg/papers/lumigraph.pdf

Two-plane parameterization
- A line is parameterized by its intersections with 2 planes.
- Careful: there are different "isotopes" of this parameterization (slightly different meanings of s, t, u, v).

Let's make life simpler: 2D
- How many dimensions for 2D lines? Only 2, e.g. y = ax + b corresponds to (a, b).
- 2-line parameterization. View?
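A hedged sketch of "rendering is just a lookup" for a two-plane parameterization. The 4D array layout L[u, v, s, t], the plane positions, and the nearest-neighbor sampling are assumptions for illustration; real renderers resample (e.g. quadrilinear interpolation) and must be careful about which "isotope" of the parameterization is in use.

```python
import numpy as np

def ray_to_uvst(origin, direction, z_uv=0.0, z_st=1.0):
    """Intersect a ray with the two parameterization planes z = z_uv and z = z_st.

    Returns (u, v, s, t): the x, y coordinates of the two intersections.
    """
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    t_uv = (z_uv - o[2]) / d[2]
    t_st = (z_st - o[2]) / d[2]
    u, v = (o + t_uv * d)[:2]
    s, t = (o + t_st * d)[:2]
    return u, v, s, t

def lookup_radiance(lightfield, u, v, s, t, extent=1.0):
    """Nearest-neighbor lookup into a discretized light field L[u, v, s, t].

    `lightfield` has shape (Nu, Nv, Ns, Nt, 3); coordinates in
    [-extent, extent] are mapped to indices. Illustration only:
    no interpolation or boundary blending.
    """
    dims = lightfield.shape[:4]
    coords = [u, v, s, t]
    idx = [int(round((c + extent) / (2 * extent) * (n - 1)))
           for c, n in zip(coords, dims)]
    idx = [np.clip(i, 0, n - 1) for i, n in zip(idx, dims)]
    return lightfield[tuple(idx)]
```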

View?
- A view is a line in ray space.
- Kind of cool: a ray maps to a point, and the set of rays through a viewpoint maps to a line; there is a duality.

Back to 3D/4D
- From Gortler et al.
- Cool visualization (from Gortler et al.).
- A view = a 2D plane in 4D, with various resampling issues.
- Demo: light field viewer

Reconstruction, antialiasing, depth of field
- Slide by Marc Levoy

Aperture reconstruction
- So far we have talked about the pinhole view.
- Aperture reconstruction gives depth of field and better antialiasing.
- Small aperture vs. big aperture (slide by Marc Levoy; images Isaksen et al.). See the sketch below.

Light field sampling
- [Chai et al. 00, Isaksen et al. 00, Stewart et al. 03]
- Light field spectrum as a function of object distance: slope inversely proportional to depth.
- http://graphics.cs.cmu.edu/projects/plenoptic-sampling/ps_projectpage.htm
- http://portal.acm.org/citation.cfm?id=344779.344929
- Images Isaksen et al.; from [Chai et al. 2000]
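A rough sketch (assumptions as above: layout L[u, v, s, t, 3], no interpolation) of aperture reconstruction from a sampled light field: a pinhole view keeps a single (u, v) sample, while averaging many (u, v) samples simulates a larger synthetic aperture and hence shallower depth of field.

```python
import numpy as np

def pinhole_view(lightfield, u_idx, v_idx):
    """One sub-aperture image: fix a single (u, v) sample (a 'small aperture')."""
    return lightfield[u_idx, v_idx]

def synthetic_aperture(lightfield, radius):
    """Average sub-aperture images around the center of the (u, v) plane.

    A larger radius integrates more of the aperture, giving shallower
    depth of field (no refocusing yet; see the refocusing sketch below).
    """
    Nu, Nv = lightfield.shape[:2]
    cu, cv = Nu // 2, Nv // 2
    views = [lightfield[i, j]
             for i in range(max(0, cu - radius), min(Nu, cu + radius + 1))
             for j in range(max(0, cv - radius), min(Nv, cv + radius + 1))]
    return np.mean(views, axis=0)
```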

Light field cameras
- Plenoptic camera for depth extraction: Adelson & Wang 92
  http://www-bcs.mit.edu/people/jyawang/demos/plenoptic/plenoptic.html
- Camera array: Wilburn et al. http://graphics.stanford.edu/papers/cameraarray/

Camera arrays
- http://graphics.stanford.edu/projects/array/
- MIT version: Jason Yang

Bullet time
- Time splice http://www.ruffy.com/frameset.htm
- Robotic camera (images Leonard McMillan, Levoy et al.)
- Flatbed scanner camera by Jason Yang

Plenoptic camera refocusing
- Conventional photograph vs. light field photography: capture the light field inside the camera body.

Hand-Held Light Field Camera
- Medium format digital camera (camera in use).
- 16 megapixel sensor, microlens array.

Light field in a single exposure
- Light field inside the camera body.
- Digital refocusing

Digital refocusing / digitally stopping down
- Stopping down = summing only the central portion of each microlens.

Digital refocusing by ray-tracing
- [Diagrams: rays in (x, u) space traced from the sensor through the lens to an imaginary film plane.]
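A hedged shift-and-add sketch of digital refocusing, as a discrete approximation of the ray-tracing picture above: each sub-aperture image is translated in proportion to its (u, v) offset before averaging, and varying the shift moves the virtual film plane. The parameter name `shift_per_sample` and the integer shifts via np.roll are simplifying assumptions, not the prototype's implementation.

```python
import numpy as np

def refocus(lightfield, shift_per_sample):
    """Shift-and-add refocusing over all sub-aperture images.

    `lightfield` has shape (Nu, Nv, H, W, 3). Each sub-aperture image is
    shifted proportionally to its offset from the aperture center, then
    averaged; varying `shift_per_sample` moves the virtual film plane.
    Integer shifts via np.roll are a simplification (real pipelines
    resample with sub-pixel interpolation).
    """
    Nu, Nv = lightfield.shape[:2]
    cu, cv = (Nu - 1) / 2.0, (Nv - 1) / 2.0
    acc = np.zeros(lightfield.shape[2:], dtype=float)
    for i in range(Nu):
        for j in range(Nv):
            dy = int(round(shift_per_sample * (i - cu)))
            dx = int(round(shift_per_sample * (j - cv)))
            acc += np.roll(lightfield[i, j], shift=(dy, dx), axis=(0, 1))
    return acc / (Nu * Nv)
```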

Digital refocusing by ray-tracing (continued)
- [Diagram: rays in (x, u) space, imaginary film, lens, sensor.]

Results of band-limited analysis
- Assume a light field camera with an f/A lens and N x N pixels under each microlens.
- Show result video.
- From its light fields we can refocus exactly within the depth of field of an f/(A N) lens.
- In our prototype camera: the lens is f/4 with 12 x 12 pixels under each microlens, so theoretically we can refocus within the depth of field of an f/(4 x 12) = f/48 lens.

Automultiscopic displays
- 3D displays (with Matthias, Wojciech & Hans).
- View-dependent pixels.
- Lenticular optics (microlenses); barrier.

Lenticular optics
- Application: 3D screens are shipping!
- Figure by Isaksen et al.

Light field microscopy
- http://graphics.stanford.edu/projects/lfmicroscope/

Show video.

Conclusions: Computational Photography (slide by Ramesh)
- [Block diagram: generalized optics (4D ray bender, up-to-4D ray sampler), generalized sensor, processing/reconstruction, novel cameras; programmable 4D illumination field + time + wavelength via light sources and modulators; 4D light field display to recreate the 4D light field; the scene as an 8D ray modulator.]