Computational Cameras Rahul Raguram COMP 790-090

What is a computational camera? Traditional camera: 3D scene -> camera optics -> camera sensor -> final image. Computational camera: 3D scene -> modified optics -> camera sensor -> coded image -> compute -> final image plus additional information.

Computational cameras, examples: catadioptric cameras, HDR imaging with assorted pixels, multiview radial cameras, time-of-flight cameras. (Sources: S. K. Nayar, 2006; L. Guan and M. Pollefeys, 2008)

The aperture (Glossographia Anglicana Nova, 1707): the diameter of the lens opening (controlled by the diaphragm), expressed as a fraction of the focal length (the f-number). f/2.0 with a 50mm lens: the aperture is 25mm. f/2.0 with a 100mm lens: the aperture is 50mm. Typical f-numbers: f/1.4, f/2, f/2.8, f/4, f/5.6, f/8. See a pattern?
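The pattern: aperture diameter equals focal length divided by the f-number, so each standard stop shrinks the diameter by roughly the square root of two and halves the light. A quick illustrative sketch (mine, not from the slides; values arbitrary):

def aperture_diameter_mm(focal_length_mm, f_number):
    # aperture diameter = focal length / f-number
    return focal_length_mm / f_number

# f/2.0 on a 50 mm lens -> 25 mm; on a 100 mm lens -> 50 mm
print(aperture_diameter_mm(50, 2.0), aperture_diameter_mm(100, 2.0))

# consecutive standard f-numbers differ by roughly sqrt(2): each stop halves the light
stops = [1.4, 2.0, 2.8, 4.0, 5.6, 8.0]
print([round(b / a, 2) for a, b in zip(stops, stops[1:])])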

Varying the aperture: a small aperture gives a large depth of field.

Varying the aperture: a large aperture gives a small depth of field. Bokeh (derived from Japanese boke ぼけ, a noun form of bokeru ぼける, "become blurred or fuzzy").

Multi-Aperture Photography Paul Green MIT CSAIL Wenyang Sun MERL Wojciech Matusik MERL Frédo Durand MIT CSAIL Slides by Green et al.

Motivation: depth of field control. A large aperture gives a shallow depth of field (portrait); a small aperture gives a large depth of field (landscape). (Images: http://photographertips.net)

Depth and defocus blur: rays from a subject point on the plane of focus pass through the lens and converge to a single pixel on the sensor; away from the plane of focus they spread into a circle of confusion. Defocus blur depends on distance from the plane of focus.

Defocus blur and aperture: the circle of confusion, and hence the defocus blur, also depends on the aperture size.
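As a concrete, hedged illustration of these two dependencies, here is a thin-lens sketch of the blur-circle diameter; the function name and numbers are mine, not from the talk:

def blur_circle_mm(focal_mm, f_number, focus_mm, subject_mm):
    # thin-lens model: diameter of the circle of confusion on the sensor
    aperture = focal_mm / f_number
    v_focus = focal_mm * focus_mm / (focus_mm - focal_mm)        # sensor plane position
    v_subject = focal_mm * subject_mm / (subject_mm - focal_mm)  # where the subject focuses
    return aperture * abs(v_focus - v_subject) / v_subject

# 50 mm lens focused at 2 m, subject at 4 m: the blur doubles from f/8 to f/4
print(blur_circle_mm(50, 8, 2000, 4000), blur_circle_mm(50, 4, 2000, 4000))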

Goals: aperture size is a critical parameter for photographers; enable post-exposure depth of field control and extrapolate a shallow depth of field beyond the physical aperture.

Outline: the multi-aperture camera, a new camera design that captures multiple aperture settings simultaneously. Applications: depth of field control, depth of field extrapolation, (limited) refocusing.

Related work: computational cameras. Plenoptic cameras (Adelson and Wang 92; Ng et al. 05; Georgiev et al. 06), split-aperture camera (Aggarwal and Ahuja 04), optical splitting trees (McGuire et al. 07), coded aperture (Levin et al. 07; Veeraraghavan et al. 07), wavefront coding (Dowski and Cathey 95), depth from defocus (Pentland 87; Adelson and Wang 92).

Plenoptic cameras: a lenslet array in front of the sensor captures the 4D light field, 2D spatial (x, y) and 2D angular (u, v, over the lens aperture). They trade resolution for flexibility after capture: refocusing, depth of field control, and improved noise characteristics.
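To make "refocusing after capture" concrete, here is a minimal shift-and-add sketch over a 4D light field; the array layout L[u, v, y, x] and the refocus parameter alpha are assumptions for illustration, not details from the talk:

import numpy as np
from scipy.ndimage import shift

def refocus(lightfield, alpha):
    # lightfield: array of shape (U, V, H, W), one sub-aperture view per (u, v)
    # alpha: refocus parameter; 0 keeps the captured focus plane
    U, V, H, W = lightfield.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # shift each view in proportion to its position in the aperture, then average
            out += shift(lightfield[u, v], (alpha * (u - cu), alpha * (v - cv)), order=1)
    return out / (U * V)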

1D vs. 2D aperture sampling: sampling the aperture on a 2D grid requires many samples (45), while sampling it as 1D concentric rings requires only a few (4).

Goals: post-exposure depth of field control, extrapolating a shallow depth of field, (limited) refocusing; with 1D aperture sampling, no beamsplitters, a single sensor, and a removable design.

Optical design principles: 3D sampling, 2D spatial plus 1D aperture size; one image for each aperture ring.

Aperture splitting: incoming light is split by tilted mirrors and directed through focusing lenses onto the sensor.

Aperture splitting should ideally happen at the aperture plane, but that is not physically possible. Solution: relay optics create a virtual (new) aperture plane where the splitting optics can sit, behind the photographic lens and relay lens system.

Optical prototype: SLR camera, main lens, relay optics, tilted mirrors and lenses (mirror close-up shown).

Sample Data Raw data from our camera

Point spread function: the inner, ring 1, ring 2, and outer components combine into the full PSF. Ideally these would be clean rings; the gaps are from occlusion.

Outline: the multi-aperture camera, a new camera design that captures multiple aperture settings simultaneously. Applications: depth of field control, depth of field extrapolation, refocusing.

DOF navigation: browsing the captured images I_0, I_1, I_2, I_3.

DOF extrapolation? Approximate defocus blur as a convolution: I_n = I_0 * K(σ_n), where K(σ) is a circular-aperture blurring kernel and σ_n depends on depth and aperture size. The extrapolated image I_E is synthesized as I_0 * K(σ_E) with an extrapolated blur size.

DOF extrapolation roadmap: capture -> estimate blur -> fit model -> extrapolate blur. [plot: blur size vs. aperture diameter, showing I_0 through I_3 up to the largest physical aperture and I_E beyond it]
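A single-depth sketch of that roadmap (the actual method estimates blur as a function of depth across the image; the linear fit, disk kernel, and names below are my assumptions):

import numpy as np
from scipy.signal import fftconvolve

def disk_kernel(radius):
    # circular-aperture (pillbox) blur kernel with the given radius in pixels
    r = max(int(np.ceil(radius)), 1)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    k = (x * x + y * y <= radius * radius).astype(float)
    return k / k.sum()

def extrapolate_dof(i_sharp, apertures, blur_sizes, target_aperture):
    # fit blur size vs. aperture diameter from the captured images I_0..I_3,
    # extrapolate past the largest physical aperture, and synthesize I_E
    slope, intercept = np.polyfit(apertures, blur_sizes, 1)
    sigma_e = slope * target_aperture + intercept
    return fftconvolve(i_sharp, disk_kernel(sigma_e), mode='same')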

Summary: a multi-aperture camera with 1D sampling of the aperture, removable; post-exposure depth of field control; depth of field extrapolation.

Image and Depth from a Conventional Camera with a Coded Aperture Anat Levin, Rob Fergus, Frédo Durand, William Freeman MIT CSAIL Slides by Levin et al.

Single input image. Output #1: a depth map. Output #2: an all-focused image.

Lens and defocus: light from a point source on the focal plane passes through the lens aperture and converges to a point on the camera sensor; for a defocused point (an object off the focal plane), the light spreads into a point spread function whose size depends on the distance from the focal plane.

Depth and defocus: depth from defocus infers depth by analyzing the local scale of the defocus blur (in-focus vs. out-of-focus regions).

Challenges: it is hard to discriminate a smooth scene from defocus blur, and hard to undo defocus blur (the input shows ringing with a conventional deblurring algorithm).

Key contributions: exploit a prior on natural images (to improve deconvolution and improve depth discrimination, natural vs. unnatural), and use a coded aperture (a mask inside the lens) to make defocus patterns different from natural images and easier to discriminate.

Defocus as local convolution: input defocused image; calibrated blur kernels at different depths.

Defocus as local convolution: each local sub-window y of the input defocused image is modeled as y = f_k * x, where x is the sharp sub-window and f_k is the calibrated blur kernel at depth k (k = 1, 2, 3, ...).

Overview: try deconvolving local input windows with differently scaled filters (larger scale, correct scale, smaller scale); somehow, select the best scale.

Challenges: it is hard to deconvolve even when the kernel is known (ringing with the traditional Richardson-Lucy deconvolution algorithm), and hard to identify the correct scale (larger, correct, smaller).

Deconvolution is ill posed: given the kernel f and observation y with f * x = y, many different sharp images x (solution 1, solution 2, ...) explain the data equally well.

Idea 1: a prior on natural images. What makes images special? Natural images have sparse image gradients (unlike unnatural ones), so put a penalty on gradients.

Deconvolution with a prior: x = argmin_x ||f * x - y||² + λ Σ_i ρ(∇x_i), i.e., convolution error plus a derivatives prior. Two candidate solutions can have equal convolution error, yet the prior cost is low for the natural one and high for the unnatural one.

Comparing deconvolution algorithms: Richardson-Lucy; a Gaussian prior ρ(z) = |z|², which spreads gradients; and a sparse prior ρ(z) = |z|^0.8, which localizes gradients. (Non-blind) deconvolution code available online: http://groups.csail.mit.edu/graphics/codedaperture/

Recall the overview: try deconvolving local input windows with differently scaled filters (larger, correct, smaller scale) and somehow select the best scale. Challenge: a smaller scale is not so different from the correct one.

Idea 2: coded aperture. A mask (code) in the aperture plane makes defocus patterns different from natural images and easier to discriminate. (Conventional aperture vs. our coded aperture.)

Solution: a lens with an occluder. With a coded aperture pattern in the lens, the image of a defocused point light source on the camera sensor, the point spread function, takes on the shape of the aperture pattern and scales with the object's distance from the focal plane.

Why coded? The coded aperture reduces uncertainty in scale identification: compare conventional vs. coded PSFs at a larger scale, the correct scale, and a smaller scale.

Depth results

Regularizing depth estimation: try deblurring with 10 different aperture scales; for each scale, x_k = argmin_x ||f_k * x - y||² + λ Σ_i ρ(∇x_i) (convolution error plus derivatives prior). Keep the minimal-error scale in each local window, then regularize. [figures: input, local depth estimation, regularized depth; depth scale 200-305]

Sometimes, manual intervention is needed. [figures: input, local depth estimation, regularized depth, and the result after user corrections]
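A minimal sketch of that scale-selection step: deblur each local window with every calibrated kernel (here with a simple Wiener / Gaussian-prior filter), reblur, and keep the scale with the smallest error. The window size, Wiener regularizer, and error measure are my simplifications; the paper also folds the sparse-gradient prior into the score and regularizes the depth map spatially:

import numpy as np
from numpy.fft import fft2, ifft2

def wiener_deconv(y, k, reg=0.01):
    # frequency-domain deconvolution with a simple quadratic (Gaussian) prior
    K = fft2(k, s=y.shape)
    X = np.conj(K) * fft2(y) / (np.abs(K) ** 2 + reg)
    return np.real(ifft2(X))

def estimate_depth_map(y, kernels, win=64):
    # per-window blur-scale selection: deblur with each calibrated kernel,
    # reblur, and keep the kernel index with the smallest reconstruction error
    h, w = y.shape
    depth = np.zeros((h // win, w // win), dtype=int)
    for i in range(h // win):
        for j in range(w // win):
            patch = y[i * win:(i + 1) * win, j * win:(j + 1) * win]
            errors = []
            for k in kernels:
                K = fft2(k, s=patch.shape)
                x_hat = wiener_deconv(patch, k)
                reblurred = np.real(ifft2(K * fft2(x_hat)))
                errors.append(np.sum((reblurred - patch) ** 2))
            depth[i, j] = int(np.argmin(errors))
    return depth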

All focused results

Input

All-focused (deconvolved)

Close-up Original image All-focus image

Input

All-focused (deconvolved)

Close-up Original image All-focus image Naïve sharpening

Comparison: conventional aperture result, with ringing due to wrong scale estimation.

Comparison: coded aperture result.

Application: Digital refocusing from a single image


Coded aperture: pros and cons. Pros: image AND depth from a single shot; no loss of image resolution; a simple modification to the lens. Cons: depth is coarse, cannot be recovered in untextured areas, and may need manual corrections (but depth is a pure bonus); some light is lost (but deconvolution increases depth of field).

Deconvolution code available http://groups.csail.mit.edu/graphics/codedaperture/

50mm f/1.8: $79.95 Cardboard: $1 Tape: $1 Depth acquisition: priceless