Introduction to Light Fields

MIT Media Lab Introduction to Light Fields Camera Culture Ramesh Raskar MIT Media Lab http://cameraculture.media.mit.edu/

Introduction to Light Fields
- Ray Concepts for 4D and 5D Functions
- Propagation of Light Fields
- Interaction with Occluders
- Fourier Domain Analysis and Relationship to Fourier Optics
- Coded Photography: Modern Methods to Capture the Light Field
- Wigner and Ambiguity Functions for the Light Field in Wave Optics
- New Results in Augmenting Light Fields

Light Fields. Goal: representing propagation, interaction and image formation of light using purely position and angle parameters. Radiance per ray. Ray parameterization: position (s, x, r) and direction (u, θ, s), relative to a reference plane. Courtesy of Se Baek Oh. Used with permission.

Limitations of Traditional Lightfields. (Diagram contrasting two representations: the Wigner Distribution Function, wave-optics based, rigorous but cumbersome, covering holograms, beam shaping, and the rotational PSF; and the Traditional Light Field, ray-optics based, simple and powerful, but limited in diffraction & interference.) Courtesy of Se Baek Oh. Used with permission. Se Baek Oh, 3D Optical Systems Group, CVPR 2009.

Example: New Representations. Augmented Lightfields. (Diagram: the Augmented LF sits between the Wigner Distribution Function, wave-optics based, rigorous but cumbersome, and the Traditional Light Field, ray-optics based, simple and powerful but limited in diffraction & interference; the augmentation adds interference & diffraction, interaction with optical elements, and non-paraxial propagation.) Courtesy of Se Baek Oh. Used with permission. Se Baek Oh, 3D Optical Systems Group, CVPR 2009.

The Plenoptic Function. Figure removed due to copyright restrictions. Q: What is the set of all things that we can ever see? A: The Plenoptic Function (Adelson & Bergen). Let's start with a stationary person and try to parameterize everything that he can see.

Grayscale snapshot. Figure removed due to copyright restrictions. P(θ,φ) is the intensity of light seen from a single viewpoint, at a single time, averaged over the wavelengths of the visible spectrum. (We could also use P(x,y), but spherical coordinates are nicer.)

Color snapshot. Figure removed due to copyright restrictions. P(θ,φ,λ) is the intensity of light seen from a single viewpoint, at a single time, as a function of wavelength.

A movie. Figure removed due to copyright restrictions. P(θ,φ,λ,t) is the intensity of light seen from a single viewpoint, over time, as a function of wavelength.

Holographic movie. Figure removed due to copyright restrictions. P(θ,φ,λ,t,V_X,V_Y,V_Z) is the intensity of light seen from ANY viewpoint, over time, as a function of wavelength.

The Plenoptic Function. Figure removed due to copyright restrictions. P(θ,φ,λ,t,V_X,V_Y,V_Z) can reconstruct every possible view, at every moment, from every position, at every wavelength. It contains every photograph, every movie, everything that anyone has ever seen.

Sampling Plenoptic Function (top view)

Ray. Let's not worry about time and color: 5D = 3D position + 2D direction, P(θ,φ,V_X,V_Y,V_Z). Courtesy of Rick Szeliski and Michael Cohen. Used with permission. Slide by Rick Szeliski and Michael Cohen.

Ray. With no occluding objects: 4D = 2D position + 2D direction; P(θ,φ,V_X,V_Y,V_Z) loses one dimension. The space of all lines in 3D space is 4D. Courtesy of Rick Szeliski and Michael Cohen. Used with permission. Slide by Rick Szeliski and Michael Cohen.

Lumigraph/Lightfield: Organization. 2D position, 2D direction (θ, s). Courtesy of Rick Szeliski and Michael Cohen. Used with permission. Slide by Rick Szeliski and Michael Cohen.

2D position + 2D position: (s, u). Two-plane parameterization. Courtesy of Rick Szeliski and Michael Cohen. Used with permission. Slide by Rick Szeliski and Michael Cohen.

2D position + 2D position: (s,t) and (u,v). Two-plane parameterization. Courtesy of Rick Szeliski and Michael Cohen. Used with permission. Slide by Rick Szeliski and Michael Cohen.
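To make the two-plane parameterization concrete, here is a minimal sketch (numpy; the plane separation D and the helper name are illustrative assumptions) converting a ray's (s,t,u,v) coordinates into a 3D origin and direction:

```python
import numpy as np

D = 1.0  # separation between the st-plane (z = 0) and the uv-plane (z = D)

def ray_from_two_plane(s, t, u, v, d=D):
    """Return a 3D origin and unit direction for the ray (s, t, u, v)."""
    origin = np.array([s, t, 0.0])
    direction = np.array([u - s, v - t, d])
    return origin, direction / np.linalg.norm(direction)

origin, direction = ray_from_two_plane(0.2, -0.1, 0.25, 0.0)
```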

Light Field = Array of (virtual) Cameras. Virtual Camera = Sub-aperture View. Σ. Based on original slide by Marc Levoy. Used with permission. © 2007 Marc Levoy.

Conventional versus plenoptic camera. (Diagram: a scene imaged by a conventional camera, pixel = (s,t); and by a plenoptic camera, virtual camera = (u,v) on the uv-plane, pixel = (s,t) on the st-plane.) Based on original slide by Marc Levoy. Used with permission. © 2007 Marc Levoy.


Light Field Inside a Camera Courtesy of Ren Ng. Used with permission.

Light Field Inside a Camera Lenslet-based Light Field camera [Adelson and Wang, 1992, Ng et al. 2005 ] Courtesy of Ren Ng. Used with permission.

Stanford Plenoptic Camera [Ng et al. 2005]. Contax medium format camera, Kodak 16-megapixel sensor, Adaptive Optics microlens array, 125 μm square-sided microlenses. 4000 × 4000 pixels ÷ 292 × 292 lenses = 14 × 14 pixels per lens. Courtesy of Ren Ng. Used with permission.

Digital Refocusing [Ng et al 2005] Courtesy of Ren Ng. Used with permission.
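Digital refocusing can be computed from the captured light field by shift-and-add: each sub-aperture view is shifted in proportion to its (u, v) offset from the aperture center and the views are averaged (the Σ in the earlier slides). A minimal numpy sketch; the array layout, sign convention, and integer-pixel shifts are simplifying assumptions:

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add refocusing of a 4D light field.

    lf:    sub-aperture views, shape (U, V, H, W)
    alpha: refocus parameter (0 keeps the original focal plane)
    """
    U, V, H, W = lf.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))   # shift grows with the
            dv = int(round(alpha * (v - V // 2)))   # view's aperture offset
            out += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

lf = np.random.default_rng(0).random((9, 9, 64, 64))  # toy light field
photo = refocus(lf, alpha=1.5)
```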

Adaptive Optics. A deformable mirror can be used to correct wavefront errors in an astronomical telescope. http://en.wikipedia.org/wiki/Image:Adaptive_optics_correct.png

Shack-Hartmann wavefront sensor (commonly used in adaptive optics). http://en.wikipedia.org/wiki/Image:Shack_hartmann.png

Measuring shape of wavefront = Lightfield Capture http://www.cvs.rochester.edu/williamslab/r_shackhartmann.html Courtesy of David Williams Lab @ the Center for Visual Science, University of Rochester. Used with permission. The spots formed on the CCD chip for the eye will be displaced because the wavefront will hit each lenslet at an angle rather than straight on.
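The slope measurement behind this fits in a few lines (numpy; every number below is illustrative, not from the Rochester setup): the spot displacement divided by the lenslet focal length gives the local wavefront slope, and integrating the slopes across the aperture recovers a 1D wavefront profile.

```python
import numpy as np

f_lenslet = 5e-3                         # lenslet focal length [m], assumed
pitch = 150e-6                           # lenslet pitch [m], assumed
# measured spot displacements per lenslet [m], illustrative values
spot_shift = np.array([0.0, 1.0e-6, 2.0e-6, 2.5e-6, 2.0e-6])

slopes = spot_shift / f_lenslet          # local wavefront slopes dW/dx
W = np.cumsum(slopes) * pitch            # integrate slopes to W(x) [m]
```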

Example using 45 cameras [Vaish CVPR 2004] Vaish, V., et al. "Using Plane + Parallax for Calibrating Dense Camera Arrays." Proceedings of CVPR 2004. Courtesy of IEEE. Used with permission. 2004 IEEE. Courtesy of Marc Levoy. Used with permission. 2007 Marc Levoy

Synthetic aperture videography Image removed due to copyright restrictions.

Vaish, V., et al. "Using Plane + Parallax for Calibrating Dense Camera Arrays." Proceedings of CVPR 2004. Courtesy of IEEE. Used with permission. 2004 IEEE.

Visualizing the Lightfield: (i) position-angle space, (ii) phase space, (iii) space vs. spatial frequency, (iv) spectrogram. (Diagram: rays at positions x1, x2 with directions θi, θj plotted as points in the (x, θ) plane of l(x,θ).)

Shear of Light Field. Propagation by a distance z shears the light field: x'1 = x1 + θi * z. (Diagram: the ray at position x1 with angle θi moves to x'1; l(x,θ) shears along x.)
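A minimal sketch of this shear on a discretized slice l(x, θ) (numpy; the uniform grids and nearest-index resampling are simplifying assumptions):

```python
import numpy as np

def shear_light_field(l, x, theta, z):
    """Propagate l(x, theta) by z: each angle column shifts in x by theta*z."""
    out = np.zeros_like(l)
    dx = x[1] - x[0]
    for j, th in enumerate(theta):
        xs = x - th * z                  # where each output sample comes from
        idx = np.clip(np.round((xs - x[0]) / dx).astype(int), 0, len(x) - 1)
        out[:, j] = l[idx, j]
    return out

x = np.linspace(-1.0, 1.0, 128)
theta = np.linspace(-0.2, 0.2, 9)
l = np.exp(-x[:, None] ** 2 / 0.01) * np.ones((1, len(theta)))  # toy slice
l_z = shear_light_field(l, x, theta, z=0.5)
```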

(Diagrams: the light field l(x,θ) in the (x,θ) plane, before and after shearing.)

Light Field = Array of (virtual) Cameras. Virtual Camera = Sub-aperture View. Σ. Courtesy of Marc Levoy. Used with permission. © 2007 Marc Levoy.

Three ways to capture a light field inside a camera: shadows, using a pinhole array; refraction, using a lenslet array; heterodyning, using masks.

Sub-Aperture = Pin-hole + Prism Optical Society of America and H. E. Ives. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/fairuse.

Ives 1933 Optical Society of America and H. E. Ives. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/fairuse.

MERL, MIT Media Lab Glare Aware Photography: 4D Ray Sampling for Reducing Glare Raskar, Agrawal, Wilson & Veeraraghavan

Lens Glare Reduction [Raskar, Agrawal, Wilson, Veeraraghavan SIGGRAPH 2008] Glare/Flare due to camera lenses reduces contrast

MERL, MIT Media Lab. Glare Aware Photography: 4D Ray Sampling for Reducing Glare. Raskar, Agrawal, Wilson & Veeraraghavan. Reducing glare: conventional photo; after removing outliers; glare-reduced image. Raskar, R., et al. "Glare Aware Photography: 4D Ray Sampling for Reducing Glare Effects of Camera Lenses." Proceedings of SIGGRAPH 2008.

Light Field Inside a Camera Lenslet-based Light Field camera [Adelson and Wang, 1992, Ng et al. 2005 ] Courtesy of Ren Ng. Used with permission.

Prototype camera. Contax medium format camera, Kodak 16-megapixel sensor, Adaptive Optics microlens array, 125 μm square-sided microlenses. 4000 × 4000 pixels ÷ 292 × 292 lenses = 14 × 14 pixels per lens. Courtesy of Ren Ng. Used with permission.

Courtesy of Ren Ng. Used with permission.

Zooming into the raw photo Courtesy of Ren Ng. Used with permission. 2007 Marc Levoy

Digital Refocusing [Ng et al 2005] Courtesy of Ren Ng. Used with permission. Can we achieve this with a Mask alone?

Mask-based Light Field Camera. (Diagram: mask placed in front of the sensor.) [Veeraraghavan, Raskar, Agrawal, Tumblin, Mohan, SIGGRAPH 2007]

How to capture a 4D light field with a 2D sensor? What should the pattern of the mask be?

Lens Copies the Lightfield of the Conjugate Plane. (Diagram: object, main lens, 1D sensor; the light field l(x,θ) between the θ-plane and the x-plane at the object is reproduced at the conjugate plane inside the camera.)

(Diagram: object, main lens, 1D sensor, θ-plane and x-plane.) Each sensor pixel integrates l(x,θ) over θ, so the captured photo is a set of line integrals of the light field.

Fourier Slice Theorem. The line integrals that form the captured photo correspond to a central slice of L(f_x, f_θ), the 2D FFT of the light field l(x,θ): the 1D FFT of the captured photo equals that central slice.
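A quick numerical check of the theorem for the in-focus case (a minimal numpy sketch, assuming a discretized l(x, θ) with x on axis 0 and θ on axis 1):

```python
import numpy as np

# The photo integrates l(x, theta) over theta; the Fourier slice theorem
# says its 1D FFT equals the f_theta = 0 slice of the 2D FFT of l.
rng = np.random.default_rng(0)
l = rng.random((64, 64))                 # discretized l(x, theta)

photo = l.sum(axis=1)                    # line integrals over theta
L = np.fft.fft2(l)                       # L(f_x, f_theta)

assert np.allclose(np.fft.fft(photo), L[:, 0])   # central slice
```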

Light Propagation (Defocus Blur). Propagation shears l(x,θ), so the line integral that forms the captured photo corresponds to a slanted central slice of L(f_x, f_θ): the 1D FFT of the out-of-focus photo equals a tilted slice of the light field spectrum.

In Focus Photo LED

Out of Focus Photo: Open Aperture

Coded Aperture Camera. The aperture of a 100 mm lens is modified: a coded mask with a chosen binary pattern is inserted, and the rest of the camera is unmodified.
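A minimal simulation of why the coded pattern helps (numpy; the 4 × 4 binary pattern and the Wiener constant are illustrative assumptions, not the published design): the defocus kernel of a point is a scaled copy of the aperture pattern, and a broadband pattern keeps the kernel's spectrum away from zero, so deconvolution stays stable.

```python
import numpy as np

# Illustrative binary aperture pattern; the defocus PSF is the (scaled,
# normalized) image of the aperture.
mask = np.array([[1, 0, 1, 1],
                 [0, 1, 1, 0],
                 [1, 1, 0, 1],
                 [1, 0, 1, 0]], dtype=float)
kernel = mask / mask.sum()

rng = np.random.default_rng(1)
sharp = rng.random((64, 64))             # stand-in for the in-focus scene

# circular convolution via FFT: the out-of-focus photo
K = np.fft.fft2(kernel, s=sharp.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * K))

# Wiener-style deconvolution; a broadband kernel keeps |K| away from zero
est = np.real(np.fft.ifft2(np.fft.fft2(blurred) * np.conj(K)
                           / (np.abs(K) ** 2 + 1e-3)))
```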

Out of Focus Photo: Coded Aperture

Modeling and Synthesis of Aperture Effects in Cameras. Douglas Lanman, Ramesh Raskar, and Gabriel Taubin. Computational Aesthetics 2008, 20 June 2008.

Slides removed due to copyright restrictions. See this paper and associated presentation at http://mesh.brown.edu/dlanman/research.html

Cosine Mask Used. Mask tile with fundamental period 1/f0.
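A sketch of such a mask tile (numpy; the fundamental f0 and the number of harmonics are assumptions, not the published design): offset cosines give a non-negative transmittance whose spectrum is a set of impulses spaced f0 apart, which is what produces the spectral copies used for heterodyning.

```python
import numpy as np

f0 = 9.0                                  # fundamental frequency, assumed
x = np.linspace(0.0, 1.0, 1024, endpoint=False)
# offset each cosine so the physical transmittance stays non-negative
m = sum(1.0 + np.cos(2 * np.pi * k * f0 * x) for k in (1, 2, 3, 4))
mask = m / m.max()                        # transmittance in [0, 1]
```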

Captured 2D Photo Encoding due to Mask

Veeraraghavan, Raskar, Agrawal, Mohan, Tumblin. "Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing." Proceedings of SIGGRAPH 2007. (Comparison: a traditional camera photo and the magnitude of its 2D FFT, versus a heterodyne camera photo and the magnitude of its 2D FFT.)

Extra sensor bandwidth cannot capture the extra angular dimension of the light field. (Diagram: in Fourier light field space (Wigner transform), the sensor records only the slice along f_x, out to f_x0; extra bandwidth extends that slice but never reaches the angular axis f_θ, out to f_θ0.)

Sensor slice captures the entire light field. (Diagram: multiplying by a mask modulation function creates spectral copies of the light field, so the modulated light field places the angular content, up to f_θ0, onto the sensor slice.)

Where to place the mask? (Diagram: mask and sensor; the mask's placement between lens and sensor determines its modulation function in (f_x, f_θ) space.)

Computing the 4D Light Field. 2D sensor photo (1800 × 1800) → 2D Fourier transform (1800 × 1800) → 9 × 9 = 81 spectral copies → rearrange 2D tiles into 4D planes (200 × 200 × 9 × 9) → 4D IFFT → 4D light field (200 × 200 × 9 × 9). Veeraraghavan, Raskar, Agrawal, Mohan, Tumblin. "Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing." Proceedings of SIGGRAPH 2007.
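A sketch of that pipeline (numpy; the tile ordering and fftshift handling are simplifying assumptions):

```python
import numpy as np

def lf_from_heterodyne_photo(photo, n=9):
    """Rearrange the n x n spectral copies in the photo's 2D FFT into a
    4D spectrum and apply a 4D inverse FFT (1800 x 1800 photo, n = 9 and
    a 200 x 200 x 9 x 9 light field in the slide's example)."""
    H, W = photo.shape
    h, w = H // n, W // n                       # spatial tile size
    F = np.fft.fftshift(np.fft.fft2(photo))     # centered spectrum, n x n copies
    # split into an n x n grid of (h x w) tiles: the 4D spectrum
    tiles = F.reshape(n, h, n, w).transpose(0, 2, 1, 3)
    lf = np.fft.ifftn(np.fft.ifftshift(tiles))  # 4D IFFT over all axes
    return lf                                   # shape (n, n, h, w), complex

photo = np.random.default_rng(0).random((1800, 1800))  # stand-in sensor image
lf = lf_from_heterodyne_photo(photo)
```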

Shear of Light Field. Propagation by a distance z shears the light field: x'1 = x1 + θi * z.

Light Propagation (Defocus Blur). Propagation shears l(x,θ), so the 1D FFT of the out-of-focus photo equals a tilted central slice of L(f_x, f_θ).

MERL. Mask-Enhanced Cameras: Heterodyned Light Fields & Coded Aperture. Veeraraghavan, Raskar, Agrawal, Mohan & Tumblin.
Plenoptic camera (microlens array):
- samples individual rays
- predefined spectrum for lenses; chromatic aberration
- high alignment precision
- peripheral pixels wasted
- negligible light loss
Heterodyne camera (mask):
- samples a coded combination of rays
- supports any wavelength
- reconfigurable f/#, easier alignment
- no pixel wastage; high-resolution image for parts of the scene in focus
- 50% light loss due to the mask

Space of LF representations: time-frequency representations, phase-space representations, quasi light fields. Spanning incoherent to coherent: traditional light field, observable LF, augmented LF, WDF, Rihaczek distribution function, and other LF representations. Courtesy of Se Baek Oh. Used with permission.

Quasi light fields: the utility of light fields, the versatility of Maxwell. We form coherent images by formulating, capturing, and integrating quasi light fields. Courtesy of Se Baek Oh. Used with permission.

(i) Observable Light Field: move an aperture across a plane and look at the directional spread; a continuous form of the plenoptic camera. (Diagram: scene and aperture; position s, direction u.) Courtesy of Se Baek Oh. Used with permission.

(ii) Augmented Light Field with LF Transformer. LF propagation proceeds as usual between elements; at a (diffractive) optical element, a light field transformer derived from the WDF models the interaction, possibly introducing negative radiance. Courtesy of Se Baek Oh. Used with permission.

Virtual light projector with real-valued (possibly negative) radiance along a ray. (Diagram: two real projectors interfere; a virtual light projector accounts for the first null, OPD = λ/2.) Courtesy of Se Baek Oh. Used with permission.

(ii) ALF with LF Transformer. Courtesy of Se Baek Oh. Used with permission.

Tradeoff between cross-interference terms and localization: (i) spectrogram: non-negative, but coarse localization; (ii) Wigner: sharp localization, but cross terms; (iii) Rihaczek: sharp localization, complex-valued. Courtesy of Se Baek Oh. Used with permission.

Properties of the representations:
- Traditional LF: always constant along rays; always positive; only incoherent light; zero wavelength; no interference cross terms.
- Observable LF: nearly constant along rays; always positive; any coherence state; any wavelength; cross terms: yes.
- Augmented LF: constant along rays only in the paraxial region; positive and negative; any coherence state; any wavelength; cross terms: yes.
- WDF: constant along rays only in the paraxial region; positive and negative; any coherence state; any wavelength; cross terms: yes.
- Rihaczek DF: not constant along rays (linear drift); complex-valued; any coherence state; any wavelength; cross terms: reduced.
Courtesy of Se Baek Oh. Used with permission.

Benefits & limitations of the representations:
- Traditional LF: propagates by x-shear; does not model wave optics; very simple computation; high adaptability to the current pipeline; near field: no; far field: yes.
- Observable LF: does not propagate by x-shear; models wave optics; modest computation; low adaptability; near field: yes; far field: yes.
- Augmented LF: propagates by x-shear; models wave optics; modest computation; high adaptability; near field: no; far field: yes.
- WDF: propagates by x-shear; models wave optics; modest computation; low adaptability; near field: yes; far field: yes.
- Rihaczek DF: propagates by x-shear; models wave optics; computation better than WDF but not as simple as LF; low adaptability; near field: no; far field: yes.
Courtesy of Se Baek Oh. Used with permission.

Motivation. What is the difference between a hologram and a lenticular screen? How do they capture the phase of a wavefront for telescope applications? What is a wavefront coding lens for extended depth of field imaging?

Acknowledgements. Dartmouth: Marcus Testorf. MIT: Ankit Mohan, Ahmed Kirmani, Jaewon Kim, George Barbastathis. Stanford: Marc Levoy, Ren Ng, Andrew Adams. Adobe: Todor Georgiev. MERL: Ashok Veeraraghavan, Amit Agrawal.

MIT Media Lab. Light Fields. Camera Culture. Ramesh Raskar, MIT Media Lab. http://CameraCulture.info/

MIT OpenCourseWare http://ocw.mit.edu MAS.531 Computational Camera and Photography Fall 2009 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.