Capturing Light: The Light Field (12/1/16)


Capturing Light
(Rooms by the Sea, Edward Hopper, 1951; The Penitent Magdalen, Georges de La Tour, c. 1640)
Some slides from M. Agrawala, F. Durand, P. Debevec, A. Efros, R. Fergus, D. Forsyth, M. Levoy, and S. Seitz.

The Light Field
(Figure by Leonard McMillan)
What is the set of all things that we can ever see? Answer: the Light Field (aka the Plenoptic Function).
Let's start with a stationary person and try to parameterize everything that she can see. (OPALE, "Sparkles and Wine," 2013)

Grayscale Snapshot
P(θ, φ) is the intensity of light:
- seen from a single viewpoint
- at a single time
- averaged over the wavelengths of the visible spectrum

Color Snapshot
P(θ, φ, λ) is the intensity of light:
- seen from a single viewpoint
- at a single time
- as a function of wavelength

A Movie
P(θ, φ, λ, t) is the intensity of light:
- seen from a single viewpoint
- over time
- as a function of wavelength

Holographic Movie: The Light Field
P(θ, φ, λ, t, VX, VY, VZ) is the intensity of light:
- seen from ANY viewpoint
- over time
- as a function of wavelength
From it we can reconstruct every possible view, at every moment, from every position, at every wavelength. It contains every photograph, every movie, everything that anyone has ever seen!
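The progression above can be mirrored in code: a toy plenoptic function taking all seven arguments, from which a grayscale snapshot fixes viewpoint and time and averages over wavelength. A minimal sketch; the function body and the three wavelength samples are made-up illustrations, not physics.

```python
import math

def plenoptic(theta, phi, lam, t, vx, vy, vz):
    """Toy stand-in for P(theta, phi, lambda, t, VX, VY, VZ).
    A real plenoptic function is measured, not computed; this body
    is an arbitrary smooth function for illustration only."""
    return (2.0 + math.sin(theta) * math.cos(phi)) * lam * 1e6 \
        + 0.01 * t + 0.001 * (vx + vy + vz)

def color_snapshot(theta, phi, lam, viewpoint=(0.0, 0.0, 0.0), t=0.0):
    """P(theta, phi, lambda): single viewpoint, single time,
    as a function of wavelength."""
    vx, vy, vz = viewpoint
    return plenoptic(theta, phi, lam, t, vx, vy, vz)

def grayscale_snapshot(theta, phi, viewpoint=(0.0, 0.0, 0.0), t=0.0,
                       wavelengths=(450e-9, 550e-9, 650e-9)):
    """P(theta, phi): single viewpoint, single time, averaged over
    (a few samples of) the visible spectrum."""
    return sum(color_snapshot(theta, phi, lam, viewpoint, t)
               for lam in wavelengths) / len(wavelengths)
```

Each lower-dimensional quantity is just the full plenoptic function with some arguments fixed or integrated out, which is exactly the reduction the slides walk through.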

Sampling the Light Field
A camera is a device for capturing and storing samples of the Light Field. (Figure: camera, lighting, surface)

Building Better Cameras
- Modify optics: wide-angle imaging (examples: Disney 55, McCutchen 91, Nalwa 96, Swaminathan & Nayar 99, Cutler et al. 02); catadioptric imaging (examples: Rees 70, Charles 87, Nayar 88, Yagi 90, Hong 91, Yamazawa 95, Bogner 95, Nalwa 96, Nayar 97, Chahl & Srinivasan 97)
- Capture more rays: higher-density sensor arrays; multiple cameras
- Color cameras, multi-spectral cameras
- Video cameras

Catadioptric Cameras for 360° Imaging
(Figures: an omnidirectional image; the catadioptric imaging geometry of camera, mirror, and subject, with the camera's effective viewpoint)

Catadioptric Imaging
(Figures: mirrors create virtual viewpoints 1 and 2 in addition to the camera's own viewpoint; a circular viewpoint locus formed by the camera and mirror, used for reconstructing faces)

Reconstructing Faces
(Figures: stereo views and 3D reconstructions)

Femto Photography
- FemtoFlash, ultrafast detector: a trillion-frame-per-second camera
- Serious sync; computational optics
See UW research on this by Prof. Andreas Velten: http://www.youtube.com/watch?v=9xjlck6w020

The Light Field
How do we capture it? What is it good for?
Ignoring time and color: P(θ, φ, VX, VY, VZ), a 5D function (3D position, 2D direction). In a non-dispersive medium radiance is constant along a ray, so away from occluders one sample is a whole ray and the field reduces to 4D (2D position, 2D direction). (Slide by Rick Szeliski and Michael Cohen)

Light Field: Organization
Two-plane parameterization: a ray is indexed by its intersections with two planes, a 2D position (s, t) on one plane and a 2D position (u, v) on the other.
Hold s, t constant and let u, v vary: an image. (Slides by Rick Szeliski and Michael Cohen)
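Under the two-plane parameterization, a discretized light field is just a 4D array indexed by (s, t, u, v), and "hold s, t constant, let u, v vary" is a single NumPy indexing operation. A sketch with illustrative sizes and random data:

```python
import numpy as np

# Toy discretized light field L[s, t, u, v]: a 4 x 4 grid of samples on
# the (s, t) plane, each paired with an 8 x 8 grid on the (u, v) plane.
rng = np.random.default_rng(0)
L = rng.random((4, 4, 8, 8))

def image_at(L, s, t):
    """Hold s, t constant; let u, v vary: one image (one pinhole view)."""
    return L[s, t]

def pencil_at(L, u, v):
    """Hold u, v constant instead: the same (u, v) point seen from
    every (s, t) position."""
    return L[:, :, u, v]

assert image_at(L, 2, 1).shape == (8, 8)
assert pencil_at(L, 5, 3).shape == (4, 4)
```

Slicing the other pairs of axes gives the other standard views of the same data (e.g. epipolar images from one s and one u axis).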

How to Capture Light Fields?
- One camera + move the object (and light sources)
- Multiple cameras
- One camera + multiple microlenses

Light Field Capture: Gantry
Idea 1: move a camera carefully over the (s, t) plane using a gantry (a manually rotated XY positioner), with the object on a lazy susan and the lights turning with the lazy susan. Correctness by construction. (Slide by Rick Szeliski and Michael Cohen)

Multi-Camera Arrays
Stanford's array: 128 synchronized cameras, 640 x 480 pixels each at 30 fps, continuous streaming, flexible arrangement. (Stanford Tiled Camera Array)

What's a Light Field Good For?
- Synthetic aperture photography
- Seeing through occluding objects
- Refocusing
- Changing depth of field
- Synthesizing images from novel viewpoints

Synthetic Aperture Photography [Vaish, CVPR 2004]
45 cameras aimed at bushes.

Synthetic Aperture Photography
The red point effectively disappears because it is so blurry. If the aperture is larger than a foreground occluding object, then some rays from behind the object are captured. Leonardo da Vinci observed that if you hold a needle in front of your eye, it adds haze but does not completely obscure any part of the scene (because your eye's pupil is bigger than the needle).

Synthetic Aperture Photography
Another way to think about synthetic aperture photography:
- take the images from all the cameras
- rectify them to a common plane in the scene (the focal plane)
- shift them by a certain amount and add them together
Objects that become aligned by the shifting process will be sharply focused; objects in front of or behind that plane are blurred away.
Example: one image of people behind bushes vs. the reconstructed synthetic aperture image.
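The shift-and-add recipe above can be sketched in a few lines of NumPy. The 3 x 3 camera array, image size, and per-camera disparity here are illustrative assumptions, and real data would also need the rectification step:

```python
import numpy as np

def synthetic_aperture(views, offsets, shift):
    """Shift each (already rectified) camera image by `shift` times its
    (s, t) offset from the array center, then average. Points at the
    depth whose disparity matches `shift` align and come out sharp;
    everything else is blurred away."""
    acc = np.zeros_like(views[0])
    for img, (ds, dt) in zip(views, offsets):
        dy, dx = int(round(shift * dt)), int(round(shift * ds))
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / len(views)

# Toy 3 x 3 camera array viewing a single bright point whose disparity
# is 2 pixels per unit of camera offset.
offsets = [(ds, dt) for dt in (-1, 0, 1) for ds in (-1, 0, 1)]
views = []
for ds, dt in offsets:
    img = np.zeros((16, 16))
    img[8 - 2 * dt, 8 - 2 * ds] = 1.0   # point shifts opposite the camera
    views.append(img)

focused = synthetic_aperture(views, offsets, shift=2)   # matches disparity
defocused = synthetic_aperture(views, offsets, shift=0)
assert focused[8, 8] == 1.0      # all nine copies align at the focal plane
assert defocused.max() < 1.0     # energy spread over nine pixels
```

Choosing a different `shift` focuses on a different plane, which is exactly the refocusing use of a camera array described above.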

How to Capture Light Fields?
One camera + multiple microlenses: "Light Field Photography using a Handheld Light Field Camera," Ren Ng, Marc Levoy, Mathieu Brédif, Gene Duval, Mark Horowitz and Pat Hanrahan, Proc. SIGGRAPH 2005.

Lytro Illum Light Field Camera (www.lytro.com)
- 30-250mm lens, 8.3x optical zoom, f/2.0 aperture
- 40-megaray 1/2" CMOS sensor
- Maximum image resolution: 2450 x 1634 (4.0 megapixels)
- $280 ($1,600 MSRP)

Conventional vs. Light Field Camera
(Figures: the uv-plane and st-plane in each design)

Prototype Camera
- Contax medium-format camera
- Adaptive Optics microlens array: 125µ square-sided microlenses
- Kodak 16-megapixel sensor: 4000 x 4000 pixels
- 292 x 292 lenses = 14 x 14 pixels per lens

Digitally Stopping Down
(a) Microlenses at depths closer than the focal plane: in these right-side-up microlens images, the woman's cheek appears on the left, as it appears in the macroscopic image. (b) Microlenses at depths further than the focal plane: in these inverted microlens images, the man's cheek appears on the right, opposite the macroscopic world. This effect is due to inversion of the microlens rays as they pass through the world focal plane before arriving at the main lens. (c) Microlenses on edges at the focal plane (the fingers that are clasped together): the microlenses at this depth are constant in color, because all the rays arriving at the microlens originate from the same point on the fingers, which reflect light diffusely.
Stopping down = summing only the central portion of each microlens.

Digital Refocusing
Refocusing = summing windows extracted from several microlenses.
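Digitally stopping down, as described above, just restricts each per-microlens sum to a central window. A sketch assuming the raw sensor image tiles exactly into k x k microlens images (k = 14 for the prototype above); the sizes below are toy values:

```python
import numpy as np

def stop_down(raw, k, half):
    """Digitally stop down a plenoptic raw image.
    raw  : sensor image that tiles exactly into k x k microlens images
    half : half-width of the central window kept under each microlens
    Summing only the central portion of each microlens image keeps only
    rays near the optical axis, i.e. a smaller synthetic aperture."""
    H, W = raw.shape
    tiles = raw.reshape(H // k, k, W // k, k)   # (lens row, v, lens col, u)
    c = k // 2
    window = tiles[:, c - half:c + half + 1, :, c - half:c + half + 1]
    return window.sum(axis=(1, 3))              # one value per microlens

raw = np.ones((28, 28))            # 4 x 4 microlenses of 7 x 7 pixels each
out = stop_down(raw, k=7, half=1)
assert out.shape == (4, 4)
assert out[0, 0] == 9.0            # 3 x 3 central window summed
```

Refocusing would instead sum a window whose position slides across neighboring microlenses as a function of the desired focal depth.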

Refocusing Portraits; Extending the Depth of Field
Conventional photograph with the main lens at f/4 vs. at f/22, vs. the light field (main lens at f/4) after an all-focus algorithm [Agarwala 2004].

Digitally Moving the Observer
Moving the observer = moving the window we extract from the microlenses.
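Moving the observer digitally, as described above, means reading the same offset pixel out of every microlens image: each (du, dv) offset is one sub-aperture view, i.e. one position of the virtual observer inside the main-lens aperture. A sketch with toy sizes:

```python
import numpy as np

def move_observer(raw, k, du=0, dv=0):
    """Extract the pixel at offset (du, dv) from the center of every
    k x k microlens image: one sub-aperture view. Varying (du, dv)
    moves the virtual observer within the main-lens aperture."""
    H, W = raw.shape
    tiles = raw.reshape(H // k, k, W // k, k)   # (lens row, v, lens col, u)
    c = k // 2
    return tiles[:, c + dv, :, c + du]

raw = np.arange(28 * 28, dtype=float).reshape(28, 28)
view = move_observer(raw, k=7)                  # centered observer
assert view.shape == (4, 4)
assert view[0, 0] == raw[3, 3]                  # center pixel of first lens
```

The parallax available this way is limited to the diameter of the main-lens aperture, which is why the effect is a subtle viewpoint shift rather than a full walk-around.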

Moving Backward and Forward
(Example images)

Implications
- Cuts the unwanted link between exposure (due to the aperture) and depth of field
- Trades off spatial resolution for the ability to refocus and adjust the perspective
- Sensor pixels should be made even smaller, subject to the diffraction limit: 36mm x 24mm with 2µ pixels = 216 megapixels (18K x 12K), giving 1800 x 1200 output pixels with 10 x 10 rays per pixel

Other Ways to Sample the Plenoptic Function
Moving in time: a spatio-temporal volume P(θ, φ, t). Useful to study temporal changes; long an interest of artists (Claude Monet, Haystacks studies).
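The resolution arithmetic on the slide checks out directly:

```python
# Full-frame sensor, 36 mm x 24 mm, with 2-micron pixels:
w = round(36e-3 / 2e-6)               # 18,000 pixels across
h = round(24e-3 / 2e-6)               # 12,000 pixels down
assert (w, h) == (18_000, 12_000)     # "18K x 12K"
assert w * h == 216_000_000           # 216 megapixels

# Devoting 10 x 10 rays to each output pixel leaves:
assert (w // 10, h // 10) == (1_800, 1_200)
```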

Space-Time Images
Other ways to slice the plenoptic function. (Figure: an image volume with axes x, y, and t)