Computational Photography: The Ultimate Camera


The ultimate camera: what does it do? (Image from Durand & Freeman's MIT course on computational photography.) Today's reading: Szeliski, Chapter 9.

The ultimate camera
- Infinite resolution
- Infinite zoom control
- Desired object(s) are in focus
- No noise
- No motion blur
- Infinite dynamic range (can see dark and bright things)
- ...

Creating the ultimate camera
- The analog camera has changed very little in over 100 years; we are unlikely to get there by following this path.
- More promising is to combine analog optics with computational techniques: "computational cameras," or computational photography.
- This lecture will survey techniques for producing higher-quality images by combining optics and computation.
- Common themes: take multiple photos; modify the camera.

Noise reduction
- Take several images and average them (see the sketch below).
- Why does this work? Basic statistics: the variance of the mean decreases with $n$: $\operatorname{Var}(\bar{x}) = \sigma^2 / n$.

Field of view
- We can artificially increase the field of view by compositing several photos together (project 2).

Improving resolution: gigapixel images
- Max Lyons (2003): fused 196 telephoto shots.
- A few other notable examples: Obama inauguration (gigapan.org), HDView (Microsoft Research).

Improving resolution: super-resolution
- What if you don't have a zoom lens?
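
Back to the noise-reduction idea: a minimal sketch of frame averaging, assuming a burst of already-aligned, same-size frames (the file names here are hypothetical):

```python
import numpy as np
import imageio.v3 as iio

# Hypothetical burst of aligned exposures of a static scene.
paths = [f"burst_{k:02d}.png" for k in range(8)]

# Average in float to avoid 8-bit clipping; with n frames of
# independent noise, the variance of the mean falls as sigma^2 / n.
stack = np.stack([iio.imread(p).astype(np.float64) for p in paths])
mean_img = stack.mean(axis=0)

iio.imwrite("denoised.png", np.clip(mean_img, 0, 255).astype(np.uint8))
```

In practice the hard part is aligning the burst before averaging; the averaging itself is exactly this.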

Intuition (slides from Yossi Rubner & Miki Elad)
- For a given band-limited image, the Nyquist sampling theorem states that if a uniform sampling is fine enough (sample spacing D), perfect reconstruction is possible.
- Due to our limited camera resolution, we sample using an insufficient (coarser) grid.
- However, if we take a second picture, shifting the camera slightly to the right, we obtain a second set of samples; similarly, by shifting down we get a third image.

Intuition (slides from Yossi Rubner & Miki Elad)
- And finally, by shifting down and to the right, we get the fourth image.
- By combining all four images the desired resolution is obtained, and thus perfect reconstruction is guaranteed.

Example
- 3:1 scale-up in each axis using 9 images, with pure global translation between them.

Handling more general motions
- What if the camera displacement is arbitrary? What if the camera rotates? Gets closer to the object (zoom)?

Super-resolution
- Basic idea: define a destination (dst) image of the desired resolution (see the sketch below).
- Assume the mapping from dst to each input image is known; it is usually a combination of a motion/warp and an average (point-spread function), and can be expressed as a set of linear constraints. Sometimes the mapping is solved for as well.
- Add some form of regularization (e.g., a smoothness assumption). This can also be expressed using linear constraints, but L1 and other nonlinear methods work better.
- How does this work? [Baker & Kanade, 2002]

Limits of super-resolution [Baker & Kanade, 2002]
- Performance degrades significantly beyond 4x or so; it doesn't matter how many new images you add, since the space of possible (ambiguous) solutions explodes quickly.
- A major cause: quantizing pixels to 8-bit gray values.
- Possible solutions: nonlinear techniques (e.g., L1); better priors (e.g., using domain knowledge). See Baker & Kanade, "Hallucination" (2002), and Freeman et al., example-based super-resolution.
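
A minimal sketch of the linear-constraints formulation, assuming integer global translations in high-res pixel units and a box point-spread function (the shifts, scale, and damping factor are illustrative, not from the lecture):

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def build_system(lowres_imgs, shifts, scale):
    """Each low-res pixel = average of a scale x scale block of the
    high-res image, offset by that image's (dy, dx) shift.
    Stacks one linear constraint per observed pixel."""
    h, w = lowres_imgs[0].shape
    H, W = h * scale, w * scale
    rows = len(lowres_imgs) * h * w
    A = lil_matrix((rows, H * W))
    b = np.empty(rows)
    r = 0
    for img, (dy, dx) in zip(lowres_imgs, shifts):
        for y in range(h):
            for x in range(w):
                for yy in range(scale):
                    for xx in range(scale):
                        Y = min(y * scale + yy + dy, H - 1)
                        X = min(x * scale + xx + dx, W - 1)
                        A[r, Y * W + X] = 1.0 / scale**2
                b[r] = img[y, x]
                r += 1
    return A.tocsr(), b, (H, W)

# lsqr gives the least-squares solution; damp adds simple Tikhonov
# regularization, standing in for the smoothness prior above.
# A, b, (H, W) = build_system(imgs, shifts=[(0,0),(1,0),(0,1),(1,1)], scale=2)
# hi = lsqr(A, b, damp=0.1)[0].reshape(H, W)
```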

Dynamic range
- Typical cameras have limited dynamic range.

HDR images: merge multiple inputs
- [Figure: histograms of pixel count vs. scene radiance for a single exposure and for the merged HDR result.]
- The camera is not a photometer!
- Limited dynamic range: 8 bits captures only 2 orders of magnitude of light intensity, while we can see ~10 orders of magnitude.
- Unknown, nonlinear response: pixel intensity vs. amount of light (# of photons, or "radiance").
- Solution: recover the response curve from multiple exposures, then reconstruct the radiance map.

Camera response function
- [Figure: pixel value (0 to 255, CCD photon count) vs. log exposure, where log Exposure = log(Radiance × Δt).]

Calculating the response function: Debevec & Malik [SIGGRAPH 1997]
- [Figure: the same scene captured at exposures Δt = 1/64, 1/16, 1/4, 1, and 4 sec, with three sample pixel sites marked in each image.]
- Pixel value Z = f(Exposure), where Exposure = Radiance × Δt, so log Exposure = log Radiance + log Δt.
- [Figure: pixel value vs. log exposure, first assuming unit radiance for each pixel, then after adjusting radiances to obtain a smooth response curve.]

The math
- Let g(z) be the discrete inverse response function.
- For each pixel site i in each image j, we want: ln Radiance_i + ln Δt_j = g(Z_ij).
- Solve the over-determined linear system, minimizing a fitting term plus a smoothness term (a code sketch follows below):

$$\sum_{i=1}^{N}\sum_{j=1}^{P}\bigl[\ln \mathrm{Radiance}_i + \ln \Delta t_j - g(Z_{ij})\bigr]^2 \;+\; \lambda \sum_{z=Z_{\min}}^{Z_{\max}} g''(z)^2$$

Capture and composite several photos
- The same trick works for field of view, resolution, signal-to-noise, dynamic range, and focus.
- But sometimes you can do better by modifying the camera.
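
Returning to the linear system above, here is a compact NumPy translation (Z is an N × P array of pixel values at N sample sites across P exposures; the hat weighting follows Debevec & Malik, and the variable names are my own):

```python
import numpy as np

def gsolve(Z, log_dt, lam=100.0, zmin=0, zmax=255):
    """Recover the log inverse response g(z) and per-site log radiances
    by stacking the fitting and smoothness constraints into one
    linear system A x = b and solving it in the least-squares sense."""
    n_levels = zmax - zmin + 1
    N, P = Z.shape
    A = np.zeros((N * P + (n_levels - 2) + 1, n_levels + N))
    b = np.zeros(A.shape[0])
    # Hat weighting: trust mid-range pixel values most.
    w = lambda z: z - zmin + 1 if z <= (zmin + zmax) // 2 else zmax - z + 1
    k = 0
    for i in range(N):
        for j in range(P):
            wij = w(Z[i, j])
            A[k, Z[i, j] - zmin] = wij       # g(Z_ij)
            A[k, n_levels + i] = -wij        # -ln Radiance_i
            b[k] = wij * log_dt[j]           # ln Δt_j
            k += 1
    A[k, (zmax - zmin) // 2] = 1.0           # fix the scale: g(mid) = 0
    k += 1
    for z in range(zmin + 1, zmax):          # smoothness: λ g''(z)^2
        wz = lam * w(z)
        A[k, z - zmin - 1], A[k, z - zmin], A[k, z - zmin + 1] = wz, -2 * wz, wz
        k += 1
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    return x[:n_levels], x[n_levels:]        # g, ln radiances
```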

Focus
- Suppose we want to produce images where the desired object is guaranteed to be in focus. Or suppose we want everything to be in focus.

Light field camera [Ng et al., 2005]
- http://www.refocusimaging.com/gallery/
- Conventional vs. light field camera.

Prototype camera
- Contax medium-format camera, Kodak 16-megapixel sensor.
- Adaptive Optics microlens array: 125 μm square-sided microlenses.
- 4000 × 4000 pixels ÷ 292 × 292 lenses = 14 × 14 pixels per lens.

Simulating depth of field
- Stopping down the aperture = summing only the central portion of each microlens.

Digital refocusing
- Refocusing = summing windows extracted from several microlenses.
- [Figure: example of digital refocusing.]
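
Both operations reduce to weighted sums over the pixels under each microlens. A toy shift-and-add sketch, assuming the raw sensor image has already been unpacked into a 4D light field L[u, v, s, t], with (u, v) the position under each microlens and (s, t) the microlens grid (the array and the alpha parameter are illustrative):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(L, alpha):
    """Synthesize a photo focused at a new depth from a 4D light field.
    Each sub-aperture view L[u, v] is a full image seen through one
    point of the main lens; shifting views in proportion to their
    aperture offset and summing refocuses (alpha = 0: original focus)."""
    U, V, S, T = L.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = alpha * (u - (U - 1) / 2)
            dv = alpha * (v - (V - 1) / 2)
            out += nd_shift(L[u, v], (du, dv), order=1)
    return out / (U * V)

def stop_down(L, radius=1):
    """Simulate a smaller aperture: average only the central
    (2*radius+1)^2 sub-aperture views under each microlens."""
    c_u, c_v = L.shape[0] // 2, L.shape[1] // 2
    views = L[c_u - radius:c_u + radius + 1, c_v - radius:c_v + radius + 1]
    return views.mean(axis=(0, 1))

# photo = refocus(lightfield, alpha=0.3)   # hypothetical usage
```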

All-in-focus
- If you only want to produce an all-focus image, there are simpler alternatives:
- Wavefront coding [Dowski 1995].
- Coded aperture [Levin et al., SIGGRAPH 2007], [Raskar et al., SIGGRAPH 2007]; this can also produce a change in focus (à la Ng's light field camera).
- [Figures: input image and results from Levin et al., SIGGRAPH 2007.]
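
Both approaches end with a deconvolution step. As a generic illustration of that step (a textbook Wiener filter, not the specific Levin or Raskar reconstruction), assuming the blur kernel is known:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, nsr=0.01):
    """Recover a sharp image from a known blur kernel via Wiener
    filtering: X = conj(K) * B / (|K|^2 + nsr) in the Fourier domain,
    where nsr is the noise-to-signal power ratio. A coded aperture
    makes K better conditioned (fewer near-zeros in |K|), which is
    what makes this inversion well-posed. The kernel is assumed
    anchored at (0, 0); use np.fft.ifftshift for a centered kernel.
    """
    K = np.fft.fft2(kernel, s=blurred.shape)
    B = np.fft.fft2(blurred)
    X = np.conj(K) * B / (np.abs(K) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))
```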

- [Figures: close-ups of the original image vs. the all-focus (deconvolved) image.]

Motion blur removal
- Instead of coding the aperture, code the exposure: the shutter is rapidly flipped OPEN and CLOSED during the exposure (the "fluttered shutter"). Raskar et al., SIGGRAPH 2007.
- [Figure: coded-exposure deblurring results, Raskar et al., SIGGRAPH 2007.]
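
A 1D toy comparison of why coding the exposure helps (the random code here is illustrative; Raskar et al. search for sequences with a flat frequency response): a plain box shutter has near-zeros in its spectrum that make deblurring ill-posed, while a broadband on/off code does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n, taps = 256, 32

# Box shutter: open for the whole exposure.
box = np.ones(taps) / taps
# Coded shutter: a pseudo-random on/off sequence of the same duration.
code = rng.integers(0, 2, taps).astype(float)
code[0] = 1.0                 # ensure the shutter opens at least once
code /= code.sum()

signal = rng.standard_normal(n)  # stand-in for a 1D image row
for name, shutter in [("box", box), ("coded", code)]:
    blurred = np.convolve(signal, shutter, mode="same")
    H = np.fft.rfft(shutter, n)
    # The minimum kernel magnitude governs how badly noise is
    # amplified when dividing by H to deblur; near-zeros are fatal.
    print(f"{name}: min |H| = {np.abs(H).min():.4f}")
```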

Many more possibilities
- Seeing through/behind objects using a camera array ("synthetic aperture"): Levoy et al., SIGGRAPH 2004.
- Removing interreflections: Nayar et al., SIGGRAPH 2006.
- Family portraits where everyone's smiling: Photomontage (Agarwala et al., SIGGRAPH 2004).
- License plate retrieval.

More on computational photography
- SIGGRAPH course notes and video.
- Other courses: MIT course, CMU course, Stanford course, Columbia course.
- Wikipedia page.
- Symposium on Computational Photography; ICCP 2009 (conference).