When Does Computational Imaging Improve Performance?
1 When Does Computational Imaging Improve Performance? Oliver Cossairt Assistant Professor Northwestern University Collaborators: Mohit Gupta, Changyin Zhou, Daniel Miau, Shree Nayar (Columbia University)
2 iPhone 4 Photograph Canon DSLR Photograph How to capture the best quality photograph?
3 Digital vs. Film Photography Film Camera Dynamic range fixed at time of exposure 1ms Exposure Time 4ms Exposure Time 8ms Exposure Time 16ms Exposure Time Digital Camera Dynamic range can be extended computationally High Dynamic Range Image
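The HDR merge on this slide can be sketched in a few lines: scale each linear exposure back to radiance and average, trusting only unsaturated pixels. This is a minimal illustration with hypothetical scene values, not the specific HDR algorithm used in the talk.

```python
import numpy as np

def merge_hdr(images, exposure_times, sat_level=0.95):
    """Merge a bracketed exposure stack into a linear radiance estimate.

    Each image is assumed linear (no gamma) in [0, 1]. Saturated pixels
    are excluded, so the short exposures recover the highlights.
    """
    radiance = np.zeros_like(images[0], dtype=np.float64)
    weight_sum = np.zeros_like(radiance)
    for img, t in zip(images, exposure_times):
        w = (img < sat_level).astype(np.float64)   # trust unsaturated pixels
        radiance += w * img / t                    # scale back to radiance
        weight_sum += w
    return radiance / np.maximum(weight_sum, 1e-8)

# Synthetic example: three pixels spanning a 1000:1 dynamic range
scene = np.array([0.01, 0.5, 10.0])               # true radiance
times = [1.0, 0.1, 0.01]                          # bracketed exposures (s)
stack = [np.clip(scene * t, 0, 1) for t in times] # clipped sensor readings
print(merge_hdr(stack, times))                    # recovers the full range
```

The brightest pixel saturates in the two longer exposures, yet the merged result recovers it from the shortest one, which is exactly the dynamic-range extension the slide describes.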
4 Digital Photography: Noise Film Camera Noise level fixed at capture time Limited by film grain size Milky Way, 6 min exposure (Jesse Levinson) Digital Camera Noise can be averaged away SNR unlimited in principle Milky Way, 10x 6 min exposures averaged
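The "noise can be averaged away" claim is easy to verify numerically: averaging N frames cuts the noise standard deviation by about sqrt(N). A small simulation under an assumed photon + read noise model (values chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = 100.0          # mean photoelectrons per pixel
read_noise = 5.0             # read noise, electrons RMS
n_frames = 100

# Each frame: Poisson photon noise plus Gaussian read noise
frames = rng.poisson(true_signal, size=(n_frames, 10000)) \
         + rng.normal(0, read_noise, size=(n_frames, 10000))

single_std = frames[0].std()           # noise of one frame (~sqrt(100 + 25))
avg_std = frames.mean(axis=0).std()    # noise after averaging 100 frames
print(single_std, avg_std)             # averaging cuts noise ~10x here
```

With 100 frames the noise drops roughly tenfold, mirroring the 10x stacked Milky Way exposure on the slide.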
5 Computational Imaging: Increased Functionality Take multiple pictures and computationally combine: HDR Imaging, Panoramic Stitching, Light Field Capture [Wilburn et al. 04], Image-Based Lighting [Debevec et al. 00], Digital Holography [Greenbaum et al. 12]. Others: Multiview Stereo, Depth from Focus/Defocus, Tomography, Structured Light, Deconvolution Microscopy, etc.
6 Focus Blur Film Camera: blur is fixed at time of capture. M100 Galaxy captured by Hubble Telescope. Digital Camera: images can be deblurred via deconvolution. M100 Galaxy after blind deconvolution [Carasso, Opt. Eng. 06]
7 Motion Blur Film Camera: a short exposure avoids motion blur, but the image is noisy. 1 millisecond exposure (noisy). Digital Camera: a long exposure produces a blurry image, but the blur can be removed via deconvolution. 50 millisecond exposure (blurry) (deblurred)
8 Coded Blur and Multiplexing Camera Exposure 50 millisec Time [Raskar et al. 06]
9 Coded Blur and Multiplexing Camera Exposure Blur is shifted and summed copies 50 millisec Time [Raskar et al. 06]
10 Coded Blur and Multiplexing Camera Exposure Which copies to keep? 50 millisec Time [Raskar et al. 06]
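The point of fluttering the shutter is invertibility: a plain box blur has zeros in its frequency response, so deconvolution is ill-posed, while a well-chosen on/off code keeps all frequencies. A rough numerical sketch using a short hypothetical code (real flutter-shutter codes, as in [Raskar et al. 06], are longer and optimized) and a circulant blur model:

```python
import numpy as np

def blur_matrix(code, n):
    """Circulant blur matrix for a length-n 1D signal and an on/off shutter code."""
    A = np.zeros((n, n))
    for i in range(n):
        for j, c in enumerate(code):
            A[i, (i + j) % n] = c
    return A / sum(code)               # normalize total exposure

n = 64
box = [1] * 8                          # conventional long exposure (box blur)
code = [1, 0, 1, 1, 0, 0, 1, 1]        # hypothetical flutter-shutter code
cond_box = np.linalg.cond(blur_matrix(box, n))
cond_coded = np.linalg.cond(blur_matrix(code, n))
print(cond_box, cond_coded)            # coded exposure is far better conditioned
```

The box blur matrix is numerically singular (its DFT has exact zeros), so noise explodes under deconvolution; the coded matrix stays well conditioned, which is why "which copies to keep" matters.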
11 Computational Imaging: Increased Performance Coded image capture for increased performance Coded Aperture Defocus Blur Motion Blur [Dowski, Cathey 96] [Mertz 65] [Gottesman 89] [Hausler 72] [Nagahara 08] [Levin et al. 07] [Zhou, Nayar 08] [Raskar 06] [Levin 08] [Cho 10] Multi/Hyper-Spectral Light Field Capture Reflectance [Sloane 79] [Hanley 99] [Baer 99] [Wetzstein et al., 12] [Lanman 08] [Veeraraghavan 07] [Liang 08] [Schechner 03] [Ratner 07] [Ratner 08]
12 Coded Imaging Performance Camera Exposure Camera Exposure 50 millisec Time 50 millisec Time Vs. Short Exposure Coded Exposure Deblurred Image When does computational imaging improve performance?
13 Measuring Computational Imaging Performance
14 Image Formation Model Scene Image Computational Camera Coded Image Coded Image Coding Matrix Noise Optical Coding Equation No diffraction Fully determined Assumption: A) Linear model of incoherent image formation
15 Affine Noise Model Noise variance at the k-th pixel: photon noise (set by aperture, lighting, pixel size) plus read noise (electronics, ADCs, quantization), i.e. signal-dependent plus signal-independent terms. Dark current and fixed-pattern noise are ignored. Photon noise is modeled as Gaussian (OK for more than 10 photons) and spatially averaged. Assumption: B) Affine noise model (photon noise is Gaussian)
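The affine noise model above says the per-pixel variance is a signal-dependent photon term plus a constant read-noise term, i.e. var ≈ J + σ_r². A quick empirical check with illustrative values (400 e- signal, 5 e- read noise):

```python
import numpy as np

rng = np.random.default_rng(2)
signal = 400.0        # mean photoelectrons at pixel k
read_sigma = 5.0      # read noise std (electrons)

# Affine noise model: variance = signal-dependent photon term + constant read term
samples = rng.poisson(signal, 200000) + rng.normal(0, read_sigma, 200000)
predicted_var = signal + read_sigma**2
print(samples.var(), predicted_var)   # empirical variance matches J + sigma_r^2
```

At 400 photons the Poisson photon noise is also visually indistinguishable from a Gaussian, consistent with the slide's ">10 photons" rule of thumb.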
16 Lighting Conditions Signal level and photon noise depend on illumination. The average signal J (e-) is set by the illumination I src (lux), reflectivity R, aperture, exposure time t (s), quantum efficiency q, and pixel size p (µm). Ex) q = .5, R = .5, F/8, t = 6 ms, p = 6 µm. Illumination spans quarter moon, full moon, twilight, indoor lighting, cloudy day, and sunny day. Assumption: C) Naturally occurring light conditions for photography [Cossairt et al. TIP 12]
17 Measuring Performance For Gaussian noise, Mean-Squared-Error (MSE) can be computed analytically Ex) Coded Motion Deblurring Long Exposure Coded Exposure Observation: 1) Multiplexing performance depends on coding matrix
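For Gaussian noise the MSE of the decoded estimate x̂ = A⁻¹y has the closed form MSE = σ²·tr(A⁻¹A⁻ᵀ)/n, which depends only on the coding matrix, as the observation states. A minimal sketch with a hypothetical coding matrix and constant noise for clarity, checked against Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 8, 5.0                      # pixels, Gaussian noise std (constant here)
x = np.full(n, 100.0)                  # scene intensities
A = np.tril(np.ones((n, n)))           # a simple invertible coding matrix (example)

# Analytic MSE of the decoded estimate x_hat = A^{-1} y, where y = A x + noise
A_inv = np.linalg.inv(A)
mse_analytic = sigma**2 * np.trace(A_inv @ A_inv.T) / n

# Monte Carlo check: simulate capture + linear decoding many times
trials = np.array([
    np.mean((np.linalg.solve(A, A @ x + rng.normal(0, sigma, n)) - x)**2)
    for _ in range(20000)])
print(mse_analytic, trials.mean())     # the two agree
```

Swapping in a different A changes `mse_analytic` without rerunning any simulation, which is what makes analytic comparison of coding matrices possible.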
18 Multiplexing vs. Impulse Imaging Impulse imaging (identity sampling) Noise variance Coded imaging (multiplexed sampling) Noise variance Observation: Increased throughput 2) Multiplexing increases signal-dependent noise
19 Multiplexing vs. Impulse Imaging SNR Gain over impulse imaging: Hadamard Multiplexing: Noise Dependent Coding Dependent Decreases with C Increases with C [Sloane 79] No SNR gain for large signal Coded Aperture Astronomy Increasing scene points Fresnel zone plate [Mertz 65] Observation: Decreasing contrast 3) Performance depends on multiplexing and signal prior
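The Hadamard/S-matrix result [Sloane 79] and the "no SNR gain for large signal" claim can both be reproduced numerically. The sketch below builds a standard 7x7 S-matrix from a Sylvester Hadamard matrix and compares decoded MSE against impulse (identity) imaging in the two noise regimes; the specific signal and read-noise values are illustrative:

```python
import numpy as np

def sylvester_hadamard(m):
    """2^m x 2^m Hadamard matrix via the Sylvester construction."""
    H = np.array([[1.0]])
    for _ in range(m):
        H = np.block([[H, H], [H, -H]])
    return H

n = 7
H = sylvester_hadamard(3)              # 8x8 Hadamard
S = (1.0 - H[1:, 1:]) / 2              # 7x7 S-matrix of 0/1 (shutter open/closed)
read_var, signal = 25.0, 100.0
x = np.full(n, signal)

def decode_mse(A, meas_noise_var):
    """Per-pixel MSE of x_hat = A^{-1} y given per-measurement noise variances."""
    A_inv = np.linalg.inv(A)
    return np.trace(A_inv @ np.diag(meas_noise_var) @ A_inv.T) / n

# Read-noise limited: every measurement has the same variance; multiplexing wins.
mse_id = decode_mse(np.eye(n), np.full(n, read_var))
mse_s = decode_mse(S, np.full(n, read_var))

# Photon-noise limited: variance tracks the (larger) multiplexed signal; gain vanishes.
mse_id_p = decode_mse(np.eye(n), x)
mse_s_p = decode_mse(S, S @ x)
print(mse_s / mse_id, mse_s_p / mse_id_p)
```

Under read noise the S-matrix MSE matches the classical 4n/(n+1)² factor (a gain of (n+1)/(2·sqrt(n))); once photon noise dominates, each multiplexed measurement sums (n+1)/2 scene points, the signal-dependent variance grows, and multiplexing actually loses, which is observation 3.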
20 Image Prior Models Assume we have a PDF for images, e.g. a power spectrum prior. Other priors: Total Variation (TV), wavelet/sparsity priors, learned priors (K-SVD). Compute the Maximum A Posteriori (MAP) estimate: data term + prior term. MSE is difficult to express analytically when the prior is not Gaussian. Assumption: D) Signal prior models naturally occurring images
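A MAP estimate with a Gaussian prior has a closed-form linear solution, which makes the data-term/prior-term tradeoff easy to see. The sketch below uses a simple 1D smoothness (Tikhonov) prior as a stand-in for the power-spectrum priors on the slide; signal, noise level, and prior weight are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 64, 2.0
x = np.cumsum(rng.normal(0, 1, n))        # smooth "natural" 1D signal (random walk)
D = np.diff(np.eye(n), axis=0)            # finite-difference operator

def map_denoise(y, lam):
    # argmin_x ||x - y||^2 / sigma^2 + lam * ||D x||^2  (Gaussian smoothness prior)
    return np.linalg.solve(np.eye(n) / sigma**2 + lam * D.T @ D, y / sigma**2)

mse_ml, mse_map = 0.0, 0.0
trials = 300
for _ in range(trials):
    y = x + rng.normal(0, sigma, n)       # noisy observation (impulse imaging)
    mse_ml += np.mean((y - x)**2) / trials
    mse_map += np.mean((map_denoise(y, 1.0) - x)**2) / trials
print(mse_ml, mse_map)                    # the prior reduces MSE markedly here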
21 Image Priors and Noise Denoise Twilight (10 lux): PSNR = 5.5 dB without the prior, 16.4 dB with it. Denoise Daylight (10^5 lux): PSNR = 35 dB without the prior, 35.9 dB with it. Observation: 4) Signal priors help more at low light levels
22 Observations: 1) Multiplexing performance depends on coding matrix 2) Multiplexing helps most in low light 3) Performance depends on both multiplexing and signal prior 4) Signal priors help most in low light Assumptions A) Incoherent imaging B) Affine noise model C) Natural lighting conditions D) Natural image prior How to capture the best quality photograph?
23 Example: Motion Deblurring
24 Motion Deblurring vs. Impulse Imaging Optical efficiency (C) = total on time Camera Exposure 50 millisec Time 50 millisec Time Vs. Impulse Imaging (Short Exposure) Computational Imaging (Coded Exposure) What is the best possible coding performance we can get? [Ratner 07]
25 SNR Gain (G) Multiplexing Performance Bound S-Matrix 4 Average signal level 3 2 Read noise Optical Efficiency (C) [Ratner and Schechner 07]
26 When Does Motion Deblurring Improve Performance? Upper Bound on SNR Gain: depends on read noise and average signal level. Performance depends only on lighting conditions! q = .5, R = .5, F/2.1, p = 1 µm. Plotted vs. maximum object speed (pixels/sec): Motion Invariant [Levin et al.], Flutter Shutter [Raskar et al.] [Cossairt et al. TIP 12]
27 Flutter Shutter Simulation q = .5, R = .5, F/2.1, pixel size = 1 µm, read noise. Impulse (4 ms) vs. Flutter Shutter (180 ms, deblurred): Twilight (10 lux): PSNR = -7.2 dB vs. -3.0 dB. Cloudy Day (10^3 lux): PSNR = 12.4 dB vs. 10.1 dB
28 Example: Extended DOF Imaging
29 Depth of Field Small DOF Microscope Tachinid Fly
30 Depth of Field Large DOF Microscope Tachinid Fly
31 Depth of Field and Noise Image Lens F Small apertures have large depth of field and low SNR
32 Focal Sweep Sensor Lens (depth) Point Spread Function (PSF) [Hausler 72, Nagahara et al. 08]
33 Focal Sweep Sensor Lens (depth) = t = 1 t = 2 t = 3 t = 4 t = 5 t = 6 t = 7 (400) (600) (900) (1200) (1500) (1700) (2000) Integrated PSF [Hausler 72, Nagahara et al. 08]
34 Quasi Depth Invariant PSF Focal Sweep PSF vs. Traditional Camera PSF, shown for depths from 750mm to 2000mm: extended depth of field with a single deconvolution
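Why the swept PSF is quasi depth-invariant: while focus sweeps past the scene, every depth sees roughly the same family of instantaneous blur disks, so the time-integrated PSFs nearly coincide. A crude 1D model, assuming a pillbox defocus PSF whose radius is proportional to |focus setting - scene depth| (a thin-lens stand-in, with made-up depth units):

```python
import numpy as np

def defocus_psf(radius, width=41):
    """1D pillbox defocus PSF with the given blur radius (pixels)."""
    x = np.arange(width) - width // 2
    psf = (np.abs(x) <= radius).astype(float)
    return psf / psf.sum()

def focal_sweep_psf(depth, sweep=(0.0, 20.0), steps=64, width=41):
    """Average the instantaneous PSF as focus sweeps through the scene.

    Blur radius is modeled as |focus setting - scene depth|; the sweep
    integrates the PSF over all focus settings, as in [Hausler 72].
    """
    psf = np.zeros(width)
    for f in np.linspace(*sweep, steps):
        psf += defocus_psf(abs(f - depth), width)
    return psf / psf.sum()

# Fixed focus (at depth 0): PSFs at depths 5 and 12 differ drastically.
fixed_diff = np.abs(defocus_psf(5.0) - defocus_psf(12.0)).sum()
# Swept focus: the two integrated PSFs are nearly identical.
sweep_diff = np.abs(focal_sweep_psf(5.0) - focal_sweep_psf(12.0)).sum()
print(fixed_diff, sweep_diff)
```

Because the swept PSF barely changes with depth, one deconvolution kernel serves the whole scene, which is the extended-depth-of-field claim on the slide.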
35 Extended Depth of Field Telescope Traditional Image vs. Focal Sweep (Captured and Processed), scene at 50 m and 75 m. Meade LX200 8" Telescope, 2000mm FL
36 Focal Sweep Without Moving Parts Focal Sweep Image Lens Diffusion Coding Image (No Moving Parts) Lens 500 x 3 micron [Cossairt et al. Siggraph 10] Radial Diffuser
37 Diffusion Coding: Evaluation RMS deblurring error vs. depth (mm) at 0.01 noise: Cubic Phase Plate [Dowski and Cathey 95], Focal Sweep [Hausler 72, Nagahara et al. 08], and Diffusion Coding. Diffusion coding gives the best performance without moving parts [Cossairt et al. Siggraph 10]
38 Diffusion Coding vs. Traditional Camera Traditional F/1.8 Diffusion Coding F/1.8 (Captured) Traditional F/18 (Normalized) Diffusion Coding F/1.8 (Deblurred)
39 Face Detection Traditional Camera (F/2.0) Diffusion Coding Camera (F/2.0)
40 Diffusion Coded Telescope: Optical Design Diffuser Annular Aperture Mirror 2 Mirror 1 Sensor 8 dia 80 Focal Length
41 Telephoto Focal Sweep with Deformable Optics Canon 800mm EFL Lens Sensor Deformable Lens [Miau et al. ICCP 13]
42 Telephoto Video Quality Comparison Conventional EDOF (Deformable Lens)
43 Focal Sweep Performance Impulse Camera (sensor + lens, coding matrix A): noise variance and mean-squared error. Focal Sweep (sensor + lens + diffuser, coding matrix C*A, a C-fold light increase): noise variance and mean-squared error [Cossairt et al. TIP 12]
44 When Does Defocus Deblurring Improve Performance? The focal sweep multiplexing gain can be expressed analytically: it depends on read noise and average signal level. Performance depends only on lighting conditions! q = .5, R = .5, t = 20 ms, p = 5 µm. Plotted vs. maximum defocus at F/1 (pixels) [Cossairt et al. TIP 12]
45 Focal Sweep Simulation Pixel size = 5 µm, read noise. Traditional (F/2.0), Traditional (F/20.0), Focal Sweep (F/2.0). Twilight (10 lux): PSNR = 5.5 dB vs. 18.5 dB. Daylight (10^5 lux): PSNR = 35 dB vs. 38.5 dB
46 Focal Sweep Simulation (with Prior) Pixel size = 5 µm, read noise. Traditional (F/2.0), Traditional (F/20.0), Focal Sweep (F/2.0). Twilight (10 lux): PSNR = 16.4 dB / 5.5 dB vs. 22.8 dB / 18.5 dB. Daylight (10^5 lux): PSNR = 35.9 dB / 35 dB vs. 39.6 dB / 38.5 dB. BM3D Algorithm: [Dabov et al. 06]
47 Simulated Focal Sweep Performance Focal Sweep Performance Impulse Imaging Focal Sweep performance bound is weak at low light levels [Cossairt et al., TIP 12]
48 Conclusions Results for Motion Deblurring, EDOF also applicable to many other computational cameras Computational imaging performance should always be measured relative to impulse imaging Computational imaging performance depends jointly on multiplexing, noise, and signal priors Important question: How much performance improvement from multiplexing above and beyond use of signal priors?
49 Visual Quality Metrics SSIM, UQI, and VIF metrics: the performance bound roughly holds for all metrics [Cossairt et al., TIP 12]
50 Computational Gigapixel Camera Computational camera design and prototype: ball lens, lens array, sensor, and pan/tilt motor; computations produce the gigapixel image. Also See: MOSAIC Program, Duke, UCSD, Distant Focus
51 Point Spread Function Resolution vs. Lens Scale Pixels PSF size increases linearly
52 Point Spread Function / RMS Deblurring Error: Resolution vs. Lens Scale [Cossairt et al. JOSA 11]. PSF size (pixels) increases linearly with scale (M); deblurring error is sub-linear
More informationInternational Journal of Advancedd Research in Biology, Ecology, Science and Technology (IJARBEST)
Gaussian Blur Removal in Digital Images A.Elakkiya 1, S.V.Ramyaa 2 PG Scholars, M.E. VLSI Design, SSN College of Engineering, Rajiv Gandhi Salai, Kalavakkam 1,2 Abstract In many imaging systems, the observed
More informationTonemapping and bilateral filtering
Tonemapping and bilateral filtering http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 6 Course announcements Homework 2 is out. - Due September
More informationDemosaicing and Denoising on Simulated Light Field Images
Demosaicing and Denoising on Simulated Light Field Images Trisha Lian Stanford University tlian@stanford.edu Kyle Chiang Stanford University kchiang@stanford.edu Abstract Light field cameras use an array
More informationProblem Session 6. Computa(onal Imaging and Display EE 367 / CS 448I
Problem Session 6 Computa(onal Imaging and Display EE 367 / CS 448I Topics Photo- electron shot- noise SNR calcula@ons Deconvolu@on of an image with Poisson noise Wiener deconvolu@on Richardson- Lucy Richardson-
More informationComputational Approaches to Cameras
Computational Approaches to Cameras 11/16/17 Magritte, The False Mirror (1935) Computational Photography Derek Hoiem, University of Illinois Announcements Final project proposal due Monday (see links on
More informationIntroductory Photography
Introductory Photography Basic concepts + Tips & Tricks Ken Goldman Apple Pi General Meeting 26 June 2010 Kenneth R. Goldman 1 The Flow General Thoughts Cameras Composition Miscellaneous Tips & Tricks
More informationRaskar, Camera Culture, MIT Media Lab. Ramesh Raskar. Camera Culture. Associate Professor, MIT Media Lab
Raskar, Camera Culture, MIT Media Lab Camera Culture Ramesh Raskar C C lt Camera Culture Associate Professor, MIT Media Lab Where are the camera s? Where are the camera s? We focus on creating tools to
More informationFOCUS, EXPOSURE (& METERING) BVCC May 2018
FOCUS, EXPOSURE (& METERING) BVCC May 2018 SUMMARY Metering in digital cameras. Metering modes. Exposure, quick recap. Exposure settings and modes. Focus system(s) and camera controls. Challenges & Experiments.
More informationHow do we see the world?
The Camera 1 How do we see the world? Let s design a camera Idea 1: put a piece of film in front of an object Do we get a reasonable image? Credit: Steve Seitz 2 Pinhole camera Idea 2: Add a barrier to
More informationToward Non-stationary Blind Image Deblurring: Models and Techniques
Toward Non-stationary Blind Image Deblurring: Models and Techniques Ji, Hui Department of Mathematics National University of Singapore NUS, 30-May-2017 Outline of the talk Non-stationary Image blurring
More information4 STUDY OF DEBLURRING TECHNIQUES FOR RESTORED MOTION BLURRED IMAGES
4 STUDY OF DEBLURRING TECHNIQUES FOR RESTORED MOTION BLURRED IMAGES Abstract: This paper attempts to undertake the study of deblurring techniques for Restored Motion Blurred Images by using: Wiener filter,
More informationCapturing Light in man and machine. Some figures from Steve Seitz, Steve Palmer, Paul Debevec, and Gonzalez et al.
Capturing Light in man and machine Some figures from Steve Seitz, Steve Palmer, Paul Debevec, and Gonzalez et al. 15-463: Computational Photography Alexei Efros, CMU, Fall 2005 Image Formation Digital
More informationA Kalman-Filtering Approach to High Dynamic Range Imaging for Measurement Applications
A Kalman-Filtering Approach to High Dynamic Range Imaging for Measurement Applications IEEE Transactions on Image Processing, Vol. 21, No. 2, 2012 Eric Dedrick and Daniel Lau, Presented by Ran Shu School
More informationCompressive Imaging. Aswin Sankaranarayanan (Computational Photography Fall 2017)
Compressive Imaging Aswin Sankaranarayanan (Computational Photography Fall 2017) Traditional Models for Sensing Linear (for the most part) Take as many measurements as unknowns sample Traditional Models
More informationOptical image stabilization (IS)
Optical image stabilization (IS) CS 178, Spring 2011 Marc Levoy Computer Science Department Stanford University Outline! what are the causes of camera shake? how can you avoid it (without having an IS
More informationKAUSHIK MITRA CURRENT POSITION. Assistant Professor at Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai.
KAUSHIK MITRA School Address Department of Electrical Engineering Indian Institute of Technology Madras Chennai, TN, India 600036 Web: www.ee.iitm.ac.in/kmitra Email: kmitra@ee.iitm.ac.in Contact: 91-44-22574411
More informationBasic principles of photography. David Capel 346B IST
Basic principles of photography David Capel 346B IST Latin Camera Obscura = Dark Room Light passing through a small hole produces an inverted image on the opposite wall Safely observing the solar eclipse
More information