Cameras


Digital Visual Effects, Spring 2006
Yung-Yu Chuang, 2006/3/1
With slides by Frédo Durand, Brian Curless, Steve Seitz, and Alexei Efros

Outline: pinhole camera, film camera, digital camera, video camera, high dynamic range imaging.

Camera trial #1: put a piece of film in front of an object.

Pinhole camera: add a barrier to block off most of the rays. The barrier reduces blurring; the pinhole opening is known as the aperture, and the image is inverted.

Shrinking the aperture: why not make the aperture as small as possible? Less light gets through, and diffraction effects appear. High-end commercial pinhole cameras sell for roughly $200 to $700.

Adding a lens: a lens focuses light onto the film. There is a specific distance at which objects are in focus; other points project to a circle of confusion in the image.

Lenses: the thin lens equation is 1/d_o + 1/d_i = 1/f, where d_o is the object distance, d_i is the image distance, and f is the focal length; any object point satisfying this equation is in focus (a small numeric check follows below). Thin lens applet: http://www.phy.ntnu.edu.tw/java/lens/lens_e.html

Exposure = aperture + shutter speed. An aperture of diameter D restricts the range of rays (the aperture may be on either side of the lens). Shutter speed is the amount of time that light is allowed to pass through the aperture. The two main exposure parameters are aperture (in f-stops) and shutter speed (in fractions of a second): a longer shutter speed admits more light but produces more motion blur, while a faster shutter speed freezes motion.
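As a quick numeric check of the thin lens equation, here is a minimal sketch (the function name is mine, not from the slides):

```python
def image_distance(focal_length, object_distance):
    """Solve the thin lens equation 1/d_o + 1/d_i = 1/f for the image distance d_i."""
    if object_distance <= focal_length:
        raise ValueError("an object at or inside the focal length forms no real image")
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

# A 50 mm lens focused on an object 2 m away forms the image ~51.3 mm behind the lens.
print(image_distance(0.050, 2.0))  # ~0.0513 m
```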

Aperture: the aperture is the diameter of the lens opening, usually specified by f-stop as f/N, a fraction of the focal length. f/2.0 on a 50mm lens means the aperture is 25mm; f/2.0 on a 100mm lens means the aperture is 50mm. A change of one full f-stop either doubles the light or cuts it in half: a lower f-stop means more light (larger lens opening), a higher f-stop means less light (smaller lens opening).

Depth of field: changing the aperture size affects depth of field. A smaller aperture increases the range in which objects are approximately in focus. See http://www.photonhead.com/simcam/

Exposure and metering: the camera metering system measures how bright the scene is. In aperture-priority mode, the photographer sets the aperture and the camera sets the shutter speed. In shutter-speed-priority mode, the photographer sets the shutter speed and the camera deduces the aperture. In program mode, the camera decides both aperture and shutter speed (roughly middle values). In manual mode, the user decides everything (but can get feedback).

Pros and cons of the modes: aperture priority gives direct depth-of-field control, but can require an impossible shutter speed (e.g. f/1.4 for a bright scene). Shutter-speed priority gives direct motion-blur control, but can require an impossible aperture (e.g. when requesting 1/1000 s for a dark scene); note that aperture is somewhat more restricted. Program mode offers almost no control but requires no thought. Manual mode offers full control, but takes more time and thinking.
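The f-stop arithmetic above is easy to verify; a minimal sketch (helper names are mine):

```python
def aperture_diameter_mm(focal_length_mm, f_number):
    """The aperture diameter is the focal length divided by the f-number (f/N)."""
    return focal_length_mm / f_number

def light_ratio(f_number_a, f_number_b):
    """Ratio of light admitted at f/a versus f/b; light scales with aperture area, i.e. 1/N^2."""
    return (f_number_b / f_number_a) ** 2

print(aperture_diameter_mm(50, 2.0))   # 25.0 mm, as in the slide
print(aperture_diameter_mm(100, 2.0))  # 50.0 mm
print(light_ratio(2.0, 2.8))           # ~2: stepping from f/2.0 to f/2.8 halves the light
```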

Distortion: radial distortion (pincushion or barrel, versus no distortion) is caused by imperfect lenses; deviations are most noticeable for rays that pass through the edge of the lens. Radial distortion can be corrected in software (example from Helmut Dersch).

Film camera vs. digital camera: both place a lens (with motor), an aperture, and a shutter in front of the scene; a digital camera simply replaces the film with a sensor array. Each cell in the array is a light-sensitive diode that converts photons to electrons.
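Correcting radial distortion usually means inverting a polynomial radial model; a minimal sketch under that assumption (the slides do not specify the exact model Dersch's tool uses):

```python
import numpy as np

def undistort_normalized(xd, yd, k1, k2=0.0, iters=5):
    """Approximately invert the radial model x_d = x_u * (1 + k1*r^2 + k2*r^4),
    with r^2 = x_u^2 + y_u^2, by fixed-point iteration on normalized coordinates."""
    xu, yu = np.array(xd, dtype=float), np.array(yd, dtype=float)
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / scale, yd / scale
    return xu, yu

# A point observed at (0.40, 0.30) under mild barrel distortion (k1 = -0.2):
print(undistort_normalized(0.40, 0.30, k1=-0.2))
```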

CCD vs. CMOS: CCD is less susceptible to noise (special fabrication process, higher fill factor); CMOS is more flexible, less expensive (standard process), and consumes less power.

Sensor noise sources: blooming, diffusion, dark current, photon shot noise, amplifier readout noise.

SLR (single-lens reflex): the reflex (the R in SLR) means that we view through the same lens used to take the image, which is not the case for compact cameras. In the SLR viewfinder, light from the scene passes through the lens, bounces off a mirror and through a prism to your eye; when the shutter fires, the mirror flips up and the light reaches the film or sensor instead.

Color: so far we have only talked about monochrome sensors. Color imaging has been implemented in a number of ways: field sequential, multi-chip, color filter array, and the X3 sensor.

Field sequential: Prokudin-Gorskii (early 1900s) took three exposures through different color filters and viewed them with a lantern projector. See http://www.loc.gov/exhibits/empire/

Multi-chip: the incoming light is split so that each chip records one color channel.

Embedded color filters: filters, which are wavelength dependent, can be manufactured directly onto the photodetectors.

Color filter array: color filter arrays (CFAs), also called color filter mosaics, put a mosaic of filters over the sensor; the most common is the Bayer pattern (used, for example, in the Kodak DCS620x).

Demosaicking CFAs, bilinear interpolation: fill in each pixel's two missing color values by averaging the neighboring samples of the same color (compare the original, the raw CFA input, and the linearly interpolated result).
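A minimal sketch of bilinear demosaicking for an RGGB Bayer layout (the layout and function names are assumptions; the slides do not give code):

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Bilinear demosaicking of a Bayer mosaic (2-D float array, RGGB layout:
    R at (0,0), G at (0,1) and (1,0), B at (1,1))."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1.0 - r_mask - b_mask

    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # averages the 4 green neighbors
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # averages 2 or 4 red/blue neighbors

    out = np.zeros((h, w, 3))
    out[..., 0] = convolve(raw * r_mask, k_rb, mode="mirror")
    out[..., 1] = convolve(raw * g_mask, k_g,  mode="mirror")
    out[..., 2] = convolve(raw * b_mask, k_rb, mode="mirror")
    return out
```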

Demosaicking CFAs, beyond bilinear interpolation:

Constant hue-based interpolation (Cok): interpolate G first, then interpolate the hue (R and B relative to G) and recover R and B from it.

Median-based interpolation (Freeman): 1. linear interpolation; 2. median filter on the color differences, then reconstruct from the filtered differences (original, raw input, linear interpolation, color difference, median filter, reconstruction).

Gradient-based interpolation (LaRoche-Prescott): step 1, interpolation of G (continued below).

Gradient-based interpolation (LaRoche-Prescott), step 2: interpolation of the color differences.

Comparing bilinear, Cok, Freeman, and LaRoche on the same input: generally, Freeman's method is the best, especially for natural images.

Foveon X3 sensor: light penetrates to different depths for different wavelengths, so a multilayer CMOS sensor obtains three different spectral sensitivities at every pixel.
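Since Freeman's median-based method generally does best, here is a minimal sketch of its refinement step, assuming a bilinear demosaic (such as the one above) as input; the full method also keeps the original CFA samples fixed, which this sketch omits:

```python
import numpy as np
from scipy.ndimage import median_filter

def freeman_refine(rgb_bilinear, size=5):
    """Median-filter the color differences R-G and B-G of a bilinearly
    demosaicked image, then rebuild R and B from the smoothed differences."""
    r, g, b = rgb_bilinear[..., 0], rgb_bilinear[..., 1], rgb_bilinear[..., 2]
    rg = median_filter(r - g, size=size)
    bg = median_filter(b - g, size=size)
    out = rgb_bilinear.copy()
    out[..., 0] = g + rg
    out[..., 2] = g + bg
    return out
```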

Color filter array vs. X3 technology: with a Bayer CFA, each photosite records only one of red, green, and blue and the other two must be interpolated; with the X3 sensor, each photosite outputs red, green, and blue directly. Cameras with X3 sensors: Sigma SD10 and SD9, Polaroid X530.

Sigma SD9 vs. Canon D30: image comparison between an X3 camera and a Bayer-CFA camera.

Color processing: after color values are recorded, more color processing usually happens: white balance, and a non-linearity to approximate film response or to match TV monitor gamma.

White balance: can be set manually (e.g., "warmer +3") or automatically; the example images compare automatic white balance with white balancing against the white book and against the red book.
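A minimal white-balance sketch using per-channel gains (a von Kries-style correction; the slides do not specify the camera's actual algorithm):

```python
import numpy as np

def white_balance(img, white_patch=None):
    """Scale each channel so that a reference white region becomes neutral.
    If no patch is given, fall back to the gray-world assumption (equal channel means).
    img: H x W x 3 array with values in [0, 255]."""
    img = img.astype(np.float64)
    ref = (white_patch if white_patch is not None else img).reshape(-1, 3).mean(axis=0)
    gains = ref.max() / ref
    return np.clip(img * gains, 0, 255)

# balanced = white_balance(photo, photo[100:150, 200:250])  # patch known to be white
```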

Autofocus: active methods (sonar, infrared) and passive methods.

Digital camera review website: http://www.dpreview.com/ (there is also a nice video illustrating how a digital camera works).

Camcorder: interlacing (compare footage without interlacing and with interlacing).

Deinterlacing options: discard (keep the even field only or the odd field only), blend, weave, and progressive scan; some cases remain hard.
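A toy sketch of the discard/weave/blend options above (function and parameter names are mine):

```python
import numpy as np

def deinterlace(frame, method="blend"):
    """Deinterlace a single frame (H x W grayscale array holding both fields).
    'weave' keeps the interleaved fields as-is, 'discard' keeps only the even
    field (duplicating its lines), and 'blend' averages adjacent lines."""
    if method == "weave":
        return frame.copy()
    if method == "discard":
        out = frame.copy()
        out[1::2] = frame[0::2][: out[1::2].shape[0]]
        return out
    if method == "blend":
        out = frame.astype(np.float64).copy()
        out[:-1] = (out[:-1] + out[1:]) / 2.0
        return out
    raise ValueError(method)
```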

High dynamic range imaging

Camera pipeline and the dynamic range problem: real-world radiance spans roughly 10^-6 to 10^6, but a picture maps only a narrow slice of that range onto pixel values 0 to 255. A short exposure captures the bright end of the range and a long exposure the dark end; no single exposure captures it all.

Real-world response functions and camera calibration: geometric calibration describes how pixel coordinates relate to directions in the world; photometric calibration describes how pixel values relate to radiance amounts in the world.

The camera is not a photometer: it has limited dynamic range (perhaps use multiple exposures?) and an unknown, nonlinear response, so pixel values cannot be converted directly to radiance. Solution: recover the response curve from multiple exposures, then reconstruct the radiance map.

Varying exposure: ways to change exposure include shutter speed, aperture, and neutral density filters.

Shutter speed: shutter times usually obey a power series, each stop a factor of 2. The nominal values 1/4, 1/8, 1/15, 1/30, 1/60, 1/125, 1/250, 1/500, 1/1000 sec are usually really 1/4, 1/8, 1/16, 1/32, 1/64, 1/128, 1/256, 1/512, 1/1024 sec.

Math for recovering the response curve, the idea behind it (following Debevec and Malik): a recorded pixel value is Z_ij = f(E_i Δt_j), where E_i is the irradiance at pixel i and Δt_j is the j-th exposure time. Writing g = ln f^(-1) gives g(Z_ij) = ln E_i + ln Δt_j, which is linear in the unknowns g(z) (one value per pixel level z) and ln E_i.

Recovering the response curve: the solution is determined only up to a scale, so add a constraint (fix g at the middle pixel value) and add a hat weighting function that downweights values near the extremes. We want enough samples, roughly N(P-1) > (Zmax - Zmin); if P = 11 exposures, N ≈ 50 pixels suffice. The selected pixels should be well distributed and sampled from regions of constant radiance; Debevec and Malik pick the points by hand. The result is an overdetermined system of linear equations that can be solved in the least-squares sense using the SVD.

Matlab code for recovering the response curve (shown over three slides in the original) and the recovered response function.
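The Matlab listing itself did not survive the transcription; a minimal Python sketch of the same least-squares setup (hat weighting, scale constraint, smoothness term, solved here with numpy's lstsq rather than an explicit SVD) might look like this:

```python
import numpy as np

def gsolve(Z, log_dt, lam=10.0, n_levels=256):
    """Recover the log response g and log irradiances lnE from Z[i, j], the
    integer pixel value of sample i in exposure j, and log_dt[j] = ln(shutter time j).
    Follows the least-squares formulation of Debevec & Malik (SIGGRAPH 1997)."""
    n, p = Z.shape
    w = lambda z: z + 1 if z <= (n_levels - 1) / 2 else n_levels - z   # hat weighting
    A = np.zeros((n * p + n_levels - 1, n_levels + n))
    b = np.zeros(A.shape[0])

    k = 0
    for i in range(n):                    # data terms: g(Z_ij) - lnE_i = ln dt_j
        for j in range(p):
            wij = w(Z[i, j])
            A[k, Z[i, j]] = wij
            A[k, n_levels + i] = -wij
            b[k] = wij * log_dt[j]
            k += 1
    A[k, n_levels // 2] = 1               # fix the scale: g(middle value) = 0
    k += 1
    for z in range(1, n_levels - 1):      # smoothness term on g''
        wz = lam * w(z)
        A[k, z - 1], A[k, z], A[k, z + 1] = wz, -2 * wz, wz
        k += 1

    x = np.linalg.lstsq(A, b, rcond=None)[0]
    return x[:n_levels], x[n_levels:]     # g (length 256), lnE (length n)
```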

Constructing the HDR radiance map: once the response curve is known, combine the corresponding pixels from all exposures to reduce noise and obtain a more reliable estimate of radiance.

What is this for? Easier HDR reconstruction (a raw image is already a 12-bit CCD snapshot), human perception, and vision/graphics applications.
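A minimal sketch of the merging step just described, reusing the hat weighting and the recovered g (array and function names are mine):

```python
import numpy as np

def radiance_map(images, log_dt, g, n_levels=256):
    """Merge differently exposed images (uint8, same size, one channel) into a
    radiance map: lnE = sum_j w(Z_j) * (g[Z_j] - ln dt_j) / sum_j w(Z_j)."""
    w = np.minimum(np.arange(n_levels), n_levels - 1 - np.arange(n_levels)) + 1.0
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for img, ldt in zip(images, log_dt):
        wz = w[img]
        num += wz * (g[img] - ldt)
        den += wz
    return np.exp(num / den)   # radiance, up to the unknown global scale
```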

Easier HDR reconstruction: exposure = radiance * Δt, so known shutter times relate the recorded exposures back to radiance.

HDR file formats:

Portable floatmap (.pfm): 12 bytes per pixel, 4 for each channel (sign, exponent, mantissa); text header similar to Jeff Poskanzer's .ppm format, e.g. "PF", "768 512", "1", followed by binary image data. Floating-point TIFF is similar.

Radiance format (.pic, .hdr, .rad): 32 bits per pixel, stored as red, green, blue, and a shared exponent; for example (145, 215, 87, 149) represents roughly (1190000, 1760000, 713000), while (145, 215, 87, 103) represents roughly (0.00000432, 0.00000641, 0.00000259). Ward, Greg. "Real Pixels," in Graphics Gems IV, edited by James Arvo, Academic Press, 1994.

ILM's OpenEXR (.exr): 6 bytes per pixel, 2 for each channel (sign, exponent, mantissa), with several lossless compression options (2:1 typical); compatible with the half datatype in NVIDIA's Cg and supported natively on GeForce FX and Quadro FX. Available at http://www.openexr.net/
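To make the Radiance-format example concrete, here is a minimal RGBE decoder using the standard convention value = mantissa / 256 * 2^(exponent - 128) (the slides do not spell out the decode rule, so treat this as an assumption):

```python
import numpy as np

def decode_rgbe(rgbe):
    """Decode Radiance RGBE pixels, shape (..., 4) uint8, into float RGB."""
    rgbe = np.asarray(rgbe, dtype=np.float64)
    mantissa, exponent = rgbe[..., :3], rgbe[..., 3:4]
    scale = np.where(exponent > 0, np.ldexp(1.0, (exponent - 136).astype(int)), 0.0)
    return mantissa * scale          # a zero exponent means a black pixel

print(decode_rgbe([145, 215, 87, 149]))   # ~[1.19e6, 1.76e6, 7.13e5], the bright example above
```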

Radiometric self calibration and the space of response curves: assume that any response function can be modeled as a high-order polynomial.

Assorted pixels: a sensor design that interleaves pixels with different exposures so that a single shot spans a higher dynamic range.

Assignment #1, HDR image assembly: if you have not subscribed to the mailing list, please do so; the assignment will be announced around Friday through the mailing list. You will use a tripod to take multiple photos with different shutter speeds, then write a program to recover the response curve and the radiance map (we will provide an image I/O library). Furthermore, apply some tone mapping operation to your photograph.

References:
http://www.howstuffworks.com/digital-camera.htm
http://electronics.howstuffworks.com/autofocus.htm
Ramanath, Snyder, Bilbro, and Sander. "Demosaicking Methods for Bayer Color Arrays," Journal of Electronic Imaging, 11(3), pp. 306-315.
Paul E. Debevec and Jitendra Malik. "Recovering High Dynamic Range Radiance Maps from Photographs," SIGGRAPH 1997.
http://www.worldatwar.org/photos/whitebalance/index.mhtml
http://www.100fps.com/