Why learn about photography in this course?

Geri's Game: note the background is blurred.

- photography: model of image formation

- Many computer graphics methods use existing photographs, e.g. texture & environment mapping, image matting. Understanding photographs can only help us use them better.
- Many computer graphics methods attempt to mimic real images and their properties (see next slide).
- Digital photographs can be manipulated to achieve new types of images, e.g. HDR, as we'll see later.

https://www.youtube.com/watch?v=9iyrc7g2icg

As we have seen, in computer graphics the projection surface is in front of the viewer; we were thinking of the viewer as looking through a window. In real cameras and eyes, images are formed behind the center of projection.

Aperture

Real cameras (and eyes) have a finite aperture, not a pinhole. The diameter A of the aperture can be varied to allow more or less light to reach the image plane.

Lens

Cameras (and eyes) also have a lens that focuses the light. Typically the aperture is in front of the lens, but for simplicity I have drawn it behind the lens.

For any scene point (x0, y0, z0) there is a corresponding point (x1, y1, z1), called its conjugate point. All the rays that leave (x0, y0, z0) and pass through the lens converge on (x1, y1, z1). For a fixed distance between the lens and the sensor plane, some scene points will be in focus and some will be blurred: too far = blurred, perfect distance = in focus (sharp), too close = blurred. (I will spare you the mathematical formulas, though a small sketch follows below.)
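The conjugate-point relationship alluded to above is usually written as the thin-lens equation, 1/z0 + 1/z1 = 1/f. A minimal sketch, assuming that standard form (the formula and function below are my addition, not from the slides):

    # A minimal sketch, assuming the standard thin-lens equation
    # 1/z0 + 1/z1 = 1/f, with z0 the object distance in front of the
    # lens and z1 the conjugate (image) distance behind it.

    def conjugate_distance(z0, f):
        """Distance behind the lens where rays from depth z0 converge."""
        # As z0 -> infinity, z1 -> f: distant points focus at the focal plane.
        return 1.0 / (1.0 / f - 1.0 / z0)

    # Example: a 50 mm lens imaging a subject 2 m (2000 mm) away.
    print(conjugate_distance(2000.0, 50.0))  # ~51.3 mm behind the lens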

Depth of Field "Depth of field" is the range of depths that are ~ in focus. [Definition: the blur width is less than the distance between pixels.] How to render image blur? (sketch only) http://http.developer.nvidia.com/gpugems/gpugems_ch23.html Method 1: Ray tracing (Cook et al. 1984) For each point on the image plane, trace a set of rays back through the lens into the scene (using formulas I omitted). Compute the average of RGB values of this set of rays. Method 2: "Accumulation buffer" (Haeberli and Akeley 1990) Render the scene in the standard OpenGL way from each camera position within the aperture (one image shown below). Each of these images needs to be scaled and translated on the image plane. (Again, I will spare you the math.) Then, sum up all the images. - basics of photography Camera Settings The total light reaching each point on the image plane depends on the intensity of the incoming light, and on the angle of the cone of rays which depends on the aperture. There is also a proportionality factor -- not shown. * angleofconeofrays(x) "Solid Angle" is a 2D angle. It is defined to be the area of a unit hemisphere (radius 1) covered by the angle. Angle has units radians (or degrees). Solid angle has units "steradians". e.g. You can talk about the solid angle of the sun or moon. Angular width of the lens as seen from the sensor is The units are radians. The total light reaching each point on the image plane (per unit time) is thus as follows, where L( l ) is the intensity of the light in direction l. Here we ignore color spectrum but in fact E( ) also depends on wavelength of light (see color lecture). The solid angleofconeofrays is proportional to: (This is a familiar effect: the area of a 2D shape grows like the square of the diameter.)

F-number

Definition: f-number N = f / A. Since f / A (or its inverse) is fundamental to determining how much light reaches the image plane, this quantity is given a name. On typical cameras, the user can vary the f-number; the mechanism for doing this is usually to vary the aperture.

It is also possible to fix the aperture and vary the focal length. What happens when we vary the focal length as on the previous slide?

small f: wide angle (fixed sensor area)
large f: narrow angle, "telephoto" (fixed sensor area)

The image is darker for the larger focal length f. Why? Because the angle of the lens is smaller when viewed from a point on the sensor.

Shutter speed

Shutter speed is 1/t, where t is the time of exposure. Image intensity also depends on t. Application: motion blur (Cook 1984). Exercise: there is a very subtle rendering effect here. Can you see it?

- basics of photography

Exposure and Camera Response

Exposure is the product E * t. How does this relate to last lecture? The model for image RGB from last lecture was: (formula not captured in the transcript). In fact, a typical camera response maps exposure through a nonlinear curve T:

    I(x,y) = T( E(x,y) * t )
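Putting the two settings together: since E is proportional to (A / f)^2 = 1/N^2 and the collected light grows with t, the light gathered scales as t / N^2. A minimal sketch of this trade-off (the function name and example values are illustrative):

    def relative_exposure(f_number, shutter_time):
        """Light gathered, up to a constant factor: t / N^2."""
        return shutter_time / f_number**2

    # Stopping the aperture down one stop (f/2.8 -> f/4) halves the light;
    # doubling the exposure time (1/125 s -> 1/60 s) roughly restores it.
    a = relative_exposure(2.8, 1.0 / 125.0)
    b = relative_exposure(4.0, 1.0 / 60.0)
    print(a / b)  # ~0.98, i.e. nearly the same exposure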

As we will see a few slides from now, it is useful to re-draw the camera response curve as a function of log exposure. In a few slides, I will say how to compute this curve.

- basics of photography
- camera settings (aperture, f-number, shutter speed)

Dynamic range

A typical scene has a dynamic range of luminances that is much greater than the dynamic range of exposures you can capture with a single image in your camera. Example: a scene dynamic range over 4000:1.

The "dynamic range" of a signal is the ratio of its maximum value to its minimum value. If we look at log(signal), then dynamic range is a difference, max - min. Note that the dynamic range of an exposure image, E(x,y) * t, doesn't depend on the exposure time t. (Figure: the camera's dynamic range covers only a small interval of the scene's dynamic range.)

How to compute the camera response curve T( )? (Sketch only; [Debevec and Malik 1997])
- Take multiple exposures by varying the shutter speed (as we did two slides back).
- Perform a "least squares" fit to a model of T( ). (This requires making a few reasonable assumptions about the model, e.g. monotonically increasing, smooth, goes from 0 to 255. Details omitted.)
- Option: compute separate models for R, G, and B.

Computing a high dynamic range (HDR) image

Given T( ) for a camera, and given a set of new images It(x,y) obtained for several shutter speeds 1/t, use the estimate

    Et(x,y) = T^(-1)( It(x,y) ) / t

keeping the estimates Et(x,y) for which 0 << It(x,y) << 255, where the T( ) curve is most reliable. A code sketch follows.
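A minimal sketch of this merge in Python/NumPy, assuming the inverse response T^(-1) has already been estimated as a 256-entry lookup table T_inv (pixel value -> E * t). The triangle weighting that favors mid-range pixels is in the spirit of Debevec and Malik, but the exact weights here are my choice:

    import numpy as np

    def merge_hdr(images, times, T_inv):
        """images: uint8 arrays of equal shape; times: exposure times t."""
        num = np.zeros(images[0].shape, dtype=np.float64)
        den = np.zeros_like(num)
        for img, t in zip(images, times):
            E = T_inv[img] / t                # Et(x,y) = T^-1(It(x,y)) / t
            # Trust mid-range pixels most: T( ) is unreliable near 0 and 255.
            w = np.minimum(img, 255 - img).astype(np.float64)
            num += w * E
            den += w
        return num / np.maximum(den, 1e-8)    # weighted average of the Et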

How to view an HDR image on a low dynamic range (LDR) display? This is the problem of "tone mapping". The simplest method is to compute log E(x,y) and scale the values to [0, 255]; a code sketch appears at the end of these notes.

Tone mapping is a classical problem in painting and drawing: how to depict an HDR scene on an LDR display/canvas/print? The typical dynamic range of paint/print is only about 30:1. HDR has always been an issue in classical photography too, e.g. Ansel Adams' techniques for "burning and dodging" prints. HDR images can now be made with consumer-level software.

BTW, another problem: panoramas / image stitching
- available in consumer-level cameras
- based on homographies (2D -> 2D maps)
- traditionally part of the computer vision curriculum, but many of the key contributions are by graphics people and are used in graphics

Announcement: A4 posted (worth 6%), due in two weeks.
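A minimal sketch of that simplest tone mapper (log, then a linear rescale to [0, 255]); the epsilon guard against log(0) is my addition:

    import numpy as np

    def tonemap_log(E, eps=1e-6):
        """Map an HDR exposure image (positive floats) to a uint8 LDR image."""
        logE = np.log(E + eps)                    # compress the dynamic range
        lo, hi = logE.min(), logE.max()
        scaled = 255.0 * (logE - lo) / (hi - lo)  # scale log values to [0, 255]
        return scaled.astype(np.uint8)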