Image Formation. Dr. Gerhard Roth. COMP 4102A Winter 2014 Version 1


Image Formation
Two types of images:
- Intensity image: encodes light intensities (passive sensor)
- Range (depth) image: encodes shape and distance; created by processing passive images or by an active sensor
An intensity image is a function of three things:
- Optical parameters of the lens: lens type, focal length, field of view, angular aperture
- Photogrammetric (radiometric) parameters: type, direction and intensity of the illumination; reflectance properties of the viewed surface; characteristics of the image sensor
- Geometric parameters: type of projection, position and orientation of the camera

Elements of an imaging device: light rays come from the outside world and fall on the photoreceptors in the retina.

Pinhole Camera

Perspective Projection Draughtsman Drawing a Lute, Albrecht Dürer, 1525

Camera Obscura Camera Obscura, Reinerus Gemma Frisius, 1544. Camera obscura is Latin for "dark chamber".

Camera Obscura Contemporary artist Madison Cawein rented studio space in an old factory building where many of the windows were boarded up or painted over. A random small hole in one of those windows turned one room into a camera obscura.

Photographic Camera Photographic camera: Joseph Nicéphore Niepce, 1816

First Photograph First photograph on record, la table servie, obtained by Niepce in 1822.

Why Lenses? Gather more light from each scene point and also reduce blurring

Why Lenses?
- Pinhole too big: many directions are averaged, blurring the image
- Pinhole too small: diffraction effects blur the image
Generally, pinhole cameras are dark, because only a very small set of rays from a particular point hits the screen.

Camera with Lens - Thin Lens Model
Lens thickness is small compared to the focal length. Basic properties:
1. Any ray entering the lens parallel to the axis on one side goes through the focal point on the other side.
2. Any ray entering the lens from the focal point on one side emerges parallel to the axis on the other side.

Fundamental Equation of Thin Lenses
With an object at distance Z in front of the lens, its focused image at distance z behind the lens, and focal length f:
1/Z + 1/z = 1/f
Any point satisfying this equation is in focus. The proof uses the similar triangles PSF_l ~ ORF_l and QOF_r ~ spF_r, together with the facts that PS = QO and sp = OR.
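The thin lens equation can be checked numerically. A minimal sketch (the function name `image_distance` is my own, not from the slides):

```python
def image_distance(Z, f):
    """Solve the thin lens equation 1/Z + 1/z = 1/f for the image distance z.

    Z: object-to-lens distance, f: focal length (same units, Z > f for a real image).
    """
    if Z <= f:
        raise ValueError("object at or inside the focal length: no real image")
    return 1.0 / (1.0 / f - 1.0 / Z)

# An object 1 m in front of a 50 mm lens focuses about 52.63 mm behind it;
# as Z grows, z approaches f (the pinhole limit).
z_near = image_distance(1000.0, 50.0)
z_far = image_distance(1_000_000.0, 50.0)
```

Note how the image distance shrinks toward f as the object recedes, which is why a focus ring only needs to move the lens a few millimetres.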

Thin Lenses As the object distance Z goes to infinity, the image distance z approaches f, the value for a pinhole camera. For a lens we can adjust the focus ring to move the lens, and the aperture ring to change the aperture. Both of these adjustments affect what is called the depth of field (explained by the thin lens model). Thin lens applet: http://www.phy.ntnu.edu.tw/java/lens/lens_e.html

Depth of field A point is in focus over a given range of distances, called the depth of field, which changes with f. In the in-focus region there is less than one pixel of blur.

Depth of field


Aperture size The blurriness of out-of-focus objects depends on the aperture size. A larger aperture means a smaller depth of field, but it also lets in more light.
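The aperture/depth-of-field trade-off can be made concrete with the standard photographic depth-of-field formulas (these use the hyperfocal distance, which the slides do not derive; the function names are my own):

```python
def hyperfocal(f, N, c):
    """Hyperfocal distance H = f^2 / (N c) + f: focusing at H keeps everything
    from H/2 to infinity acceptably sharp.
    f: focal length, N: f-number (bigger N = smaller aperture),
    c: diameter of the acceptable circle of confusion (the "< 1 pixel of blur")."""
    return f * f / (N * c) + f

def depth_of_field(s, f, N, c):
    """Near and far limits of acceptable sharpness for a subject at distance s."""
    H = hyperfocal(f, N, c)
    near = H * s / (H + (s - f))
    far = float("inf") if s - f >= H else H * s / (H - (s - f))
    return near, far

# 50 mm lens, subject at 3 m, 0.03 mm circle of confusion (all distances in mm):
n2, f2 = depth_of_field(3000.0, 50.0, 2.0, 0.03)     # wide open (f/2): narrow DOF
n16, f16 = depth_of_field(3000.0, 50.0, 16.0, 0.03)  # stopped down (f/16): wide DOF
```

Running this shows the in-focus band around the subject growing by several metres as the aperture is stopped down, matching the slides' claim that a larger aperture means a smaller depth of field.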

Varying the aperture Large aperture = small DOF. Small aperture = large DOF.

Nice Depth of Field effect

Field of View (Zoom)


FOV depends on the focal length f: smaller FOV = larger focal length.

Field of View / Focal Length Large FOV, small f: camera close to the car. Small FOV, large f: camera far from the car. A large field of view (wide angle) captures more of the scene, but with more perspective distortion.

Effect of change in focal length A small f gives a wide angle; a large f is telescopic.
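The FOV/focal-length relationship comes from the pinhole geometry: half the sensor width subtends an angle of atan(w / 2f) at the optical center. A small sketch (sensor width of 36 mm is an assumed example, not from the slides):

```python
import math

def field_of_view_deg(sensor_width, f):
    """Horizontal field of view of an ideal camera: FOV = 2 * atan(w / (2 f))."""
    return math.degrees(2.0 * math.atan(sensor_width / (2.0 * f)))

# On a 36 mm wide (full-frame) sensor: small f = wide angle, large f = telescopic
wide = field_of_view_deg(36.0, 24.0)   # roughly 74 degrees
tele = field_of_view_deg(36.0, 200.0)  # roughly 10 degrees
```

Because of the arctangent, FOV falls off quickly at first and then slowly: doubling a short focal length changes the view far more dramatically than doubling a long one.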

Autofocus Uses a sensor, control system and motor to focus on a selected point or area, so we can get sharp images over large depth variation. Another definition: intelligently adjust the camera lens to maintain focus on an object. Two approaches, active and passive:
- Active: triangulation using an active sensor such as laser, ultrasound, or infrared light
- Passive: phase detection (similar to stereo) to find depth, or contrast detection, which uses blur (or the lack of it) to find depth
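Contrast detection, mentioned above, needs only a per-frame sharpness score that the focus motor can climb. One common choice is the gradient energy; a minimal sketch (the function name and the toy images are mine, not from the slides):

```python
def focus_measure(img):
    """Contrast-detection focus score: total squared difference between
    neighbouring pixels. A sharp image has strong local contrast and scores
    high; defocus blur spreads edges out and lowers the score.
    img: 2-D list of grayscale values."""
    rows, cols = len(img), len(img[0])
    score = 0
    for i in range(rows):
        for j in range(cols):
            if j + 1 < cols:
                score += (img[i][j + 1] - img[i][j]) ** 2
            if i + 1 < rows:
                score += (img[i + 1][j] - img[i][j]) ** 2
    return score

# The same edge, sharp and blurred; an autofocus loop would step the lens
# toward whichever position maximizes this score.
sharp = [[0, 0, 255, 255]] * 4
blurred = [[0, 85, 170, 255]] * 4
```

This is why contrast autofocus "hunts": it must move the lens past the peak and back to know it has found the maximum, whereas phase detection measures the required correction directly.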

Autofocus Found on all high-end cameras, and now on many low-end cameras (webcams) and phones. Android offers fixed focus, single autofocus (done once), or continuous autofocus. Most sophisticated image processing applications require an in-focus image.
- Require autofocus: QRTag and OCR (including my chess application)
- Do not require autofocus: ARTag, a tag system with much less information
Such applications work on a wider variety of devices.

QRTag versus ARTag
- QRTag: lots of info in small regions; can encode an entire URL
- ARTag: less info, large regions; only 10 bits of encoding

Basic radiometry
Image irradiance: the power of light, per unit area, at each point p of the image plane.
Scene (surface) radiance: the power of the light, per unit area, ideally emitted by each point p of a surface in 3-D space in a given direction.

Surface Reflectance Model A model of the way in which a surface reflects incident light is called a surface reflectance model; there are a number of different types. Fix the lighting and the object, and then move the camera while looking at a single surface point. The change in appearance of that surface point defines its specularity:
- A plain sheet of paper is non-specular (no change)
- A desktop is semi-specular (some change)
- A mirror is very specular (a great deal of change)

Surface Reflectance for Lambertian Surfaces
L = rho I^T n, where rho is called the surface albedo and depends on the surface material, I is the illumination vector, n is the unit surface normal, and L is the scene irradiance (note there is no viewing-direction term d). Lambertian model: each surface point appears equally bright from all viewing directions (no term with d); a non-specular surface. Specular model: this is not true; the surface looks brighter from some viewing directions (mirrors are very specular). Specular models are much more complex than the Lambertian model (more parameters).
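The Lambertian model above is a one-line dot product in code. A minimal sketch (the function name and the albedo value 0.8 are illustrative assumptions):

```python
import math

def lambertian(albedo, light, normal):
    """Brightness under the Lambertian model: L = albedo * (I . n).
    light: illumination vector I, normal: unit surface normal n.
    There is no viewing-direction term, so the result is the same
    from every viewpoint."""
    dot = sum(a * b for a, b in zip(light, normal))
    return albedo * max(dot, 0.0)  # surfaces facing away from the light get nothing

# Light shining along +z onto a surface with albedo 0.8:
head_on = lambertian(0.8, (0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
tilted_60 = lambertian(0.8, (0.0, 0.0, 1.0),
                       (0.0, math.sin(math.radians(60.0)), math.cos(math.radians(60.0))))
```

Tilting the surface 60 degrees away from the light halves its brightness (cos 60 = 0.5), regardless of where the camera is: that viewpoint independence is exactly what makes a surface non-specular.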

Human Eye

CCD (Charge-Coupled Device) Cameras Small solid-state cells convert light energy into electrical charge (the sensing elements are always rectangular, and usually square). The image plane acts as a digital memory that can be read row by row by a computer.

Image Digitization
Sampling: measuring the value of an image at a finite number of points.
Quantization: representing the measured value at each sampled point by an integer.
Pixel: picture element, usually in the range [0, 255].
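The quantization step can be sketched in a few lines (the function name and the [0, 1] input range are my own illustrative choices):

```python
def quantize(value, levels=256, v_min=0.0, v_max=1.0):
    """Quantization step of digitization: map a continuous measurement in
    [v_min, v_max] to one of `levels` integers (256 levels gives [0, 255])."""
    value = min(max(value, v_min), v_max)  # clip out-of-range measurements
    return int(round((value - v_min) / (v_max - v_min) * (levels - 1)))

# Continuous irradiance values become 8-bit pixel values:
dark, mid, bright = quantize(0.0), quantize(0.5), quantize(1.0)
```

Sampling fixes how many pixels there are; quantization fixes how many gray levels each pixel can take. Both throw information away, which is why the digital image is only an approximation of the continuous irradiance on the sensor.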

Grayscale Image A digital image is represented by an m-by-n integer array E. Each pixel E(i,j) is an integer in the range [0, 255] (sample values from the figure: 10, 5, 9, 100).

Color Image Three channels: B, G, R.

Geometric Model of Camera
Perspective projection: a 3-D point P(X, Y, Z) projects through the optical center to the image point p(x, y) on the image plane; the principal axis meets the image plane at the principal point. With focal length f:
x = f X / Z, y = f Y / Z
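The projection equations translate directly into code; a minimal sketch (the function name is mine):

```python
def project(X, Y, Z, f):
    """Perspective projection through the optical center: x = f*X/Z, y = f*Y/Z.
    (X, Y, Z): point in camera coordinates, with Z > 0 in front of the camera."""
    if Z <= 0:
        raise ValueError("point is at or behind the optical center")
    return f * X / Z, f * Y / Z

# Doubling the depth halves the projected coordinates, which is why distant
# objects look smaller and parallel lines appear to converge.
x1, y1 = project(1.0, 2.0, 4.0, 2.0)
x2, y2 = project(1.0, 2.0, 8.0, 2.0)
```

The division by Z is the whole story of the "funny things" on the next slides: image size depends on depth, so equal 3-D lengths at different depths project to unequal image lengths.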

Funny things happen

Parallel lines aren't (figure by David Forsyth)

Lengths can't be trusted... (figure by David Forsyth, showing segments A, B, C)