Cameras. Steve Rotenberg CSE168: Rendering Algorithms UCSD, Spring 2017


Camera Focus

Camera Focus
So far, we have been simulating pinhole cameras with perfect focus. Often, we want to simulate more realistic camera lenses that blur objects that are not in focus. We say that real lenses have a limited depth of field, where the depth of field is the zone that is in focus. This blur is sometimes referred to as defocus blur.

Defocus Blur
Defocus blur can be a bad thing if the subject of the image is out of focus. However, it can sometimes be a good thing: if the subject is in focus and the background is blurred, this draws attention to the subject while removing distractions from the background. It can be a nice artistic effect if handled properly.

Lenses
In photography, the term lens refers to the whole optical system in front of the film. This is generally made from several lens elements, which are the individual pieces of glass in the lens, plus the iris and any structural components. Most modern lenses have at least 4 elements, and complex zoom lenses can have more than 10.

Focal Plane
With a typical lens, there is a plane in front of the camera that is in perfect focus; this is called the focal plane. Objects get blurrier the farther they are from the focal plane, whether closer to the camera or farther away.

Focal Plane

Aperture
A camera aperture is an opening through which light travels. A small aperture leads to a sharper image, and a large aperture leads to a blurrier image. Typically, in a real camera, the aperture size can be changed with an adjustable iris.

Rendering Camera Focus
To add camera focus blur to a ray tracer, we need to model the camera lens. We could model the entire complex lens with multiple lens elements, lens coatings, and an iris; this is actually a fairly common approach in high-end movies, where computer generated objects need to be integrated into live-action scenes. However, we will examine a much simpler method. This method requires adding two more parameters to the Camera class: Aperture and FocalPlane. Aperture refers to the diameter of the lens, and FocalPlane is the distance of the focal plane in front of the camera.

Rendering Camera Focus
Our existing approach to generating camera rays uses a virtual image plane 1.0 unit in front of the camera: we first generate a point on the image plane and then generate a ray from the camera origin through that point. To modify this, all we need to do is scale the virtual image plane distance out to the focal plane, and then generate a ray origin by choosing a random point on a circular disk the size of the camera aperture.
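The two-step modification above can be sketched in a few lines of C++. This is a minimal sketch, not the course's actual code: it assumes a camera at the origin looking down -z with the image plane 1.0 unit away, and the disk sampler and all names are our own.

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>

struct Vec3 { float x, y, z; };

// Uniformly sample a point on a disk of the given radius (rejection sampling).
static void SampleDisk(float radius, float& dx, float& dy) {
    do {
        dx = (2.0f * rand() / RAND_MAX - 1.0f) * radius;
        dy = (2.0f * rand() / RAND_MAX - 1.0f) * radius;
    } while (dx * dx + dy * dy > radius * radius);
}

// Build a depth-of-field ray in camera space.
// (px, py) is the point on the virtual image plane at distance 1.0.
void GenerateDOFRay(float px, float py, float aperture, float focalPlane,
                    Vec3& origin, Vec3& dir) {
    // 1. Scale the image-plane point out to the focal plane.
    Vec3 focusPoint = { px * focalPlane, py * focalPlane, -focalPlane };

    // 2. Jitter the ray origin over a disk the size of the aperture.
    float dx, dy;
    SampleDisk(aperture * 0.5f, dx, dy);   // radius = half the diameter
    origin = { dx, dy, 0.0f };

    // 3. Aim the ray through the (sharp) focus point and normalize.
    dir = { focusPoint.x - origin.x, focusPoint.y - origin.y,
            focusPoint.z - origin.z };
    float len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    dir = { dir.x / len, dir.y / len, dir.z / len };
}
```

With an aperture of zero this degenerates back to the original pinhole ray; anything on the focal plane stays sharp because every jittered ray passes through the same focus point.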

Distant Focal Plane

Medium Focal Plane

Close Focal Plane

Small Aperture

Large Aperture

Bokeh
In recent years, the Japanese word bokeh has been adopted by English-speaking photographers and computer graphics practitioners to refer to the artistic effect of defocus blur. More specifically, the term refers to how the lens renders out-of-focus points of light.

Point Spread Function
The point spread function (PSF) of an optical system describes how an individual point of light in the scene maps to the image.

Motion Blur

Motion Blur
Motion blur refers to the blurring we see on fast moving objects. Motion blur is generally a good thing, and can improve the perceived realism of animations. Motion blur is sometimes called temporal antialiasing (at least in the computer graphics world), and it reduces the aliasing phenomenon known as strobing.

Shutter Speeds
Motion blur occurs because the camera shutter is open for a finite length of time. Camera shutter speeds vary based on light levels, exposure settings, film speed, etc. Typical shutter speeds range from 1/30th of a second down to 1/4000th of a second, and some still images use very long shutter speeds (maybe a few seconds). Motion picture cameras (video or film) require the shutter to be open for less than the frame time, so for 60 Hz video, a typical shutter speed would be 1/100th of a second or less. Older film cameras typically run at 24 fps, and the shutter is typically open for up to half of the frame time, so 1/48th of a second or less.
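The 24 fps film-camera figure above reduces to a one-line calculation. This is a sketch; the "open fraction" framing (e.g. 0.5 for a shutter open half the frame time) is our own:

```cpp
#include <cassert>
#include <cmath>

// Seconds the shutter is open per frame: the frame time (1/fps) scaled by
// the fraction of the frame during which the shutter is open.
double ShutterTime(double fps, double openFraction) {
    return openFraction / fps;
}
```

For example, ShutterTime(24.0, 0.5) gives 1/48 of a second, matching the film-camera case above.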

Rendering Motion Blur
To add motion blur to a ray tracer, we need to distribute rays in time. We add a Time field to the Ray class, and when the camera generates a ray, we assign it a random time. We can either base it on actual time in seconds, or we can normalize it to a [0, 1] range. When we trace the ray, we need to intersect it with objects moved to their correct position, based on the ray time.
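A minimal sketch of the Time field and its assignment, assuming the normalized [0, 1] option. The Ray layout and the rand()-based sampling are illustrative, not the renderer's actual code:

```cpp
#include <cassert>
#include <cstdlib>

// A ray stamped with the instant of the shutter interval it represents.
struct Ray {
    float origin[3];
    float dir[3];
    float time;   // normalized time within the shutter interval, in [0, 1]
};

// Pick a uniform random time in [0, 1] for a newly generated camera ray.
float RandomShutterTime() {
    return static_cast<float>(rand()) / static_cast<float>(RAND_MAX);
}
```

Every object intersection then evaluates the scene at `ray.time`, so averaging many rays per pixel averages the scene over the whole shutter interval.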

Moving Objects
Not all objects in the scene need to move, so we can treat moving objects as a special case. Just as we used the InstanceObject to position an object with a matrix, we can create a MotionObject which handles moving objects. The MotionObject is a lot like an InstanceObject, except it allows the matrix to change over time. A simple way to do this is to give it an initial and a final matrix. In the MotionObject::Intersect() function, we first use the input ray time to interpolate between the initial and final matrix. Then we compute the inverse on the fly, and from there it behaves like a normal InstanceObject. A more complex implementation could do an animation channel lookup and compute a matrix based on that.

Matrix Interpolation
Assuming we go with the simpler option, we still have to address how we interpolate between the initial and final matrix. The simplest way is to do a linear interpolation (lerp) of each component of the matrix: Lerp(t,a,b) = (1-t)a + tb = a + t(b-a). This works reasonably well, assuming the object doesn't rotate too much in the time interval, which is usually the case.
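The component-wise lerp can be written directly. Matrices are flat float[16] arrays here purely for illustration:

```cpp
#include <cassert>
#include <cmath>

// Component-wise linear interpolation of two 4x4 matrices.
void LerpMatrix(const float a[16], const float b[16], float t, float out[16]) {
    for (int i = 0; i < 16; ++i)
        out[i] = a[i] + t * (b[i] - a[i]);   // Lerp(t,a,b) = (1-t)a + tb
}
```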

Matrix Interpolation
However, for fast rotating objects (like a propeller), this may not be good enough. When the matrix is linearly interpolated, every part of the object moves in a straight line from its initial to its final position. This is OK if the object only rotates a few degrees, but starts to break down with more than, say, 20 degrees of rotation. To improve on this, we can use quaternion interpolation or twist extraction.

Matrix Interpolation
GLM has a routine for interpolating rigid 4x4 matrices: mtx = glm::interpolate(m1, m2, t); GLM's documentation says that this may behave oddly if there are any scales or shears in the matrices, but it should work fine for rigid matrices made from pure rotations and translations.

Non-Orthonormal Matrices
For non-orthonormal matrices, it might be best to just lerp the individual components. Alternatively, one can extract twist, shear, and scale, interpolate each of those, and reconstruct a matrix, but this might be overkill.

MotionObject
Handling moving objects with this scheme is quite simple and doesn't have a large performance penalty, apart from the need to shoot enough rays to reduce the noise in the blurred areas. One catch to remember, though: if you allow Objects to be placed in spatial data structures, you will need to compute a bounding box for the MotionObject that encompasses the object over the entire time interval.
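One conservative way to build such a box, assuming the lerped-matrix motion described above, is to enclose the object's bounding boxes under the initial and final matrices. Since each point travels in a straight line between its two endpoints under a component-wise lerp, that segment stays inside the enclosing axis-aligned box. The types and names here are ours:

```cpp
#include <algorithm>
#include <cassert>

struct BBox { float min[3], max[3]; };

// Axis-aligned box enclosing the object's box at the start and end of the
// time interval; conservative for any point lerping between the two poses.
BBox TimeIntervalBound(const BBox& atStart, const BBox& atEnd) {
    BBox r;
    for (int i = 0; i < 3; ++i) {
        r.min[i] = std::min(atStart.min[i], atEnd.min[i]);
        r.max[i] = std::max(atStart.max[i], atEnd.max[i]);
    }
    return r;
}
```

The cost is looser bounds for large motions, but correctness for the spatial data structure is preserved.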

Moving Objects

Moving Cameras
Objects are not the only thing that moves; often, the camera moves as well. This can result in everything blurring during fast camera moves or turns. The camera can be handled much like an object: it can have an initial and a final matrix that get interpolated as well. The camera ray time is chosen first, then the camera matrix is interpolated, and then the ray origin and direction are built from the interpolated camera matrix.
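The ordering above can be sketched as follows. For brevity the camera "matrix" is reduced to just a position, and all names are illustrative:

```cpp
#include <cassert>
#include <cstdlib>

struct CameraPose { float pos[3]; };

// Lerp the camera pose, mirroring the matrix interpolation used for objects.
CameraPose LerpPose(const CameraPose& c0, const CameraPose& c1, float t) {
    CameraPose c;
    for (int i = 0; i < 3; ++i)
        c.pos[i] = c0.pos[i] + t * (c1.pos[i] - c0.pos[i]);
    return c;
}

// 1. choose the ray time, 2. interpolate the camera,
// 3. build the ray origin from the interpolated pose.
void MakeMovingCameraRayOrigin(const CameraPose& c0, const CameraPose& c1,
                               float& time, float origin[3]) {
    time = static_cast<float>(rand()) / RAND_MAX;        // step 1
    CameraPose cam = LerpPose(c0, c1, time);             // step 2
    for (int i = 0; i < 3; ++i) origin[i] = cam.pos[i];  // step 3
}
```

The key design point is that the time is fixed before any camera math, so the ray origin, direction, and every secondary ray it spawns all describe the same instant.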

Moving Camera

Fast Panning Shot

Reflections & Shadows
Motion blur comes from averaging across a finite interval of time, and each camera ray is meant to represent a single instant of that interval. Therefore, any reflected rays or shadow rays spawned from the initial camera ray must use the same time as the camera ray.

Animation Blur
Technically, anything that changes over time can be blurred; this isn't limited to matrices. For example, camera FOV angles can change, so a fast zoom-in will be blurred. One could even blur changes in lighting properties (position, color, brightness, etc.) or any other dynamic property.

Lens Imperfections

Camera Imperfections
Real camera lenses aren't perfect and can suffer from imperfections or aberrations. Like defocus and motion blur, these can sometimes be desirable and sometimes undesirable. However, if our goal is to model a real lens or integrate synthetic objects into a real scene, we may want to include some of these.

Spherical Aberration
Spherical aberration occurs because a spherical lens surface focuses rays passing through the edge of the lens at a slightly different distance than rays passing near the center, so the individual rays coming from a point in the scene do not converge to a single point on the film plane.

Coma Aberration
Coma refers to the aberration that causes off-axis point sources to appear to have a tail like a comet.

Chromatic Aberration
Chromatic aberration is caused by the fact that the index of refraction of the lens glass varies with the wavelength of the light. It can cause color fringing, especially towards the edges of the image.

Astigmatism Astigmatism is caused by lens elements that are not radially symmetric Rays in different planes focus to different points, leading to asymmetric distortions in the final image

Radial Distortion
Lenses can cause various types of geometric distortion of the image, the most common being barrel distortion and pincushion distortion. Fish-eye lenses take advantage of this effect: a fisheye image exhibits extreme barrel distortion.

Vignette
Vignetting is the reduction in brightness or color saturation towards the edges of the image.

Lens Flares
Lens flares are caused by interreflection and scattering between the different elements and other components in a lens. Very bright light sources tend to cause flares even if they are outside the image frame.

Bloom, Halos, & Stars
Bloom, halos, and stars are forms of lens flare.

Modeling Lens Imperfections
Most of the lens imperfections shown can be faked as a 2D post process. We will discuss this in more detail when we talk about HDR (high dynamic range) imaging in a later lecture. There are, however, modern techniques that attempt to fully model the lens and capture all of these effects purely from the shape and arrangement of the lens elements.

Sensor Modeling
In addition to modeling the lens system of a camera, one could model the properties of the sensor as well. By sensor, we could mean either a piece of film or an electronic sensor like a CCD. Sensor modeling includes modeling the light-response curve as well as any additional noise and/or diffusion properties.

Camera Research Papers
A Realistic Camera Model for Computer Graphics, Kolb, Mitchell, Hanrahan, 1995
Polynomial Optics: A Construction Kit for Efficient Ray-Tracing of Lens Systems, Hullin, Hanika, Heidrich, 2012
Efficient Monte Carlo Rendering with Realistic Lenses, Hanika, Dachsbacher, 2014