Photographing Long Scenes with Multiviewpoint Panoramas


Photographing Long Scenes with Multiviewpoint Panoramas. A. Agarwala, M. Agrawala, M. Cohen, D. Salesin, R. Szeliski. Presenter: Stacy Hsueh. Discussant: Vasily Volkov.

Motivation
- We want an image that shows an elongated scene; a single image is not sufficient:
  - Close up, it captures only a small part of the street
  - With a wider field of view, distortions appear toward the edges of the image
  - From far away, detail and perspective depth cues are lost
- So: capture images from different points of view, then stitch them together
- The result should resemble what a human observer would see

Some definitions
- Multi-viewpoint: many single-viewpoint photos rendered naturally in one picture
- Long scenes: e.g. the bank of a river, the aisle of a grocery store

Strip Panorama
- Also known as a slit-scan panorama
- Past: created by sliding a slit-shaped aperture across film
- Now: extract thin, vertical strips of pixels from the frames of a video sequence

Disadvantages of Strip Panoramas
- Objects farther from the camera are horizontally stretched; closer objects are squashed
- Automatic systems require a complex capture setup
- Image quality is poor
- Depth cues are not preserved

System Overview
- Goal: reduce the disadvantages of strip panoramas
- Stitch together arbitrary regions of the source images
- Use Markov Random Field (MRF) optimization to minimize an objective function
- Allow interactive refinement of the result

What constitutes a good panorama? Inspired by the work of artist Michael Koller:
- Each object in the scene is rendered from a viewpoint roughly in front of it (avoiding perspective distortion)
- The panorama is composed of large regions of linear perspective, seen from a viewpoint where a person would naturally stand (a city block viewed from across the street, not from far away)
- Local perspective effects are evident (closer objects appear larger than farther objects)
- Seams between these perspective regions do not draw attention (transitions look natural and continuous)

Image Types
- Scenes too long to image effectively from a single viewpoint
- Scenes whose geometry predominantly lies along a large, dominant plane
- Strongly three-dimensional scenes are less likely to work well (streets that turn around corners, all four sides of a building, etc.)

Key Observation
- Images projected onto the picture surface from their original 3D viewpoints agree in areas depicting scene geometry that lies on the dominant plane (a point a on the plane projects from each camera to the same pixel on the picture surface, while an off-plane point b projects to different places)

Key Observation (cont.)
- This agreement can be visualized by averaging the projections of all the cameras onto the picture surface
- The resulting image is sharp for geometry near the dominant plane, because those projections are consistent, and blurry for objects at other depths
- Seams should therefore be routed through the sharp, consistent regions

1. Capture images
- Capture many images (about 40 minutes of shooting); e.g. 107 photographs for this road

1. Capture images (cont.)
- Photographs taken with a hand-held camera, from multiple viewpoints along the scene, at intervals of one large step (~1 m)
- Auto focus, manual exposure
- Fisheye lens for some scenes: it covers more scene content in one picture, avoiding frequent viewpoint transitions

2. Preprocess
- Remove radial distortion (e.g. from the fisheye lens)
- Build a projection matrix for each camera i:
  - 3D rotation matrix R_i
  - 3D translation vector t_i
  - Focal length f_i
  - Camera location in world coordinates: C_i = -R_i^T t_i
- Recover these parameters with a structure-from-motion system, matching SIFT features between pairs of inputs
- Compensate exposure differences
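The camera-location formula above can be sketched in a few lines, assuming the usual convention that a world point X projects as x ~ K [R | t] X (the rotation and translation below are made up for illustration, not values from the paper):

```python
import numpy as np

def camera_center(R, t):
    """World-space camera location C = -R^T t for projection x ~ K [R | t] X."""
    return -R.T @ t

# Hypothetical camera: rotated 90 degrees about the vertical (y) axis,
# translated by 1 m along its z axis.
theta = np.pi / 2
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0,           1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.0, 0.0, 1.0])

C = camera_center(R, t)   # world coordinates of this viewpoint
```

Structure-from-motion recovers one (R_i, t_i, f_i) per input photo; the C_i are what the red line in the picture-surface view (next step) is drawn from.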

3. Picture Surface Selection
- The picture surface is selected by the user in a view of the recovered 3D points
- The coordinate system is defined automatically by fitting a plane to the camera viewpoints using PCA
- Blue line: picture surface selected by the user; red line: extracted camera locations

3. Picture Surface Selection (cont.)
- Project each source image onto the picture surface
- S(i,j): 3D location of sample (i,j) on the picture surface
- Each S(i,j) is projected into the source image to look up its color

3. Picture Surface Selection (cont.)
- Averaging many of the projected images yields the average image (shown after warping and cropping)
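Once every source has been warped onto the picture surface, the average image is just a per-pixel mean of the warped stack. A minimal sketch with synthetic grayscale stand-ins (real inputs would be the warped color photographs):

```python
import numpy as np

rng = np.random.default_rng(0)
n, h, w = 5, 4, 6
# Stand-ins for n source images warped onto the picture surface.
warped = [rng.random((h, w)) for _ in range(n)]

# Per-pixel mean across the stack: sharp where the projections agree
# (geometry on the dominant plane), blurry elsewhere.
average_image = np.mean(np.stack(warped), axis=0)
```

In the real system this average image is both a visualization of the key observation and the basis of the term H below (via its median/MAD variant).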

4. Viewpoint Selection
- Each image I_i represents the i-th viewpoint
- We now have a series of n images I_i of equal dimensions
- Task: choose the color of each pixel p = (p_x, p_y) in the panorama from one source image, I_i(p)
- In essence, a pixel-labeling problem

4. Viewpoint Selection (cont.)
- Objective function: for every pixel p of the result, find the best source image
- L(p) = i if pixel p of the panorama is assigned color I_i(p)
- "Best" = minimizing an energy of three terms, via MRF optimization

4. Viewpoint Selection: Term I (D)
- An object in the scene should be imaged from a viewpoint roughly in front of it
- This is approximated rather than measured directly:
  - Take the vector starting at S(p) on the picture surface, extended in the direction normal to the surface
  - The larger the angle between C_i - S(p) and this normal, the less "in front" of the object camera i is

4. Viewpoint Selection: Term I (D), cont.
- p_i (and hence p_{L(p)}) = the pixel in the composite closest to being directly in front of camera i (roughly the center of the i-th image), in composite coordinates
- If pixel p takes its color from I_i, the cost is the 2D distance from p to p_{L(p)}
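The distance approximation of term D can be sketched as follows (the two "most frontal" pixel positions p_i are hypothetical coordinates, not values from the paper):

```python
import numpy as np

# Hypothetical p_i: for each camera i, the composite-coordinate pixel
# roughly at the center of its warped image (most "in front" of it).
p_centers = {0: np.array([50.0, 40.0]),
             1: np.array([150.0, 40.0])}

def D(p, label):
    """Term D: cost of labeling pixel p with source image `label`,
    approximated as the 2D distance from p to that camera's p_i."""
    return np.linalg.norm(np.asarray(p, dtype=float) - p_centers[label])

# A pixel near camera 0's center is cheaper to draw from camera 0.
cost_from_0 = D((55, 42), 0)
cost_from_1 = D((55, 42), 1)
```

This pushes each region of the panorama toward the camera that stood most directly in front of it, which is exactly Koller's first property of a good panorama.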

4. Viewpoint Selection: Term II (H)
- A cost that encourages the panorama to resemble the average image in areas where the scene geometry intersects the picture surface
- This resemblance occurs naturally, except at outliers caused by motion, occlusions, etc., which we want to discount

4. Viewpoint Selection: Term II (H), cont.
- Median image M(x,y): a vector median filter computed across the three color channels
- MAD σ(x,y): the median absolute deviation
- Minimize the difference between the median image and the image defined by the current labeling, for pixels whose variance is low; the cost is 0 where the variance is too large

4. Viewpoint Selection: Term III (V)
- Encourages seamless transitions between different regions of linear perspective
- Defined over neighboring pixels p and q
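The transcript does not give the formula for V; the sketch below uses the standard graph-cut compositing seam cost that this line of work builds on (how much the two source images disagree at both neighboring pixels), with tiny synthetic grayscale images standing in for the warped sources:

```python
import numpy as np

# Two warped sources that agree on the left and disagree at the right pixel.
I = {0: np.array([[10., 10., 10.]]),
     1: np.array([[10., 10., 30.]])}

def V(p, q, lp, lq):
    """Seam cost between neighboring pixels p, q labeled from images lp, lq:
    zero within one source, else the disagreement of the two sources at p and q."""
    if lp == lq:
        return 0.0
    return abs(I[lp][p] - I[lq][p]) + abs(I[lp][q] - I[lq][q])

seam_invisible = V((0, 0), (0, 1), 0, 1)  # sources agree here: free seam
seam_visible = V((0, 1), (0, 2), 0, 1)    # sources disagree: costly seam
```

Combined with the key observation, this drives seams into regions where the projections agree (geometry on the dominant plane), so the perspective transitions go unnoticed.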

4. Viewpoint Selection: Parameters
- The weights on the terms are determined experimentally (typical values: 100 and 0.25)
- Higher weights: more "straight-on" views, but more noticeable seams
- Lowering both weights: objects off the dominant plane are more likely to be removed

4. Viewpoint Selection: The Solver
- Constraint: pixels in image I_i to which the i-th camera does not project are set to null (the black holes); L(p) = i is not possible where I_i(p) is null
- We wish to compute the panorama that minimizes the overall cost function; this is a Markov Random Field optimization
- Minimized using min-cut optimization in a series of alpha-expansion moves
- Typically takes ~20 minutes
- Some artifacts still remain; these are fixed manually
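The paper minimizes this energy with graph-cut alpha-expansion; as a much simpler stand-in, the sketch below runs iterated conditional modes (ICM) on a toy 1D "panorama" with a data term and a constant seam penalty, just to show the shape of the labeling problem (toy numbers; a real solver such as a max-flow library would be used in practice):

```python
import numpy as np

n_pixels, n_labels = 6, 2
# D[p, i]: cost of assigning pixel p to source i (made up: left pixels
# prefer source 0, right pixels prefer source 1, the middle is ambiguous).
D = np.array([[0, 9], [0, 9], [1, 2], [2, 1], [9, 0], [9, 0]], float)
SMOOTH = 0.5  # constant seam penalty between differing neighbor labels

def energy(L):
    """Total labeling cost: data term plus seam penalties."""
    data = D[np.arange(n_pixels), L].sum()
    seams = SMOOTH * np.sum(L[:-1] != L[1:])
    return data + seams

# ICM: greedily relabel one pixel at a time until stable.
L = np.zeros(n_pixels, dtype=int)
for _ in range(5):
    for p in range(n_pixels):
        costs = []
        for i in range(n_labels):
            trial = L.copy()
            trial[p] = i
            costs.append(energy(trial))
        L[p] = int(np.argmin(costs))
```

ICM only finds a local minimum; alpha-expansion's min-cut moves change many pixels at once and come with approximation guarantees, which is why the paper uses them despite the ~20-minute runtime.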

5. Interactive Refinement: View Selection
- The user supplies the solution L(p) manually for some pixels p: select a source image, then draw a stroke where that source should appear in the panorama

5. Interactive Refinement: Seam Suppression
- The MRF optimization tries to route seams around objects that lie off the dominant plane, but such routes don't always exist (e.g. a car gets shortened)
- The user marks the object in a source image

5. Interactive Refinement: Seam Suppression (cont.)
- Marks in the original images are propagated to the projected images
- This lets the user indicate objects across which seams should not be placed, keeping each such region whole as much as possible

Example Result