Image Processing & Projective geometry

Arunkumar Byravan. Partial slides borrowed from Jianbo Shi & Steve Seitz

Color spaces
RGB: Red, Green, Blue
HSV: Hue, Saturation, Value

Why HSV? HSV separates luma (the image intensity) from chroma (the color information). Different shades of a color share the same hue but map to different RGB triples. Advantage: robustness to lighting conditions, shadows, etc., and easy to use for color thresholding! Conversion from RGB to HSV is fast (Python: colorsys). Other relevant color spaces: YCbCr, Lab.
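The robustness claim above can be sketched with the stdlib colorsys module: a bright red and a shadowed red have the same hue, so a hue threshold survives the lighting change. The threshold values below are illustrative assumptions, not tuned numbers.

```python
import colorsys

def is_reddish(r, g, b, hue_tol=0.05, min_sat=0.4, min_val=0.2):
    """Return True if an RGB pixel (channels in [0, 1]) is close to red in HSV.

    Hue is circular, so red sits near both 0.0 and 1.0."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    near_red = min(h, 1.0 - h) <= hue_tol
    return near_red and s >= min_sat and v >= min_val

# A bright red and a darker, shadowed red share the same hue:
print(colorsys.rgb_to_hsv(1.0, 0.1, 0.1)[0])    # hue of bright red
print(colorsys.rgb_to_hsv(0.5, 0.05, 0.05)[0])  # same hue at half brightness
print(is_reddish(0.5, 0.05, 0.05))              # thresholding survives the shadow
```

An RGB threshold would need different bounds for each brightness level; the hue test needs only one.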

HSV example GIMP

MPC Last lecture. Repeat:
- Pick a set of controls (e.g. linear velocity, steering angle)
- Simulate/roll out using an internal model (for time horizon T)
- Compute the error (e.g. distance from the desired path, from a desired fixed point, etc.)
- Choose the controls that minimize the error
- Execute the chosen control for a short time t << T, then repeat
Key questions: How to generate rollouts? How to measure error?
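The receding-horizon loop above can be sketched in a few lines. The 1-D point model, candidate controls, and target value below are invented for illustration; the racecar versions of rollout and error are the subject of the following slides.

```python
def mpc_step(state, candidate_controls, rollout, error, horizon):
    """One MPC iteration: simulate each candidate control over the horizon,
    score the resulting trajectory, and return the best control."""
    best_u, best_cost = None, float("inf")
    for u in candidate_controls:
        traj = rollout(state, u, horizon)  # simulate with the internal model
        cost = error(traj)                 # e.g. distance from the desired path
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Toy example: a 1-D point with velocity control, target position 3.0.
def rollout(x, u, T):
    traj = []
    for _ in range(T):
        x = x + u  # trivial motion model: position += velocity
        traj.append(x)
    return traj

error = lambda traj: abs(traj[-1] - 3.0)  # distance of the endpoint from the target
u = mpc_step(0.0, [-1.0, 0.0, 0.5, 1.0], rollout, error, horizon=3)
print(u)  # 1.0: its 3-step rollout ends exactly at 3.0
```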

MPC - Racecar
Rollouts: image templates, pixel tracks of the potential paths taken by the car. How to generate them?
Errors: distance from the track in the image. How to measure error? Other error metrics: distance to a target point, parameterized error (line, spline, etc.)

How to generate templates? Heuristic approach: generate arcs of varying curvature and associate each with a control (linearly interpolate controls and invent a mapping).

How to generate samples? Geometric approach: use the motion model. Generate rollouts from the kinematic model based on controls: linearly interpolate the controls, then roll out a trajectory for a fixed horizon T. Project the rollouts onto the image to generate templates, i.e. imagine how each rollout would look when seen from the camera.
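The rollout generation can be sketched with a kinematic bicycle model. The wheelbase, timestep, horizon, and steering range below are illustrative assumptions, not the racecar's actual parameters.

```python
import math

def rollout(v, steer, T=20, dt=0.1, wheelbase=0.33):
    """Roll out a kinematic bicycle model for a fixed control (v, steering
    angle), starting at the robot-frame origin. Returns (x, y, theta) states."""
    x = y = theta = 0.0
    states = [(x, y, theta)]
    for _ in range(T):
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += v * math.tan(steer) / wheelbase * dt
        states.append((x, y, theta))
    return states

# Linearly interpolate steering angles to get a family of rollouts:
steers = [-0.3 + 0.6 * i / 6 for i in range(7)]
templates = [rollout(v=1.0, steer=s) for s in steers]
print(len(templates), len(templates[0]))  # 7 rollouts, 21 states each
```

Each rollout is a curve in the robot's ground plane (z = 0); the next slides show how to turn these into image templates.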

How to generate templates? How do we generate image templates from these rollouts? Projective geometry! The rollouts from the kinematic motion model are points in the frame of the car, with z = 0.

Camera Extrinsics
[Figure: coordinate axes of the camera frame (red) and the rollout origin frame (white).]

Camera Extrinsics
We have 3D points in the robot frame (white). Transform the points to the camera frame through the extrinsics:

    [x_c]   [       |   ] [x_r]
    [y_c] = [   R   | t ] [y_r]
    [z_c]   [       |   ] [z_r]
    [ 1 ]   [ 0 0 0 | 1 ] [ 1 ]

where R is a 3x3 rotation matrix and t is a 3x1 translation vector. For the racecar, the extrinsics can be measured (and are constant). Red is the camera frame, white is the rollout origin frame.
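Applying the extrinsics is a single rotate-and-translate per point. A minimal sketch with plain lists (no numpy); the identity rotation and 0.1 m offset below are illustrative, not the racecar's measured values.

```python
def to_camera_frame(p_robot, R, t):
    """Transform a 3D point from the robot frame to the camera frame:
    p_cam = R @ p_robot + t (spelled out with plain lists)."""
    return [sum(R[i][j] * p_robot[j] for j in range(3)) + t[i] for i in range(3)]

# Illustrative extrinsics: rotation = identity, translation shifts z by -0.1 m.
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, -0.1]
p_cam = to_camera_frame([1.0, 0.5, 0.0], R, t)
print(p_cam)  # [1.0, 0.5, -0.1]
```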

Camera Extrinsics
Extrinsics allow us to transform 3D points to the camera frame of reference. We still need to figure out how to get the image of these points as seen by the camera.

Image formation process Slide from Steve Seitz

Pinhole camera model. Add a barrier to block off most of the rays; this reduces blurring. The opening is known as the aperture. How does this transform the image? Slide from Steve Seitz

Pinhole camera model
Pinhole model: captures a pencil of rays (all rays through a single point). The point is called the Center of Projection (COP). The image is formed on the Image Plane. The effective focal length f is the distance from the COP to the Image Plane. Slide from Steve Seitz
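The pinhole geometry can be sketched as code: by similar triangles, a point at depth Z lands on the image plane at f/Z times its lateral offset. The focal length and points below are illustrative.

```python
def pinhole_project(X, Y, Z, f):
    """Perspective projection through the COP: by similar triangles,
    a point (X, Y, Z) in front of the camera lands at (f*X/Z, f*Y/Z)."""
    if Z <= 0:
        raise ValueError("point must be in front of the camera")
    return (f * X / Z, f * Y / Z)

# Doubling the depth halves the image coordinates (perspective foreshortening):
print(pinhole_project(1.0, 0.5, 2.0, f=0.01))
print(pinhole_project(1.0, 0.5, 4.0, f=0.01))
```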

Homemade pinhole camera Slide from Jianbo Shi

Camera with Lens Slide from Jianbo Shi

Digital camera Slide from Jianbo Shi

Bayer grid Slide from Jianbo Shi

Projection from 3D to 2D
[Figure: a 3D object imaged through a lens onto the pixels of a CCD sensor.]
Slide from Jianbo Shi

3D point projection (Metric space)
A 3D point (X, Y, Z), with the focal length f in meters, projects onto the CCD plane (the projection plane) at

    x = f * X / Z,   y = f * Y / Z

Slide from Jianbo Shi

3D point projection (Pixel space)
The CCD sensor is measured in mm, the image in pixels. With the focal length in pixels (f_x, f_y) and the image principal point (c_x, c_y), a 3D point projects to

    u = f_x * X / Z + c_x,   v = f_y * Y / Z + c_y

Slide from Jianbo Shi

Homogeneous coordinates
A point (x, y) in Euclidean space (R^2) can be represented by a homogeneous representation λ(x, y, 1) in projective space (P^2), i.e. by 3 numbers. A 2D image point corresponds to a 3D ray: every 3D point lying on the ray through the origin and the 2D image point (on the projection plane) has the same image. Slide from Jianbo Shi
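The scale-invariance of homogeneous coordinates can be sketched directly: any nonzero scalar multiple of (x, y, 1) names the same 2D point (the same 3D ray). The points below are illustrative.

```python
def to_homogeneous(x, y):
    """Lift a 2D Euclidean point to homogeneous coordinates."""
    return (x, y, 1.0)

def from_homogeneous(xh, yh, w):
    """Drop back to Euclidean coordinates by dividing out the scale."""
    return (xh / w, yh / w)

# Any nonzero scalar multiple represents the same 2D point (the same 3D ray):
p = (2.0, 3.0, 1.0)
q = (4.0, 6.0, 2.0)   # = 2 * p
print(from_homogeneous(*p))  # (2.0, 3.0)
print(from_homogeneous(*q))  # (2.0, 3.0)
```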

3D point projection (Pixel space)
With the homogeneous representation, the projection from a 3D point to image pixels (via the CCD sensor in mm) becomes linear: λ [u, v, 1]^T is a matrix times [X, Y, Z]^T. Slide from Jianbo Shi

Camera intrinsics
The camera intrinsic parameters map metric space to pixel space:

    K = [ f_x   0   c_x ]
        [  0   f_y  c_y ]
        [  0    0    1  ]

so that λ [u, v, 1]^T = K [X, Y, Z]^T. Slide from Jianbo Shi

Putting it all together: Generating templates
Procedure:
- Generate rollouts based on the kinematic car model (robot frame)
- Transform the points to the camera frame using the camera extrinsics
- Project the points to pixel space using the camera intrinsics

Extrinsics (robot frame -> camera frame; need to be measured for the racecar, approximate values in /tf):

    [x_c]   [       |   ] [x_r]
    [y_c] = [   R   | t ] [y_r]
    [z_c]   [       |   ] [z_r]
    [ 1 ]   [ 0 0 0 | 1 ] [ 1 ]

Intrinsics (camera frame -> pixels; fixed for a camera, for the racecar: /camera/color/camera_info):

    λ [u, v, 1]^T = K [x_c, y_c, z_c]^T
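The full chain (extrinsics, then intrinsics, then the perspective divide) fits in one function. The R, t, and K values below are invented for illustration; the real ones come from /tf and /camera/color/camera_info.

```python
def project_to_pixels(p_robot, R, t, K):
    """Robot-frame 3D point -> camera frame (extrinsics) -> pixel (intrinsics)."""
    # Extrinsics: p_cam = R @ p_robot + t
    p_cam = [sum(R[i][j] * p_robot[j] for j in range(3)) + t[i] for i in range(3)]
    # Intrinsics: lambda * [u, v, 1]^T = K @ p_cam; divide by depth to get pixels
    uvw = [sum(K[i][j] * p_cam[j] for j in range(3)) for i in range(3)]
    return (uvw[0] / uvw[2], uvw[1] / uvw[2])

# Illustrative parameters:
R = [[0, -1, 0], [0, 0, -1], [1, 0, 0]]        # robot x forward -> camera z forward
t = [0.0, 0.2, 0.0]                            # camera 0.2 m above the ground plane
K = [[600, 0, 320], [0, 600, 240], [0, 0, 1]]  # fx, fy, cx, cy in pixels

# A ground point 2 m ahead of the robot projects below the image center:
print(project_to_pixels([2.0, 0.0, 0.0], R, t, K))  # approximately (320, 300)
```

Mapping every rollout point through this function yields the pixel-track templates of the earlier slides.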

Rollout templates Rollouts from kinematic car model Projected rollouts

Measuring error (for MPC)
Template matching using convolution: find the template that best matches the masked track, and choose it. Issues?
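Template matching against a binary track mask can be sketched as counting overlapping pixels (a convolution with the mask as kernel, evaluated at zero shift). The tiny 4x4 masks below are invented for illustration.

```python
def match_score(template, mask):
    """Count pixels where the rollout template overlaps the track mask."""
    return sum(t & m for row_t, row_m in zip(template, mask)
                     for t, m in zip(row_t, row_m))

def best_template(templates, mask):
    """Index of the template that best matches the masked track."""
    return max(range(len(templates)), key=lambda i: match_score(templates[i], mask))

# 4x4 toy masks: the track bends right, so the bending template wins.
mask       = [[0,0,0,0], [0,0,1,0], [0,1,0,0], [0,1,0,0]]
straight   = [[0,1,0,0], [0,1,0,0], [0,1,0,0], [0,1,0,0]]
bend_right = [[0,0,1,0], [0,0,1,0], [0,1,0,0], [0,1,0,0]]
print(best_template([straight, bend_right], mask))  # 1
```

One issue the slide hints at: pure overlap ignores where along the template the match happens, which motivates the set-point variants below.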

Measuring error (Set-point error)
Choose a set point in the image (similar to PID). Find the template that gets you closest to the set point, and choose it.

Measuring error (Set-point + Direction error)
Choose a set point in the image along with a heading (based on the track). Find the template that gets you closest to the set point while oriented correctly. Keep track of the heading in the templates.

Measuring error (3D error)
Instead of generating pixelized templates, project the masked track (or set point) back to 3D. How? Each pixel corresponds to a ray in 3D, and we know that all pixels on the track lie on the (known) ground plane, so solve for the ray-plane intersection. Advantage: reason in 3D!
Note the arrow direction here, from pixel space to metric space: a 2D point on the projection plane corresponds to a 3D ray, and the 3D point must lie on the ray passing through the origin and the 2D image point.
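The back-projection can be sketched as inverting the intrinsics to get the ray, then solving for the scale at which the ray hits the plane n . p = d. The K, plane, and pixel below are illustrative (a camera 0.2 m above a ground plane, with camera y pointing down).

```python
def backproject_to_plane(u, v, K, n, d):
    """Back-project pixel (u, v) to 3D: the pixel fixes a ray
    p = s * K^-1 [u, v, 1]^T; intersect it with the plane n . p = d."""
    fx, fy, cx, cy = K[0][0], K[1][1], K[0][2], K[1][2]
    ray = [(u - cx) / fx, (v - cy) / fy, 1.0]     # direction of the 3D ray
    denom = sum(ni * ri for ni, ri in zip(n, ray))
    if abs(denom) < 1e-9:
        raise ValueError("ray is parallel to the plane")
    s = d / denom                                  # scale where the ray meets the plane
    return [s * r for r in ray]

# Illustrative: camera 0.2 m above a ground plane y = 0.2 in the camera frame.
K = [[600, 0, 320], [0, 600, 240], [0, 0, 1]]
p = backproject_to_plane(320, 300, K, n=[0.0, 1.0, 0.0], d=0.2)
print(p)  # the ground point this pixel sees, about 2 m ahead
```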

Measuring error (fancy error metric)
Fit a line or curve to the pixel/3D track points and to your rollouts, then compare the errors in parametric space (the line/curve coefficients).
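A minimal sketch of the parametric idea, using a closed-form least-squares line fit y = a*x + b and an L1 distance between coefficient vectors. The point sets and the choice of distance are illustrative assumptions.

```python
def fit_line(points):
    """Least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def coeff_error(track, rollout_pts):
    """Compare trajectories in parameter space instead of point-by-point."""
    (a1, b1), (a2, b2) = fit_line(track), fit_line(rollout_pts)
    return abs(a1 - a2) + abs(b1 - b2)

track   = [(0, 0.0), (1, 0.5), (2, 1.0)]   # track heads up-right with slope 0.5
rollout = [(0, 0.1), (1, 0.6), (2, 1.1)]   # same slope, small constant offset
print(coeff_error(track, rollout))         # close to 0.1: only the intercepts differ
```

A quadratic or spline fit works the same way, just with more coefficients to compare.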

Focal length Slide from Jianbo Shi

Dolly zoom https://www.youtube.com/watch?v=nb4bikrnzmk

Perspective cues Slide from Steve Seitz

Lens distortion (Fisheye lens)
Multiple models exist to capture distortion; a commonly used one is the Plumb Bob model. Slide from Jianbo Shi

Lens distortion
Barrel distortion, pincushion distortion, moustache distortion. Distortion is modeled as a function that changes the pixel (u, v) after the intrinsics + extrinsics based projection.
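The radial part of the Plumb Bob (Brown-Conrady) model can be sketched as a polynomial in r^2 that rescales normalized image coordinates; the coefficients below are illustrative, and the tangential terms of the full model are omitted here.

```python
def radial_distort(x, y, k1, k2, k3=0.0):
    """Apply the radial part of the Plumb Bob model to normalized image
    coordinates: points are pushed along their radius by a polynomial in
    r^2 (k1 < 0 gives barrel distortion, k1 > 0 pincushion)."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2 + k3 * r2 ** 3
    return x * scale, y * scale

# Barrel distortion (k1 < 0) pulls off-center points toward the image center:
xd, yd = radial_distort(0.5, 0.5, k1=-0.2, k2=0.0)
print(xd, yd)  # both coordinates shrink toward 0
```

Undistortion (what calibration toolboxes apply) is the inverse of this mapping, usually computed iteratively.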

Camera Calibration
Images courtesy Jean-Yves Bouguet, Intel Corp.
Compute the camera intrinsic parameters and distortion; compute the extrinsics between multiple views/different cameras. Key idea: use a known object of fixed size and match it across multiple scenes -> this provides enough constraints to solve for the camera parameters. Good code is available online!
- Intel's OpenCV library: http://www.intel.com/research/mrl/research/opencv/
- Matlab version by Jean-Yves Bouguet: http://www.vision.caltech.edu/bouguetj/calib_doc/index.html
- Zhengyou Zhang's web site: http://research.microsoft.com/~zhang/calib/
Slide from Steve Seitz