Computer Vision Slides courtesy of Professor Gregory Dudek
1 Computer Vision Slides courtesy of Professor Gregory Dudek Ioannis Rekleitis
2 Why vision? Passive (emits nothing). Discreet. Energy efficient. Intuitive. Powerful (works well for us, right?) Long and short range. Fast. CSCE 774: Robotic Systems 2
3 So, what's the problem? How hard is vision? Why do we think it is doable? Problems: Slow. Data-heavy. Impossible. Mixes up many factors.
4 The Vision Problem: Input, Vision Algorithm, Output (illustrated over several example images).
10 What does a robot need? It doesn't need a full interpretation of available images ("This is Prof. X in his office offering me a can of spam.") but it does need information about what to do... ("Run Away!!"). Reactive: avoiding obstacles (or predators), pursuing objects. Deliberative: localizing itself, mapping, finding targets, reasoning about the world, environmental interactions.
11 What does a robot need? What a camera does to the 3d world... Shigeo Fukuda squeezes away one dimension.
13 Ill-posed: In trying to extract 3d structure from 2d images, vision is an ill-posed problem. Basically, there are too many possible worlds that might (in theory) give rise to a particular image: an image isn't enough to disambiguate the many possible 3d worlds that could have produced it.
17 Camera Geometry: the 3D to 2D transformation is a perspective projection, with rays passing through the center of projection and the object imaged on the image plane at the focal length.
18 Coordinate Systems. Add coordinate systems in order to describe feature points: pixel coordinates (u, v) (column, row), image canonical coordinates (x, y) with the principal point at the origin, and object coordinates (X, Y, Z) on canonical axes at the center of projection (C.O.P.); f is the focal length along the optical axis z.
20 From 3d to 2d. A point (X, Y, Z) in canonical coordinates projects to image canonical coordinates (x, y): x = f X / Z, y = f Y / Z, a nonlinear transformation. Goal: to recover information about (X, Y, Z) from (x, y).
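The projection equations above can be sketched in a few lines; the function below is an illustrative helper (not from the slides), with f in the same units as X, Y, Z:

```python
def project(X, Y, Z, f):
    """Perspective projection of a camera-frame point onto the image plane:
    x = f*X/Z, y = f*Y/Z (the slide's nonlinear 3d-to-2d transformation)."""
    if Z <= 0:
        raise ValueError("point must be in front of the camera (Z > 0)")
    return f * X / Z, f * Y / Z

# Depth is lost: scaling a point's position and distance by the same factor
# leaves its image unchanged, so many 3d points map to the same (x, y).
x_near, y_near = project(1.0, 2.0, 5.0, f=0.05)
x_far, y_far = project(2.0, 4.0, 10.0, f=0.05)
```

This also makes the ill-posedness of the previous slides concrete: the whole ray of points through the center of projection collapses to one image point.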
21 Camera Calibration. Camera model: pixel coordinates [u v 1] from world coordinates via z_c [u v 1]^T = A [R T] [x_w y_w z_w 1]^T. Intrinsic parameters (matrix A): focal lengths in pixels (f m_x, f m_y), skew coefficient, and principal point (u_0, v_0). Extrinsic parameters: rotation R and translation T.
22 Camera Calibration. Existing packages in MATLAB, OpenCV, etc.
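The camera model z_c [u, v, 1]^T = A [R|T] [x_w, y_w, z_w, 1]^T can be checked numerically by hand; every parameter value below is made up for illustration (in practice, routines such as OpenCV's calibrateCamera estimate A, R, and T from calibration-target images):

```python
import numpy as np

# Intrinsic matrix A: focal lengths in pixels (fx, fy), skew s, and
# principal point (u0, v0). All values are illustrative.
fx, fy, s, u0, v0 = 800.0, 800.0, 0.0, 320.0, 240.0
A = np.array([[fx, s, u0],
              [0., fy, v0],
              [0., 0., 1.]])

# Extrinsics: identity rotation and zero translation, i.e. the camera
# frame coincides with the world frame in this toy setup.
R = np.eye(3)
T = np.zeros((3, 1))
Rt = np.hstack([R, T])

def world_to_pixel(Xw):
    """Apply z_c * [u, v, 1]^T = A [R|T] [X, Y, Z, 1]^T and dehomogenize."""
    Xh = np.append(Xw, 1.0)   # homogeneous world point
    uvw = A @ Rt @ Xh         # [z_c*u, z_c*v, z_c]
    return uvw[:2] / uvw[2]

u, v = world_to_pixel(np.array([0.5, -0.25, 2.0]))
```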
23 A vision solution: if interpreting a single image is difficult... what about more?! Multiple cameras; multiple times.
24 Robot vision sampler: a brief overview of robotic vision processing... (1) Image streams: simplified via generality, simplified via specificity. (2) Stereo vision (or beyond...): 3d reconstruction. (3) Incorporating vision within robot control: visual servoing. Speaking of servoing...
25 Visual Servoing
26 Details. Images are not actually continuous. The sampling (and hardware) issues lead to a few other minor problems.
27 CCD (Charge-Coupled Device)
28 Aliasing. To avoid it, sample above the Nyquist rate: f_sampling > 2 F_max.
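The Nyquist condition can be demonstrated numerically; the helper below is a toy illustration:

```python
import math

def sample(freq_hz, fs_hz, n=8):
    """Sample a unit sinusoid of frequency freq_hz at sampling rate fs_hz."""
    return [math.sin(2 * math.pi * freq_hz * k / fs_hz) for k in range(n)]

# A 9 Hz signal sampled at 8 Hz (well below its Nyquist rate of 18 Hz)
# produces exactly the same samples as a 1 Hz signal: it aliases.
aliased = sample(9.0, 8.0)
true_low = sample(1.0, 8.0)
```

Mathematically, sin(2*pi*9k/8) = sin(2*pi*k + 2*pi*k/8) = sin(2*pi*k/8), which is why the two sample sequences coincide.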
29 Aliasing: Moiré Patterns
30 Key problems. Recognition: what is that thing in the picture? What are all the things in the image? Scene interpretation: describe the image. Scene reconstruction: what is the 3-dimensional layout of the scene? What are the physical parameters that gave rise to the image? What is a description of the scene? Notion of an inverse problem.
31 Correspondence Problem
32 Correspondence: which features from I_1 match which from I_2?
33 Gaussian Blur
34 Gaussian Blur and Noise
36 Gaussian Blur, Noise, Sobel
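These slides show blurring and edge filtering, which are just 2D convolutions; below is a self-contained sketch with tiny pure-Python kernels (in practice one would call OpenCV's GaussianBlur and Sobel):

```python
def convolve2d(img, kernel):
    """'Valid' 2D correlation of a grayscale image (list of rows) with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# 3x3 Gaussian kernel (binomial approximation) and Sobel x-derivative kernel.
gauss = [[1/16, 2/16, 1/16],
         [2/16, 4/16, 2/16],
         [1/16, 2/16, 1/16]]
sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

# A vertical step edge: blur first to suppress noise, then take the
# horizontal gradient, which peaks at the step.
img = [[0, 0, 0, 10, 10, 10] for _ in range(6)]
smoothed = convolve2d(img, gauss)
edges = convolve2d(smoothed, sobel_x)
```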
37 Fiducial Markers: Fourier Tag
38 Stereo Vision: Pinhole Camera. Two pinhole cameras with focal points O_1 and O_2 and image planes at focal lengths f_1 and f_2: a scene point p projects to image points p_1 and p_2. The (part of the) epipolar plane through p, O_1, and O_2 intersects each image plane in an epipolar line.
41 Stereo Vision: Pinhole. With baseline b between O_1 and O_2, a point p at depth D projects to horizontal image positions p_x1 and p_x2. Disparity: d = p_x1 - p_x2. Depth: D = f b / d.
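The depth formula is one line of code; the numbers below are made up for illustration:

```python
def depth_from_disparity(f_px, baseline_m, d_px):
    """Depth of a point from stereo disparity: D = f*b/d.
    f_px: focal length in pixels, baseline_m: camera separation in meters,
    d_px: disparity p_x1 - p_x2 in pixels."""
    if d_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return f_px * baseline_m / d_px

# With f = 700 px and a 12 cm baseline, a 20 px disparity puts the
# point at 4.2 m; halving the disparity doubles the depth.
D = depth_from_disparity(700.0, 0.12, 20.0)
```

Note the inverse relation: depth resolution degrades quadratically with distance, which is why large baselines (next slides) help for far scenes.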
42 Stereo Vision: Pinhole (figure: projections p_x1, p_x2 and q_1, q_2 of two scene points p and q).
43 Large Baseline
44 Stereo: Disparity Map
45 Another Example (Hole Filling). Cloth Parameters and Motion Capture by David Pritchard, B.A.Sc., University of Waterloo, 2001.
46 Depth Map in a City
47 Stereo Vision. A large number of algorithms exist; evaluation benchmarks rank 43 different algorithms.
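The simplest family of stereo algorithms is block matching: for each position along a rectified scanline, find the shift into the other image with the lowest matching cost. The sketch below is a toy 1D version of what packages such as OpenCV's StereoBM do over whole images; the signals and parameters are illustrative:

```python
def disparity_scanline(left, right, window=3, max_disp=8):
    """Brute-force block matching along one rectified scanline: for each
    left-image position, find the shift into the right image that minimizes
    the sum of absolute differences (SAD) over a small window."""
    half = window // 2
    disp = []
    for x in range(half, len(left) - half):
        patch = left[x - half:x + half + 1]
        best_d, best_cost = 0, float("inf")
        for d in range(0, max_disp + 1):
            if x - half - d < 0:
                break  # candidate window would fall off the image
            cand = right[x - half - d:x + half + 1 - d]
            cost = sum(abs(a - b) for a, b in zip(patch, cand))
            if cost < best_cost:
                best_d, best_cost = d, cost
        disp.append(best_d)
    return disp

# The right scanline is the left one shifted by 2 pixels, so away from
# the image borders the recovered disparity should be 2.
left = [0, 0, 5, 9, 5, 0, 0, 3, 7, 3, 0, 0]
right = left[2:] + [0, 0]
d = disparity_scanline(left, right)
```

Textureless stretches (the runs of zeros) are exactly where such matchers fail, which is why real disparity maps have holes that need filling.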
48 Good features need high recall and good precision, in both feature detection and feature matching. Several alternatives: Harris Corners (OpenCV), SURF (OpenCV), SIFT, etc.
49 Harris Corners
50 SURF
51 SIFT
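The Harris response behind the first of these detectors can be sketched in a few dozen lines. This is a simplified illustration (central-difference gradients, an unweighted square window, and the commonly used k = 0.04), not OpenCV's implementation, which cv2.cornerHarris provides:

```python
import numpy as np

def harris_response(img, k=0.04, window=3):
    """Harris corner response R = det(M) - k*trace(M)^2, where M is the
    structure tensor of image gradients summed over a local window."""
    img = img.astype(float)
    Ix = np.zeros_like(img)
    Iy = np.zeros_like(img)
    Ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0  # central differences
    Iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    half = window // 2
    R = np.zeros_like(img)
    for i in range(half, img.shape[0] - half):
        for j in range(half, img.shape[1] - half):
            sxx = Ixx[i - half:i + half + 1, j - half:j + half + 1].sum()
            syy = Iyy[i - half:i + half + 1, j - half:j + half + 1].sum()
            sxy = Ixy[i - half:i + half + 1, j - half:j + half + 1].sum()
            det = sxx * syy - sxy * sxy
            R[i, j] = det - k * (sxx + syy) ** 2
    return R

# A bright square on a dark background: corners score positive,
# edges negative, and flat regions zero.
img = np.zeros((16, 16))
img[4:12, 4:12] = 10.0
R = harris_response(img)
```

The sign pattern is the point of the detector: only where the window contains gradients in two directions does det(M) dominate, so corners stand out from edges and flat areas.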
52 Optical Flow. Definition: the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer (an eye or a camera) and the scene.
53 Optical Flow Field
54 Optical flow gives information about image motion rather than the scene itself. This is a classic reconstruction problem. The next step might be to use the image motion to infer scene motion, robot motion, or 3D layout from a time sequence of images.
55 Optical flow: information about image motion rather than the scene; stack the frames into an image cube I(x,y,t).
56 Optical flow: how do we compute it?
57 Optical Flow. By measuring the direction that intensities are moving in I(x,y,t), we can estimate things.
58 Observations & Warnings. How can we do this? Assume the scene itself is static and find matching chunks in the images: an instance of correspondence. BUT the world really isn't static, and lighting might change even in a static scene.
59 Optical Flow. By measuring the direction that intensities are moving between frames I(x,y,0) and I(x,y,1) of I(x,y,t), we can estimate things like the partial derivatives dI/dx = I_x, dI/dy = I_y, and dI/dt = I_t; for example, at the origin I_x can be approximated by the finite difference I(1,0,0) - I(0,0,0).
62 Measuring Optical Flow. Let I(x,y,t) be the sequence of images. Simplest assumption (constant brightness constraint): I(x,y,t) = I(x + dx, y + dy, t + dt). Reminder (Taylor expansion): f(x + dx) = f(x) + f'(x) dx + f''(x) dx^2 / 2 + ... Expanding the right-hand side: I(x,y,t) = I(x,y,t) + I_x dx + I_y dy + I_t dt + second-derivative and higher terms. Ignoring those terms: 0 = I_x dx + I_y dy + I_t dt, so -I_t = I_x dx/dt + I_y dy/dt, the intensity-flow equation. Good and bad...
67 The aperture problem. -I_t = I_x dx/dt + I_y dy/dt. The intensity-flow equation provides only one constraint on two variables (x-motion and y-motion). It is only possible to find optical flow in one direction...
68 The aperture problem. It is only possible to find optical flow in one direction... at any single point in the image! (Figure: img1, img2, raw optical flow, and the flow smoothed for ten iterations.) Smoothing can be done by incorporating neighboring points' information.
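Incorporating neighboring points is exactly what the Lucas-Kanade method does: it solves the intensity-flow equation I_x u + I_y v = -I_t in least squares over a window, which is well-posed whenever the window contains gradients in more than one direction. A minimal numpy sketch on a synthetic one-pixel translation (not the iterative smoothing scheme pictured on the slide):

```python
import numpy as np

def lucas_kanade(I0, I1, i, j, half=2):
    """Estimate flow (u, v) at pixel (i, j) by solving Ix*u + Iy*v = -It
    in least squares over a (2*half+1)^2 window (Lucas-Kanade)."""
    Ix = (np.roll(I0, -1, axis=1) - np.roll(I0, 1, axis=1)) / 2.0
    Iy = (np.roll(I0, -1, axis=0) - np.roll(I0, 1, axis=0)) / 2.0
    It = I1 - I0
    win = np.s_[i - half:i + half + 1, j - half:j + half + 1]
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)
    b = -It[win].ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # (u, v) in pixels per frame

# A smooth blob translated by one pixel in x between frames; the window
# around the blob center sees gradients in both directions, so both
# flow components are recoverable.
y, x = np.mgrid[0:32, 0:32]
I0 = np.exp(-((x - 15) ** 2 + (y - 15) ** 2) / 30.0)
I1 = np.exp(-((x - 16) ** 2 + (y - 15) ** 2) / 30.0)
u, v = lucas_kanade(I0, I1, 15, 15)
```

With a single straight edge in the window, the 2x2 normal-equation matrix becomes singular and only the normal component of the flow is recoverable: the aperture problem again.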
69 Optical Flow Application: Visual Odometry. Wheel slip detection on future Mars Rovers.
70 Image Downsampling
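Downsampling connects back to the aliasing slide: decimating without a low-pass prefilter violates the Nyquist condition at the new, lower sampling rate. A toy 1D sketch, where a box blur stands in for the Gaussian filter used in image pyramids (e.g. OpenCV's pyrDown):

```python
def box_blur(signal, radius=1):
    """Simple moving-average low-pass filter (edges clamped)."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def downsample(signal, factor=2, prefilter=True):
    """Keep every factor-th sample, optionally low-pass filtering first
    to respect the Nyquist limit of the lower sampling rate."""
    if prefilter:
        signal = box_blur(signal)
    return signal[::factor]

# A signal alternating at the highest representable frequency: naive
# decimation keeps a constant (fully aliased) value, while prefiltering
# first pulls the samples toward the signal's true mean.
sig = [0, 10] * 8
naive = downsample(sig, prefilter=False)
safe = downsample(sig, prefilter=True)
```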
More informationFilters. Materials from Prof. Klaus Mueller
Filters Materials from Prof. Klaus Mueller Think More about Pixels What exactly a pixel is in an image or on the screen? Solid square? This cannot be implemented A dot? Yes, but size matters Pixel Dots
More information06: Thinking in Frequencies. CS 5840: Computer Vision Instructor: Jonathan Ventura
06: Thinking in Frequencies CS 5840: Computer Vision Instructor: Jonathan Ventura Decomposition of Functions Taylor series: Sum of polynomials f(x) =f(a)+f 0 (a)(x a)+ f 00 (a) 2! (x a) 2 + f 000 (a) (x
More informationImage Processing. What is an image? קורס גרפיקה ממוחשבת 2008 סמסטר ב' Converting to digital form. Sampling and Reconstruction.
Amplitude 5/1/008 What is an image? An image is a discrete array of samples representing a continuous D function קורס גרפיקה ממוחשבת 008 סמסטר ב' Continuous function Discrete samples 1 חלק מהשקפים מעובדים
More informationImage Acquisition and Representation. Image Acquisition Hardware. Camera. how digital images are produced how digital images are represented
Image Acquisition and Representation Slide 1 how digital images are produced how digital images are represented Slide 3 Note a digital camera represents a camera system with a built-in digitizer. photometric
More informationImage Mosaicing. Jinxiang Chai. Source: faculty.cs.tamu.edu/jchai/cpsc641_spring10/lectures/lecture8.ppt
CSCE 641 Computer Graphics: Image Mosaicing Jinxiang Chai Source: faculty.cs.tamu.edu/jchai/cpsc641_spring10/lectures/lecture8.ppt Outline Image registration - How to break assumptions? 3D-2D registration
More informationAR 2 kanoid: Augmented Reality ARkanoid
AR 2 kanoid: Augmented Reality ARkanoid B. Smith and R. Gosine C-CORE and Memorial University of Newfoundland Abstract AR 2 kanoid, Augmented Reality ARkanoid, is an augmented reality version of the popular
More informationTelling What-Is-What in Video. Gerard Medioni
Telling What-Is-What in Video Gerard Medioni medioni@usc.edu 1 Tracking Essential problem Establishes correspondences between elements in successive frames Basic problem easy 2 Many issues One target (pursuit)
More informationECE 484 Digital Image Processing Lec 09 - Image Resampling
ECE 484 Digital Image Processing Lec 09 - Image Resampling Zhu Li Dept of CSEE, UMKC Office: FH560E, Email: lizhu@umkc.edu, Ph: x 2346. http://l.web.umkc.edu/lizhu slides created with WPS Office Linux
More informationTime-Lapse Light Field Photography With a 7 DoF Arm
Time-Lapse Light Field Photography With a 7 DoF Arm John Oberlin and Stefanie Tellex Abstract A photograph taken by a conventional camera captures the average intensity of light at each pixel, discarding
More informationLane Detection in Automotive
Lane Detection in Automotive Contents Introduction... 2 Image Processing... 2 Reading an image... 3 RGB to Gray... 3 Mean and Gaussian filtering... 6 Defining our Region of Interest... 10 BirdsEyeView
More informationOpto Engineering S.r.l.
TUTORIAL #1 Telecentric Lenses: basic information and working principles On line dimensional control is one of the most challenging and difficult applications of vision systems. On the other hand, besides
More informationImage Formation. Dr. Gerhard Roth. COMP 4102A Winter 2015 Version 3
Image Formation Dr. Gerhard Roth COMP 4102A Winter 2015 Version 3 1 Image Formation Two type of images Intensity image encodes light intensities (passive sensor) Range (depth) image encodes shape and distance
More informationLast Lecture. photomatix.com
Last Lecture photomatix.com Today Image Processing: from basic concepts to latest techniques Filtering Edge detection Re-sampling and aliasing Image Pyramids (Gaussian and Laplacian) Removing handshake
More informationToday I t n d ro ucti tion to computer vision Course overview Course requirements
COMP 776: Computer Vision Today Introduction ti to computer vision i Course overview Course requirements The goal of computer vision To extract t meaning from pixels What we see What a computer sees Source:
More informationStereo-based Hand Gesture Tracking and Recognition in Immersive Stereoscopic Displays. Habib Abi-Rached Thursday 17 February 2005.
Stereo-based Hand Gesture Tracking and Recognition in Immersive Stereoscopic Displays Habib Abi-Rached Thursday 17 February 2005. Objective Mission: Facilitate communication: Bandwidth. Intuitiveness.
More informationImplementation of Adaptive Coded Aperture Imaging using a Digital Micro-Mirror Device for Defocus Deblurring
Implementation of Adaptive Coded Aperture Imaging using a Digital Micro-Mirror Device for Defocus Deblurring Ashill Chiranjan and Bernardt Duvenhage Defence, Peace, Safety and Security Council for Scientific
More information4 STUDY OF DEBLURRING TECHNIQUES FOR RESTORED MOTION BLURRED IMAGES
4 STUDY OF DEBLURRING TECHNIQUES FOR RESTORED MOTION BLURRED IMAGES Abstract: This paper attempts to undertake the study of deblurring techniques for Restored Motion Blurred Images by using: Wiener filter,
More informationCPSC 425: Computer Vision
1 / 55 CPSC 425: Computer Vision Instructor: Fred Tung ftung@cs.ubc.ca Department of Computer Science University of British Columbia Lecture Notes 2015/2016 Term 2 2 / 55 Menu January 7, 2016 Topics: Image
More informationComputer Vision Robotics I Prof. Yanco Spring 2015
Computer Vision 91.450 Robotics I Prof. Yanco Spring 2015 RGB Color Space Lighting impacts color values! HSV Color Space Hue, the color type (such as red, blue, or yellow); Measured in values of 0-360
More information