Active Stereo Vision
COMP 4102A, Winter 2014. Gerhard Roth. Version 1


Why active sensors?
- Project our own texture onto the scene using light (usually a laser); this greatly simplifies the correspondence problem
- Pluses:
  - Can handle different ambient lighting conditions
  - Can get 3D data where there is no natural texture (e.g. a white wall)
- Minuses:
  - Need an active source and a way to project it (lasers can be dangerous)
  - Need more complex hardware
- Many different systems, but two underlying principles:
  - Triangulation: the same as stereo, but the light source replaces the second camera
  - Time of flight: produce a pulsed beam of light and measure distance by the time the light takes to return

Pulsed Time of Flight
- Basic idea: send out a pulse of light (usually laser) and time how long it takes to return
- Advantages: large working volume (roughly 20 to 1000 m)
- Disadvantages: not-so-great accuracy (at best ~5 mm), because this requires timing the return to about 30 picoseconds
- Often used for scanning buildings, rooms, archaeological sites, etc.
- The only practical long-range measuring technology (triangulation fails beyond about 20 m)
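The arithmetic behind these numbers is easy to verify. A minimal sketch (my own Python, not code from the course) that converts a round-trip time to range and shows that roughly 33 ps of timing error corresponds to about 5 mm of range error:

```python
# Pulsed time-of-flight: range from the round-trip travel time of a light pulse.

C = 299_792_458.0  # speed of light in m/s

def tof_range(round_trip_seconds: float) -> float:
    """Distance to the target: the pulse covers the range twice."""
    return C * round_trip_seconds / 2.0

# A target 100 m away returns the pulse after roughly 667 ns.
print(tof_range(667e-9))            # ~100.0 m

# Timing jitter maps directly to range error: 33 ps of jitter is ~5 mm.
print(tof_range(33e-12) * 1000.0)   # ~4.9 mm
```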

Optech Airborne Laser Mapping (example figures):
- Raw image, with depth colour coded
- Buildings, outlines, trees and wires
- Bare earth model: the trees removed

Triangulation
- One or two cameras and a light source
- Many possible light sources and variations
- Still uses triangulation to find the depth

Simplest possible triangulation system?
- Take two calibrated stereo cameras
- Use a laser pointer to shine light where we want depth
- Find the laser spot in both images; this feature must correspond, so you get a 3D point
- This is easy because the laser spot is very bright compared to the rest of the scene
- It works and is very easy to build, but acquiring data is slow since you must move the laser spot around by hand
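A minimal sketch of this system using OpenCV, assuming the two cameras are already calibrated; `P_left` and `P_right` are the 3x4 projection matrices from that calibration (the names are mine, not from the slides):

```python
import cv2
import numpy as np

def laser_spot(img_gray):
    """The laser spot is far brighter than the scene, so the global
    maximum of a lightly smoothed image is a reasonable detector."""
    blurred = cv2.GaussianBlur(img_gray, (5, 5), 0)
    _, _, _, max_loc = cv2.minMaxLoc(blurred)
    return max_loc  # (x, y) pixel coordinates

def triangulate_spot(img_left, img_right, P_left, P_right):
    """P_left, P_right: 3x4 camera projection matrices from calibration."""
    pt_l = np.array(laser_spot(img_left), dtype=np.float64).reshape(2, 1)
    pt_r = np.array(laser_spot(img_right), dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_left, P_right, pt_l, pt_r)  # homogeneous 4x1
    return (X_h[:3] / X_h[3]).ravel()  # 3D point in the reference frame
```

Sweeping the laser pointer over the object and repeating this per frame yields one 3D point per image pair, which is exactly why data collection is slow.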

Triangulation system with one camera? What if you have a single laser pointer and also a single camera looking at the spot?

Triangulation with one camera?
- Assume the laser is moved by a calibrated motor, so you know the direction in space of the laser beam
- The camera is calibrated and the baseline is known, so you can find the laser spot's location in the image
- Then you can still triangulate to find depth, even though you have only one camera
- Needs a very accurate, high-speed motor to move the laser spot around the scene
- Complex hardware, but exactly what was done at NRC over about 30 years!
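With the camera at the origin and the laser source a known baseline away along the x-axis, the two ray directions fix the spot by plane geometry. A hedged sketch of that calculation (my own derivation, not code from the course):

```python
import math

def triangulate_depth(baseline_m: float, cam_angle: float, laser_angle: float) -> float:
    """Depth of the laser spot for a camera at the origin and a laser
    source baseline_m away, both angles measured from the baseline.

    cam_angle:   direction of the camera ray to the spot (from the pixel
                 plus calibration), in radians.
    laser_angle: direction of the laser beam (from the motor encoder),
                 in radians.
    """
    t_c, t_l = math.tan(cam_angle), math.tan(laser_angle)
    return baseline_m * t_c * t_l / (t_c + t_l)

# Example: 0.5 m baseline, both rays at 60 degrees to the baseline.
print(triangulate_depth(0.5, math.radians(60), math.radians(60)))  # ~0.43 m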

Triangulation can be very accurate: down to 20 microns (1/50th of a millimetre!)

Microsoft Kinect
- A triangulation-based system for finding depth
- Designed to interpret motions, not to build accurate 3D models or measure objects
- The frequency of its infrared projector is similar to the sun's, so it cannot be used close to a window or taken outdoors
- Still, for Human-Computer Interaction the Kinect is a big breakthrough: the first inexpensive, mass-produced active sensor for consumers and researchers

Kinect Hardware
- IR emitter
- Color sensor
- IR depth sensor
- Tilt motor
- Microphone array

Sensors/Resolution of Kinect
- Separate sensors for depth and colour
- Color: 12 FPS at 1280x960 RGB; 15 FPS raw YUV at 640x480; 30 FPS at 640x480
- Depth: 30 FPS at 80x60, 320x240, or 640x480
- Not that accurate unless extra calibration is done
- Depth and colour are registered, so you can get the colour for each depth point

Depth and Intensity Images
- The depth image is shown in depth-map style, with brighter points closer to the camera (see the sketch below)
- http://www.youtube.com/watch?v=inim0xwir0o
- http://www.youtube.com/watch?v=7tgf30-5kuq&feature=related
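A small sketch of that rendering convention, assuming depth arrives as millimetres with 0 meaning "no reading" (a common delivery format for Kinect depth frames):

```python
import numpy as np

def depth_to_brightness(depth_mm: np.ndarray) -> np.ndarray:
    """Render a depth map so that nearer points appear brighter.
    Pixels with depth 0 (no reading) are left black."""
    valid = depth_mm > 0
    out = np.zeros(depth_mm.shape, dtype=np.uint8)
    if valid.any():
        d = depth_mm[valid].astype(np.float64)
        # Invert and normalize: smallest depth -> 255, largest -> 1.
        scaled = (d.max() - d) / max(d.max() - d.min(), 1e-9)
        out[valid] = (1 + 254 * scaled).astype(np.uint8)
    return out
```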

How does Kinect get depth?
- It projects pseudo-random dots onto the world
- http://www.youtube.com/watch?v=dtklngsh9po&feature=related

Local patterns are almost unique
- What is the principle? The Kinect uses self-identifying patterns of dots (like glyphs)
- What are glyphs? Local patterns that identify themselves uniquely, e.g. QR codes and augmented reality tags

Glyphs printed on paper (DataGlyphs)
- Old Xerox technology: a little pattern that is hard to see but encodes a unique bit string

The Kinect projects dots which act as glyphs

Kinect glyphs are almost unique
- The local pattern identifies the location of the projection
- Find the local identifier by looking in a small region around a given point => a code (sketched below)
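One way to picture turning a neighbourhood of dots into a code. This is an illustrative sketch only; the actual Kinect/PrimeSense encoding is proprietary:

```python
import numpy as np

def local_code(dot_img, x, y, win=5):
    """Read the dot pattern in a small window around interior pixel (x, y)
    as a bit string: bright pixel -> 1, dark -> 0. With a pseudo-random
    projected pattern this code is (almost) unique to the neighbourhood."""
    h = win // 2
    window = dot_img[y - h:y + h + 1, x - h:x + h + 1]
    bits = (window > window.mean()).astype(np.uint8).ravel()
    return int("".join(map(str, bits)), 2)  # a 25-bit identifier for win=5
```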

How do pseudo-random dots work?
- Once you get the glyph, a prior calibration tells you the angle(s), and therefore the ray, for that particular point
- So now you can triangulate to get depth!

How do pseudo-random dots work?
- Repeat this process for each small region in the dot image to get depth at each point (see the sketch below)
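A hedged sketch of this per-region idea, phrased as matching against a reference IR image captured at a known depth; this is the general structured-light scheme, not Microsoft's exact pipeline, and all names are mine:

```python
import numpy as np

def region_depth(ir_img, ref_img, x, y, focal_px, baseline_m, ref_depth_m,
                 win=9, search=64):
    """Depth at interior pixel (x, y): match a small window of the live IR
    dot image against a reference IR image captured at a known depth, and
    convert the horizontal shift (disparity) to depth via z = f*b/disparity."""
    h = win // 2
    patch = ir_img[y - h:y + h + 1, x - h:x + h + 1].astype(np.float64)
    a = patch - patch.mean()
    best_d, best_score = 0, -np.inf
    for d in range(-search, search + 1):
        x0 = x - h + d
        if x0 < 0 or x0 + win > ref_img.shape[1]:
            continue
        cand = ref_img[y - h:y + h + 1, x0:x0 + win].astype(np.float64)
        b = cand - cand.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        score = (a * b).sum() / denom if denom > 0 else -np.inf
        if score > best_score:
            best_score, best_d = score, d
    # The reference plane itself sits at a known disparity; the measured
    # shift is relative to it (the sign depends on the emitter/camera layout).
    ref_disp = focal_px * baseline_m / ref_depth_m
    return focal_px * baseline_m / (ref_disp + best_d)
```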

Kinect Depth Acquisition Summary
- There is a projector for the laser dots and a sensor just for those dots (infrared)
- We can recognize each glyph in the infrared image, so we can triangulate to find depth
- This requires a prior calibration process so that we know the ray for each laser dot
- It is still just the ordinary triangulation process
- There is another camera that produces a separate and distinct intensity image
- The Kinect returns both a depth map and the overlaid intensity image

Model Building with a Kinect
- Given a series of depth images from the Kinect and the overlaid intensity images, what can we do?
- A simple model-building algorithm:
  - Take overlapping depth images
  - In each intensity image, find some SURF features
  - Each SURF feature has a range value from the depth image
  - Align the overlapping depth images using these matched 3D features (see the sketch below)
  - Repeat this process enough times and you get one big model
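Aligning two overlapping scans requires a rigid transform between the matched 3D feature positions; the standard closed-form SVD (Kabsch) solution is a natural fit here. The slides do not specify the exact method used, so this is a hedged sketch of that alignment step:

```python
import numpy as np

def rigid_align(src_pts, dst_pts):
    """Least-squares rotation R and translation t mapping src_pts onto
    dst_pts (both Nx3 arrays of matched 3D feature positions), via the
    standard SVD (Kabsch) solution."""
    src_c, dst_c = src_pts.mean(axis=0), dst_pts.mean(axis=0)
    H = (src_pts - src_c).T @ (dst_pts - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Chaining these pairwise transforms across the whole sequence brings every scan into one coordinate frame, which is the "one big model" the slide describes.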

Kinect for model building: http://www.youtube.com/watch?v=nsrmnievO4s

Depth sensor better than intensity?
- Is it easier to use a Kinect (depth sensor) or an ordinary digital camera to make models?
- Using a Kinect is much better, because the Kinect's depth accuracy does not change as you move the camera; it depends on the (fixed) baseline alone
- With an intensity-image sequence, the quality of any depth reconstruction depends on the spacing between the images
- You cannot just rotate an intensity camera and get depth, but you can rotate the Kinect camera

Limitations of Kinect
- Not that accurate unless you do more complex calibrations
- It was designed to interpret motions, not to build accurate 3D models or measure objects
- The frequency of its infrared projector is similar to the sun's, so the Kinect cannot be used close to a window or taken outdoors in bright sunlight
- Multiple Kinects interfere with each other
- Still, for Human-Computer Interaction the Kinect is a big breakthrough: inexpensive and useful