Better Vision through Manipulation. Giorgio Metta, Paul Fitzpatrick. Humanoid Robotics Group, MIT AI Lab.



Vision & Manipulation
In robotics, vision is often used to guide manipulation, but manipulation can also guide vision. This matters for:
- Correction: recovering when perception is misleading
- Experimentation: progressing when perception is ambiguous
- Development: bootstrapping when perception is dumb

Linking Vision & Manipulation
- A link from robotics. Active vision: good motor strategies can simplify perceptual problems.
- A link from neuroscience. Mirror neurons: relating the perceived actions of others to one's own actions may simplify learning tasks.


A Simple Scene?

A Simple Scene?
- Edges of the table and cube overlap
- The cube has a misleading surface pattern
- Colors of the cube and table are poorly separated
- Maybe some cruel grad student glued the cube to the table

Active Segmentation


Result
- No confusion between the cube and the robot's own texture
- No confusion between the cube and the table

Point of Contact

Point of Contact
- Motion spreads continuously: the arm or its shadow
- Motion spreads suddenly, faster than the arm itself: contact
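The sudden-spread heuristic can be sketched as a per-frame check on the growth of the moving region; the threshold and data below are illustrative assumptions, not values from the original system.

```python
def detect_contact(motion_areas, spread_ratio=2.0):
    """Return the first frame index where the motion region grows
    suddenly (faster than the arm's own steady spread), taken here
    as a proxy for the moment of contact.
    motion_areas: per-frame pixel counts of the moving region.
    """
    for t in range(1, len(motion_areas)):
        prev, curr = motion_areas[t - 1], motion_areas[t]
        # Continuous spread (arm or its shadow): area grows gradually.
        # Sudden spread: area jumps by spread_ratio or more in one frame.
        if prev > 0 and curr / prev >= spread_ratio:
            return t
    return None

# Frames 0-6: arm sweeps in, motion area grows smoothly;
# frame 7: the object starts moving too, so the area jumps.
areas = [50, 60, 72, 80, 95, 110, 120, 400, 420, 430]
print(detect_contact(areas))  # -> 7
```

In practice the spread is measured on optical flow, but the same one-frame discontinuity test applies.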

Segmentation
- Gestures: side tap, back slap
- Frames: prior to impact; impact event
- Motion caused (red = novel, purple/blue = discounted)
- Segmentation (green/yellow)
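A toy version of this segmentation step, assuming binary motion masks: pixels that move at the impact event but were not part of the arm's own (discounted) pre-impact motion are attributed to the object. The real system works on optical flow, not binary masks.

```python
import numpy as np

def segment_object(pre_motion, post_motion):
    """Attribute to the object any pixel moving at impact that was
    not already moving before impact (the arm and its shadow are
    discounted). Both inputs are boolean arrays."""
    return post_motion & ~pre_motion

arm = np.zeros((5, 5), dtype=bool)
arm[2, 0:3] = True            # arm sweeping in from the left
impact = arm.copy()
impact[1:4, 3] = True         # object pixels start moving on impact
obj = segment_object(arm, impact)
print(obj.sum())              # -> 3 object pixels
```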

Typical results

A Complete Example

Linking Vision & Manipulation
- A link from robotics. Active vision: good motor strategies can simplify perceptual problems.
- A link from neuroscience. Mirror neurons: relating the perceived actions of others to one's own actions may simplify learning tasks.


Viewing Manipulation
- Canonical neurons: active when manipulable objects are presented visually
- Mirror neurons: active when another individual is seen performing manipulative gestures

Simplest Form of Manipulation
What is the simplest possible manipulative gesture?
- Contact with the object is necessary; we can't do much without it
- Contact with the object is sufficient for certain classes of affordances to come into play (e.g. rolling)
So we can use various styles of poking, prodding, tapping, and swiping as basic manipulative gestures (if we are willing to omit the manus from "manipulation").

Gesture Vocabulary: pull in, side tap, push away, back slap
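A minimal sketch of dispatching one of the four gestures from a desired object displacement in the image plane; the axis conventions and the gesture-to-direction mapping here are assumptions for illustration, not taken from the original work.

```python
def choose_gesture(dx, dy):
    """Pick one of the four poking gestures by the desired object
    displacement (dx, dy). Assumed (hypothetical) convention:
    +x -> side tap, -x -> back slap, +y -> push away, -y -> pull in."""
    if abs(dx) >= abs(dy):
        return "side tap" if dx > 0 else "back slap"
    return "push away" if dy > 0 else "pull in"

print(choose_gesture(1, 0))   # -> side tap
```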

Exploring an Affordance: Rolling

Exploring an Affordance: Rolling
- A toy car: rolls in the direction of its principal axis
- A bottle: rolls orthogonal to its principal axis
- A toy cube: doesn't roll, and has no principal axis
- A ball: rolls, but has no principal axis
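The principal axis underlying these rolling affordances can be computed from second-order central moments of the segmented object mask; this standalone sketch uses the standard image-moment orientation formula.

```python
import numpy as np

def principal_axis_angle(mask):
    """Angle (radians) of the principal axis of a binary object mask,
    from second-order central moments."""
    ys, xs = np.nonzero(mask)
    x, y = xs - xs.mean(), ys - ys.mean()
    mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

# An elongated horizontal bar: its principal axis should be ~0 degrees.
bar = np.zeros((11, 11), dtype=bool)
bar[5, 1:10] = True
print(np.degrees(principal_axis_angle(bar)))  # -> ~0.0
```

Comparing this angle with the object's observed direction of motion after a poke is what separates the car from the bottle.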

Forming Object Clusters

Preferred Direction of Motion
[Four histograms: estimated probability of occurrence vs. difference between angle of motion and principal axis of object, in degrees]
- Car, pointiness = 0.07: rolls along its principal axis
- Bottle, pointiness = 0.13: rolls at right angles to its principal axis
- Cube, pointiness = 0.03
- Ball, pointiness = 0.02
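Statistics like those plotted can be approximated by histogramming the folded angle difference between observed motion and the object's principal axis; the bin width and synthetic data below are illustrative assumptions.

```python
import numpy as np

def rolling_profile(motion_angles, axis_angles, bins=9):
    """Histogram of |motion angle - principal axis| folded into
    [0, 90] degrees, normalized to an estimated probability of
    occurrence. Mass near 0 -> rolls along the axis (car); mass
    near 90 -> rolls at right angles (bottle); flat -> no preferred
    direction (cube, ball)."""
    diff = np.abs(np.asarray(motion_angles) - np.asarray(axis_angles))
    diff = np.minimum(diff % 180, 180 - diff % 180)  # fold into [0, 90]
    hist, _ = np.histogram(diff, bins=bins, range=(0, 90))
    return hist / hist.sum()

# Synthetic 'toy car' observations: motion clusters along the axis.
rng = np.random.default_rng(0)
motion = rng.normal(0, 5, 200)       # angles near the axis direction
p = rolling_profile(motion, np.zeros(200))
print(p.argmax())                    # -> 0 (mass in the first bin)
```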

Closing the Loop: search, rotation, identify and localize object, using previously-poked prototypes
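One simple stand-in for the identify-and-localize step is nearest-prototype matching by histogram intersection over previously-poked objects; the feature choice and object names here are assumptions, not the original method.

```python
import numpy as np

def match_prototype(query_hist, prototypes):
    """Return the name of the previously-poked prototype whose
    normalized feature histogram best matches the query, scored
    by histogram intersection (sum of elementwise minima)."""
    scores = {name: np.minimum(query_hist, h).sum()
              for name, h in prototypes.items()}
    return max(scores, key=scores.get)

protos = {"car":    np.array([0.7, 0.2, 0.1]),
          "bottle": np.array([0.1, 0.1, 0.8])}
print(match_prototype(np.array([0.6, 0.3, 0.1]), protos))  # -> car
```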

Closing The Loop: Very Preliminary!

Conclusions
- Poking works! It will always be an important perceptual fall-back
- Simple, yet already enough to let the robot explore the world of objects and motion
- A stepping stone to greater things?

Acknowledgements: This work was funded by DARPA as part of the "Natural Tasking of Robots Based on Human Interaction Cues" project under contract number DABT 63-00-C-10102, and by NTT as part of the NTT/MIT Collaboration Agreement.

Training Visual Predictor

Locating Arm without Appearance Model: optical flow maximum; segmented regions
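Taking the optical-flow maximum can be approximated with frame differencing and a sliding-window peak search; this is a crude sketch under that simplification, not the actual implementation.

```python
import numpy as np

def locate_arm(prev_frame, frame, patch=3):
    """Return the top-left corner of the patch with maximum
    frame-to-frame change: a stand-in for taking the optical-flow
    maximum to localize the moving arm without an appearance model."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    best, best_score = (0, 0), -1.0
    h, w = diff.shape
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            s = diff[i:i + patch, j:j + patch].sum()
            if s > best_score:
                best, best_score = (i, j), s
    return best

a = np.zeros((8, 8))
b = np.zeros((8, 8))
b[4:7, 2:5] = 1.0          # the only region that changed
print(locate_arm(a, b))    # -> (4, 2)
```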

Tracing Cause and Effect: the object (the goal) connects robot and human action