Prospective Teleautonomy For EOD Operations


Prospective Teleautonomy For EOD Operations
Prof. Seth Teller
Electrical Engineering and Computer Science Department
Computer Science and Artificial Intelligence Laboratory, MIT

Goal: robots that join human teams
Our lab develops machines that:
- Have a rich, live awareness of their surroundings
- Understand natural speech and gesture
- Act autonomously, under human supervision
Self-driving car (DARPA Urban Challenge): MIT's vehicle was one of only 6 of the 89 original entrants to complete the final course
Unmanned, voice-operated forklift: developed for DDR&E and the U.S. Army; now part of a fielding effort at ARDEC
Self-driving wheelchair & robot arm: learns from human-narrated motions and actions
Prospective teleautonomous manipulator: effort funded by ONR starting 1/1/12, and the subject of today's presentation

What problem are we tackling?
ONR BAA 11-019 asked the research community to "develop and demonstrate emerging technologies for dismounted missions to detect/locate, access, diagnose/identify, and render safe/neutralize explosive hazards, including IEDs and UXO."
From EOD personnel at SOCPAC and SPAWAR:
- The degree of autonomy provided by existing platforms is very low
- Existing teleoperation interfaces are clumsy, even for experts
- Even simple missions can take tens of minutes via teleoperation
- A significant fraction of missions can't be completed via teleoperation; in these cases, personnel must dismount and approach manually
We're pursuing a new approach that, if successful:
- Will significantly reduce the need for manual approach
- Will significantly reduce average mission duration and error rate
- Could mitigate issues of cost, weight, comms requirements, and software complexity

Traditional Task Decomposition
Human:
- All levels of planning
- All scene interpretation
- Uses OCU to control robot DOFs
- Tracking of workspace dynamics
- Reacting to unexpected events
Robot:
- Sends video feed to operator
- Follows commands of OCU
- Limited Cartesian motions
- Motor velocities/forces/torques
- Raw sensor data
Key limitation: the operator must control a high-DOF arm with a low-DOF OCU.

Novel Proposed Decomposition (SRI/Sarnoff Video-Trek?)
Changes the human-robot division of labor: the human provides perception and task guidance; the robot maintains a perceived world model and intent.
New component on the human side: a multi-touch human-robot interface, supporting:
- High-level mission and task planning
- Scene interpretation and motion guidance
- Cues about objects and desired actions, via multi-touch gestures on lidar data
New components on the robot side:
- Lidar (captures RGB point cloud)
- Prospective autonomy algorithms
- Sends video and lidar feed to the operator
- Scene perception (human-assisted)
- Low-level motion planning
- Scene-adaptive motion control
Rather than make the robot smart, make it able to follow detailed instructions.

Video (but please keep in mind...)
Proof-of-concept demonstration:
- Research robot (Willow Garage PR2), touchscreen
- Actions shown: pulling a lever, opening a box flap
Imagine, next, what we're working toward:
- A typical EOD robot (Talon, PackBot) with a rugged tablet
- Actual EOD tasks (dig, break a window, open a car door, detach a circuit board, clip a wire, use a disruptor)

Prospective Teleautonomy

Novel system aspects (1/7): Mouse/touchscreen interaction
- No teleop knobs, joysticks, gloves, controllers, or armatures*
- The operator is no longer forced to do inverse kinematics, i.e., move joysticks to make the arm & gripper move as desired
- Comment: eliminating the complex OCU mechanism could reduce both system weight and cost
*The system could retain a standard OCU as a backup manual control method

Novel system aspects (2/7): Operator's free view of the workspace
- View rendered from the robot's-point-of-view lidar point cloud
- The operator can change the view without moving the robot or arm
- Comment: running change detection locally, and sending new HD scans to the operator only when needed, could significantly reduce bandwidth needs
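The change-detection idea above can be sketched in a few lines. This is an illustrative Python sketch, not the system's implementation; the index-aligned scan format, distance tolerance, and transmission threshold are all assumptions.

```python
# Hypothetical sketch: transmit a fresh lidar scan to the operator only
# when enough points have moved since the last scan that was sent.
# Scans are lists of (x, y, z) tuples; assumed index-aligned (same beam
# order in both scans), a simplification of real scan matching.

def changed_fraction(prev_scan, new_scan, tol=0.05):
    """Fraction of points that moved more than `tol` meters."""
    moved = 0
    for (x0, y0, z0), (x1, y1, z1) in zip(prev_scan, new_scan):
        if ((x1 - x0)**2 + (y1 - y0)**2 + (z1 - z0)**2) ** 0.5 > tol:
            moved += 1
    return moved / max(len(new_scan), 1)

def should_transmit(prev_scan, new_scan, threshold=0.02):
    """Send a new scan only if more than 2% of points changed."""
    return changed_fraction(prev_scan, new_scan) > threshold
```

A static workspace then costs no uplink bandwidth at all; only genuine scene changes trigger a new scan.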

Novel system aspects (3/7): Operator selection of an object of interest
- The operator can add/subtract lidar points while changing the view
- The system responds by highlighting its idea of the object
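One common way a system could turn a few tapped points into a full object highlight is Euclidean region growing. The sketch below is an assumption about how this step might work, not the talk's actual algorithm; the function name and connection radius are invented.

```python
# Illustrative sketch: grow the operator's seed selection into an object
# hypothesis by repeatedly absorbing lidar points that lie within a
# connection radius of the already-selected set.

def grow_selection(points, seeds, radius=0.1):
    """Euclidean region growing over a list of (x, y, z) points.
    `seeds` are indices the operator tapped; returns the grown index set."""
    selected = set(seeds)
    frontier = list(seeds)
    r2 = radius * radius
    while frontier:
        i = frontier.pop()
        xi, yi, zi = points[i]
        for j, (x, y, z) in enumerate(points):
            if j not in selected:
                if (x - xi)**2 + (y - yi)**2 + (z - zi)**2 <= r2:
                    selected.add(j)
                    frontier.append(j)
    return selected
```

Subtracting points would simply remove indices from the seed set and re-grow, letting the operator sculpt the highlight interactively.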

Novel system aspects (4/7): Operator has a free 3D view of the highlighted object
- Comment: could integrate "give me a better look at that" or "give this fancy sensor a better look at that"

Novel system aspects (5/7): Operator indicates the object's degrees of freedom
- ...then specifies object motion based on the indicated DOFs
- Comment: in the future, the system could infer DOFs automatically by matching the selected point cloud to an object library
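The payoff of indicating DOFs is that a commanded motion collapses to a single parameter. As a minimal sketch (assumed geometry, not the system's code): once a box flap is tagged as a 1-DOF hinge, "open the flap" reduces to one angle about the hinge axis, here taken as vertical for simplicity.

```python
import math

# Hypothetical sketch: with the object tagged as a 1-DOF hinge, the only
# permitted motion of any of its points is rotation about the hinge axis.
# A vertical (z) hinge axis through `hinge_origin` is assumed for clarity.

def rotate_about_hinge(point, hinge_origin, angle):
    """Rotate (x, y, z) `point` by `angle` radians about a vertical
    hinge axis passing through `hinge_origin`."""
    px, py, pz = point
    ox, oy, _ = hinge_origin
    dx, dy = px - ox, py - oy
    c, s = math.cos(angle), math.sin(angle)
    return (ox + c * dx - s * dy, oy + s * dx + c * dy, pz)
```

The operator's gesture then just selects the angle; the system never has to guess a full 6-DOF trajectory for the object.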

Novel system aspects (6/7): System animates its idea of what the operator wants
- Waits for operator confirmation before executing the plan
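The confirm-before-execute step is essentially a small state machine. The toy sketch below illustrates that protocol; the class, state names, and plan format are all invented for illustration.

```python
# Toy sketch of confirm-before-execute: the robot previews its
# interpretation of the gesture as a plan and acts only after an
# explicit operator acknowledgment. States and names are hypothetical.

class PlanGate:
    """Holds a proposed plan until the operator confirms or rejects it."""

    def __init__(self):
        self.state = "idle"
        self.plan = None

    def propose(self, plan):
        """Robot animates `plan` and waits for the operator's decision."""
        self.plan, self.state = plan, "awaiting_confirmation"

    def confirm(self):
        """Operator approves; release the plan for execution."""
        if self.state != "awaiting_confirmation":
            raise RuntimeError("no plan pending")
        self.state = "executing"
        return self.plan

    def reject(self):
        """Operator disagrees; discard the plan and return to idle."""
        self.plan, self.state = None, "idle"
```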

Novel system aspects (7/7): Robot executes the operator-specified motion
- Uses local sensing and closed-loop control to track the object
- Comment 1: low-latency comms are no longer needed; the system could even tolerate an intermittent link
- Comment 2: in the future, the operator could provide guidance continuously during execution
- Comment 3: the motion planner is platform-agnostic, reducing software complexity
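The closed-loop idea, in miniature: each control cycle re-senses the manipuland and moves the end-effector a proportional step toward it, so the motion stays valid even if the object shifts mid-execution. This is a hedged sketch; the gain, step count, and sensing callback are assumptions, not the system's controller.

```python
# Minimal proportional servo loop: re-sense the target every cycle and
# step toward its *current* pose, rather than replaying a fixed
# trajectory planned from a stale view. Names and gains are illustrative.

def servo_step(gripper, target, gain=0.5):
    """One proportional step of the gripper (x, y, z) toward the target."""
    return tuple(g + gain * (t - g) for g, t in zip(gripper, target))

def track(gripper, sense_target, steps=20, gain=0.5):
    """Run the sense-act loop; `sense_target(k)` returns the object's
    pose at cycle k, standing in for the robot's onboard perception."""
    for k in range(steps):
        gripper = servo_step(gripper, sense_target(k), gain)
    return gripper
```

Because the loop closes onboard, a dropped operator link stalls nothing: the robot keeps tracking with local sensing, which is exactly why low-latency comms are no longer required.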

Prospective Teleautonomy

Technical Challenges / Risks
Which sensors should the system use?
- Size/weight/power/cost vs. utility outdoors
- Point clouds (especially dynamic ones) don't compress well
How should the human-robot interface work?
- Rich enough to convey the operator's mental model
- Intuitive enough for training, predictability, and trust
- Robust enough to generalize across manipulands
How to achieve robust motion planning?
- Must incorporate operator-indicated constraints (+/-)
- Must hide details of robot morphology from the operator
How to achieve robust motion control?
- The robot must track non-rigid changes in the workspace and adapt to moving manipuland geometry in real time

Milestones (5 parallel thrusts over 48 months)
Bidirectional human-robot interface:
- Touchscreen to hand gestures; "don't touch" constraints; articulation/manipulation gestures; complex task sequences
Workspace perception:
- Terrain classification; object segmentation; manipuland categories; fine-resolution perception; clutter; night operations
Multi-modal guidance and control:
- Simple route planning; pre-grasp stance planning; execution of "don't touch" plans; macro capability for task sequences
End-to-end capability:
- Simple mobility; 1-DOF actions; single platform; path adjustment; multi-DOF actions; multiple platforms; autonomous approach and pre-grasp stance; bimanual manipulation with stance adjustment
Frequent engagement with EOD personnel:
- Observation of EOD training; feedback about prototypes

Conclusions
A new approach to the conduct of EOD missions:
- Treats the robot as a junior partner to the human operator
- The operator conveys interpretation and an action plan to the robot
- The robot reflects its understanding, then executes the plan
Requires advances along multiple fronts:
- Human-robot interaction: assisted robot perception
- Sensing and perception: objects from point clouds
- Platform-independent motion planning & control
Proposed several concrete milestones:
- From single actions to complex action sequences
- From a single platform to multiple platforms

Questions / Comments?
- Reactions from EOD folks?
- Highest-priority capability gaps?
- Stats on mission durations and success rates?
- Access to training materials?
- Opportunities to observe training?
- Opportunities for EOD test/evaluation?