EXPLORING THE PERFORMANCE OF THE IROBOT CREATE FOR OBJECT RELOCATION IN OUTER SPACE

Mr. Hasani Burns
Advisor: Dr. Chutima Boonthum-Denecke
Hampton University

Abstract

This research explores the performance of the iRobot Create for optimizing object relocation in an outer-space environment. The ultimate goal is for it to become a symbol of innovation for robots sent into outer space. Functioning as a tool-bot and an active assistant, the robot aims to help with small duties and respond to commands. With its arm and color-blob recognition capabilities, it has the potential to receive a request, register and associate it with objects in its line of sight, and maneuver the arm to act accordingly, grabbing the correct object and handing it to a worker or engineer. This poster and presentation explain the current progress and implementation of the iRobot Create for this purpose.

Introduction

The iRobot Create is not only a flexible robot platform for students and educators [3], but also a cheap and effective way to apply robotics technology to societal needs. The design of robots at both the hardware and software levels is essential to accelerating space exploration, excavation, and space operations. Robots are tools, and they can be programmed to carry out human-like tasks in environments that are unsuitable or too dangerous for a person. Combining the iRobot Create with the Tekkotsu framework, created at Carnegie Mellon University, allowed us to examine color recognition of objects using the Calliope (Figure 1), a modified version of the iRobot Create.

Figure 1a: Calliope (Full-body)    Figure 1b: Calliope Gripper (2 DOF)

Tekkotsu Framework & AprilTags

Tekkotsu [2], Japanese for "iron bones," is a software package that gives the user a structure on which to develop robotics control and routine tasks by focusing on higher-level programming. Tekkotsu uses the object-oriented, template, and inheritance features of C++; hence, the user does not need to worry about low-level programming to control the robot's vision and movement. AprilTags [1] are augmented-reality tags from Professor Edwin Olson's APRIL Lab at the University of Michigan: a visual fiducial system that uses a 2D bar-code-style tag, allowing full 6 degree-of-freedom (DOF) localization from a single image. Visual fiducial systems are used to improve human-robot interaction, allowing humans to signal commands by simply flashing an appropriate card to the robot.

Implementation

The Calliope, as shown in Figure 1, has an additional arm component with motors and controls that extends its capability well beyond that of the traditional iRobot Create. To plan how this robot would be used, one first had to examine the environment in which it would carry out its duties. Zero-gravity environments call for precise movement; a robot that aims to grab tools and hand them to workers must know when to let go of a tool so that it does not float away. The environment also calls for a robot that does not need to be driven by an operator or have extra buttons pressed when out in the field. Such a robot is most effective acting autonomously, which places much of the emphasis on its ability to sense commands, colors, shapes, and tags: anything it can use to determine its next instruction. The robot begins in a ready or idle position before proceeding to the next state (see Figure 2). Which medium of command would specifically trigger the motor response has not been settled, but the AprilTag system would work well as a signaling mechanism.

Figure 2: State Transition Concept Diagram
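As a conceptual illustration of the cycle in Figure 2, the short plain C++ sketch below walks one pass through the states described above. The state names and the transition order are illustrative assumptions drawn from this description, not the project's actual Tekkotsu state machine.

#include <iostream>

// Conceptual sketch of the Figure 2 state cycle; the state names are
// assumptions based on the prose above, not the real implementation.
enum class FetchState { Idle, CommandReceived, Navigate, IdentifyTool, Grasp, HandOver };

FetchState nextState(FetchState s) {
    switch (s) {
        case FetchState::Idle:            return FetchState::CommandReceived; // AprilTag or other signal seen
        case FetchState::CommandReceived: return FetchState::Navigate;        // drive toward the tag source
        case FetchState::Navigate:        return FetchState::IdentifyTool;    // halt, re-image, segment colors
        case FetchState::IdentifyTool:    return FetchState::Grasp;           // match a color class to the requested tool
        case FetchState::Grasp:           return FetchState::HandOver;        // lift the arm and present the tool
        case FetchState::HandOver:        return FetchState::Idle;            // release and return to ready
    }
    return FetchState::Idle;
}

int main() {
    FetchState s = FetchState::Idle;
    for (int step = 0; step < 6; ++step) {   // one full cycle
        s = nextState(s);
        std::cout << static_cast<int>(s) << '\n';
    }
    return 0;
}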

From this command, the robot begins moving toward the source of the tag signal. As it makes its way to the target, the camera on top of the robot scans the path ahead for obstacles. The AprilTags can also be recognized through the segmenting camera, as shown in Figures 3b and 3c: from the raw image, the robot differentiates between the tag, outlined by the small square in the segmented camera view, and the rest of the world. With the Tekkotsu framework, the position of the tag in camera space, with X, Y, and Z coordinates, can be extracted.

#include "Behaviors/StateMachine.h"

$nodeclass AprilTest : VisualRoutinesStateNode {

  $nodeclass Look : MapBuilderNode($, MapBuilderRequest::cameraMap) : constructor {
    mapreq.setAprilTagFamily();   // use the default tag family
  }

  $nodeclass Report : SpeechNode : doStart {
    NEW_SHAPEVEC(tags, AprilTagData, select_type<AprilTagData>(camShS));
    textstream << "I saw " << tags.size() << " april tag"
               << (tags.size() == 1 ? "" : "s");
    SHAPEVEC_ITERATE(tags, AprilTagData, t) {
      textstream << " x distance is " << int(t->getCentroid().coordX()) << " millimeters.";
    END_ITERATE;
  }

  $setupmachine {
    Look =MAP=> Report
  }
}

REGISTER_BEHAVIOR_MENU(AprilTest, DEFAULT_TK_MENU"/Vision Demos");

Figure 3a: AprilTags    Figure 3b: AprilTags (Raw Camera)    Figure 3c: AprilTags (Seg Camera)
Figure 4: Sample code for AprilTag recognition and distance

Using MapBuilder nodes with the default tag family, the robot can get the distance to the AprilTag, measured from its center (see Figure 4). This allows for precision when measuring how far the robot must travel to reach its destination.
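The centroid coordinates reported by the code above can be converted into a drive command with simple geometry. The helper below is a minimal sketch under the assumption that the tag's forward and lateral offsets are available in millimeters in the robot's frame; it is not part of the Figure 4 demo.

#include <cmath>
#include <cstdio>

// Hypothetical helper: turn a tag offset (mm) into a heading and a travel
// distance. Assumes x is the forward offset and y the lateral offset.
void planApproach(double x_mm, double y_mm) {
    double distance_mm = std::hypot(x_mm, y_mm);   // straight-line range to the tag
    double heading_rad = std::atan2(y_mm, x_mm);   // turn needed before driving
    std::printf("turn %.2f rad, drive %.0f mm\n", heading_rad, distance_mm);
}

int main() {
    planApproach(850.0, -120.0);   // example offsets only
    return 0;
}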

Future plans include perfecting this AprilTag detection and distance estimation, along with the mapping functionality needed to maneuver around any obstruction, avoiding contact and damage, and then return to the appropriate path or home base. Once the robot reaches the target destination, it will halt at that position, take a second set of images, and use its segmented camera view to distinguish colors. Finally, after receiving the tool-specific command, the Calliope would search for the color corresponding to that tool, grab it, and hand it up and forward.

It takes considerable effort to use Tekkotsu's MapBuilder components to activate the Calliope's vision. It is even more important to be able to recognize an object in terms of color association for tools, especially small objects on which AprilTags may be invisible or hard to detect. Using color image segmentation, color classes are assigned to each pixel and calibrated so that when the Create sees that specific tone of color, it can record it properly. The code sample shown in Figure 5 represents the activation of the MapBuilder component and specifies exactly what it is looking for. The MapBuilder is created in the state machine through a VisualRoutinesStateNode and is told, in this case, to look for various blobs of color in an image. A specific set of target colors can be listed; hence, the segmented image will show only these targeted colors.

#include "Behaviors/StateMachine.h"

$nodeclass MapBuilderTest1 : VisualRoutinesStateNode {

  $nodeclass LookForObjects : MapBuilderNode : doStart {
    mapreq.addObjectColor(blobDataType, "green");
    mapreq.addObjectColor(blobDataType, "blue");
    mapreq.addObjectColor(blobDataType, "red");
  }

  $setupmachine {
    LookForObjects =C=> SpeechNode("Done")
  }
}

REGISTER_BEHAVIOR(MapBuilderTest1);

Figure 5: Tekkotsu Code Sample

The image in Figure 6 shows the color segmentation working alongside the raw image capture. Even in less than ideal light, the camera is able to capture and distinguish between the red, green, and blue colors.

Figure 6a: Raw objects    Figure 6b: Color Segmentation

Modifying the script in Figure 5 allows the robot to react in a particular way when a certain color is seen and recognized. This is done by adding new nodes that act as function calls to make the robot move its body or arm, or register the data in a structure such as an array.
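One possible shape for such an extension, patterned on the Figure 4 and Figure 5 demos, is sketched below. The node names are hypothetical, and it assumes that BlobData shapes expose getCentroid() in the same way the AprilTagData shapes do above; it is meant only to show where a reaction or data-recording node would attach.

#include "Behaviors/StateMachine.h"
#include <vector>

$nodeclass BlobLogTest : VisualRoutinesStateNode {

  $nodeclass LookForObjects : MapBuilderNode : doStart {
    mapreq.addObjectColor(blobDataType, "green");
    mapreq.addObjectColor(blobDataType, "blue");
    mapreq.addObjectColor(blobDataType, "red");
  }

  $nodeclass RecordBlobs : SpeechNode : doStart {
    // Register the detected blobs in a structure (here, a vector of centroids)
    NEW_SHAPEVEC(blobs, BlobData, select_type<BlobData>(camShS));
    std::vector<Point> centroids;
    SHAPEVEC_ITERATE(blobs, BlobData, b) {
      centroids.push_back(b->getCentroid());
    END_ITERATE;
    textstream << "I stored " << centroids.size() << " blob positions.";
  }

  $setupmachine {
    LookForObjects =C=> RecordBlobs
  }
}

REGISTER_BEHAVIOR(BlobLogTest);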

#include "Behaviors/StateMachine.h"

$nodeclass MSeqTest : DynamicMotionSequenceNode : doStart {
  MMAccessor<DynamicMotionSequence> mseq_acc = getMC();
  mseq_acc->loadFile("firstpose.pos");
}

Figure 7a: Sample Calliope Posture File

Kinematics

The arm's kinematics allow it to move in the X, Y, and Z directions. It can be controlled manually through the Tekkotsu interface and can also be programmed to a set of poses, specified by number and axis position. The poses are loaded on command in another node, as shown in the code above, and can be switched fluidly between one another to produce smooth motion. They can be set to run with a time interval between them, as with any other function in the framework.

Figure 7b: 3D Model of Arm Function    Figure 7c: 3D Model of Arm Function
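To illustrate the timed switching between poses described above, the sketch below chains two pose-loading nodes in the style of the MSeqTest snippet. The second pose file name, the two-second interval, and the use of a timeout transition (written here with Tekkotsu's =T()=> shorthand) are assumptions for illustration, not the project's actual code.

#include "Behaviors/StateMachine.h"

$nodeclass PoseSwitchTest : StateNode {

  // Load the first arm pose (file name taken from the sample above)
  $nodeclass FirstPose : DynamicMotionSequenceNode : doStart {
    MMAccessor<DynamicMotionSequence> mseq_acc = getMC();
    mseq_acc->loadFile("firstpose.pos");
  }

  // Load a second, hypothetical pose file
  $nodeclass SecondPose : DynamicMotionSequenceNode : doStart {
    MMAccessor<DynamicMotionSequence> mseq_acc = getMC();
    mseq_acc->loadFile("secondpose.pos");
  }

  $setupmachine {
    // Hold the first pose for two seconds, then switch to the second
    FirstPose =T(2000)=> SecondPose
  }
}

REGISTER_BEHAVIOR(PoseSwitchTest);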

Future Implementation & Conclusion

The Calliope robot is now able to recognize AprilTags and color blobs and to react based on what it sees. Both AprilTags and color blobs have pros and cons, so depending on the situation one may be chosen over the other. For example, in an unknown and confined space, color blobs may be the better way to signal the robot. It would be beneficial to have a single program that can switch between the two signals. The Calliope's color recognition improves when the color segmentation is properly defined. Unfortunately, light and shade play a major role in color recognition: in some cases orange or pink can be read as red, and green as blue. Therefore, color calibration should be done prior to any deployment, which is unlikely to be possible given the unknown area and the uncertainty about lighting.

What can be improved? Because the Calliope has only a 2-DOF arm (Figure 1), it can pick up a standing object but not one lying down, so the toolkit tray must be kept vertical. With a 5-DOF arm (Figure 8), the Calliope could be maneuvered to pick up both standing and lying objects. Finally, the robot should be able to navigate through known and unknown areas while carrying the toolkit tray; mapping and localization, that is, the robot knowing its position on a map, would expand its usefulness.

Figure 8: Calliope with 5-DOF arm

Acknowledgements

This research project is funded by the Virginia Space Grant Consortium, and the Calliope robot is funded in part by the ARTSI Alliance (National Science Foundation, Broadening Participation in Computing Program).

References

[1] Olson, Edwin. AprilTag: A robust and flexible multi-purpose fiducial system. University of Michigan APRIL Laboratory, May 2010.
[2] Tira-Thompson, E. J., and Touretzky, D. S. In press. The Tekkotsu robotics development framework. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2011), Shanghai, China.
[3] Touretzky, D. S. 2010. Preparing computer science students for the robotics revolution. Communications of the ACM, 53(8):27-29.