
WRS Partner Robot Challenge (Virtual Space) 2018

WRS Partner Robot Challenge (Virtual Space) is the world's first competition played in a cyber-physical environment.

1 Introduction

The Partner Robot Challenge (Virtual Space) evaluates how well intelligent robots can communicate naturally and amicably with users and carry out a variety of support behaviors in a daily-life environment. The competition is built on the SIGVerse simulator, which enables robots to engage in embodied and social interaction in a virtual reality environment.

The competition consists of four kinds of tasks. The following sections explain the detailed rules and the software configuration. This document mainly explains the rules and the evaluation method. For the software configuration, the communication protocols used in the software system, and so on, please refer to GitHub. The GitHub repository of this competition is:

https://github.com/partnerrobotchallengevirtual

2 Handyman

2.1 System Configuration

The system configuration for the Handyman task is depicted below:

Figure 2.1 System configuration (Windows PC: Handyman program built in Unity; Ubuntu PC: rosbridge server, SIGVerse rosbridge server, and robot controller on ROS)

The Windows computer runs the Handyman program, which is created in Unity. The Ubuntu computer runs the rosbridge server, the SIGVerse rosbridge server, and the robot controller created by the competition participants. The Handyman program and the robot controller communicate through the standard rosbridge server, while high-volume data (sensor data, etc.) is transmitted through the SIGVerse rosbridge server.

In the Handyman program, the robot moves in accordance with instructions from the robot controller once the human avatar issues a command to the robot. The robot controller sends ROS messages such as Twist and JointTrajectory to the Handyman program to move the robot. The Handyman program publishes JointState, TF, sensor data, and other ROS messages to the robot controller at regular intervals.

2.2 Flow of the Handyman Task

The detailed flow of the Handyman task is described in the GitHub wiki. Please check the following URL:

https://github.com/partnerrobotchallengevirtual/handyman-unity/wiki/SystemOverview

2.3 Time Limits

The time limit for each session is N minutes, and participants have M sessions for the tasks. N and M are announced during the preparation phase. The timer starts when the sensor signals are first distributed to the robot.

2.4 Scoring (Each Task)

Arriving at the instructed room                           +20
Failing to arrive at the instructed room                  -10
Grasping the instructed object                            +50
Failing to grasp the target object                        -10
Grasping the wrong object                                 -10
Achieving the instructed action                           +30
Successfully pointing out the human's error               +50
Failing to point out the human's error                    -10
Collision of the HSR with the environment (each time)     -α (5 < α < 50)
Collision of an object with the environment (each time)   -β (1 < β < 50)

The score is floored at 0 points even if the deductions would result in a negative total. α is the penalty for a collision between the HSR and the environment, such as furniture. β is the penalty for a collision between an object and the environment, such as dropping the object from a high position. α and β are calculated from the impact of each collision and are proportional to the collision velocity.
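For illustration, a minimal participant-side controller could be sketched as follows in Python with rospy. The topic and joint names here are assumptions for illustration only; the actual interface is defined in the GitHub repository.

```python
#!/usr/bin/env python
# Minimal participant-side controller sketch for the Handyman task.
# Topic and joint names are illustrative assumptions, not the official interface.
import rospy
from geometry_msgs.msg import Twist
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint
from sensor_msgs.msg import JointState

def on_joint_state(msg):
    # JointState arrives at a regular interval from the Handyman program.
    rospy.loginfo_throttle(5.0, "joints: %s" % list(msg.name))

def main():
    rospy.init_node("handyman_controller")
    cmd_vel = rospy.Publisher("/hsrb/command_velocity", Twist, queue_size=10)
    arm = rospy.Publisher("/hsrb/arm_trajectory_controller/command",
                          JointTrajectory, queue_size=10)
    rospy.Subscriber("/hsrb/joint_states", JointState, on_joint_state)
    rospy.sleep(1.0)  # let the connections establish

    # Move one arm joint once with a JointTrajectory message.
    traj = JointTrajectory()
    traj.joint_names = ["arm_lift_joint"]           # illustrative joint name
    point = JointTrajectoryPoint()
    point.positions = [0.2]
    point.time_from_start = rospy.Duration(2.0)
    traj.points = [point]
    arm.publish(traj)

    # Drive the base forward slowly with Twist messages at 10 Hz.
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        twist = Twist()
        twist.linear.x = 0.2
        cmd_vel.publish(twist)
        rate.sleep()

if __name__ == "__main__":
    main()
```

The same publish/subscribe pattern applies to the Interactive Cleanup task in Section 3.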

2.5 Instructions to the Robot

Instruction statements are given as natural language expressions in English, described as text (a sequence of ASCII characters). An instruction always includes movement, grasping, and carrying behavior, such as:

Go to the xxx, grasp the YYY, and ZZZ.

where xxx is the name of a room, YYY is the name of an object, and ZZZ is the carrying behavior. The carrying behavior is one of: 1) putting the target object at a certain destination position, 2) discarding the object into a trash can, or 3) handing the object over to the avatar. The names of the objects and furniture are fixed and will be announced seven days before the competition.

The target position may be expressed with the following phrase candidates: "next to the x", "on the x", "in the x", "under the x", "close to the x", where x is the name of an object or a piece of furniture. These phrase candidates may be extended; in that case, the new candidates will be announced before the competition.

2.6 Other Conditions

2.6.1 Handover to the Avatar

The instruction statement may include handing a target object over to the avatar. To achieve the handover action, the robot should extend the arm grasping the target object toward the avatar's chest. The position of the end effector should be inside a sphere whose center is the avatar's chest and whose radius is 30 cm.

2.6.2 Dynamic Changes in the Environment

Another avatar unrelated to the instruction may stand or walk around in the environment, and the avatar who instructed the robot may also walk around after giving the instruction. The robot should distinguish the unrelated avatar. The positions of furniture such as desks, tables, and chairs are fixed; however, the trash can may be located at a different position from the one in the original room layout file, which is announced beforehand.

The target object named in the instruction sometimes does not exist. In that case, the robot should recognize this and declare the non-existence. If the declaration succeeds, an additional score is given and the avatar corrects the instruction.
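As a toy illustration of the instruction template in Section 2.5 (not the official grammar, which the organizers define), the fixed sentence pattern can be matched with a regular expression:

```python
import re

# Toy parser for instructions of the form:
#   "Go to the <room>, grasp the <object> and <carrying behavior>"
# This is an illustrative sketch, not the official grammar.
INSTRUCTION_RE = re.compile(
    r"go to the (?P<room>.+?), grasp the (?P<obj>.+?),? and (?P<carry>.+)",
    re.IGNORECASE,
)

def parse_instruction(text):
    m = INSTRUCTION_RE.match(text.strip())
    if m is None:
        return None
    return {
        "room": m.group("room"),
        "object": m.group("obj"),
        "carry": m.group("carry").rstrip("."),
    }

print(parse_instruction(
    "Go to the kitchen, grasp the apple and put it on the low table."))
# {'room': 'kitchen', 'object': 'apple', 'carry': 'put it on the low table'}
```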
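The handover condition in Section 2.6.1 reduces to a point-in-sphere test. A minimal sketch, assuming the end-effector and chest positions are 3-D points expressed in the same world coordinate frame:

```python
import math

def handover_reached(end_effector, avatar_chest, radius=0.30):
    """True if the end effector lies inside the handover sphere
    centered on the avatar's chest (radius 30 cm, per Section 2.6.1)."""
    dx, dy, dz = (e - a for e, a in zip(end_effector, avatar_chest))
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= radius

print(handover_reached((0.1, 1.2, 0.9), (0.0, 1.2, 1.1)))  # True (~0.22 m)
```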

2.6.3 Screenshot of the Handyman Task

Fundamentally, the camera tracks the robot on the screen, as shown below.

Figure 2.2 Main window

(1) Status information
Status information displays the number of attempts left, the time remaining, a description of the task, and the [Camera] button. Press the [Camera] button to change the perspective of the camera.

(2) Score information
Score information displays the score for each task and the total score for all of the tasks.

(3) Overhead view
The overhead view displays the room from a top-down perspective.

2.7 Room Layout and Furniture

The list of room names used in the challenge is shown in Table 2.1. Figure 2.3 is just an example of a room layout; the actual room layouts will be announced 24 hours before the competition as Unity project files. More than two layouts will be used. Each layout has a name, such as "Mr. A's house", which is included in the announcement.

Table 2.1 Room list
No  Name
1   bed room
2   kitchen
3   living room
4   lobby

Figure 2.3 An example of a room layout (kitchen, bed room, living room, lobby)

The names of the furniture are fixed and will be announced seven days before the competition. An identical label is never used for different types of furniture; for example, the bare label "table" is never used if there are "kitchen table", "wooden table", "side table", and so on. The furniture falls into two categories: 1) furniture exclusive to a room type (e.g., a bed for the "bed room"), which never appears in a different room; and 2) furniture common among rooms (e.g., a "wooden table"), which may appear in any room.

The announcement will include the category of each piece of furniture; however, the corresponding room type for category 1) will not be disclosed.

2.8 Graspable Object List

A list of potential objects to grasp during the challenge is shown below. The set of object names and their 3D models in the Unity environment will be announced seven days before the competition. All of the names used will be opened and announced; however, some objects lack a 3D model in the Unity environment, which means the robot should estimate the appearance of such objects from the label information. The robot can grasp any kind of object if the end effector is placed at an appropriate position and orientation.

Table 2.2 Graspable object list (example)
No  Label         No  Label
1   apple         7   sugar
2   toy_penguin   8   soysauce
3   doll_rabbit   9   sauce
4   doll_bear     10  ketchup
5   doll_dog      11  tumblerglass
6   cannedjuice   12  cup

Figure 2.4 Graspable objects

2.9 Destination List

An example list of the destinations used in the task is shown below. The final label information will be announced N days before the competition; N will be determined as the competition approaches.

Table 2.3 Destination list (example)
No  Label           No  Label
1   lowtable_a      4   trashbox_c01
2   wagon_c02       5   trashbox_c02
3   sidetable_a_1   6   trashbox_c03

Figure 2.5 Destination objects

3 Overview of Interactive Cleanup

3.1 System Configuration

The system configuration for this competitive challenge is outlined below.

Figure 3.1 System configuration (Windows PC: Interactive Cleanup program built in Unity; Ubuntu PC: rosbridge server, SIGVerse rosbridge server, and robot controller on ROS)

The Windows computer runs the Interactive Cleanup program, which is created in Unity. The Ubuntu computer runs the rosbridge server, the SIGVerse rosbridge server, and the robot controller created by the competition participants. The Interactive Cleanup program and the robot controller communicate through the standard rosbridge server, while high-volume data (sensor data, etc.) is transmitted through the SIGVerse rosbridge server.

In the Interactive Cleanup program, the robot moves in accordance with instructions from the robot controller once the human avatar issues the Cleanup command to the robot. The instructions for the Cleanup command are determined from the operations of the human avatar and the messages the avatar sends. The robot controller sends ROS messages such as Twist and JointTrajectory to the Interactive Cleanup program to move the robot. The Interactive Cleanup program publishes JointState, TF, sensor data, and other ROS messages to the robot controller at regular intervals.

3.2 Flow of the Interactive Cleanup Task

The flow of the Interactive Cleanup task is described in the GitHub wiki. Please check the following URL:

https://github.com/partnerrobotchallengevirtual/interactive-cleanup-unity/wiki/SystemOverview

3.3 Time Limits

The time limit for each task is 10 minutes, and participants have 15 attempts for the tasks. Any changes to these values will be announced during the preparation phase. The timer starts when the sensor signals are distributed to the robot.

3.4 Scoring (Each Task)

Successfully grasping the target object                   +50
Failing to grasp the target object                        -10
Successfully cleaning up the target object                +50
Failing to clean up the target object                     -10
Confirmation of object correctness (each time)            -10
Collision of the HSR with the environment (each time)     -α (5 < α < 50)
Collision of an object with the environment (each time)   -β (1 < β < 50)

The score is floored at 0 points even if the deductions would result in a negative total. α is the penalty for a collision between the HSR and the environment, such as furniture. β is the penalty for a collision between an object and the environment, such as dropping the object from a high position. α and β are calculated from the impact of each collision and are proportional to the collision velocity.

3.5 Other

3.5.1 Regarding Avatar Finger Pointing

The finger the human avatar points with is the index finger of either the right or the left hand.

3.5.2 Regarding the Robot and Avatar

The initial positions of the robot and the human avatar are fixed. The height of the human avatar is fixed.

3.5.3 Regarding the Publication of Room Layouts

The room layouts used in the competition will be announced seven days before the competition. The announcement procedure is the same as in the Handyman task.
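Both scoring tables deduct a velocity-dependent penalty per collision. The exact mapping is decided by the organizers; a minimal sketch of one plausible linear, clamped form (the scale factor is an assumption for illustration):

```python
def collision_penalty(impact_speed, lo, hi, scale=20.0):
    """Velocity-proportional collision penalty clamped to (lo, hi).

    The rules only state that the penalty is proportional to the
    collision velocity and bounded (5 < alpha < 50, 1 < beta < 50);
    the scale factor here is an assumption for illustration.
    """
    return max(lo, min(hi, scale * impact_speed))

alpha = collision_penalty(impact_speed=0.8, lo=5, hi=50)   # HSR vs. furniture
beta  = collision_penalty(impact_speed=2.5, lo=1, hi=50)   # dropped object
print(alpha, beta)  # 16.0 50
```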

3.6 Interactive Cleanup Screen During the Competitive Challenge

Fundamentally, the camera tracks the robot on the screen, as shown below.

Figure 3.2 Main window

(1) Status information
Status information displays the number of remaining sessions, a description of the task, and the [Camera] button. Press the [Camera] button to change the perspective of the camera.

(2) Score information
Score information displays the score for each task and the total score for all of the tasks.

(3) Overhead view
The overhead view displays the room from a top-down perspective.

(4) Avatar view
The avatar view displays the human avatar.

3.7 Room

An example of a room used in the competition is shown below.

Figure 3.3 An example of the room for Interactive Cleanup

3.8 Graspable Object List

A list of potential objects to grasp during the challenge is shown below.

Table 3.1 Graspable object list (example)
No  Label         No  Label
1   apple         7   sugar
2   toy_penguin   8   soysauce
3   doll_rabbit   9   sauce
4   doll_bear     10  ketchup
5   doll_dog      11  tumblerglass
6   cannedjuice   12  cup

Figure 3.4 Graspable objects

3.9 Destination List

An example list of the destinations used in the task is shown below. The final label information will be announced N days before the competition; N will be determined as the competition approaches.

Table 3.2 Destination list (example)
No  Label           No  Label
1   lowtable_a      4   trashbox_c01
2   wagon_c02       5   trashbox_c02
3   sidetable_a_1   6   trashbox_c03

Figure 3.5 Destination objects


4 Human Navigation

4.1 Rules

[Overview]

The purpose of the Human Navigation task is to generate natural language instruction statements that guide a person naturally through several tasks. In other words, the roles of the robot and the human are the opposite of those in the Handyman task. The robot is required to generate natural language instructions for carrying a certain target object to a certain destination, such as "Please bring the cup on the table in front of you to the second drawer of the kitchen." A person (test subject) logs into the avatar in VR, follows the instructions, and goes to take the object. The time required to complete the object manipulation is measured and used to calculate the score. The team that generates the easiest and most natural instructions for a person should obtain a higher score.

A reference video for the Human Navigation task is available from:

https://drive.google.com/file/d/0bxjnl2prt1f_wfpfmg1vzufwy0u/view?usp=sharing

[System Configuration for the Human Navigation Task]

The computers and programs are connected in the configuration shown below:

Figure 4.1: System configuration for the competitive challenge (Windows computer, prepared by the executive committee: Unity program with the avatar and moderator, Oculus Rift & Touch for the demonstrator, and voice output via the Microsoft Speech API (SAPI); Linux computer, prepared by the team: ROSBridge server and robot controller on ROS. Messages exchanged include Are_you_ready?, I_am_ready, task_info, Task_succeeded / Task_failed, Task_finished, Go_to_next_trial, Guidance_request, guidance_message, and Mission_complete.)

[Flow of the Human Navigation Task]

The flow of the Human Navigation task is described in the GitHub wiki. Please check the following URL:

https://github.com/partnerrobotchallengevirtual/human-navigation-unity/wiki/SystemOverview
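For illustration, the participant side of this handshake could be sketched as below, assuming the moderator messages travel as std_msgs/String; the topic names /human_navigation/message/to_robot and /human_navigation/message/to_human are hypothetical placeholders, not the official interface.

```python
#!/usr/bin/env python
# Sketch of the Human Navigation handshake (topic names are hypothetical).
import rospy
from std_msgs.msg import String

def on_message(msg, pub):
    if msg.data == "Are_you_ready?":
        pub.publish(String(data="I_am_ready"))  # accept the session
    elif msg.data == "Guidance_request":
        # Reply with a guidance_message (plain text in this sketch).
        pub.publish(String(data="Please go to the kitchen."))

def main():
    rospy.init_node("human_navigation_controller")
    pub = rospy.Publisher("/human_navigation/message/to_human",
                          String, queue_size=10)
    rospy.Subscriber("/human_navigation/message/to_robot",
                     String, on_message, callback_args=pub)
    rospy.spin()

if __name__ == "__main__":
    main()
```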

[Regarding the Environment for the Competitive Challenge]

A sample environment is available from GitHub; an example is shown in Figure 4.2. The environment to be used during the competitive challenge is scheduled for release N days before the competition; N will be announced during the preparation phase. The objects, which consist of the target candidates, unrelated objects, and furniture, are included in the environment and are provided as a Unity project file. The room layout information is provided as a Unity executable file.

The system provides several room layouts, which consist of 1) an easy mode, opened M days before the competition, and 2) a hard mode, never opened before the competition. The system sends a room layout ID when the session starts. M will be announced during the preparation phase.

The preparation of multiple types of environments is planned. The environment for the competitive challenge includes furniture not provided in the sample, and changes such as the quantity of objects may also be made.

Figure 4.2: An example environment for the competitive challenge

Figure 4.3: An example of the restraining area for HSR movement. The area is surrounded by blocks; the HSR must stay inside the restraining area.

[Objects to Manipulate]

The 3D models of the objects to grasp and their labels (IDs) will be released in advance. The positions of the objects placed in the environment for the competitive challenge will be unknown. The position, quantity, and type of objects change for each attempt, and the same object may be placed in the environment more than once. The position of each object in the environment is sent to the robot at the start of the task in the task_info message. However, this message does not include information such as in or on which piece of furniture an object is located. The object names used in task_info correspond to the label information in the label list, which will be announced before the competition.

[Protocol of the Task_info Message]

Task_info includes the information below, delivered as ROS messages:

environment_id: room layout ID, which defines the locations of the furniture.
target_object: object label and position of the target object.
objects_info: set of object labels and positions of all objects except the target object.
destination: center position of the designated area in which to place the target object.
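A sketch of consuming this message is shown below. The message package human_navigation.msg and its field layout are assumptions for illustration; the real definition is published in the GitHub repository.

```python
#!/usr/bin/env python
# Sketch of consuming task_info (the message package, topic name, and
# field types are assumptions for illustration).
import rospy
from human_navigation.msg import TaskInfo  # hypothetical message definition

def on_task_info(msg):
    rospy.loginfo("layout: %s" % msg.environment_id)
    rospy.loginfo("target: %s at (%.2f, %.2f, %.2f)" % (
        msg.target_object.name,
        msg.target_object.position.x,
        msg.target_object.position.y,
        msg.target_object.position.z))
    # objects_info lists every non-target object label and position;
    # destination is the center of the area where the object must go.
    rospy.loginfo("%d other objects, destination (%.2f, %.2f, %.2f)" % (
        len(msg.objects_info),
        msg.destination.x, msg.destination.y, msg.destination.z))

rospy.init_node("task_info_listener")
rospy.Subscriber("/human_navigation/task_info", TaskInfo, on_task_info)
rospy.spin()
```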

[Regarding Robot Instructions]

The natural language instructions are provided to the test subject verbally (using SAPI) and visually (using a message board effect in Unity). The number of characters used in an instruction must not exceed 400.

The number of instructions from the robot is limited: if the number of instructions exceeds 15, a penalty is applied. The robot can use gesture instructions to point at the target destination; however, the robot stays inside a restraining area such as the one shown in Figure 4.3. The area is separated from the task area in which the avatar acts, and the borderline is marked by several blocks. If the robot collides with the blocks, a penalty is applied.

The test subject can request a natural language instruction at any time by using a button on the Oculus Touch. The robot can give natural language instructions at any time.

[Regarding the Test Subjects]

The test subject cannot ask the robot any questions. The test subjects are volunteers who have no conflict of interest in the competition and are required to have no prior knowledge of the environment. Test subjects learn the operational procedures to move, grasp objects, and open and close doors and drawers in a test environment in advance. The starting position of the test subject is set at a place from which they cannot look over the environment.

[Regarding Devices]

The competitive challenge uses the Oculus Rift & Touch.

[Regarding the Session Execution Procedure]

Multiple teams compete in sequence, so the test subjects cannot learn the instructions in advance from other teams' attempts. The organizer starts the timer when the test subject is ready for the session; a start message is also sent to the competitor's software module.

[Voice Output to Demonstrators]

The voice generated by SAPI is output to the demonstrators. The voices of other teams cannot be heard by the demonstrators. SAPI runs on the computer prepared by the competition committee.

[Time Limits for the Competitive Challenge]

90 minutes in total: (5 minutes per task x 12 tasks) + 30 minutes for explanation and practice for the test subjects.

[Regarding Scoring]

Action                                                               Score
Grasping the target object                                           +20
Time bonus until the demonstrator grasps the target object           +30 x (150 - required_time [s]) / 150 [s]
Grasping the wrong object (each time, until the target is grasped)   -5
Placing the target object in/on the destination                      +20
Time bonus from the grasp of the target object until the
demonstrator places it in/on the destination                         +30 x (150 - required_time [s]) / 150 [s]
Penalty for each optional instruction beyond the maximum number      -3
Collision of the robot (each time)                                   -10

Highest number of points: 100

Test subjects will understand the scoring in advance.
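Both time-dependent rows use the same linear decay, 30 x (150 - required_time) / 150. A minimal sketch of this bonus (clamping at zero for times beyond 150 s is an assumption; the rules do not state the behavior past the limit):

```python
def time_bonus(required_time_s, limit_s=150.0, max_points=30.0):
    """Linear time bonus: 30 points at t = 0, falling to 0 at t = 150 s.

    Clamping below zero is an assumption for times beyond the limit.
    """
    return max(0.0, max_points * (limit_s - required_time_s) / limit_s)

print(time_bonus(60.0))   # 18.0
print(time_bonus(150.0))  # 0.0
```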

5 Final

[Time Limits for the Competitive Challenge]

15 minutes per team x n teams + α.

[Flow of the Competitive Challenge]

10 min: preparations for the demonstration presentation
Demonstration presentation
5 min: questions and answers
Take-down of the demonstration

[Note]

This competitive challenge does not take deductions for restricted items or for the use of special functions.

[Scoring]

A judging panel with expert knowledge awards points based on the evaluation criteria below:

Creativity/presentation of the story
Effectiveness of the interaction between the person and the robot
Diversity/universality of system integration
Difficulty/completeness of the performance
Relevance/practicality in daily life