Turtlebot Laser Tag

Turtlebot Laser Tag was a collaborative project between Team 1 and Team 7 to create an interactive and autonomous game of laser tag. Turtlebots communicated through a central ROS server and used the Kinect to track opponents and emulate shooting.

Jason Grant, Joe Thompson
{jgrant3, jthomp11}@nd.edu
University of Notre Dame
Notre Dame, IN 46556

Artists: Kayla Wolter, Chelsea Young
Saint Mary's College
Notre Dame, IN 46556

I. Executive Summary

This project pits two robots against each other in a spectator-thrilling laser tag fight to the death (or battery discharge). The robots compete in an arena filled with obstacles. These obstacles will be designed by the artistic teams, along with an avatar for each of the Turtlebots, while robot control and game management will be handled by the engineers. The two aspects of the project were envisioned to be reasonably separable to allow for remote development of each solution. The robots will be designed to be robust to the environments presented by the artists.

II. Introduction

Why Laser Tag?

Collaboratively, Teams 1 and 7 decided to simulate the game of laser tag for our final project. Laser tag was chosen for several reasons. First, laser tag fits within the constraints of the Turtlebot: Turtlebots do two things well, move and think about moving. Second, laser tag makes use of the available sensors on the robots (vision/3D sensing and cliff/bump sensors), and the concept of shooting is easily simulated with a vision system and proper sound effects. We believed that laser tag would be fun for kids to watch and play, and that the game was conceptually easy to follow.

Goals

Our goal was to have multiple robots autonomously play a game of laser tag in a constrained environment. This included moving quickly through the environment while avoiding obstacles, tracking the opponent when it was discovered, shooting the opponent when in range, and scattering across the map after being hit. Furthermore, each team was responsible for developing its own algorithm for independent gameplay.

Game Rules

The game will have a predefined time limit during which each robot will search for and shoot its opponent. Robots are required to obey the referee (the central ROS server), which determines hits and misses for both teams. After a robot is hit by the opposition, it is responsible for exhibiting a hit behavior (shaking, spinning, etc.). At this time, a ceasefire begins and lasts for 20 seconds; during this time, the robots scatter and no shots may be fired. When the ceasefire concludes, the game resumes as normal. Teams score one point for a hit and lose a point after every 3 misses.

Artistic Aspect

The robot teams will take on the appearance of either a dinosaur or a robot. The robots interacted with each other in a space modeled after a city-skyline battlefield. We created objects in the environment that served both as obstacles for the robots to use over the course of battle and as a fusion of past and futuristic themes. The objects for the course are portable and lightweight, as Pepakura was used to construct these forms. For the surface design, we used a latex covering on the cardboard to create a more durable form. The forms will also be weighted down using plaster and wood.

The viewers will be able to control the robots, thus engaging in a battle of past versus future taking place in the present.

III. Methodology

Collaborative Material

A central server was needed to enforce the game rules, determine hits and misses from the shooting solutions provided by the competitors, and communicate the game state to each of the robots. A separate ROS node, called the gamemaster, was created for this purpose; it ran on its own dedicated system and communicated with each robot over a wireless network using a custom ROS message.

As stated, the chief responsibility of this node was to manage the game state and enforce the rules, and in practice this came down to accurate management of the game state. By tracking the state accurately, the gamemaster could leave it to each of the robots to follow the basic rules. The game state consisted of the current mode, hunt or scatter, plus the number of hits and the number of misses registered by each team. From updates of this state, the robots could deduce whether a given shot was registered as a hit or a miss and which behavior they should currently be following. Loose rule enforcement was implemented by ignoring shots fired during the scatter state and by penalizing robots that missed very often; the latter encouraged the control algorithms to shoot only when they could reasonably expect a hit.

Determining hits and misses was also a critical part of the gamemaster. In order to hit an opponent, a competitor sends a shooting solution to the gamemaster. This solution is composed of the image-coordinate center of the blob corresponding to the target on the opponent, together with the depth at that point in the image. This information is combined in the server to determine a hit probability, which falls off as the opponent gets farther away or more off center. Thus, it would be very unlikely to hit an opponent in the corner of the RGB image that is 3.5 meters away.

The gamemaster communicated score information to a scoreboard database application whose only purpose was to track the score for each team. This data was easily queried by a PHP web page and displayed for the observers. An example display is shown in Figure 1.

Figure 1. A scoreboard depicting Team 7 dominating a game of laser tag.
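The report does not give the exact falloff function, so the following Python sketch is only a hypothetical reconstruction of the gamemaster's hit test: the constants, the linear falloff, and the names (hit_probability, resolve_shot, IMAGE_WIDTH) are our assumptions, not the team's actual implementation.

    import random

    # A shooting solution is the image-coordinate center of the target
    # blob plus the Kinect depth at that point (see above).
    IMAGE_WIDTH = 640     # Kinect RGB resolution (assumed)
    MAX_RANGE_M = 3.5     # beyond this range, a hit is very unlikely

    def hit_probability(blob_x, depth_m):
        """Probability falls off with distance and with off-center aim."""
        center_offset = abs(blob_x - IMAGE_WIDTH / 2) / (IMAGE_WIDTH / 2)  # 0..1
        range_penalty = min(depth_m / MAX_RANGE_M, 1.0)                    # 0..1
        return max((1.0 - center_offset) * (1.0 - range_penalty), 0.0)

    def resolve_shot(blob_x, depth_m, state):
        """Ignore shots during the scatter ceasefire; otherwise roll for a hit."""
        if state == 'scatter':
            return False
        return random.random() < hit_probability(blob_x, depth_m)

Under this sketch, a dead-center shot at point-blank range is a near-certain hit, while a shot at the image corner from 3.5 meters is a near-certain miss, matching the behavior described above.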

Independent Algorithm Development

Each group was tasked with developing an algorithm to control a single competitor in the game of robot laser tag. These algorithms were developed without collaboration between the groups.

Separation of Perception / Control

To facilitate modular development, we decided to separate the perception functionality from the movement control functionality. This allowed all of the code for perceiving the environment to be contained within its own nodelet, which would then generate high-level messages for the movement controller to use. By doing this, we could develop the controller independently of the perception code, which allowed for easier debugging and more maintainable code.

The perception nodelet is responsible for handling the sensor input from the environment after preprocessing by ROS. The ROS system processes the depth disparity data coming from the Kinect sensor using the Point Cloud Library (PCL) and passes the resulting three-dimensional point cloud to the perception nodelet. The RGB data from the Kinect is first processed by CMVision, and color blob information is sent to the nodelet. The nodelet is responsible for combining these two sensor sources into meaningful information for use by the movement controller. This is done by linking the real-world three-dimensional position data with the color blob data. The linking is possible because the OpenNI drivers provide a flag to automatically register the depth camera with the RGB camera. This process is outlined in Figure 2.

Figure 2. Flow from the environment to the perception nodelet to the custom messages.
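Because registration aligns the depth image pixel-for-pixel with the RGB image, linking a CMVision blob to a 3-D position essentially reduces to indexing the registered point cloud at the blob centroid. A minimal sketch of that lookup, assuming an image-shaped point-cloud array (the function and variable names are ours, not the team's):

    import numpy as np

    def locate_blob(cloud_xyz, blob_cx, blob_cy):
        """Look up the 3-D point behind a color blob centroid.

        cloud_xyz: H x W x 3 array of registered (x, y, z) points in
        meters, with NaN wherever the Kinect returned no depth.
        """
        point = cloud_xyz[int(blob_cy), int(blob_cx)]
        if np.any(np.isnan(point)):
            return None    # no depth reading at the centroid
        return point       # (x, y, z) in the camera frame

    # e.g., an opponent blob centered at pixel (320, 240):
    # xyz = locate_blob(cloud_xyz, 320, 240)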

Custom Messages

As stated in the previous section, our perception nodelet returned four custom messages: obstacle, collision, opponent, and target. Obstacle messages were used in our medium- to long-range planning; such a message indicated that an object was approximately 1-3 meters ahead of us and within half a meter to the right or left, allowing us to prepare to avoid the obstacle. Collision messages required immediate attention, either by stopping, backing up, or turning to avoid an obstacle less than 1 meter ahead. Opponent messages were sent when the avatar of the opponent was spotted; after receiving this message, the Turtlebot could follow its opponent around the map. The last message was the target message, which contained the blob information needed to align with and shoot the target (Figure 3).

Figure 3. Custom messages from the perception nodelet.
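The actual ROS message definitions are not reproduced in the report. Purely as an illustration of the payloads described above, the four messages might carry fields like the following; every field name here is a guess:

    from dataclasses import dataclass

    # Plausible payloads for the four perception messages. In the
    # actual system these were custom ROS messages; the fields below
    # are illustrative guesses, not the team's definitions.

    @dataclass
    class Obstacle:        # object roughly 1-3 m ahead, within 0.5 m laterally
        distance_m: float
        offset_m: float    # negative = left of center, positive = right

    @dataclass
    class Collision:       # object less than 1 m ahead: react immediately
        distance_m: float

    @dataclass
    class Opponent:        # opponent avatar spotted: usable for following
        bearing_rad: float
        distance_m: float

    @dataclass
    class Target:          # shootable blob: image center plus depth
        blob_x: int
        blob_y: int
        depth_m: float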

State Machine

A high-level state machine controlled the goals and shot firing of the robot, as shown in Figure 4. At all times, the robot is in either the hunt or the scatter state; this state is communicated to the robot by the central gamemaster. If the robot is in the hunt state, it will attempt to follow the opponent in order to shoot it at some point in the future. If the robot is in the scatter state, it will flee by turning away whenever the opponent is found. If the target is located and the game state is hunt, the robot will fire a shot. At any point in the game, if the hit and miss counts from the gamemaster indicate that the robot has been hit by a shot, the robot will perform a predefined hit behavior and then resume its current state. Notice that this state machine does not control movement at all; it only manages high-level goals, which are communicated to the actual controller as discussed in the next section.

Figure 4. Goal management state machine.
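The goal manager can be pictured as a small two-state machine driven by gamemaster updates. The sketch below uses assumed names and captures only the transitions described above; it is not the team's code:

    def update_goal(state, was_hit, opponent_visible, target_visible):
        """Return (goal, fire) for one control cycle.

        state: 'hunt' or 'scatter', as broadcast by the gamemaster.
        """
        if was_hit:
            return ('hit_behavior', False)         # shake/spin, then resume
        if state == 'hunt':
            if target_visible:
                return ('follow_opponent', True)   # aligned: take the shot
            if opponent_visible:
                return ('follow_opponent', False)
            return ('search', False)
        # scatter: ceasefire in effect, flee if the opponent is found
        if opponent_visible:
            return ('flee', False)
        return ('search', False)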

Movement Controller

The movement controller was developed as a reactive controller in that different high-level sensory inputs are linked directly to motor control. The exact nature of this linkage can change depending on the robot's state. For example, in the scatter state, opponent messages cause the robot to turn away in an attempt to flee, whereas in the hunt state the robot attempts to follow the opponent based upon information coming from the opponent messages. Regardless of the state, the forces coming from opponent sensing are combined with forces generated by the other types of messages, and movement occurs.

The combination of the various movement forces occurs through the action of a movement arbitrator. This function accepts a variety of desired movements generated by reactions to the various sensory messages and outputs a single movement command to the robot's drivers. Each desired movement sent to the arbitrator has a weight associated with it. These weights are used by the arbitrator to create a normalized weighted average of all of the desired movements input in a single turn. This simple mechanism proves to be quite effective and allows the relative importance of each reactive function to be set at the function level; the arbitrator need not know the source of each desired movement. With this established, the controller is just a set of functions generating movement commands that all feed into the arbitrator, and the arbitrator is the only entity in the program that can give movement commands to the robot. The reactive functions are then given relative importance, which is coded by adjusting the weights of the movements input to the arbitrator.

Our movements were based on the following desired behavior. As the weakest goal, the robot seeks obstacles that could be used for cover: if an obstacle is detected, the robot will attempt to approach the object in an effort to hide. This is given a low weight compared to the other reactions. As the next weakest goal, the robot either hunts the opponent by following it (hunt mode) or turns away from it (scatter mode); movements generated by this reaction are given a stronger weight than the obstacle reaction, which allows the hunting or scattering behavior to subsume control over the obstacle-seeking behavior. The strongest reactions are those that avoid collisions: these are triggered by collision events and result in a command with a very high weight being sent to the arbitrator. The default behavior, if no reaction occurs, is to move forward. The process is outlined in Figure 5.

Figure 5. Movement controller arbitration layout.

In summary, the reaction arbitration controller creates the following behavior:

1. The robot moves forward.
2. If an obstacle is seen, the robot attempts to hide by it.
3. If the opponent is seen, the robot attempts to follow it or flee from it, depending on the game state.
4. If a collision is imminent, the robot addresses it immediately.

Finally, if at any time the robot can legally make a shot at a target within range, the controller will do so.
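The normalized weighted average the arbitrator computes is straightforward. A minimal sketch follows, with our own illustrative weights and speeds standing in for the team's tuning:

    def arbitrate(requests):
        """Blend weighted movement requests into one command.

        requests: list of (weight, linear_mps, angular_radps) tuples,
        one per reactive function that fired this cycle.
        """
        if not requests:
            return (0.2, 0.0)    # default: move forward (speed assumed)
        total = sum(w for w, _, _ in requests)
        linear = sum(w * v for w, v, _ in requests) / total
        angular = sum(w * a for w, _, a in requests) / total
        return (linear, angular)

    # Example cycle: a weak pull toward cover, a stronger pull toward
    # the opponent, and an overriding collision-avoidance turn.
    cmd = arbitrate([
        (1.0, 0.3, 0.2),     # seek obstacle for cover (lowest weight)
        (3.0, 0.3, -0.4),    # hunt/flee reaction to the opponent
        (10.0, -0.1, 0.8),   # imminent collision: back up and turn
    ])

Because the blend is normalized, each reactive function only needs to know its own importance; as noted above, the arbitrator never needs to know the source of a request.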

IV. Results

Our demonstration at Robotics Day turned out to be quite successful. We were able to achieve both remotely controlled and autonomous gameplay. Attendees enjoyed the interactive aspect, and eventually lines formed to play the game. Nevertheless, there were several issues that we faced while our robots were deployed.

The largest issue we dealt with was poor WiFi coverage in the Joyce Center. When first proposing the project, we expected that WiFi might be an issue in the arena, and our initial solution was to bring our own wireless router on the day of the event. Unfortunately, this did not help much; we believe that the size of the arena and the unavailability of an open channel caused the poor coverage. Because of the poor WiFi coverage, the Turtlebots often lost their connection to the ROS master node. When this happened, the Turtlebots were no longer able to register new commands and continued to loop the last issued command. When our robot navigated in autonomous mode, its default motion was to move forward, and the loss of connection to the master node prevented it from registering bumps on the front bump sensor or seeing objects ahead. This caused the Turtlebot to run into objects and into the walls. The WiFi issue became worse when users controlled the robots with the PlayStation 3 controller: when the connection failed, the Turtlebots became unresponsive to the controller, which discouraged the user.

Some of the artistic aspects also caused issues in gameplay. The artwork that adorned the robots presented a challenge while the game was played interactively. The avatars were able to spin freely on top of the robots, which at times disoriented the user: gamers were not sure which side of the robot was the front. In one instance, a player continually drove in the wrong direction and eventually drove through a wall. The color used for the ballerina robot was also the same color used for several buildings; because of this, Team 1 did not have the option of tracking that robot throughout the arena. The material used for some of the buildings was reflective, which caused several issues for the Kinect. Most noticeably, a team could see its own shooting target in a reflection and then proceed to shoot itself.

We, like most other teams, experienced battery issues, though we did not expect our robots to last through the entire event. Robot laser tag operated for about 2 hours before the batteries died; we were able to change batteries and operate for about another 2 hours. Total downtime was close to one hour.

V. Discussion and Future Work

Our previous lab work had adequately prepared us for this project, and we felt that this project was simply an extension of our previous work, almost like a final lab assignment. In a previous lab, we had developed the system that separates perception from control, so we only had to develop the movement controller. Very little modification was needed to our previous perception nodelet except to extract the specific information needed for our new messages. This allowed us to focus our attention on communicating with the central gamemaster, controlling our robot with the reactive controller, and figuring out how to get two robots operating on the same ROS master.

This last point represented the bulk of our development problems. It was not discussed previously because it is viewed as basic robot startup protocol required before any other task, and was taken as a very basic requirement for the project. Theoretically, it should have been easy: the two robots must not listen to the same topics for movement and sensor information, so those topic names need to be changed. However, because Turtlebots were developed with ease of use in mind, the developers created a number of scripts to bring up the robots automatically. This ended up hiding many of the finer details of Turtlebot operation, which we had to work out by reading the scripts. Nevertheless, we eventually configured the robots to listen and publish on their own ROS topics so that their communications would not overlap.

We feel that an idea of this nature could be incorporated into labs as future work. Specifically, the use of multiple Turtlebots on a single master would be an excellent concept to cover and could lead into a discussion of multi-robot systems. Because of the configuration needed to get this working, the lab could also introduce the lower-level concepts of ROS as well as the procedures necessary for starting the robots. The tasks that the multi-robot system performs need not be complex, as the takeaways from such a lab are the ideas needed to get a system like this working.
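As a closing illustration of the configuration such a lab would cover: the essence of the fix is pushing each robot's nodes into a separate namespace so that relative topic names no longer collide. The sketch below is a hypothetical minimal version using rospy; the real Turtlebot bringup wraps this in several layers of scripts, and the node and topic names here are illustrative only.

    import rospy
    from geometry_msgs.msg import Twist

    # Started as:  rosrun laser_tag controller.py __ns:=robot1   (or robot2)
    # The relative topic name below then resolves to /robot1/cmd_vel,
    # so two robots can share one ROS master without overlapping topics.
    rospy.init_node('laser_tag_controller')
    cmd_pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)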