CAMBADA 2015: Team Description Paper

B. Cunha, A. J. R. Neves, P. Dias, J. L. Azevedo, N. Lau, R. Dias, F. Amaral, E. Pedrosa, A. Pereira, J. Silva, J. Cunha and A. Trifan

Intelligent Robotics and Intelligent Systems Lab, IEETA/DETI, University of Aveiro, Portugal

Abstract. This paper describes the CAMBADA Middle Size robotic soccer team for the purpose of qualification to RoboCup 2015. During the last year, improvements have been made to a significant number of components of the robots. The most important changes include the ongoing implementation of a new platform, aerial ball detection and a new goalkeeper behavior, improvements in world modeling and sensor fusion, the development of a model for ball control using the robot's body, and several improvements in high-level coordination and control. The latter comprise a new model for the software agent based on utilities, which includes the use of setplays, adaptive strategic positioning, passes and learning for behavior development.

1 Introduction

CAMBADA (an acronym for Cooperative Autonomous Mobile robots with Advanced Distributed Architecture) is the RoboCup Middle Size League (MSL) soccer team of the University of Aveiro, Portugal. The project involves people working in several areas, contributing to the development of all components of the robot, from hardware to software. The development of the team started in 2003 and steady progress has been observed since then. CAMBADA has participated in several national and international competitions, including the RoboCup world championships (5th place in 2007, 1st in 2008, 3rd in 2009, 2010, 2011, 2013 and 2014), the European RoboLudens, the German Open (2nd in 2010), the Dutch Open (3rd place in 2012) and the annual Portuguese Robotics Open (3rd place in 2006, 1st in 2007, 2008, 2009, 2010, 2011 and 2012, and 2nd in 2013 and 2014). Moreover, the CAMBADA team achieved excellent results in the technical challenge of the RoboCup MSL: 2nd place in 2008 and 2014, and 1st place in 2009, 2012 and 2013. A 3rd place in 2013, a 2nd place in 2012 and 1st places in 2011 and 2014 in the RoboCup Scientific Challenge were also achieved.

The general architecture of the CAMBADA robots has been described in [1, 2]. Basically, the robots follow a biomorphic paradigm, each being centered on a main processing unit (a laptop), which is responsible for high-level behavior coordination, i.e. the coordination layer.

This main processing unit handles external communication with the other robots and has high-bandwidth sensors, typically vision, directly attached to it. Finally, this unit receives low-bandwidth sensing information and sends actuating commands to control the robot attitude by means of a distributed low-level sensing/actuating system.

This paper describes the current development stage of the team and is organized as follows: Section 2 describes the recent improvements of the hardware. Section 3 presents the work done in the last year regarding 3D detection of aerial balls. Section 4 addresses world modeling and the control of the robots. Section 5 describes the high-level coordination and control framework and, finally, Section 6 concludes the paper.

2 New Platform

During the ongoing year, the construction of a new platform was finished. It reuses the model and functionalities that have proven efficient in the previous platform and introduces changes in the aspects that required a new approach, namely the ability to move at a top speed above 3 m/s and the ability to actively control the ball in a more efficient way. The main issues addressed in the new platform can be summarized as follows:

- New, custom-made omni-directional wheels, based on an aluminum three-piece sandwich structure (see details in the mechanical drawings) supporting two sets of 12 off-phase free rollers.
- A new geometric solution with an asymmetrical hexagonal shape, to exploit side-dribbling possibilities.
- A new power transmission system, based on synchronous belts and sprockets. This allowed the team to reuse the current Maxon 150 W DC motors, with power transmitted to the wheels by a synchronous belt system instead of the old direct-drive approach. New motor control boards were also developed in order to improve the control of the motors in this new configuration.
- A new ball handling mechanism. This mechanism is based on a double active handler, similar to solutions already presented by other teams, but uses omni wheels to increase the ability to model and control the forces applied to the ball. The direction and speed of the ball-interface rollers are closed-loop controlled in order to ensure full compliance with the current ball handling rules (see the sketch after this list).
- A new kicker device with improved efficiency and better force and aim control over the ball.
- A new vision support system. The previous structure used to support the catadioptric mirror/camera set proved to be mechanically weak. The new solution adopts a much stronger structure and resorts to titanium bars to interconnect the catadioptric set.
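As a purely illustrative sketch of the kind of closed-loop regulation applied to the ball-interface rollers, the following Python snippet implements a generic PI speed loop. The gains, sampling period and interfaces are assumptions for illustration and do not reflect the actual CAMBADA motor control firmware.

```python
# Minimal sketch of a PI speed loop for one ball-interface roller.
# Gains, limits and the measurement/command interfaces are illustrative
# assumptions, not the actual CAMBADA motor control firmware.

class RollerSpeedPI:
    def __init__(self, kp=0.8, ki=4.0, dt=0.005, u_max=1.0):
        self.kp, self.ki, self.dt, self.u_max = kp, ki, dt, u_max
        self.integral = 0.0

    def update(self, speed_setpoint, speed_measured):
        """Return a normalized motor command in [-u_max, u_max]."""
        error = speed_setpoint - speed_measured
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        # Clamp the output and apply simple anti-windup.
        if abs(u) > self.u_max:
            u = max(-self.u_max, min(self.u_max, u))
            self.integral -= error * self.dt
        return u

# Example: regulate both rollers so the ball is pulled towards the robot.
left = RollerSpeedPI()
right = RollerSpeedPI()
cmd_left = left.update(speed_setpoint=2.0, speed_measured=1.6)   # rad/s, hypothetical
cmd_right = right.update(speed_setpoint=2.0, speed_measured=2.3)
```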

3 3D Aerial Ball Detection

The current vision system of the CAMBADA robots is based on the omnidirectional setup described in [3]. The vision system has undergone several improvements in the last years: an algorithm for the self-calibration of the colormetric parameters of a digital camera has been presented [4] and a computer vision library for color object detection has been implemented [5]. For this year's competition, we introduce an algorithm for the 3D detection of aerial balls using a Kinect sensor. For this purpose, a Kinect camera has been added to the platform of the robotic goalkeeper as an additional vision sensor. The pipeline of the vision system of our goalkeeper is presented in Fig. 1.

Fig. 1. Vision system pipeline of the CAMBADA goalkeeper.

The first step of the algorithm is the detection of blobs of the ball color, using the UAVision library [6]. After performing a color segmentation of the input image using a look-up table (LUT), we apply a filter based on depth information in order to remove the color classification of objects that lie outside the soccer field. Scanlines are used to search for pixels of the color of interest (the color of the soccer ball). When scanning the image, the relevant information found is saved using a run-length encoding approach. The run-length information is then used to form blobs, or clusters, of the color of interest. These blobs have to pass a validation process in order to establish whether a given blob is a ball. The validation procedure is based on calculating different features for each of the found blobs, such as the bounding-box area, the circularity and the width-height relation.

The depth information from the Kinect sensor is used to discard the color of objects found farther than a certain distance (in this case, 7 m). This complements the previous step by filtering possible objects of the ball color found outside the field. As stated before, this step is applied after the color classification. A calibration between the RGB and depth images provided by the sensor has to be performed.
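The blob validation and depth filtering steps described above can be summarized by the following Python sketch. The thresholds and the blob data structure are illustrative assumptions and are not taken from the UAVision implementation.

```python
# Illustrative sketch of the blob validation and depth filtering steps.
# Thresholds and the blob representation are assumptions for illustration,
# not the actual UAVision implementation.
import math
from dataclasses import dataclass

@dataclass
class Blob:
    x: int            # bounding-box top-left column (pixels)
    y: int            # bounding-box top-left row (pixels)
    width: int
    height: int
    area: int         # number of ball-colored pixels in the blob
    perimeter: int    # length of the blob contour (pixels)
    depth_m: float    # median depth of the blob from the Kinect (meters)

MAX_BALL_DEPTH_M = 7.0   # discard ball-colored objects beyond the field

def is_ball_candidate(b: Blob) -> bool:
    if b.depth_m > MAX_BALL_DEPTH_M:
        return False                      # outside the region of interest
    box_area = b.width * b.height
    if box_area == 0 or b.area / box_area < 0.5:
        return False                      # blob does not fill its bounding box
    ratio = b.width / b.height
    if not 0.75 <= ratio <= 1.33:
        return False                      # a ball projects to a roughly square box
    # Circularity: 4*pi*area / perimeter^2 is 1.0 for a perfect disc.
    circularity = 4.0 * math.pi * b.area / (b.perimeter ** 2)
    return circularity > 0.6

balls = [b for b in [Blob(120, 80, 22, 20, 350, 70, 4.2)] if is_ball_candidate(b)]
```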

Fig. 2. On the left, the original classified image. In the center, the color classified image obtained after filtering. On the right, the original image with the ball correctly detected.

The default parameters available in the ROS package for Kinect calibration (http://wiki.ros.org/kinect_camera) have been used for this RGB-depth calibration. A result of the algorithm is presented in Fig. 2.

For the calibration of the Kinect sensor relative to its position on the robot, the algorithm presented in [7] has been used. An application based on this algorithm (see Fig. 3(a)) acquires an image from the Kinect on demand and then allows the user to pick some points on the 3D point cloud. The chosen points correspond to points in the world whose positions relative to the robot are known by the user. The software then computes the rigid body transform between the two coordinate systems, i.e. the position and orientation of the Kinect relative to the origin of the robot coordinate system.

Fig. 3. On the left, the application for Kinect position calibration, with three reference points on the Kinect point cloud and the corresponding coordinates in the robot coordinate system [7]. On the right, the calculated trajectory of the ball: the positions of the balls used for the computation are presented as large red spheres, the estimated parabolic trajectory is represented by the small blue spheres, and the magenta spheres represent the projection of the detected balls on the ground.
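A common way to recover such a rigid body transform from a handful of picked point correspondences is a least-squares fit via SVD (the Kabsch method). The sketch below illustrates this generic approach; the point values are made up, and this is not the implementation of [7].

```python
# Sketch of rigid-transform estimation (rotation R, translation t) from
# point correspondences via SVD (Kabsch method). Illustrative only; the
# point values are hypothetical and this is not the implementation of [7].
import numpy as np

def fit_rigid_transform(p_kinect: np.ndarray, p_robot: np.ndarray):
    """Find R, t such that p_robot ~= R @ p_kinect + t (points are Nx3)."""
    c_k = p_kinect.mean(axis=0)
    c_r = p_robot.mean(axis=0)
    H = (p_kinect - c_k).T @ (p_robot - c_r)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                               # proper rotation (det = +1)
    t = c_r - R @ c_k
    return R, t

# Three (or more) picked points in Kinect coordinates and their known
# positions in the robot frame (meters, hypothetical values).
pk = np.array([[0.1, 0.2, 2.0], [0.5, -0.1, 2.2], [-0.3, 0.0, 1.8]])
pr = np.array([[2.0, 0.1, 0.6], [2.2, 0.5, 0.9], [1.8, -0.3, 0.7]])
R, t = fit_rigid_transform(pk, pr)
```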

For the estimation of the ball trajectory, we use the algorithm described in [7]. Once the trajectory has been calculated, the goalkeeper can estimate the best position to intercept the ball using the projection of the trajectory on the ground, determined by the two magenta spheres drawn in Fig. 3(b).

4 World modeling and robot control

Several improvements to the world model have been made or are under way. The main changes aim to improve the precision of ball and obstacle perception, obstacle avoidance, the motion model and kicking calibration, and to adapt the basic behaviors to the novel architecture of the high-level software agent based on utilities and priorities, as described later.

New behaviors for handling the ball with the robot's body are also being developed. Their goal is to control the ball using the flat surfaces of the robot's body: pushing the ball in a desired direction, which may be towards the opponent goal, or simply preventing the ball from going out of the field boundaries. To do so, auxiliary points are calculated so that the robot passes through them, adjusting its position and orientation to reach the desired destination with the desired velocity.

In terms of obstacle perception [8], we are developing methodologies for obstacle tracking and persistent representation in the world state. This model will represent the global information about the obstacles on the field, rather than the individual perspective of each robot, and will be used by the utility map described later.

The reactive component of the obstacle avoidance algorithm continues to be improved in order to reduce the probability of robot-to-robot or robot-to-obstacle collisions, or even contact, to a minimum. The system relies on a set of fully configurable virtual sonars, based on a set of parametric values, and is supported by the vision subsystem. This allows the use of different sonar configurations according to each particular game situation, their dynamic change according to the robot velocity, and the evaluation of the robot dynamics to anticipate feasible movements (a minimal sketch of such a check is given at the end of this section).

The team is also improving the self-calibration process of the kicking device, using two robots communicating with each other. New algorithms are being developed for 3D ball detection using high-speed cameras and 3D cameras. Moreover, this process is being complemented with a study of the real trajectory of the ball, which will eventually allow the robots to kick more precisely.
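As a hedged illustration of the virtual-sonar idea mentioned above, the following Python sketch checks vision-detected obstacle points against a set of configurable angular sectors whose range grows with the robot velocity. The sector layout and parameters are assumptions for illustration, not the CAMBADA configuration.

```python
# Illustrative virtual-sonar check: obstacle points (from vision, in the
# robot frame) are tested against configurable angular sectors whose range
# scales with the current robot speed. Parameters are assumptions only.
import math
from dataclasses import dataclass

@dataclass
class VirtualSonar:
    center_angle: float   # sector center, radians, 0 = robot heading
    half_width: float     # sector half-aperture, radians
    base_range: float     # range at standstill, meters
    range_per_ms: float   # extra range per m/s of robot speed

    def triggered(self, obstacles, speed):
        max_range = self.base_range + self.range_per_ms * speed
        for ox, oy in obstacles:
            dist = math.hypot(ox, oy)
            angle = math.atan2(oy, ox)
            # Smallest signed angular difference to the sector center.
            diff = math.atan2(math.sin(angle - self.center_angle),
                              math.cos(angle - self.center_angle))
            if dist <= max_range and abs(diff) <= self.half_width:
                return True
        return False

# Three front-facing sectors; ranges grow when the robot moves faster.
sonars = [VirtualSonar(a, math.radians(20), 0.6, 0.3)
          for a in (math.radians(-30), 0.0, math.radians(30))]
obstacles = [(0.9, 0.1), (2.5, -1.0)]      # hypothetical points, meters
blocked = [s.triggered(obstacles, speed=1.5) for s in sonars]
```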

5 High-level coordination

With respect to the software architecture, the fast evolution of the code base over the last years led to many outdated modules and unused portions of code. We therefore decided that this was the right time to rethink the high-level software architecture. Most of the code was adapted and some weak points were addressed in this new approach, such as the lack of behavior history, non-smooth transitions and decisions based solely on the current agent cycle.

In the context of the MSL, with such a dynamic environment, there is a growing need to predict the near future. Making decisions based only on the latest available information is not very effective, either because of delays in inter-robot communication or because of fast-moving opponents. To overcome this problem, we are evolving towards a hybrid agent that makes decisions based on priorities and a set of utilities (each one estimating the expected success of a different option), but also on simple conditions. This will ease the development of the algorithms for the various roles, by providing the agent with an array of different choices in advance, each with prior conditions and a given priority.

In order to train some of the behaviors, an effort has been made to build a set of Reinforcement Learning tools. These will primarily be used to train the dribbling and pass-receiving behaviors.

5.1 Adaptive Strategic positioning

In order to improve how agents decide the best positions to occupy on the field, depending on the game state, the CAMBADA agent is being changed to support a utility map. This leads to positions that are dynamic with respect to the opponents, and not only to the ball. To do that, the agent is being adapted to support height maps. These maps take into consideration the positions of the opponents and of the ball, as well as other restrictions, namely the field of view. From them, the most advantageous position, closest to the strategic position defined by SBSP or DT (as presented in previous years), is extracted for the robot at a given moment. After analyzing these maps, the agent chooses the position to be occupied. Fig. 4 depicts the evaluation of utility maps for two different game situations.

Fig. 4. a) Dribble utility map, based on the 3 m maximum dribbling radius and the positions of teammates and opponents. b) Dribble-plus-kick utility map, which also takes into consideration the target opponent goal and possible shooting lines.
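A hedged sketch of how such a utility map could be evaluated on a coarse grid is shown below. The utility terms, weights and grid resolution are illustrative assumptions, not the actual CAMBADA utility or height maps.

```python
# Sketch of a grid-based utility map: each cell scores a candidate position
# by combining distance to the strategic (SBSP) position, clearance from
# opponents and a maximum dribbling radius around the ball. The terms and
# weights are illustrative assumptions, not the CAMBADA implementation.
import math

def utility(pos, ball, strategic_pos, opponents, max_dribble=3.0):
    dx, dy = pos[0] - ball[0], pos[1] - ball[1]
    if math.hypot(dx, dy) > max_dribble:
        return 0.0                        # outside the allowed dribbling radius
    d_strategic = math.hypot(pos[0] - strategic_pos[0], pos[1] - strategic_pos[1])
    d_opponent = min(math.hypot(pos[0] - ox, pos[1] - oy) for ox, oy in opponents)
    # Prefer cells near the strategic position but away from opponents.
    return 1.0 / (1.0 + d_strategic) + 0.5 * min(d_opponent, 2.0)

def best_cell(ball, strategic_pos, opponents, step=0.25):
    best, best_u = None, -1.0
    n = int(3.0 / step)
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            cell = (ball[0] + i * step, ball[1] + j * step)
            u = utility(cell, ball, strategic_pos, opponents)
            if u > best_u:
                best, best_u = cell, u
    return best

target = best_cell(ball=(2.0, 1.0), strategic_pos=(4.0, 0.0),
                   opponents=[(3.0, 1.0), (2.5, -1.5)])
```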

5.2 Reinforcement Learning for behaviors

The MSL provides an interesting environment and a hard testbed for the application of Reinforcement Learning methods to robotic behavior generation. The research goals of the CAMBADA team in this field cover not only the application of state-of-the-art methods, but also more theoretical and fundamental research to develop efficient learning methods for robotic applications. Following the research carried out over the last year, the CAMBADA team has developed learning tasks that aim to learn efficient controllers for the dribbling and passing behaviors. With the construction of the new hardware platform, we are also exploring the possibility of learning how to control the new ball handling device by applying RL methods directly on a micro-controller. Additionally, the team has developed a new RL update rule [9] and is applying new function approximators that should improve the performance and stability of the learning methods used.

5.3 Coach

In the scientific challenge of RoboCup 2013, the CAMBADA team presented a coach for the MSL scenario that chooses, in real time, the best formation for the robots, based on a set of rules that evaluate several game statistics and the game state. A screenshot of the coach application with the game and rule status is presented in Fig. 5. The team continued the development of this coach during the last year, including new features to be used in the next RoboCup competitions.

Fig. 5. Screenshots of the coach application. a) Game status reflecting the current game flow, result, percentage of time spent in each midfield and current formation. b) Rule status defining the current parameters and thresholds for changing the ongoing strategy.
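The kind of rule-based formation switching performed by the coach can be illustrated with the Python sketch below. The statistics, thresholds and formation names are made up for illustration and are not the rules actually used by CAMBADA.

```python
# Illustrative rule-based coach: pick a team formation from simple game
# statistics. Statistics, thresholds and formation names are hypothetical,
# not the rules actually used by the CAMBADA coach.
from dataclasses import dataclass

@dataclass
class GameStats:
    goal_difference: int          # our goals minus opponent goals
    time_in_our_half: float       # fraction of time the ball spent in our half
    seconds_remaining: float

def choose_formation(stats: GameStats) -> str:
    if stats.goal_difference < 0 and stats.seconds_remaining < 120:
        return "all-out-attack"           # losing near the end: push forward
    if stats.time_in_our_half > 0.65:
        return "defensive"                # we are being pinned back
    if stats.goal_difference > 1:
        return "hold-possession"          # protect a comfortable lead
    return "balanced"

formation = choose_formation(GameStats(goal_difference=-1,
                                       time_in_our_half=0.55,
                                       seconds_remaining=90.0))
```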

6 Conclusions

This paper described the current development stage of the CAMBADA robots. Since the last submission of qualification materials, in February 2014, several improvements have been carried out or are under way: ball detection in 3D space; improvements in world modeling and sensor fusion; the development of models for ball control using both the robot's body and the ball handling mechanism; and several improvements in high-level coordination and control, namely a new model for the software agent based on utilities that includes the use of setplays, adaptive strategic positioning, passes and learning for behaviors. Two of our former Ph.D. or M.Sc. students, now employed, are still cooperating with the project. Two Ph.D. and four M.Sc. students are doing work in the project, as well as several other students who are collaborating as volunteers.

References

1. A. Neves, J. Azevedo, N. Lau, B. Cunha, J. Silva, F. Santos, G. Corrente, D. A. Martins, N. Figueiredo, A. Pereira, L. Almeida, L. S. Lopes, and P. Pedreiras. CAMBADA soccer team: from robot architecture to multiagent coordination, chapter 2. In Vladan Papic (Ed.), Robot Soccer. I-Tech Education and Publishing, Vienna, Austria, 2010.
2. José Luís Azevedo, Bernardo Cunha, and Luís Almeida. Hierarchical distributed architectures for autonomous mobile robots: a case study. In ETFA 2007 - 12th IEEE Conference on Emerging Technologies and Factory Automation, pages 973-980, Patras, Greece, September 2007.
3. António J. R. Neves, Armando J. Pinho, Daniel A. Martins, and Bernardo Cunha. An efficient omnidirectional vision system for soccer robots: from calibration to object detection. Mechatronics, 21(2):399-410, March 2011.
4. António J. R. Neves, Alina Trifan, and Bernardo Cunha. Self-calibration of colormetric parameters in vision systems for autonomous soccer robots. In RoboCup 2013 Symposium, Eindhoven, Netherlands, September 2013.
5. Alina Trifan, António J. R. Neves, and Bernardo Cunha. UAVision: a modular time-constrained vision library for soccer robots. In RoboCup 2014 Symposium, João Pessoa, Brazil, July 2014.
6. António J. R. Neves, Alina Trifan, and Bernardo Cunha. UAVision: a modular time-constrained vision library for color-coded object detection. In Computational Modeling of Objects Presented in Images: Fundamentals, Methods, and Applications, pages 351-362. Springer, 2014.
7. Paulo Dias, João Silva, Rafael Castro, and António J. R. Neves. Detection of aerial balls using a Kinect sensor. In RoboCup 2014 Symposium, João Pessoa, Brazil, July 2014.
8. João Silva, Nuno Lau, António J. R. Neves, João Rodrigues, and José Luís Azevedo. Obstacle detection, identification and sharing on a robotic soccer team. In Progress in Artificial Intelligence, pages 350-360. Springer, 2009.
9. João Cunha, Nuno Lau, and António J. R. Neves. Q-batch: initial results with a novel update rule for batch reinforcement learning. In Advances in Artificial Intelligence - Local Proceedings, XVI Portuguese Conference on Artificial Intelligence, pages 240-251, Azores, Portugal, 2013.