CS295-1 Final Project: AIBO
Mert Akdere, Ethan F. Leland
December 20, 2005

Abstract

This document is the final report for our CS295-1 Sensor Data Management course final project: Project AIBO. The main objective of this project is to investigate the Robocup Soccer domain [12] and to explore possible research opportunities in that area. In our project we have tried to build an application in which two AIBO robots [11] pass the ball to each other and finally score a goal on a soccer field while avoiding obstacles on the playing field. To this end, we use Sony AIBO robots as our hardware and the OPEN-R SDK [10] and the Tekkotsu framework [9] as our software development environment. This document describes our progress on this project and the information we have gathered about Robocup Soccer in general.

1 Introduction

Project AIBO is our course project fulfilling the requirements of the CS295-1 Sensor Data Management course offered by the Department of Computer Science at Brown University. In this project, we have tried to build an application in which two AIBO robots pass the ball to each other and finally score a goal on a soccer field while avoiding obstacles on the playing field. The main objective underlying this project is to investigate the Robocup Soccer domain and to explore possible research opportunities in that area. In this document, we describe our project progress and present our system design. We also discuss the Robocup Soccer domain where appropriate and mention the possible research directions we have discovered while working on this project. This document is structured as follows: in Section 2 we describe the application that we propose for this project in detail. In Section 3 we explain the development environment and the tools we have used in this project. Then in Section 4 we discuss our current progress on the project, describe our
system design, and present our successes and failures in this project. We also describe our team behavior selection mechanism. In Section 5, we survey related work in Robocup Soccer and give some examples of research in this area. Finally, in Section 6 we make our final comments and conclude this report.

2 AIBO Project Description

This project consists of developing an application using Sony AIBO robots in order to become familiar with the Robocup Soccer domain, to understand the basic requirements of this domain, and to grasp the fundamental research issues. Our application can be described as follows: two AIBO robots pass the ball to each other and finally score a goal on a soccer field while avoiding obstacles on the playing field. A successful implementation of this project requires many components, such as image processing, object recognition, sensor data observation, behavior modeling, and robot kinematics, to work together successfully. Hence, even in this simple application we are able to investigate many different areas of computer science (such as robotics, wireless networking, distributed systems, and artificial intelligence) and see how they work together toward a common goal. This diversity of computer science areas involved in Robocup applications and research is fascinating and not easily found in other projects. However, it is also the reason why it is difficult to successfully implement such applications and make them work. We believe that a research group aiming to join Robocup Soccer should be fairly large and involve both faculty and students from diverse areas of computer science as well as engineering.

3 Methodology

We use the AIBO robots and the Open-R programming environment along with the Tekkotsu framework in this project. We currently have two AIBO robots in our department. We have set up a Robocup soccer field in the AI Lab according to the formal specification described in the Robocup specification documents.
The field includes two goals, four markers on the sides of the field, a pink ball, and white lines on the field. AIBO robots are commercially available entertainment robots produced by Sony. Nevertheless, an annual competition, Robocup Soccer, is held using the AIBO robots, and a fair amount of research is being done in this domain using them. The research groups working
in this area include the Dutch AIBO Team, the Carnegie Mellon Robocup Team, the University of Pennsylvania AIBO Team, and many others. Some of these research groups involve sub-groups from more than one university. The Open-R SDK is the programming interface provided by Sony for developing applications for the AIBO robots. It is a development environment based on gcc (C++) in which one can write software to run on the AIBO (ERS-7, ERS-210, ERS-220, ERS-210A, and ERS-220A). It provides basic low-level functions to program the robot and access its hardware, memory, and other units. The Tekkotsu framework is an open-source development framework for the Sony AIBO developed by the Carnegie Mellon University Tekkotsu team. Its aim is to build a structure on top of the OPEN-R SDK with which more complex applications can be developed more easily and flexibly. That is, it handles routine tasks for the user, so that he or she can focus on higher-level programming. Since there are many issues to consider when developing an application on AIBO robots, we tried to make use of available projects and see how they work in order to get more done in this project. For the motion component, we borrow techniques from other well-established Robocup teams such as UPenn and CMU, which are already partly implemented in the Tekkotsu environment. Our localization is based on a Monte Carlo Localization routine [13] implementing a particle filter. For low-level vision, we rely on the very distinct markers on the field, which we are able to identify through simple vision techniques and which aid us in localization.

4 Current Progress

In this section, we describe the system design that we developed for this application and discuss what we have implemented and what parts are missing. Unfortunately, we have not been able to successfully implement all parts of this project, so we do not have a completely working system.
Below is the description of our system design.

4.1 System Design

After analyzing the Robocup Soccer projects of various universities, we came up with the design sketched in Figure 1. Below we describe the functionality of each component of the system.
Figure 1: System Model and Structure

4.1.1 Basic Skills

Basic skills consist of player-to-ball interaction skills, such as kicking and holding the ball, and other player skills, such as moving on the field. Any other movement-related functionality is also implemented in this component. All these basic skills are provided by the Tekkotsu framework; we only had to figure out how to make use of the available functionality. We successfully made the robot move around, turn its head, and perform other motions. We can also successfully use the available kicking moves of the UPenn Robocup team in our project.

4.1.2 Individual Behavior

This module focuses on executing the role (specified by the current play) of the robot on which it runs. It reports its current state in the execution of the role and other related information. We have not implemented this layer yet.

4.1.3 World Model

The world model component keeps information about the game state. Currently, this information includes the locations of the ball, the goals, and the robots on the field. Its sources are the communication, vision, and distance-sensor components. The world model uses other components, such as localization, communication, and vision, to achieve its functionality.

4.1.4 Communication Component

The communication component is the interface to the wireless communication medium. Supported communication protocols include UDP and TCP. We currently have a WLAN set up in the AI Lab: a PC is connected to an access point called AIBONET. We can ssh into that PC and communicate
with the AIBO robots over that PC using the WLAN. In our project, we have implemented the communication interface and are able to send and receive UDP packets over the wireless link.

4.1.5 Localization Module

The localization module takes information from the vision component and attempts to accurately locate the positions of the ball and the two robots on the field. It does this using a Monte Carlo Localization (MCL) model called a particle filter. A particle filter is a probabilistic model that maintains a collection of guesses about the location of the robot, each in the form of an x and y location on the field together with the bearing of the robot, ranging from -PI/2 to PI/2. Data comes into the particle filter from the vision component as a set of objects recognized by the robot, along with where they are in relation to the robot. Using this information and the known locations of these objects, such as the goals, the particle filter updates the probability of each sample location and resamples around the more likely samples. The specific approach implemented in this project is a form of MCL known as Adaptive MCL (AMCL): the number of particles in each model can be changed on the fly to account for more uncertain information. If the standard deviation of the particles grows too large, more particles are evenly distributed over the field so as to localize over a broader area. To localize the ball, one particle filter runs on each robot, and the two models share data with each other. The location of the ball is determined to be the most likely position given all of the data from both robots; if one robot cannot see the ball, it learns the ball's location from the other robot.

4.1.6 Vision Module

The vision module is a complicated component, and neither of us has a background in computer vision. However, we have figured out how to detect the locations of the ball and the goals, as well as the markers on the field.
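The weight-and-resample cycle of the particle filter described in the localization module can be sketched roughly as follows. This is a minimal illustration, not our actual implementation: the landmark coordinates, the Gaussian distance likelihood, and the noise parameter `sigma` are assumptions made for the sketch, and the bearing update and the adaptive particle-count logic of AMCL are omitted.

```python
import math
import random

# Assumed landmark positions on the field, in cm (for illustration only).
LANDMARKS = {"blue_goal": (0.0, 135.0), "yellow_goal": (420.0, 135.0)}

def init_particles(n, field_w=420.0, field_h=270.0):
    """Spread n particles uniformly over the field as [x, y, bearing, weight]."""
    return [[random.uniform(0, field_w), random.uniform(0, field_h),
             random.uniform(-math.pi, math.pi), 1.0 / n] for _ in range(n)]

def weight_particles(particles, seen, sigma=30.0):
    """Re-weight each particle by how well its pose explains the observed
    landmark distances. `seen` maps landmark name -> measured distance."""
    for p in particles:
        w = 1.0
        for name, meas_d in seen.items():
            lx, ly = LANDMARKS[name]
            pred_d = math.hypot(lx - p[0], ly - p[1])
            # Gaussian likelihood of the measurement given this hypothesis.
            w *= math.exp(-((meas_d - pred_d) ** 2) / (2 * sigma ** 2))
        p[3] = w
    total = sum(p[3] for p in particles) or 1.0
    for p in particles:
        p[3] /= total  # normalize so the weights form a distribution

def resample(particles):
    """Draw a fresh particle set concentrated around the likelier samples."""
    n = len(particles)
    chosen = random.choices(particles, weights=[p[3] for p in particles], k=n)
    return [[p[0], p[1], p[2], 1.0 / n] for p in chosen]

def estimate(particles):
    """Weighted mean (x, y) of the particle set, used as the pose estimate."""
    x = sum(p[0] * p[3] for p in particles)
    y = sum(p[1] * p[3] for p in particles)
    return x, y
```

In the adaptive variant, something like `init_particles` would be re-invoked to scatter additional particles whenever the spread of the current set grows too large, which is what lets the filter recover when the robot becomes lost.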
We have benefited greatly from the Tekkotsu framework in the implementation of this component. Tekkotsu includes a vision component developed at Carnegie Mellon University called CMVision [1], [4]. The object recognition technique used in this component is color segmentation, the process by which information in the images is extracted through color codes. The images are encoded in the YUV color space [2]. Color thresholds are applied to the images to construct connected regions of color. For example, our field has a goal that is blue. To locate
the blue goal, we first need to make the system understand what the color blue is; that is, we define color thresholds. Color thresholds are defined using sample images. The segmentation process then provides us with regions of blue extracted from the camera images. Finally, an algorithm is run on these regions to decide which one is the blue goal, and the result is the location of the goal in the image.

4.1.7 Sensor Interface

This is the system interface to various sensor data, including the distance sensors, joint sensors, pressure sensors, and all other sensors available on the AIBO. The interface is provided by the Tekkotsu framework, and we have figured out how to read values from it. Hence, we are able to obtain sensor data from the robot successfully.

4.1.8 Team Behavior Component

This component takes the world state from the world model component, maps it to a play [3] in the strategy database, and then orders the robots to execute the selected play. We implemented a nearest-neighbor algorithm to choose the play that is most suitable for the current world state, so this component is largely complete. However, since not all of its subcomponents are ready, we cannot test it properly. We expect that there will be oscillations between different plays during gameplay; in other words, the team behavior component may switch between different plays before they are finished. This would be undesirable, but it can be avoided with an additional constraint that prevents such frequent switches between plays.

4.1.9 Strategy DB

The strategy DB keeps the list of plays that the team behavior component makes use of. Plays describe roles for each robot; a role consists of a sequence of basic skills. We have made up some plays and mapped them to world states so that the team behavior component can choose the appropriate play for each different world state. A world state basically describes the game state, i.e.
the locations of the dogs and the ball.

4.2 Team Play Demonstration

The team behavior component chooses the best play to execute for a given game state. In our strategy database, we have a predefined set of plays that
the team behavior component can use. To demonstrate the play selection mechanism in action, we have built a visualizer component, shown in the screenshot below.

Figure 2: Play Selection Visualizer

In this picture, the right side shows the plays currently in our strategy database, and the left shows the current state of the world and how the play is planned to be executed. The red and blue boxes are the two robots, and the pink box is the ball. The red and blue lines are the paths that the robots should take, and the pink line is the ball's path. This play shows that the red robot will pass the ball to the blue one, which will then shoot it towards the goal. On the right you can also see which play is being run, designated with the red star.

5 Related Work

A great deal of research is being done in the Robocup Soccer domain, spanning many different areas of computer science. One branch of research is computer vision. In our project we have made use of Carnegie Mellon's vision component, CMVision. Information gathered via the camera is noisy; nevertheless, it is the most heavily used source and provides a large portion of the gathered information. Color segmentation is used to recognize color-coded objects; therefore, everything in the Robocup world is color-coded. Another dimension of research is coordinated team behavior, which involves coordinating a multi-robot team towards a common goal [6], [7], [8]. Such a component has to plan ahead to achieve the ultimate common goal while at the same time exploiting short-term advantages. Therefore, it has to make accurate decisions and keep the system responsive to environmental
changes. Another direction of research involves using statistical measures to learn opponent strategies and adapt the gameplay to make the most of available information. Probabilistic opponent modeling techniques are used to predict the location, movement, and behavior of the opposing team.

6 Conclusion and Future Work

This project's ultimate goal was to learn about Robocup Soccer and to understand possible directions of research in the domain. We have tried to develop an application in order to learn the available tools and methods in AIBO application development. As a result of our efforts, we have:

1. figured out the basic development environment
2. learned what others are doing and what projects exist
3. set up an AIBO lab and a network
4. found the Tekkotsu framework and learned how to use it
5. learned about the various fields of computer science involved in Robocup Soccer
6. implemented most parts of the project

Unfortunately, we have not been able to implement the entire project; therefore, we do not have a completely working system. In addition, there are issues we have come across in the development of the project that we have not been able to handle ourselves and need further help with. Such details are usually omitted from research papers, but they are crucial in practice. Moreover, we think that if the ultimate goal is to join the Robocup competition, it is necessary to set up a large Robocup team with people from diverse backgrounds. Nevertheless, we are pleased to have worked in this field and to have had fun in the process. Future work on this project includes completing the implementation, eliminating the bugs, and making every part work together as a whole.

References

[1] CMVision web site: http://www.cs.cmu.edu/~jbruce/cmvision/
[2] C. Poynton. Poynton's Color FAQ. http://www.inforamp.net/~poynton/notes/colour_and_gamma/colorfaq.html

[3] Brett Browning, James Bruce, Michael Bowling, and Manuela Veloso. STP: Skills, tactics and plays for multi-robot control in adversarial environments. IEEE Journal of Control and Systems Engineering, 219:33-52, 2005.

[4] James Bruce and Manuela Veloso. Fast and Accurate Vision-Based Pattern Detection and Identification. In Proceedings of ICRA'03, the 2003 IEEE International Conference on Robotics and Automation, Taiwan, May 2003.

[5] Brett Browning and Manuela Veloso. Real-time, adaptive color-based robot vision. In Proceedings of IROS'05, 2005.

[6] Maayan Roth, Douglas Vail, and Manuela Veloso. A World Model for Multi-Robot Teams with Communication. In Proceedings of IROS-2003, 2003.

[7] Douglas Vail and Manuela Veloso. Dynamic Multi-Robot Coordination. In Multi-Robot Systems, Kluwer, 2003.

[8] James Bruce, Michael Bowling, Brett Browning, and Manuela Veloso. Multi-Robot Team Response to a Multi-Robot Opponent Team. In Proceedings of the IROS-2002 Workshop on Collaborative Robots, Switzerland, October 2002.

[9] Tekkotsu web page: http://www.cs.cmu.edu/~tekkotsu/

[10] Open-R SDK: http://openr.aibo.com/

[11] Sony AIBO: www.sonystyle.com

[12] Robocup Organization: http://www.robocup.org/

[13] F. Dellaert, D. Fox, W. Burgard, and S. Thrun. Monte Carlo Localization for Mobile Robots. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'99), May 1999.