
NuBot Team Description Paper 2008

Hui Zhang¹, Huimin Lu¹, Xiangke Wang³, Fangyi Sun³, Xiucai Ji², Dan Hai¹, Fei Liu¹, Lianhu Cui³, Zhiqiang Zheng¹
College of Mechatronics and Automation, National University of Defense Technology, China
NuBot team website: www.nubot.com.cn
¹ {zhanghui_nudt, lhmnew, haidan, liufei, zqzheng}@nudt.edu.cn
² jxc_nudt@hotmail.com
³ {wxk26605771, sunfangyi1985, clh2062}@163.com

Abstract. This paper presents the developments of our Middle Size League robot team NuBot for RoboCup 2008 Suzhou. The improvements lie in robot hardware, such as a new panoramic mirror and kicking device, and in robot software, such as the algorithms for panoramic image processing and robot self-localization, multi-robot cooperation, path planning and motion control. Our current research focuses on robust robot vision, multi-robot cooperation, a new learning controller for DC motors, and reinforcement learning for real robots.

1 Introduction

RoboCup is an international research and education initiative. Its goal is to foster artificial intelligence and robotics research by providing a standard problem where a wide range of technologies can be examined and integrated. The Middle Size League competition of RoboCup serves as a test-bed for general-purpose methods in robotics and artificial intelligence, such as image understanding, computer vision, motion planning and control, and multi-robot coordination.

Our Middle Size League team NuBot, founded in 2004, participated in RoboCup 2006 Bremen for the first time. We entered the top 8 by advancing to the second round robin in RoboCup 2007 Atlanta, and won 3rd place in the first technical challenge (playing with arbitrary goals). We also participated in the 1st and 2nd RoboCup China Open in 2006 and 2007, and won first place both times. Our research focuses include multi-robot cooperation, robust robot vision, robot control, and reinforcement learning for real robots.

In the following sections, we describe the recent developments of our robot team compared with those presented in our former TDP [1], covering improvements in robot hardware, such as the panoramic mirror and kicking device, and in robot software, such as the algorithms for panoramic image processing and robot self-localization, the multi-robot cooperation mechanism, path planning and motion control. Finally, we introduce our current research focuses.

2 Improvement in Robot Hardware

All of NuBot's fully autonomous robots are homogeneous. The chassis of each robot is designed as a frame construction that holds four omni-directional wheels, DC motors, motor controllers, a control board, batteries, an electromagnetic kicking device, and a notebook PC. An omni-directional vision system and a perspective camera used as the front vision are mounted on top of the framework. Several pieces of foam are added around the framework to protect our robots and the opponents' robots from collisions during the competition. Our current robot team is shown in figure 1.

Fig. 1. Our current MSL soccer robot team NuBot

There are two main improvements in our robot hardware. The first one is a novel omni-directional vision system. The performance of an omni-directional vision system is determined mainly by the panoramic mirror. The inner part and the outer part of our former panoramic mirror are a horizontally isometric mirror and a vertically isometric mirror respectively. This mirror keeps the imaging resolution of objects near the robot on the field constant and keeps the vertical imaging distortion of objects far from the robot small [1][2], as shown in figure 2(c), a typical panoramic image captured in Bremen during RoboCup 2006. From this figure we can see that the only deficiency of this mirror is that the imaging of the scene very close to the robot is poor; for example, the robot itself cannot be seen in the panoramic image. This is caused by the difficulty of manufacturing the innermost part of the mirror accurately. We therefore designed a new panoramic mirror that solves this problem by replacing the innermost part with a hyperbolic mirror. The novel panoramic mirror is made up of a hyperbolic mirror, a horizontally isometric mirror and a vertically isometric mirror, from the inner part to the outer part. The designed mirror profile and the manufactured mirror are shown in figure 2(a) and figure 2(b). A typical panoramic image captured by the new omni-directional vision system is shown in figure 2(d). The new omni-directional vision system maintains the merits of our former system, and also produces a clear image of the scene very close to the robot, including the robot itself.

Fig. 2. The improvement in our omni-directional vision system. (a) The profile curve of the newly designed panoramic mirror. (b) The newly manufactured panoramic mirror. (c) A typical panoramic image captured by our former omni-directional vision system in Bremen, 2006; the dimension of that field is 12m*8m. (d) A typical panoramic image captured by our new omni-directional vision system in Atlanta, 2007; the dimension of that field is 18m*12m.

Another significant improvement in hardware is the kicking device. We designed a smaller, lighter, but more powerful solenoid, as shown in figure 3. Furthermore, the robot can adjust its shooting strength to lift the ball over an obstacle according to the obstacle's distance, by controlling the discharging time of the capacitors in the kicking circuit. The discharging time is controlled by a DSP embedded in the robot, and the time resolution is less than 0.1 ms.

Fig. 3. Our smaller, lighter, but more powerful solenoid
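The mapping from a desired lob distance to a discharging time is not detailed here; a minimal sketch of one possible implementation, assuming a calibration table measured on the real robot and linear interpolation between its entries (all numbers below are placeholders), is:

```cpp
// Illustrative sketch only: chooses a solenoid discharging time for a desired
// lob distance by linear interpolation over a calibration table.  The table
// entries are placeholders; a real table would be measured on the robot.
#include <array>
#include <cmath>
#include <cstddef>

struct KickCalibPoint {
    double distance_m;    // measured lob distance
    double discharge_ms;  // discharging time that produced it
};

// Hypothetical calibration data, sorted by distance.
const std::array<KickCalibPoint, 5> kCalib = {{
    {0.5, 1.0}, {1.5, 2.0}, {3.0, 3.2}, {5.0, 4.5}, {7.0, 6.0}
}};

// Returns the discharging time in milliseconds, quantized to the 0.1 ms
// resolution of the DSP timer mentioned in the text.
double dischargeTimeForDistance(double target_m) {
    if (target_m <= kCalib.front().distance_m) return kCalib.front().discharge_ms;
    if (target_m >= kCalib.back().distance_m)  return kCalib.back().discharge_ms;
    for (std::size_t i = 1; i < kCalib.size(); ++i) {
        if (target_m <= kCalib[i].distance_m) {
            const KickCalibPoint& a = kCalib[i - 1];
            const KickCalibPoint& b = kCalib[i];
            double t = (target_m - a.distance_m) / (b.distance_m - a.distance_m);
            double ms = a.discharge_ms + t * (b.discharge_ms - a.discharge_ms);
            return std::round(ms * 10.0) / 10.0;  // quantize to 0.1 ms steps
        }
    }
    return kCalib.back().discharge_ms;  // not reached
}
```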

3 Algorithm for Panoramic Image Processing and Robot Self-localization

Object recognition and self-localization are the basis of a robot's autonomy. Although the colored goals will be replaced by white nets, there will still be many colored objects on the field of RoboCup 2008, such as the orange ball, black robots, the green field, white mark lines, and magenta/cyan markers, so it remains very important for soccer robots to recognize these colored objects. Changing lighting conditions cause many difficulties for this task [3], so developing robust object recognition methods that adapt to different illumination remains a research focus in the RoboCup literature. We assume that the lighting conditions change gently, which is consistent with most practical situations in the competition. Under this assumption, we developed a robust object recognition method after verifying that the conditional probability density distribution of the YUV values belonging to each color is Gaussian [4]. In this method, we first calibrate one or more panoramic images through a human-computer interface [5] and obtain the means and variances of the conditional probability density distribution for each color class. We select classifying seeds in the image based on these Gaussian parameters, grow the object regions from the seeds with a region growing algorithm under the principle that the color values within an object region should be similar, and then update the Gaussian parameters to adapt to new lighting conditions. We also detect the white line points robustly by scanning the panoramic image with scan lines arranged radially around the center of the image, similar to the method in [6]; furthermore, we reduce the false detection rate by using the updated Gaussian parameters of the green field to confirm that candidate line points are real ones [4]. The results of processing panoramic images captured under greatly different illumination are shown in figure 4. The colored objects in all the images are detected correctly, and the recognition is robust to the changing illumination.
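As an illustration of the classification step described above, the following minimal sketch assumes independent Y, U and V components for each color class; the threshold and adaptation rate are placeholder values, not the ones used by the team:

```cpp
// Illustrative sketch of the per-colour Gaussian classification step.
// Each colour class (ball orange, field green, ...) keeps a mean and a
// variance for Y, U and V; a pixel is accepted as a classifying seed if its
// normalized squared deviation is small.
#include <cmath>

struct ColorGaussian {
    double mean[3];   // Y, U, V means from the calibrated images
    double var[3];    // Y, U, V variances

    // Squared Mahalanobis distance under a diagonal covariance.
    double distance2(const unsigned char yuv[3]) const {
        double d2 = 0.0;
        for (int c = 0; c < 3; ++c) {
            double d = yuv[c] - mean[c];
            d2 += d * d / var[c];
        }
        return d2;
    }

    bool isSeed(const unsigned char yuv[3], double threshold = 9.0) const {
        return distance2(yuv) < threshold;   // roughly 3 sigma per channel
    }

    // After region growing, adapt the model to the pixels of the grown
    // region so the classifier follows slow illumination changes.
    void update(const unsigned char yuv[3], double rate = 0.01) {
        for (int c = 0; c < 3; ++c) {
            double d = yuv[c] - mean[c];
            mean[c] += rate * d;
            var[c]  += rate * (d * d - var[c]);
        }
    }
};
```

A pixel that passes isSeed for a class would start the region growing for that class, and the pixels of the grown region would be fed back through update so the model tracks slow illumination changes.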
The white line points are the only visual information that can be used as landmarks for the robot's self-localization, since there will no longer be colored goals from RoboCup 2008 on. Monte Carlo localization [7] is the most popular method in indoor mobile robotics, and it solves global localization effectively, including recovery from kidnapping. The matching localization method presented in [8] is a very fast and effective algorithm for tracking the robot's pose, and it takes only several milliseconds to finish the localization computation for one image frame. The difference between the detected white line points and the true field mark lines in the world coordinate frame is used to construct the sensor model in Monte Carlo localization and to construct the error function to be minimized in the matching localization. This difference can be approximated by the distance from the detected line points to the closest field mark lines. We combine the two algorithms according to their respective merits to realize our robot's self-localization. In the localization procedure, because the competition field is completely symmetric, we first have to know which half of the field the robot is located in before the competition, and then use Monte Carlo localization to solve the global localization. After acquiring the initial pose, we apply the matching localization method to track the pose accurately. If the robot detects that the localization tracking has failed or that it has been kidnapped during the competition, it falls back to Monte Carlo localization to reinitialize its pose. To break the symmetry of the new field, we will add a digital compass as an orientation sensor. Experiments show that the position error of the robot's self-localization can be less than 30 cm.
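A minimal sketch of the point-to-line error used by the matching step, assuming a precomputed grid that stores the distance from every field position to the nearest mark line (field size, resolution and the robust cut-off below are placeholders), looks like this; the same per-point distance could also feed the sensor model of the Monte Carlo step:

```cpp
// Illustrative sketch of the error function minimized by the matching
// localization step: detected white line points (in the robot frame) are
// transformed by a candidate pose and scored against a precomputed map of
// the distance to the closest field mark line.
#include <algorithm>
#include <cmath>
#include <vector>

struct Pose  { double x, y, theta; };   // field frame, metres / radians
struct Point { double x, y; };          // line point in the robot frame

struct FieldDistanceMap {
    double origin_x, origin_y;          // field corner of the grid
    double resolution;                  // metres per cell
    int width, height;
    std::vector<float> dist;            // distance to the nearest mark line

    double lookup(double wx, double wy) const {
        int i = static_cast<int>((wx - origin_x) / resolution);
        int j = static_cast<int>((wy - origin_y) / resolution);
        if (i < 0 || j < 0 || i >= width || j >= height) return 1.0;  // off-field penalty
        return dist[j * width + i];
    }
};

// Sum of robustly clipped distances; small values mean a good pose candidate.
double matchingError(const Pose& p, const std::vector<Point>& linePoints,
                     const FieldDistanceMap& map, double clip = 0.5) {
    double c = std::cos(p.theta), s = std::sin(p.theta), err = 0.0;
    for (const Point& q : linePoints) {
        double wx = p.x + c * q.x - s * q.y;
        double wy = p.y + s * q.x + c * q.y;
        err += std::min(map.lookup(wx, wy), clip);  // clipping limits outlier influence
    }
    return err;
}
```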

After acquiring self-localization with high accuracy, the robot can estimate the velocity of the ball and of the other moving objects it detects with its vision sensors, using the method presented in [9]. The velocity information is very useful for the positioning strategy of the goalie, and for ball passing and intercepting in multi-robot cooperation.

Fig. 4. The processing results of panoramic images under different illumination. (a1)(b1)(c1) Images captured under weaker and weaker illumination. (a2)(b2)(c2) The processing results of the three images produced by our object recognition method; the red points are the detected white line points.

4 Multi-robot Cooperation Mechanism

Our robot control software is based on a behavior-based hierarchical architecture for mobile robots [10]. Within this architecture we integrate a multi-robot cooperation mechanism that combines a globally distributed role assignment strategy with a partially centralized cooperation strategy. In the globally distributed role assignment strategy, all robots are completely equal, and they select their own roles, such as attacker, assistant, and defender, dynamically based on a market mechanism.
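The bidding rules of the market mechanism are not spelled out here; the following minimal sketch assumes that every robot broadcasts a scalar cost for the attacker role and that roles are assigned by sorting these bids, which is one common way to realize such an auction:

```cpp
// Illustrative sketch of market-style dynamic role assignment: every robot
// broadcasts its cost (bid) for the attacker role, the cheapest robot takes
// it, the runner-up becomes the assistant and the rest defend.  The cost
// function itself is an assumption, not the team's actual bidding rule.
#include <algorithm>
#include <cstddef>
#include <map>
#include <vector>

enum class Role { Attacker, Assistant, Defender };

struct Bid {
    int robotId;
    double cost;   // e.g. distance to the ball plus a turning penalty
};

std::map<int, Role> assignRoles(std::vector<Bid> bids) {
    std::sort(bids.begin(), bids.end(),
              [](const Bid& a, const Bid& b) { return a.cost < b.cost; });
    std::map<int, Role> roles;
    for (std::size_t i = 0; i < bids.size(); ++i) {
        if (i == 0)      roles[bids[i].robotId] = Role::Attacker;
        else if (i == 1) roles[bids[i].robotId] = Role::Assistant;
        else             roles[bids[i].robotId] = Role::Defender;
    }
    return roles;
}
```

In practice the cost could combine the robot's distance to the ball, its heading and its current role, so that roles stay stable unless another robot is clearly better placed.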

In the partially centralized cooperation strategy, we define several tactical actions for two-robot cooperation, namely place-kick cooperation, attacking cover and ball passing. These tactical actions are defined as follows:

Place-kick cooperation: In the place-kick, the assistant pushes the ball in front of the attacker, and the attacker can then score directly, since there is no direct free-kick in RoboCup MSL under the current rules.

Attacking cover: When the attacker is dribbling the ball, the assistant covers it by positioning itself between the ball and the opponents' robots.

Ball passing: The attacker passes the ball to the teammate that is in a better position, and the teammate then intercepts and receives the ball, for example in a corner kick.

In these two-robot tactical actions the attacker is the dominant robot: it decides, through communication and according to the situation in the competition, whether to cooperate, how to cooperate, and with whom. The information flows of the globally distributed role assignment strategy and of the partially centralized cooperation strategy are shown in figure 5. The performance of the multi-robot cooperation can be seen in our qualification video for RoboCup 2008 on our team website: www.nubot.com.cn.

Fig. 5. (a) The information flow of the globally distributed role assignment strategy. (b) The information flow of the partially centralized cooperation strategy.

5 Robot's Path Planning and Motion Control

We have done some research on trajectory planning, because the robot has to select an optimal trajectory to attack and shoot the ball towards the opponent's goal in a dynamic environment. Since the robot's movement is based on kinematic model analysis, we only generate the nearest destination point from which there are the fewest obstacles between the robot and the opponent's goal and from which the robot can shoot and score. We discretize the opponent's half of the field into grids and calculate the utility of each grid cell according to the following four factors with different weights: the position sensitivity of the cell, which increases as the distance to the opponent's goal decreases; the obstacles between the cell and the opponent's goal; the distance between the cell and each detected obstacle; and the obstacles between the robot and the cell. Then we search for the nearest cell, reachable from the robot's current position, from which the robot has the best opportunity to score.
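The following sketch shows one way the four factors could be combined into a single cell utility; the linear weighting, the blocking test, and every numeric value are illustrative assumptions rather than the team's actual formulation:

```cpp
// Illustrative sketch of a grid utility for choosing a shooting position.
// The weights w1..w4 mirror the four factors listed in the text.
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

static double dist(const Vec2& a, const Vec2& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Distance from point p to segment ab, used to test whether an obstacle
// blocks the straight line from a to b.
static double distToSegment(const Vec2& p, const Vec2& a, const Vec2& b) {
    double vx = b.x - a.x, vy = b.y - a.y;
    double len2 = vx * vx + vy * vy;
    double t = len2 > 0.0 ? ((p.x - a.x) * vx + (p.y - a.y) * vy) / len2 : 0.0;
    t = std::clamp(t, 0.0, 1.0);
    return dist(p, {a.x + t * vx, a.y + t * vy});
}

static int countBlocking(const Vec2& a, const Vec2& b,
                         const std::vector<Vec2>& obstacles, double radius = 0.3) {
    int n = 0;
    for (const Vec2& o : obstacles)
        if (distToSegment(o, a, b) < radius) ++n;
    return n;
}

// Higher is better: closeness to the goal, a clear shooting line,
// clearance from obstacles, and a clear path from the robot to the cell.
double gridUtility(const Vec2& cell, const Vec2& goal, const Vec2& robot,
                   const std::vector<Vec2>& obstacles,
                   double w1 = 1.0, double w2 = 2.0, double w3 = 0.5, double w4 = 1.0) {
    double minObs = 5.0;  // cap when no obstacle is nearby
    for (const Vec2& o : obstacles) minObs = std::min(minObs, dist(cell, o));
    return -w1 * dist(cell, goal)                        // position sensitivity
           - w2 * countBlocking(cell, goal, obstacles)   // obstacles blocking the shot
           + w3 * minObs                                 // clearance from obstacles
           - w4 * countBlocking(robot, cell, obstacles); // obstacles on the way there
}
```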

In the real application we also set conditions for replanning, so that the robot does not change its movement abruptly because of imprecise and noisy sensor information.

In motion control, having achieved accurate self-localization, we redesigned the robot's basic behaviors, such as moving to a given point and the positioning strategy, in the world coordinate frame. We also redesigned the ball-tracing behavior in a target reference frame that is fixed to the ball, with the ball's velocity direction as the x-axis. When capturing the ball, the robot then only needs to move to the origin of the target reference frame, without having to consider the complex relation between the ball and the robot.

6 Current Research Focus

Our current main research focuses are as follows:

- Robust robot vision: The final goal of RoboCup is that a soccer robot team defeats the human champions, so sooner or later robots will have to play outdoors and do without the color-coded environment. We are developing our robot vision system so that the robot can work well under highly dynamic illumination and even on a completely new field without any off-line calibration. We are also working on a new method to recognize an arbitrary FIFA ball with our omni-directional vision system.

- Multi-robot cooperation: Multi-robot cooperation holds an important place in distributed AI and robotics. We have designed a good multi-robot cooperation mechanism and realized several two-robot cooperative behaviors. We now have to go deeper and extend our robots' cooperation ability by involving more robots and more complex cooperative behaviors in this mechanism.

- New learning controller for DC motors: An ongoing research project is to replace the traditional PID controller for the DC motors with a learned controller based on a reference controller. The reward of the learning controller is a function of the control performance. To deal with the continuous state space, the state-action pairs are approximated by a multi-layer perceptron (MLP), and the weights of the MLP are initialized from the former PID controller. Thanks to these proper initial weights, the training of the new learning controller converges quickly. Because it is trained on them, the controller can adapt to different field carpets; it can also remain near-optimal under noise, overcome wheel slippage, and adapt to the robot's changing dynamics.

- Reinforcement learning for real robots: Applying reinforcement learning to real robot control is attractive because of its advantages over traditional, explicitly programmed control procedures. It is not an easy job, however, because of time delays, imprecise sensor information, large state spaces, and the limited number of training trials possible on a real robot system. We currently focus on applying RL to the behavior control of a single robot, such as intercepting a moving ball or driving to a specified position. In this research we use linear function approximation to deal with the large state spaces, as sketched below. We will also first learn the robot's behaviors in our ODE-based MSL simulation environment [1], and then continue on the real robots, in order to reduce the number of training trials needed on the real robots.
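As a sketch of what the linear-function-approximation learner could look like, the following implements a standard Sarsa update over an assumed feature vector and a small discretized action set; none of the dimensions or parameters come from the paper:

```cpp
// Illustrative sketch of Sarsa with linear function approximation for a
// single-robot behaviour such as intercepting a moving ball.  Features,
// actions and learning parameters are assumptions for illustration.
#include <array>
#include <random>

constexpr int kNumFeatures = 6;  // e.g. ball distance, bearing, relative velocity...
constexpr int kNumActions  = 5;  // e.g. a few discretized drive commands

using Features = std::array<double, kNumFeatures>;

struct LinearSarsa {
    // One weight vector per action: Q(s, a) = w[a] . phi(s)
    std::array<std::array<double, kNumFeatures>, kNumActions> w{};
    double alpha = 0.01, gamma = 0.95, epsilon = 0.1;
    std::mt19937 rng{42};

    double q(const Features& phi, int a) const {
        double v = 0.0;
        for (int i = 0; i < kNumFeatures; ++i) v += w[a][i] * phi[i];
        return v;
    }

    // Epsilon-greedy action selection over the approximated Q-values.
    int selectAction(const Features& phi) {
        std::uniform_real_distribution<double> u(0.0, 1.0);
        if (u(rng) < epsilon)
            return std::uniform_int_distribution<int>(0, kNumActions - 1)(rng);
        int best = 0;
        for (int a = 1; a < kNumActions; ++a)
            if (q(phi, a) > q(phi, best)) best = a;
        return best;
    }

    // One Sarsa step: w[a] += alpha * (r + gamma * Q(s', a') - Q(s, a)) * phi(s)
    void update(const Features& phi, int a, double reward,
                const Features& phiNext, int aNext, bool terminal) {
        double target = reward + (terminal ? 0.0 : gamma * q(phiNext, aNext));
        double delta = target - q(phi, a);
        for (int i = 0; i < kNumFeatures; ++i) w[a][i] += alpha * delta * phi[i];
    }
};
```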

7 Summary

We have described the developments of our soccer robot team, including the new panoramic mirror and kicking device in robot hardware, and the panoramic image processing, robot self-localization, a novel multi-robot cooperation mechanism, path planning and motion control in robot software. Our current research focuses are robust robot vision, multi-robot cooperation, a new learning controller for DC motors, and reinforcement learning for real robots.

Acknowledgement

We would like to thank Lin Liu, Yupeng Liu and Wei Liu for their cooperation in establishing and developing our RoboCup MSL soccer robot team NuBot.

References

1. Hui Zhang, Huimin Lu, Xiucai Ji, et al.: NuBot Team Description Paper 2007. RoboCup 2007 Atlanta, CD-ROM, Atlanta, USA, July 2007.
2. Huimin Lu, Fei Liu and Zhiqiang Zheng: A Novel Omni-vision System for Soccer Robots. Journal of Image and Graphics (in Chinese), Vol. 12, No. 7: 1243-1248, 2007.
3. Mayer, G., Utz, H., and Kraetzschmar, G.K.: Playing robot soccer under natural light: A case study. In Polani, D., Browning, B., Bonarini, A., eds.: RoboCup 2003: Robot Soccer World Cup VII, Berlin, Springer-Verlag (2004), pp. 238-249.
4. Huimin Lu, Zhiqiang Zheng, Fei Liu and Xiangke Wang: A robust object recognition method for soccer robots. Accepted by the 7th World Congress on Intelligent Control and Automation (WCICA 08), Chongqing, China, June 2008.
5. Fei Liu, Huimin Lu and Zhiqiang Zheng: A Modified Color Look-Up Table Segmentation Method for Robot Soccer. 4th Latin American Robotics Symposium / IX Congreso Mexicano de Robotica (4th IEEE LARS/COMRob 07), Monterrey, Mexico, November 2007.
6. A. Merke, S. Welker, and M. Riedmiller: Line based robot localization under natural light conditions. In ECAI 2004 Workshop on Agents in Dynamic and Real Time Environments, 2004.
7. Frank Dellaert, Dieter Fox, Wolfram Burgard and Sebastian Thrun: Monte Carlo localization for mobile robots. IEEE International Conference on Robotics and Automation (ICRA 99), May 1999.
8. Martin Lauer, Sascha Lange, and Martin Riedmiller: Calculating the perfect match: An efficient and accurate approach for robot self-localization. In A. Bredenfeld, A. Jacoff, I. Noda and Y. Takahashi, eds.: RoboCup 2005: Robot Soccer World Cup IX, LNCS, Springer-Verlag, 2006.
9. Martin Lauer, Sascha Lange, and Martin Riedmiller: Modeling moving objects in a dynamically changing robot application. In KI 2005: Advances in Artificial Intelligence, pages 291-303, 2005.
10. Xiucai Ji, Lin Liu, and Zhiqiang Zheng: A Modular Hierarchical Architecture for Autonomous Robots Based on Task-Driven Behaviors. International Conference on Sensing, Computing and Automation, Chongqing, China, May 2006.