NTU Robot PAL 2009 Team Report

Chieh-Chih Wang, Shao-Chen Wang, Hsiao-Chieh Yen, and Chun-Hua Chang
The Robot Perception and Learning Laboratory
Department of Computer Science and Information Engineering
Graduate Institute of Networking and Multimedia
National Taiwan University, Taipei, Taiwan
bobwang@ntu.edu.tw, {neant,jimmy,alan}@pal.csie.ntu.edu.tw
http://pal.csie.ntu.edu.tw/

1 Introduction

We were the first team from Taiwan to participate in the RoboCup Standard Platform League (SPL). Given limited time and resources, we formed a small team of one faculty member and two students, participated in the SPL for the first time with the Nao robots at RoboCup 2009, and finished among the top eight teams. This document reports on Team NTU Robot PAL in support of our application to participate in the RoboCup 2010 SPL. Starting from the 2008 team report and code release of the B-Human team [1], we became familiar with the Nao robots and incrementally constructed our software system. New components such as body orientation estimation, goal detection, and modified walking parameters were added to the existing B-Human architecture to meet the new requirements of RoboCup 2009. In addition to the perception and action modules used at RoboCup 2009, our ongoing efforts are briefly described in this report.

2 The Team

The leader of Team NTU Robot PAL is Chieh-Chih (Bob) Wang, who earned his Ph.D. in robotics from Carnegie Mellon University in 2004 and established the Robot Perception and Learning (PAL) Laboratory at National Taiwan University (NTU) in 2005. Currently, seven Ph.D. students, six master's students, two full-time research assistants, and two undergraduate students are working on problems of robot perception and learning with Prof. Wang.

Our scientific interests are driven by the desire to build intelligent robots and computers capable of serving people more efficiently than equivalent manned systems in a wide variety of dynamic and unstructured environments. We believe that perception and learning are two of the most critical capabilities for achieving these goals. Our work on simultaneous localization, mapping and moving object tracking (SLAMMOT) [2, 3] provides a foundation for robots to localize themselves and to detect and track teammates and opponents in dynamic environments. We are currently working on monocular SLAMMOT and multi-robot SLAMMOT, which could be applied directly to the robots' cameras in this competition. Our recent work on interacting object tracking [4, 5] could provide a means to recognize interactions among robots and higher-level game strategies. Based on our recent work on probabilistic structure from sound using uncalibrated microphones [6], the robots' microphones could be used not only for communication but also for localization. Our paper [7] presents the first work to explicitly deal with laser scanner failures caused by mirrors and windows.

The RoboCup Standard Platform League provides an excellent scenario for us to exploit and explore robot perception and learning. Because the platform is standardized, we can focus mainly on theory and software. Without relatively accurate laser scanners, we would like to see what level of robot perception can be accomplished using the onboard cameras, sonar, and microphones. With fully integrated humanoid platforms, we would like to explore what level of robot learning can be accomplished for motion control, path planning, and game playing.

3 Perception

This section presents the perception system of Team NTU Robot PAL. Our implementation follows the framework of the B-Human system [1]: hardware-related functions such as sensor reading, and raw-data processing functions such as image processing, color segmentation, and line detection were used directly, while the principal components of our perception system were designed by ourselves. Section 3.1 introduces the body orientation estimation module, which estimates the pitch and roll angles of the robot. Section 3.2 describes the goal detection module and Section 3.3 the ball detection module. Section 3.4 explains how we played without global localization, and Section 3.5 summarizes the overall workflow of the proposed perception system.

3.1 Body Orientation Estimation

The pose of the robot's head is critical for localizing field lines, goals, and the ball. Given the location and orientation of the robot, and assuming that the field lines and the ball lie on the ground, their positions can be determined accordingly. This approach is efficient because the original three-dimensional localization problem is reduced to a two-dimensional one; however, it requires a body orientation estimate of sufficient accuracy. Although the orientation of the robot could be estimated using the onboard accelerometer alone, its readings are not stable enough even when the robot is static. We therefore apply a Kalman filter that combines accelerometer and gyroscope data to smooth the estimates: the body orientation is predicted using the gyroscope data, updated using the accelerometer data, and the resulting reliable estimates are reported.
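The report gives no equations for this filter, so the following is only a minimal sketch of the predict-with-gyroscope, update-with-accelerometer cycle described above: a one-axis filter with a gyro-bias state, applied independently to pitch and roll. The noise parameters are illustrative placeholders, not values from the team's implementation.

```python
import numpy as np

class OrientationFilter:
    """1-D Kalman filter for one body angle (pitch or roll).

    State: [angle, gyro_bias]. The angle is predicted by integrating
    the bias-corrected gyro rate and updated with the angle implied
    by the accelerometer's gravity direction.
    """

    def __init__(self, q_angle=1e-3, q_bias=1e-5, r_accel=0.05):
        self.x = np.zeros(2)                  # [angle (rad), gyro bias (rad/s)]
        self.P = np.eye(2)                    # state covariance
        self.Q = np.diag([q_angle, q_bias])   # process noise (per second)
        self.R = r_accel                      # accelerometer measurement noise

    def predict(self, gyro_rate, dt):
        # Integrate the bias-corrected gyro rate.
        self.x[0] += (gyro_rate - self.x[1]) * dt
        F = np.array([[1.0, -dt], [0.0, 1.0]])
        self.P = F @ self.P @ F.T + self.Q * dt

    def update(self, accel_angle):
        # accel_angle: e.g. math.atan2(acc_y, acc_z) for roll when static.
        H = np.array([1.0, 0.0])
        innovation = accel_angle - H @ self.x
        S = H @ self.P @ H + self.R
        K = self.P @ H / S                    # Kalman gain
        self.x += K * innovation
        self.P = (np.eye(2) - np.outer(K, H)) @ self.P
```

Two instances of this filter, one for pitch and one for roll, reproduce the smoothing behavior described above.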

3.2 Goal Detection

The ability to determine the position of the goal is critical for the robot to take sensible actions such as kicking and defending. Because the colors of the goal posts are fixed by the RoboCup SPL rules, we designed our goal detection module around color information. The module first extracts blue or yellow line segments from the image [1]; the extracted segments are then classified as vertical or horizontal, and in the last step vertical and horizontal goal posts are detected by grouping neighboring segments of the same type. The leftmost and rightmost angles of the goal with respect to the robot are estimated using a particle filter [1] with the goal post detections as measurements, and based on this direction information an explicit action, such as the kicking direction, can be determined. The detected vertical goal posts and the inferred leftmost and rightmost angles are illustrated in Figure 1.

Fig. 1. Goal detection
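As a simplified sketch of the classify-and-group step above (the segment representation, angle tolerance, and distance threshold are our own illustrative assumptions, not values from the report):

```python
import math
from dataclasses import dataclass

@dataclass
class Segment:
    x0: float
    y0: float
    x1: float
    y1: float  # endpoints in image coordinates

def orientation(seg, tol_deg=20.0):
    """Classify a colored segment as 'vertical' or 'horizontal'."""
    angle = math.atan2(seg.y1 - seg.y0, seg.x1 - seg.x0)
    is_vertical = abs(abs(angle) - math.pi / 2.0) < math.radians(tol_deg)
    return "vertical" if is_vertical else "horizontal"

def midpoint(seg):
    return ((seg.x0 + seg.x1) / 2.0, (seg.y0 + seg.y1) / 2.0)

def group_posts(segments, max_gap_px=10.0):
    """Greedily group neighboring segments of the same orientation into
    candidate goal posts (vertical) and crossbars (horizontal)."""
    groups = []  # list of (orientation, [segments])
    for seg in segments:
        kind = orientation(seg)
        for group_kind, members in groups:
            if group_kind == kind and any(
                    math.dist(midpoint(seg), midpoint(m)) < max_gap_px
                    for m in members):
                members.append(seg)
                break
        else:
            groups.append((kind, [seg]))
    return groups
```

Each resulting group is a goal post hypothesis that can then be passed as a measurement to the particle filter mentioned above.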

3.3 Ball Detection

In addition to the position of the goal, the position of the ball is critical for scoring. Although the color of the ball is distinctive enough on the field for a color-based detector, objects outside the field, e.g. the clothes of surrounding people, may have similar colors, and such dynamic background objects can degrade the performance of the ball detector. Additional spatial constraints are therefore designed to remove false hypotheses in which the ball lies outside the field. In practice, after segmenting the orange regions out of the image [1], the radii of the candidate balls and the distances between the candidates and the robot are computed. A candidate is considered valid only if its radius is within a threshold of the ideal radius the ball would have at the estimated distance. After removing the false hypotheses that are inconsistent with this constraint, a more reliable ball position is estimated using the ball estimation function provided by the B-Human system [1].
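A minimal sketch of that consistency test, assuming a simple pinhole camera model; the ball radius, focal length, and tolerance below are illustrative placeholders rather than values from the team's implementation or the 2009 rules.

```python
BALL_RADIUS_M = 0.043      # approximate SPL ball radius; illustrative
FOCAL_LENGTH_PX = 385.0    # camera focal length in pixels; illustrative

def expected_radius_px(distance_m):
    """Image radius a real ball would have at the given distance
    under a pinhole camera model."""
    return FOCAL_LENGTH_PX * BALL_RADIUS_M / distance_m

def is_valid_candidate(radius_px, distance_m, tolerance=0.35):
    """Accept a candidate only if its measured image radius is within a
    relative tolerance of the radius expected at its estimated distance."""
    expected = expected_radius_px(distance_m)
    return abs(radius_px - expected) <= tolerance * expected

# An orange blob with a 10 px radius at an estimated distance of 1.5 m:
print(is_valid_candidate(radius_px=10.0, distance_m=1.5))  # True
```

A candidate whose apparent size and estimated distance disagree, such as an orange shirt in the crowd, fails this test and is discarded.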

3.4 Gaming without Global Localization

In the early stages of developing the robot localization module, we applied the Monte Carlo localization algorithm to estimate the robot pose: the prediction step used the motion model provided by the walking engine (see Section 4.1), and the update step used results from the proposed perception modules, including the extracted field lines [1] and the detected goal direction. However, due to insufficient time, this algorithm was not yet working during the competition in Graz, so an alternative approach was designed to localize the robots locally with respect to the goal. Given a goal detection result, the robot pose with respect to the goal can be estimated directly (a geometric sketch of this computation follows Section 3.5). Although this local localization approach is simple, it efficiently infers the relations between the goal, the robot, and the ball, which in our practical experiments proved sufficient for the reactive behavior module to decide on an acceptable action.

3.5 Summary

To summarize our perception system: the pitch and roll angles of the robot are first estimated by the body orientation estimation module. Based on these angles, the goal direction is estimated by the goal detection module and the ball position by the ball detection module. In the last step, the relative relations between the goal, the ball, and the robot are estimated and finally fed into the action module.
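The report does not spell out the geometry behind the local localization of Section 3.4. One plausible reconstruction, sketched under our own assumptions, recovers the robot pose in a goal-centered frame from the bearings and ground-plane distances of the two goal post feet; the function name and frame conventions are ours, not the team's.

```python
import math

def pose_relative_to_goal(d_left, b_left, d_right, b_right):
    """Estimate the robot pose in a goal-centered frame.

    d_*: ground-plane distances to the left/right post feet;
    b_*: bearings of the posts in the robot frame (radians, left positive).
    The goal frame is centered at the goal, with the y-axis running from
    the right post to the left post. Returns (x, y, heading).
    """
    # Post positions in the robot frame.
    lx, ly = d_left * math.cos(b_left), d_left * math.sin(b_left)
    rx, ry = d_right * math.cos(b_right), d_right * math.sin(b_right)
    # Angle of the goal line (right post to left post) in the robot frame.
    phi = math.atan2(ly - ry, lx - rx)
    # The robot's heading (+x of its own frame) expressed in the goal frame.
    heading = math.pi / 2.0 - phi
    # Goal center in the robot frame, then the robot in the goal frame.
    cx, cy = (lx + rx) / 2.0, (ly + ry) / 2.0
    x = -(cx * math.cos(heading) - cy * math.sin(heading))
    y = -(cx * math.sin(heading) + cy * math.cos(heading))
    return x, y, heading

# Robot 2 m in front of the goal center, facing it squarely:
print(pose_relative_to_goal(2.12, 0.34, 2.12, -0.34))  # close to (-2, 0, 0)
```

These relative coordinates are exactly the quantities the reactive behavior module of Section 4.3 consumes.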

4 Action

The action system of Team NTU Robot PAL is described in this section. All of our motion modules follow the architecture of the B-Human team [1]. Section 4.1 describes our modifications to improve the speed and stability of walking. The key frames used to perform the kicking motion are addressed in Section 4.2. Finally, Section 4.3 describes the proposed reactive behavior module, which decides what should be done given the estimated ball and goal locations.

4.1 Walking

The walking engine used in our system follows the approach of the B-Human team. The engine generates a foot trajectory from walking commands such as walking speed and step height, and computes the angle of each joint via inverse kinematics. To achieve fast walking with few falls, which increases the possibility of winning, the predefined parameters were modified following the pattern proposed in [8]. The resulting walking speed of our robot is around 12 centimeters per second, and the walking performance is satisfactory.

4.2 Kicking

Kicking is probably the most essential motion for teams applying a reactive approach: faster and more stable kicking raises the probability of scoring. The kicking motion is designed to be as robust as possible, so that the robot does not fall down or enter an unstable state. We designed several key poses in which the robot is most likely to remain stable, and the kicking motion is generated by interpolating the joint angles between adjacent key poses, as shown in Figure 2 (a minimal interpolation sketch follows this subsection).

Fig. 2. Key Poses of Kicking Motion

At this year's competition we observed more varied kicking motions, such as side kicks, being performed. This flexibility increased the success rate of scoring and passing, so more kicking motions will be designed accordingly.
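A minimal sketch of the key-pose interpolation described in Section 4.2. The joint vectors and durations below are invented placeholders standing in for the team's actual key frames; a real implementation would interpolate all leg joints of the Nao.

```python
import numpy as np

# Hypothetical key poses: a vector of joint angles (radians) paired with
# the time (seconds) allotted to reach that pose from the previous one.
KEY_POSES = [
    (np.array([0.00, -0.40, 0.95, -0.55]), 0.0),  # stand
    (np.array([0.10, -0.60, 1.10, -0.50]), 0.4),  # shift weight
    (np.array([0.10, -0.20, 0.40, -0.20]), 0.3),  # wind up
    (np.array([0.10, -0.80, 1.20, -0.40]), 0.1),  # strike (fast segment)
    (np.array([0.00, -0.40, 0.95, -0.55]), 0.5),  # recover to stand
]

def kick_trajectory(dt=0.02):
    """Yield a joint-angle vector per control tick by linearly
    interpolating between adjacent key poses."""
    for (prev, _), (target, duration) in zip(KEY_POSES, KEY_POSES[1:]):
        steps = max(1, int(round(duration / dt)))
        for i in range(1, steps + 1):
            yield prev + (target - prev) * (i / steps)

for angles in kick_trajectory():
    pass  # each `angles` vector would be sent to the joint controllers
```

Linear blends of stable key poses are not themselves guaranteed stable, which is one reason the key poses and segment timings have to be chosen conservatively.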

4.3 Behavior

The two strikers fielded in the 2009 competition executed exactly the same behavior. Incorporating the information from the proposed perception modules, the strategy is purely reactive: the motion is decided using only the relative goal and ball locations. The strategy is quite brief and is shown in Figure 3.

Fig. 3. Strategy diagram

The initial state of a striker is search for ball, in which the robot turns its head to look for the ball. Once the ball is detected, the state changes to go to ball and the striker approaches the ball immediately. When the robot is within a small distance of the ball, the state changes to search for goal, in which the striker rotates around the ball while searching for the goal. This state terminates once both the ball and the goal are in front of the robot; the striker then steps up to the ball and kicks. Whenever the ball or the goal has not been seen for several seconds, the state falls back to search for ball or search for goal, respectively; this mechanism lets our robots handle lost-ball and lost-goal situations. The strategy proved effective: the team scored three goals and one penalty shot in the competition, and two of the goals came in challenging situations where the striker decided quickly and did not hesitate to kick.
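The state machine of Figure 3 fits in a few lines of code. The sketch below is our own summary of the transitions described above; the distance threshold, alignment tolerance, and timeout are illustrative placeholders.

```python
from enum import Enum, auto

class State(Enum):
    SEARCH_FOR_BALL = auto()
    GO_TO_BALL = auto()
    SEARCH_FOR_GOAL = auto()
    KICK = auto()

BALL_NEAR_M = 0.25     # "near the ball" distance; illustrative
ALIGNED_RAD = 0.20     # "in front of the robot" tolerance; illustrative
LOST_TIMEOUT_S = 3.0   # fall back after this long without a sighting

def next_state(state, ball, goal, ball_age_s, goal_age_s):
    """One tick of the striker state machine.

    ball/goal: (distance, bearing) in the robot frame, or None when the
    object is not currently detected; *_age_s: seconds since last seen.
    """
    if ball_age_s > LOST_TIMEOUT_S:
        return State.SEARCH_FOR_BALL                  # lost the ball
    if state == State.SEARCH_FOR_BALL and ball is not None:
        return State.GO_TO_BALL
    if state == State.GO_TO_BALL and ball is not None and ball[0] < BALL_NEAR_M:
        return State.SEARCH_FOR_GOAL                  # rotate around the ball
    if state == State.SEARCH_FOR_GOAL and goal_age_s > LOST_TIMEOUT_S:
        return State.SEARCH_FOR_GOAL                  # keep circling and looking
    if (state == State.SEARCH_FOR_GOAL and ball is not None and goal is not None
            and abs(ball[1]) < ALIGNED_RAD and abs(goal[1]) < ALIGNED_RAD):
        return State.KICK                             # ball and goal both ahead
    return state
```

Each state also drives a motion command (head sweep, walk toward the ball, sidestep around it, kick), which is omitted here.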

5 Ongoing Work

Based on our accomplishments in 2009, we are currently working on the following tasks.

5.1 Robot Detection and Recognition

Collaboration could be one of the most important factors in making the team more intelligent and efficient, and the ability to detect and recognize robots, whether teammates or opponents, is a critical skill for achieving it. With robot detection and recognition in place, all related information can be fused to improve localization performance with only limited communication. The practical design of our robot detection and recognition module is as follows. In the training stage, speeded-up robust features (SURF) and vector-quantized color histograms are extracted from the training images, and a support vector machine with a Gaussian kernel is trained as a binary classifier. In the testing stage, the outputs of the SVM classifier are fused with tracking and localization results from teammates to improve the robustness of the system (a sketch of the classifier stage follows Section 5.2).

5.2 Collaborative Localization

Collaboration is also a way to enhance perception. Consider a robot on the field that cannot see any feature: its estimate uncertainty grows steadily because its sensors provide no information about the world. The problem can be solved with information from other robots, since accurate estimates with reduced uncertainty can be obtained by properly combining the information from all of them. A distributed localization module [9] is therefore being implemented to merge the information from the different robots; the estimator is decomposed into a number of smaller communicating filters that run individually (a sketch of a basic fusion step follows the classifier sketch below).
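For the training stage of Section 5.1, a sketch using scikit-learn's SVC with an RBF (Gaussian) kernel as a stand-in for whatever SVM implementation the team used; the feature extraction itself is abstracted into a stub because the report only names its ingredients.

```python
import numpy as np
from sklearn.svm import SVC

def extract_features(image):
    """Stub: per the report, this would concatenate a bag-of-words
    encoding of SURF descriptors with a vector-quantized color
    histogram. The details are not given, so it is left abstract."""
    raise NotImplementedError

def train_robot_classifier(features, labels):
    """Train a binary robot / not-robot classifier with a Gaussian kernel.

    features: (n_samples, n_features) array; labels: 0/1 array.
    """
    clf = SVC(kernel="rbf", gamma="scale", probability=True)
    clf.fit(features, labels)
    return clf

# At test time, the classifier's probability output would be fused with
# teammates' tracking and localization results, as described above:
# clf = train_robot_classifier(X_train, y_train)
# p_robot = clf.predict_proba(x_new.reshape(1, -1))[0, 1]
```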

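For Section 5.2, the simplest illustration of why combining estimates helps is the information-form fusion of two Gaussian estimates of the same quantity. This naive version assumes the estimates are independent; handling the cross-correlations that arise in practice is precisely what the decentralized estimator of [9] is designed for.

```python
import numpy as np

def fuse_gaussian_estimates(mu_a, cov_a, mu_b, cov_b):
    """Fuse two independent Gaussian estimates of the same quantity.

    The fused covariance is never larger than either input, which is
    the intuition behind collaborative localization.
    """
    info_a, info_b = np.linalg.inv(cov_a), np.linalg.inv(cov_b)
    cov = np.linalg.inv(info_a + info_b)
    mu = cov @ (info_a @ mu_a + info_b @ mu_b)
    return mu, cov

# A robot's own uncertain ball estimate fused with a teammate's:
mu, cov = fuse_gaussian_estimates(
    np.array([2.0, 0.5]), np.diag([0.4, 0.4]),
    np.array([2.3, 0.4]), np.diag([0.2, 0.2]))
print(mu, np.diag(cov))  # mean leans toward the more certain estimate
```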
5.3 Active Perception

With the existing passive perception framework, the robots can retrieve rich and stable information about the environment. It is nevertheless possible for parts of the environment to lack coverage in unavoidable situations: for instance, when all the robots on the field look in the same direction, the states of the uncovered regions become more and more uncertain. Since the state estimation algorithm only reports states based on collected sensory data, a passive perception system cannot solve this insufficient-data problem. We are therefore interested in enhancing the current system with the ability to actively control the robots to collect useful data. More specifically, when the state of the environment becomes uncertain or ambiguous, the perception system will actively suggest possible actions to the action module for reducing that uncertainty and ambiguity. The strategy decision module is designed to consider various kinds of actions, including moving to defend or attack and gazing to reduce uncertainty as described above. With a properly designed reward function, an action can then be chosen to maximize the probability of winning given the information gathered so far.

6 Conclusion

This year was our first participation in the RoboCup SPL with the Nao robots, and the simple reactive system was a consequence of limited time and our unfamiliarity with the hardware. To improve overall performance, we are enhancing the action module, for example by designing more effective and robust walking and kicking patterns. Meanwhile, we are strengthening the perception capability: a distributed multi-robot localization and ball tracking system is under construction, and we are also interested in enabling the system to model an uncertain environment consisting of both static and dynamic parts.

7 Acknowledgements

We gratefully acknowledge the support of the Excellent Research Projects of National Taiwan University; the Department of Computer Science and Information Engineering at National Taiwan University; and Taiwan Compal Communications, MSI, and Intel. In addition, we thank the B-Human team for releasing their code, and the members of the Robot Perception and Learning Laboratory for their prompt and effective assistance.

References

1. T. Röfer, T. Laue, A. Burchardt, E. Damrose, K. Gillmann, C. Graf, T. J. de Haas, A. Härtl, A. Rieskamp, A. Schreck, and J.-H. Worch, "B-Human team report and code release 2008," Department of Computer Science of the University of Bremen and the DFKI research area Safe and Secure Cognitive Systems, Tech. Rep., 2008. [Online]. Available: http://www.b-human.de/download.php?file=coderelease08_doc
2. C.-C. Wang, C. Thorpe, and S. Thrun, "Online simultaneous localization and mapping with detection and tracking of moving objects: Theory and results from a ground vehicle in crowded urban areas," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Taipei, Taiwan, September 2003.
3. C.-C. Wang, C. Thorpe, S. Thrun, M. Hebert, and H. Durrant-Whyte, "Simultaneous localization, mapping and moving object tracking," The International Journal of Robotics Research, vol. 26, no. 9, pp. 889-916, September 2007.
4. C.-C. Wang, T.-C. Lo, and S.-W. Yang, "Interacting object tracking in crowded urban areas," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Roma, Italy, April 2007.
5. K.-W. Wan, C.-C. Wang, and T. T. Ton, "Weakly interacting object tracking in indoor environments," in IEEE International Conference on Advanced Robotics and its Social Impacts (ARSO), Taipei, Taiwan, August 2008.
6. C.-C. Wang, C.-H. Lin, and J.-S. Hu, "Probabilistic structure from sound," Advanced Robotics, vol. 23, no. 12-13, pp. 1687-1702, October 2009.
7. S.-W. Yang and C.-C. Wang, "Dealing with laser scanner failure: Mirrors and windows," in IEEE International Conference on Robotics and Automation (ICRA), Pasadena, California, May 2008.
8. S. Behnke, "Online trajectory generation for omnidirectional biped walking," in IEEE International Conference on Robotics and Automation (ICRA), Orlando, Florida, May 2006.
9. S. I. Roumeliotis and G. A. Bekey, "Distributed multi-robot localization," IEEE Transactions on Robotics and Automation, vol. 18, no. 5, pp. 781-795, October 2002.