UChile Robotics Team Team Description for RoboCup 2014

José Miguel Yáñez, Pablo Cano, Matías Mattamala, Pablo Saavedra, Matías Silva, Leonardo Leottau, Carlos Celemín, Yoshiro Tsutsumi, Pablo Miranda, and Javier Ruiz-del-Solar

Advanced Mining Technology Center (AMTC), Department of Electrical Engineering, Universidad de Chile, Av. Tupper 2007, Santiago, Chile
{uchilert}@amtc.uchile.cl
http://uchilert.amtc.cl

Abstract. This team description paper describes the organization, research focus, and ongoing work of the UChile Robotics Team entering the RoboCup Standard Platform League in 2014.

1 Introduction

UChile Robotics Team (UChileRT) is an effort of the Advanced Mining Technology Center and the Department of Electrical Engineering of the Universidad de Chile to foster research in mobile robotics. Since 2012, UChileRT has been carrying out a restructuring process in which several changes and improvements are being implemented. As a result, we have moved from the bottom positions of previous years into the top twelve teams at RoboCup 2013. To keep improving our game performance, several new developments and enhancements of our software are currently under way. In addition, two doctoral theses directly related to robot soccer are in progress.

2 Research Focus

2.1 Dribbling Engine

A methodology for learning the ball-dribbling behavior on biped humanoid robots was reported in [1]. It models the dribbling problem by splitting it into two sub-problems: alignment and ball-pushing. The alignment problem consists of controlling the position and orientation of the robot in order to obtain a proper alignment with the desired ball target. The ball-pushing problem consists of controlling the robot's speed in order to obtain, at the same time, a high ball speed and a low relative distance between the ball and the robot, i.e., both controllability and efficiency.
These ideas are implemented by three modules: (i) a Takagi-Sugeno-Kang fuzzy logic controller (TSK-FLC) that aligns the robot while approaching the ball, (ii) an automatic foot selector that chooses which foot will hit the ball, and (iii) a reinforcement learning (RL) based controller for the robot's speed while approaching and pushing the ball. The description of the defined behaviors uses the following variables: (vx, vy, vθ), the robot's linear and angular speeds; α, the robot-target angle; γ, the robot-ball angle; ρ, the robot-ball distance; and φ, the robot-ball-target complementary angle. These variables are shown in Fig. 1, where the desired target is located in the middle of the opponent goal; they are measured in a robot-centered reference system whose x axis always points forward.

Fig. 1. Variable definitions for the dribbling model.

The designed dribbling engine has been successfully tested and is currently included in our control architecture. It shows better performance than our previous RoboCup dribbling behavior; see our qualification video [2].

2.2 Localization Disambiguation

We have developed a system to resolve the self-localization ambiguity problem. For this task we use our goal identification module, which computes color histograms over the areas surrounding the goals. The purpose is to assist self-localization with the goal identification module through a prioritized observation model. The approach obtains four different 3-D histograms (Y, Cr, Cb channels) by sampling the pixels neighboring the detected goal and the region above the visual horizon, as shown in Fig. 2.
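The histogram extraction step can be sketched in a few lines of numpy. This is only an illustration: the 8-bin quantization and the way the sampled region is passed in are our own assumptions, not the team's exact implementation.

```python
import numpy as np

def ycrcb_histogram(region, bins=8):
    """Build a normalized 3-D histogram over the Y, Cr, Cb channels of an
    image region given as an (H, W, 3) uint8 array of sampled pixels."""
    pixels = region.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist / max(hist.sum(), 1)  # normalize so all bins sum to 1
```

In practice the input would be the pixels sampled around the detected goal posts and above the visual horizon, giving one histogram per sampled area.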
The implemented procedure is: (i) obtain reference histograms for both goals during the ready state; (ii) compute new histograms frame by frame and compare them with the references; (iii) evaluate a similarity function to identify which goal is being observed; (iv) update each reference histogram with a weighted average of the current one, to remain robust against environment changes in the areas surrounding the goals.
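Steps (ii)-(iv) can be sketched as follows; the histogram-intersection similarity and the update rate alpha are our illustrative choices, not necessarily the team's.

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity in [0, 1] between two normalized histograms."""
    return np.minimum(h1, h2).sum()

def identify_goal(current, ref_own, ref_opp):
    """Step (iii): decide which reference the current histogram matches."""
    s_own = histogram_intersection(current, ref_own)
    s_opp = histogram_intersection(current, ref_opp)
    return ('own', s_own) if s_own >= s_opp else ('opp', s_opp)

def update_reference(ref, current, alpha=0.1):
    """Step (iv): weighted update for robustness to lighting changes."""
    return (1.0 - alpha) * ref + alpha * current
```

A small alpha keeps the references stable while still tracking slow changes in the background around each goal.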

Fig. 2. Color histograms over the areas surrounding the goals.

This method has been tested on real robots. Some partial results are shown in the qualification video [2], which compares the behavior of the particle filter with and without histograms. The results show convergence of the particles towards the real robot pose, superseding the mirror-pose techniques we used previously. To avoid wrong reference histograms or a loss of certainty over time, it is appropriate to reset the reference histograms whenever the robot has a good initial pose hypothesis (e.g., after being penalized). Currently, this localization disambiguation method is only applied when the robot is near the center circle of the field. To extend it, more reference histograms have to be obtained from different viewing angles and robot poses on the field.

2.3 Robot Perceptor

The robot detection module has three stages: (i) identify color regions in the image that are potential robots, using image segmentation; (ii) evaluate each region against the others with a set of rules based on proportion and aspect ratio, for filtering purposes; in this same stage, the detected robot's pose is computed using geometric projections; (iii) evaluate a second set of heuristic rules to ensure the validity of the detection.

Robot detection has been successfully tested and is capable of detecting robots seen from the front at distances of up to 5 m. Results are shown in our qualification video [2] and in Fig. 3. Due to image segmentation imperfections, there are detection problems at close range. Moreover, detection depends strongly on the orientation, in the image, of the robot being detected. Improving on these problems is an immediate pending task. In addition, we still need to evaluate the robot perceptor with jersey shirts carrying sponsor or team logos, as allowed by the 2014 SPL rule book.

Fig. 3. A robot perceptor capture; green boxes are the detected jersey shirts and red points their projections onto the floor.

2.4 Bezier-Curve-Based Path Planning

We are developing a Bezier-curve-based path planner [3]. The aim is to compute a curved path that exploits translations along the x axis, which are much faster than translations along the y axis, in order to minimize the robot's translation time, taking into account the initial robot pose, the current obstacle map, and the desired robot pose. At present, the robot's current pose and target are evaluated frame by frame in order to compute an optimal curve. Then, using the curvature at the initial point of the path and the instantaneous velocity, the ẋ and θ speeds are computed. Finally, the robot follows a curved trajectory that maintains the desired position and orientation.

This method has been successfully tested on real robots. The obtained curved paths reduce the robot's translation time compared with our previous approach. Some results obtained with a second-degree Bezier curve (i.e., one traced with two control points) are shown in the qualification video [2].

Fig. 4. A Bezier curve example drawn as the red line. The target pose is the yellow arrow, and the black lines indicate the two control points.

We intend to include the obstacle map in the curve generation. To do so, a Bezier curve of dynamic degree will be used, with the degree determined by the number of obstacles present; the new control points will be set according to the positions of the obstacles. We also plan to combine the curve generation with a path planning method in order to follow an optimal path.

2.5 Active Vision

We are merging the active vision module proposed in [4-6] into our new control framework. The algorithm from our previous work uses a UKF step with a probabilistic approach to generate simulated observations and to decide which objects are most important using a value function. We are now improving the observation model of the algorithm by including an artificial neural network that modulates the simulated observations, yielding a model better fitted to the real world.

We have explored new approaches to the active vision problem [4-6] and have been working on a new module able to merge both static and dynamic observations on the field. The static information consists of preprocessed data about the most important landmarks on the field, such as the goals, corners, and center circle, whereas the dynamic information relates to mobile obstacles. Our approach represents both kinds of information as Gaussians in a pan-tilt space and selects the best position to move the robot's head (see Fig. 5), in order to maintain good localization in spite of the robot's pose and the dynamic environment. We have already implemented a preliminary version of these ideas in our code and expect to test it in a real environment in July, so as to present the results formally next year.

2.6 Acoustic Communications

In order to depend less on wireless communications, we intend to develop an acoustic communication module that sends and receives short codified messages, modulated over a dynamic bandwidth according to the environment noise. Currently we are able to receive and record audio signals by directly using the Advanced Linux Sound Architecture (ALSA) module integrated with the NAO's kernel.
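Since the module is still at the audio-capture stage, the following is only an illustrative sketch of the modulation idea: a minimal binary FSK encoder/decoder in numpy. The tone frequencies, bit duration, and sample rate are arbitrary choices for the example, not the team's design.

```python
import numpy as np

RATE = 16000             # samples per second
BIT_LEN = 800            # samples per bit (50 ms)
F0, F1 = 1000.0, 2000.0  # tone frequencies for bits 0 and 1 (Hz)

def modulate(bits):
    """Encode a bit sequence as a concatenation of pure tones."""
    t = np.arange(BIT_LEN) / RATE
    tones = {0: np.sin(2 * np.pi * F0 * t), 1: np.sin(2 * np.pi * F1 * t)}
    return np.concatenate([tones[b] for b in bits])

def demodulate(signal):
    """Decode by comparing per-bit correlation against the two tones."""
    t = np.arange(BIT_LEN) / RATE
    bits = []
    for i in range(0, len(signal), BIT_LEN):
        chunk = signal[i:i + BIT_LEN]
        e0 = abs(np.dot(chunk, np.exp(-2j * np.pi * F0 * t[:len(chunk)])))
        e1 = abs(np.dot(chunk, np.exp(-2j * np.pi * F1 * t[:len(chunk)])))
        bits.append(0 if e0 > e1 else 1)
    return bits
```

A real module would additionally adapt the band to the measured noise spectrum and add framing and error detection around the raw bits.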

Fig. 5. A preliminary image from the active vision module: the green points indicate the grid used for the sampling step, whereas the blue spheres indicate target points. Big points represent the best targets for localization.

2.7 Control Architecture

In accordance with the terms of the B-Human Code Release license, we announced before RoboCup 2013 that UChile Robotics Team is using the B-Human 2012 control framework. Besides the modules mentioned above, our main modifications to this framework are the following:

1. Simultaneous access to both cameras: An extra vision thread has been added to process images from the secondary camera in parallel with the main one. This secondary thread is invoked on demand instead of switching cameras.
2. Decision making: All individual behaviors and team coordination. We currently develop our behaviors in the XABSL language [7]. Since the B-Human 2012 code release replaced XABSL with SMBE, we had to fully integrate the XABSL tool into the B-Human 2012 source code.
3. CMake project: We created a custom CMake integration that generates an alternative multi-platform project, making it easier to include our modifications and future module additions.
4. SimRobot customization: Background pictures have been added behind each goal in order to develop and debug our goal disambiguation method. We are working on SimRobot [8] to fully support the secondary camera process.

3 Current Research Lines

3.1 Reinforcement Learning

This line of work is part of the doctoral thesis of one of the team members. We propose to develop a methodology for implementing a decision-making system, defining a state space according to specific game configurations, taking into account positions and probable team actions, and training on recurrent and relevant game situations. This work includes three main stages: (i) the implementation or learning of tasks such as dribbling, intercepting the ball, kicking, moving to strategic positions, and other similar basic behaviors; (ii) the identification of specific game settings and of recurrent and relevant playing situations; (iii) the reinforcement learning of high-level behaviors based on a state-space transformation according to a specific game setting.

3.2 Humanoid Biped Gait

This line of work is also part of the doctoral thesis of one of the team members. We propose to develop a methodology for designing a humanoid biped gait based on Dynamic Movement Primitives (DMPs) [9, 10], yielding a robust walk that adapts to certain physical conditions of the robot (gear wear, encoder offsets, etc.). Trajectory generation is performed using DMPs instead of analytical models based on the inverted pendulum or the ZMP, in order to avoid their extensive parametrization. The base leg trajectories are learned by imitation from other, already implemented gaits and then optimized with reinforcement learning. Because of this initial knowledge from imitation, the number of training epochs can be reduced, which makes it possible to run the reinforcement learning process on a real robot while favoring exploitation over exploration.

3.3 Self-Localization Supported by Natural Landmarks

As part of the advised work of our graduate and undergraduate students, we are working on methods to support the localization module. We are currently using and evaluating natural landmarks based on SIFT, SURF, and Fern descriptors, in addition to faster methods such as color and LBP histograms.

Fig. 6. Some SURF descriptors taken from the upper camera using the OpenSurf library.
Some SURF descriptors obtained using the extra vision thread mentioned in Sect. 2.7 to drive the NAO's upper camera are shown in Fig. 6.
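As an illustration of the faster descriptor family, a basic 8-neighbour LBP histogram can be computed as follows. This is the textbook LBP operator; the exact variant the team evaluates may differ.

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 8-neighbour local binary pattern histogram of a 2-D grayscale
    image (uint8). Returns 256 normalized bins over the interior pixels."""
    c = gray[1:-1, 1:-1]
    # The eight neighbours, ordered clockwise from the top-left corner.
    shifts = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = gray[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        codes |= (nb >= c).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / max(codes.size, 1)
```

Because each pixel's code depends only on sign comparisons with its neighbours, the histogram is cheap to compute and fairly robust to monotonic lighting changes, which is what makes it attractive next to SIFT/SURF.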

4 Past Relevant Work and Scientific Publications

UChileRT has been involved in RoboCup competitions since 2003 in different leagues: Four-Legged (2003-2007), @Home (2007-2012), Humanoid (2007-2009), and the Standard Platform League (SPL) (2008-2012). UChile team members have served the RoboCup organization in many ways: Javier Ruiz-del-Solar was the organizing chair of the Four-Legged competition in 2007, a TC member of the Four-Legged league in 2007, a TC member of the @Home league in 2009, an Exec member of the @Home league since 2009, and co-chair of the RoboCup 2010 Symposium. Among the main scientific achievements of the group are three important RoboCup awards: the RoboCup 2004 Engineering Challenge Award and the RoboCup 2007 and 2008 @Home Innovation Awards. UChile team members have published a total of 30 papers in RoboCup Symposia (see Table 1), 20 of them directly related to robot soccer, in addition to many papers in international journals and conferences. A brief description of some contributions and past relevant work is given below.

Table 1. Papers presented at the RoboCup Symposia by year

Year    2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014
Oral       1    2    1    1    2    3    2    2    -    -    1    1
Poster     1    1    1    -    3    2    -    -    2    1    2    1

4.1 Open Source Contributions

Tutorial - ROS cross-compiling and installation for the NAO V4: UChileRT has contributed to the ROS community a detailed tutorial on building, installing, and running ROS natively on the NAO V4 [11]. To the best of our knowledge, this was the first step-by-step guide to build, install, and run ROS embedded on the Atom CPU of the latest NAO V4 robot.

ROS node - motion module: UChile Robotics Team currently uses the B-Human walking and motion engine [12]. That motion module has been isolated, integrated as a ROS node, and shared as open source code; it is described in [13].
4.2 Perception

Vision system: UChileRT has developed an automatic on-line color segmentation technique that makes extensive use of the spatial relationships between color classes in the color space [14]. Using class-relative color spaces, the system is able to remap color classes from the already trained ones. To achieve this, the system uses feedback from the objects detected with the remapped (or partially trained) classes. The system can generate a complete color look-up table from scratch and adapt itself quickly to severe lighting changes.

In addition, the vision system incorporates a spatio-temporal context integration module that increases its robustness [15, 16]. The module computes the coherence of a given detection (object candidate) with other simultaneous detections, with objects detected in the past, and with the physical context. A Bayesian model integrates all these information sources.

4.3 World Modelling

Self-localization: UChileRT has improved classical self-localization approaches by estimating, independently and in addition to the robot's pose, the poses of the static and mobile objects of interest [17]. This allows using, in addition to fixed landmarks, dynamic landmarks such as temporally local objects (mobile objects) and spatially local objects (view-dependent objects or textures).

Ground truth: UChileRT has developed a portable laser-based ground-truth system [18]. The system can easily be moved from one environment to another and requires almost no calibration.

4.4 Decision Making

Obstacle avoidance: An obstacle avoidance engine based on ultrasonic sensors (US), an arm contact detector, and the feet bumpers was developed and is still fully functional [19]. It uses the US operation modes in which the NAO transmits and receives at the same time (modes 12, 68, and 72 in the NAO's DCM decimal notation), enabling the writing/reading modules and turning sensors on and off as required. After a median filter, an offline-configurable obstacle grid is filled according to the distance and angle measurements obtained from the three available cones (left, right, and combined). The obstacle grid is complemented with readings from the arm contact detector and the feet bumpers. Some obstacle avoidances can be seen in our qualification video [2].
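A toy sketch of the grid-filling step is shown below; the sector geometry, range limit, and update rule are illustrative assumptions, not the NAO's actual sensor configuration.

```python
import numpy as np

def fill_obstacle_grid(readings, n_sectors=12, max_range=1.5):
    """Fill a polar obstacle grid from (distance, angle) measurements,
    e.g. median-filtered ultrasonic readings plus contact events mapped
    to a distance/angle. Each sector stores the closest obstacle seen;
    angles are in radians in the robot-centred frame, 0 = straight ahead."""
    grid = np.full(n_sectors, np.inf)  # inf means "no obstacle seen"
    sector_width = 2 * np.pi / n_sectors
    for dist, ang in readings:
        if dist >= max_range:
            continue  # beyond sensor range: treat as free space
        idx = int(((ang + np.pi) % (2 * np.pi)) / sector_width)
        grid[idx] = min(grid[idx], dist)
    return grid
```

Keeping only the minimum distance per sector is a conservative choice: the avoidance behavior then reacts to the nearest obstacle in each direction.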
Dynamic role assignment: UChileRT has developed a dynamic role assignment that allows the players to change their role within the formation depending on the situation. To accomplish this, the shared ball information is used to predict which robot needs the least time to reach the ball. That robot becomes the striker, responsible for taking all the actions necessary to score. There are also supporter, defender, and forward roles, which are assigned according to the robots' current positions. Finally, the goalkeeper stays in the goal unless it is the best candidate for striker, in which case it takes that role. The qualification video [2] shows our dynamic role assignment in some RoboCup 2013 games.
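The arbitration described above can be sketched as follows, with the time-to-reach estimate simplified to straight-line distance over a nominal walking speed; the names, the speed value, and the positional tie-breaking are illustrative only.

```python
import math

def assign_roles(robots, ball, speed=0.15):
    """Dynamic role assignment sketch. robots: {name: (x, y)} in field
    coordinates (own goal towards negative x); ball: (x, y); speed in m/s."""
    def time_to_ball(pos):
        return math.hypot(ball[0] - pos[0], ball[1] - pos[1]) / speed

    # The robot that can reach the shared ball first becomes the striker.
    striker = min(robots, key=lambda n: time_to_ball(robots[n]))
    roles = {striker: 'striker'}
    # Remaining robots get positional roles: the deepest one defends.
    rest = sorted((n for n in robots if n != striker),
                  key=lambda n: robots[n][0])
    for name, role in zip(rest, ('defender', 'supporter', 'forward')):
        roles[name] = role
    return roles
```

The real module also handles the goalkeeper exception and would need hysteresis so that roles do not oscillate when two robots are almost equidistant from the ball.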

Active vision: UChileRT has developed a task-oriented approach [4-6] to the active vision problem, focused on SPL games. The system tries to reduce the most relevant components of the uncertainty in the world model for the task the robot is currently performing. It is task-oriented in the sense that it explicitly considers a task-specific value function.

Acknowledgments. This research is partially supported by FONDECYT project 1130153.

References

1. L. Leottau, C. Celemín, and J. Ruiz-del-Solar, Ball Dribbling for Humanoid Biped Robots: A Reinforcement Learning and Fuzzy Control Approach, in RoboCup Symposium 2014, João Pessoa, Brazil, 2014.
2. UChile Robotics Team, UChile Team Qualification Video. https://www.youtube.com/watch?v=bi4vpbu7gte, 2013.
3. K. Jolly, R. Sreerama Kumar, and R. Vijayakumar, A Bezier curve based path planning in a multi-agent robot soccer system without violating the acceleration limits, Robotics and Autonomous Systems, vol. 57, no. 1, pp. 23-33, 2009.
4. P. Guerrero, J. Ruiz-del-Solar, and M. Romero, Explicitly Task Oriented Probabilistic Active Vision for a Mobile Robot, pp. 85-96, Berlin, Heidelberg: Springer-Verlag, 2009.
5. P. Guerrero, J. Ruiz-del-Solar, M. Romero, and S. Angulo, Task-Oriented Probabilistic Active Vision, Int. J. Humanoid Robotics, pp. 451-476, 2010.
6. J. Testart, J. Ruiz-del-Solar, R. Schulz, P. Guerrero, and R. Palma-Amestoy, A Real-Time Hybrid Architecture for Biped Humanoids with Active Vision Mechanisms, J. Intell. Robotics Syst., vol. 63, no. 2, pp. 233-255, 2011.
7. M. Lötzsch, J. Bach, H.-D. Burkhard, and M. Jüngel, Designing Agent Behavior with the Extensible Agent Behavior Specification Language XABSL, in RoboCup 2003: Robot Soccer World Cup VII (D. Polani, B. Browning, A. Bonarini, and K. Yoshida, eds.), vol. 3020 of Lecture Notes in Computer Science, pp. 114-124, Springer Berlin Heidelberg, 2004.
8. T. Laue, K. Spiess, and T. Röfer, SimRobot - A General Physical Robot Simulator and Its Application in RoboCup, in RoboCup 2005: Robot Soccer World Cup IX (A. Bredenfeld, A. Jacoff, I. Noda, and Y. Takahashi, eds.), pp. 173-183, Springer Berlin Heidelberg, 2006.
9. A. J. Ijspeert, J. Nakanishi, and S. Schaal, Learning Attractor Landscapes for Learning Motor Primitives, in Advances in Neural Information Processing Systems 15, pp. 1547-1554, MIT Press, 2002.
10. S. Schaal, Dynamic movement primitives - a framework for motor control in humans and humanoid robotics, Adaptive Motion of Animals and Machines, pp. 261-280, 2006.
11. L. Leottau, ROS Fuerte Cross-Compiling and Installation for the NAO V4. http://www.ros.org/wiki/nao/tutorials/cross-compiling NAO-V4, 2012.
12. C. Graf and T. Röfer, A Center of Mass Observing 3D-LIPM Gait for the RoboCup Standard Platform League Humanoid, in RoboCup 2011: Robot Soccer World Cup XV (T. Röfer, N. Mayer, J. Savage, and U. Saranlı, eds.), vol. 7416 of Lecture Notes in Computer Science, pp. 102-113, Springer Berlin Heidelberg, 2012.
13. L. Leottau, J. M. Yáñez, and J. Ruiz-del-Solar, Integration of the ROS Framework in Soccer Robotics: the NAO Case, in RoboCup 2013: Robot Soccer World Cup XVII Preproceedings, Eindhoven, The Netherlands, July 2013.
14. P. Guerrero, J. Ruiz-del-Solar, J. Fredes, and R. Palma-Amestoy, Automatic On-Line Color Calibration Using Class-Relative Color Spaces, in RoboCup 2007: Robot Soccer World Cup XI, July 9-10, 2007, Atlanta, GA, USA, pp. 246-253, 2007.
15. R. Palma-Amestoy, P. Guerrero, J. Ruiz-del-Solar, and C. Garretón, Bayesian Spatiotemporal Context Integration Sources in Robot Vision Systems, pp. 212-224, Berlin, Heidelberg: Springer-Verlag, 2009.
16. R. Palma-Amestoy, J. Ruiz-del-Solar, J. M. Yáñez, and P. Guerrero, Spatiotemporal Context Integration in Robot Vision, Int. J. Humanoid Robotics, vol. 7, no. 3, pp. 357-377, 2010.
17. P. Guerrero and J. Ruiz-del-Solar, Improving Robot Self-localization Using Landmarks Poses Tracking and Odometry Error Estimation, in RoboCup 2007: Robot Soccer World Cup XI, July 9-10, 2007, Atlanta, GA, USA, pp. 148-158, 2007.
18. R. Marchant, P. Guerrero, and J. Ruiz-del-Solar, A Portable Ground-Truth System Based On A Laser Sensor, in RoboCup 2011: Robot Soccer World Cup XV.
19. W. Celedón Aguilera, Interacción de un robot móvil con un objeto móvil aplicado al fútbol robótico (Interaction of a mobile robot with a mobile object, applied to robot soccer). Engineering thesis, 2013.