Multi Robot Localization assisted by Teammate Robots and Dynamic Objects
Anil Kumar Katti
Department of Computer Science
University of Texas at Austin

ABSTRACT

This paper discusses multi robot localization (MRL) assisted by teammate robots and dynamic objects. We specifically relate our research to the robot soccer environment. We use a particle filter based localization scheme and further improve its accuracy by using information from teammate robots regarding their relative position and the relative position of dynamic objects in the environment. Localization of stationary robots is a different ball game in robot soccer: a stationary robot usually finds it hard to get localized because it cannot view more than one beacon. In such scenarios, we improve its localization accuracy by using information about the environment available from its teammates.

General Terms

Multirobot Localization, Particle Filter, Dynamic Objects, Probabilistic Localization.

1. INTRODUCTION

Parker's paper [10] on general trends in multi robot systems motivated us to work on problems related to multi robot systems. Further, localization is the problem which benefits the most from the redundancy created by multiple robots in an environment, so we decided to approach this problem. Location information is of intrinsic interest in most robotic applications. We specifically consider the robot soccer [1] application in this paper. Location information helps a robot in its motion towards the ball, the goal and other robots, and location information from all the robots belonging to a team is used to decide the overall strategy for the game. This paper was part of my course (CS393R - Autonomous Robots) research project; the project used concepts from the 4th and 5th assignments along with a few ideas from MRL papers. The Aibo ERS-7, a commercially available robotic dog, is used in the robot soccer league.
This robot is equipped with a wide range of sensors: camera, IR sensors, ultrasonic sensors, accelerometers, gyroscopes, etc. However, the camera is the most extensively used sensor in robot soccer because of its versatility. We use this robot to implement our algorithms.

Using sensed information from different sensors to localize a robot has been quite a challenge. A Kalman filter based approach was used and discussed by Jetto et al. [7]. A more powerful and useful localization technique was proposed in [3]; this approach uses a particle filter in place of a Kalman filter, and we base our project on it. Fox et al. discussed MRL with a probabilistic approach [5], which motivates our implementation of teammate robot based localization. Researchers have used many different sensors to develop localization schemes: [6], [9], [4] and [12] discuss localization approaches based on different types of sensor data. We are particularly interested in vision based localization, which was specifically discussed in [12]. Following single robot localization, substantial research has been done on MRL and distributed robot localization. [8] and [11] discuss one such approach, which bases its localization algorithm on relative object sightings in the environment. [2] discusses a similar, landmark based approach to MRL. We mainly borrow concepts from these two papers to implement teammate robot and dynamic object based MRL.

2. MOTIVATION

Localization becomes relatively easy using the above techniques when the robot under consideration is in constant motion on the field: it gets to see more of its environment (beacons), and that helps it localize more accurately. On the other hand, when a robot is stationary (e.g. when it is preparing for a kick), it has a limited field of view and hence a limited view of the environment. We consider such scenarios and develop algorithms which help the robot get localized more accurately.
We take a three-phase approach to solve this problem. In the first phase, all the robots in the environment get themselves localized by looking at the visible beacons in their environment. In the second phase, they get localized based on the visible teammate robots in their environment. In the third phase, they get localized based on visible dynamic objects in their
environment. At the end of the three phases, the robots will have localized more accurately than they would have with the beacon based approach alone.

3. ALGORITHM

The approach we took for this project can be represented by the following two algorithms. Our MRL algorithm runs as both client and server on all the robots in the environment.

Multi-robot-localization-client(Memory)
Input: Memory - set of vision and motor inputs.
Output: Particles - set of updated particles.
1  for all beacons in Memory
2      Single-robot-localization(beacon)
3  for all teammate robots in Memory
4      Send(particles, teammate bearing)
5      Wait-for-new-probabilities(new prob)
6      Update-particles(new prob)
7  for all dynamic objects in Memory
8      Send(particles, object bearing, object dist)
9      Wait-for-new-probabilities(new prob)
10     Update-particles(new prob)

Multi-robot-localization-server(Message)
Input: Message - information regarding MRL.
Output: Particles - set of updated particles; New prob - set of updated probabilities (for the teammate).
1  teammate particles = Message.particles
2  if Message is regarding a teammate robot
3      teammate bearing = Message.bearing
4  else
5      object bearing = Message.bearing
6      object dist = Message.dist
7  for all particle in Particles
8      for all teammate particle in teammate particles
9          s = similarity func(particle, bearing, dist)
10         update particle.prob based on s
11         update teammate particle.prob based on s
12 Send(teammate particles.prob)

Figure 1: Organization of different C++ modules.

The three phase approach is clearly visible in the client algorithm. Each robot initially starts off with simple single robot localization based on all the visible beacons. Once that is done, the robots check for teammate robot entries in their memory object. If they find entries corresponding to teammate robots, they send their particles and teammate bearings to the seen robot. Similarly, if they find entries corresponding to dynamic objects, they send their particles
and object bearing and distance to all the robots in the environment. On the other hand, the MRL server runs as a daemon which is invoked on receiving a message. This subroutine uses the received information to update its own probabilities and sends the updated set of probabilities back to the robot which sent the message.

The probability update is based on a similarity function, which we run with the particles and the bearing and distance values obtained in the message. The similarity function runs over all possible particle pairs and computes the likelihood of each such pair; the computed likelihood is normalized and assigned to the probability component of the particle. The similarity function code is attached with this paper.

Note: teammate bearing is the angle at which the teammate robot is seen; similarly, object bearing and object dist are the angle and distance at which the dynamic object is seen. For Aibos, we assume that we have access to the bearing alone, since it is hard to estimate the distance to an Aibo; for objects like balls, we assume that we have access to both bearing and distance.

4. IMPLEMENTATION

Though the MRL algorithm by itself is simple to implement, we had to develop a lot of other modules to get the whole project in place. We took a modular approach with re-usability in mind; specific C++ modules taking care of individual operations were developed. Figure 1 shows the organization of the different modules.

Vision and Motion: We have interfaces for both vision and motion - VisionInterface and MotionInterface - and we implemented Tekkotsu specific instances of these interfaces, TekkotsuVisionInterface and TekkotsuMotionInterface. Implementing vision was a challenging task since Tekkotsu by default does not detect beacons; we had to hack into the vision pipeline of Tekkotsu to make it detect beacons as per our definition.
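Stepping back to the algorithm section: the attached similarity-function code is not reproduced here, but its idea - score how plausibly one particle pose would produce the observed bearing (and, for balls, distance) to a teammate particle - can be sketched as follows. The function name and the Gaussian noise parameters sigma_b and sigma_d are our own illustrative assumptions, not values from the project code.

```python
import math

def similarity(own, mate, bearing, dist=None, sigma_b=0.2, sigma_d=10.0):
    """Likelihood that a robot at particle 'own' = (x, y, theta) would
    observe a teammate/object at particle 'mate' = (x, y) under the
    reported bearing (radians) and, if available, distance (cm)."""
    dx, dy = mate[0] - own[0], mate[1] - own[1]
    expected_bearing = math.atan2(dy, dx) - own[2]
    # wrap the bearing error into [-pi, pi] before scoring it
    err = expected_bearing - bearing
    err = math.atan2(math.sin(err), math.cos(err))
    s = math.exp(-err ** 2 / (2 * sigma_b ** 2))
    if dist is not None:  # balls report distance too; Aibos give bearing only
        err_d = math.hypot(dx, dy) - dist
        s *= math.exp(-err_d ** 2 / (2 * sigma_d ** 2))
    return s
```

Running this over every (own particle, teammate particle) pair and normalizing the scores gives the probability update described above.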
After detecting beacons, we put the beacon information - its ID, bearing and distance - into the current
instance of MemoryModule. We defined thresholds and heuristics, as in Assignment 5, to make sure that we detect true beacons. A few of those heuristics are described below.

Pixel density: When the Tekkotsu vision pipeline detects a blob, we first check its pixel density. If the pixel density exceeds a particular value, the blob is considered for beacon detection; otherwise it is discarded.

Gap between blobs: To detect a beacon, Tekkotsu should have detected two blobs of different colors (among pink, yellow and blue) in the same frame. To check this, we consider all the blobs available at any given point in time and check their frame IDs; if a frame contains more than one colored blob, it is likely to be a beacon. We then check the difference in height between the two blobs and reject the candidate if there is a gap of more than 5 pixels between them. Another criterion is the difference between the x coordinates of the centers of the two blobs; we put a threshold on this value too.

Similarly, we define a pixel density threshold for the orange ball. Further, we wrapped the robots in green and used that color to detect robots. Each time, we had to define thresholds using the EasyTrain application, and we ended up spending most of our time getting this module to work accurately. We did not implement motion for two reasons: we wanted to tackle the stationary case specifically, since single robot localization becomes constrained in that case, and we also ran out of time to implement simple motion for testing our algorithm.

Memory and Communication: The Vision and Motion modules communicate through MemoryModule. At any given point in time, MemoryModule contains all the recent vision and motion updates. Vision updates can be seen as entries in the worldobject array. LocalizationModule reads MemoryModule to get all the required updates.
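The blob-pair heuristics above can be condensed into a single check. This is a sketch of the idea only: the function name, blob representation and the density and alignment thresholds (other than the 5-pixel gap stated above) are illustrative assumptions, not the project's actual values.

```python
def is_beacon(top, bottom, min_density=0.4, max_gap=5, max_dx=3):
    """Heuristic check that two colored blobs form a beacon.
    Each blob is a dict with x, y (top-left corner), w, h, pixels,
    color and frame fields."""
    # Pixel density: discard sparse blobs before any pairing
    for blob in (top, bottom):
        if blob["pixels"] / float(blob["w"] * blob["h"]) < min_density:
            return False
    # Both blobs must come from the same frame and differ in color
    if top["frame"] != bottom["frame"] or top["color"] == bottom["color"]:
        return False
    # Vertical gap between the two blobs must be at most max_gap pixels
    if abs(bottom["y"] - (top["y"] + top["h"])) > max_gap:
        return False
    # Blob centers must be roughly vertically aligned
    cx_top = top["x"] + top["w"] / 2.0
    cx_bot = bottom["x"] + bottom["w"] / 2.0
    return abs(cx_top - cx_bot) <= max_dx
```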
Any updates on beacons are immediately incorporated by running an instance of Single-robot-localization. If there are any updates on teammate robots or dynamic objects, suitable information is sent to CommunicationModule to be communicated to the corresponding robot. Once this is done, LocalizationModule waits on CommunicationModule to obtain the updated probabilities. CommunicationModule also communicates particle information to the localization GUI, which in turn displays all the particles. CommunicationModule uses the wireless communication library which is part of Tekkotsu; the communication is essentially over a TCP socket. We had a challenge to tackle here: all the Aibos run in a NATed environment while the desktops are on the external network, which prevents connections initiated from an Aibo to a desktop. To overcome this, we instantiate a connection from the desktop to the Aibos and use the same connection to send information in the reverse direction.

Localization: The localization module is invoked for every frame; a frame essentially consists of one set of motion and vision updates. We have two main subroutines in LocalizationModule - processframe and multirobotlocalization.

Figure 2: Setup for Multi Robot Localization.

processframe is invoked on every new frame. It reads the frame for new beacons and updates particle probabilities based on beacon entries. Once that is done, it performs reseeding and resampling at the defined frequency. Implementing reseeding and resampling was another major challenge. We had to carefully evaluate scenarios in which reseeding would either create excessive particles or far fewer particles than required, and we had to come up with a generic decision factor for the reseed subroutine. To consider a specific instance, when particles were created due to MRL, we did not have to reseed them, because reseeding them led to spurious clusters in the environment.
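The project's own resampling code is not shown here; a standard low-variance (systematic) resampler, which we assume plays the same role in processframe, looks like this:

```python
import random

def resample(particles, weights):
    """Systematic (low-variance) resampling: draw len(particles)
    survivors with probability proportional to weight, using a single
    random offset rather than n independent draws."""
    n = len(particles)
    step = sum(weights) / n
    u = random.uniform(0, step)    # one random start in the first interval
    out, cum, i = [], weights[0], 0
    for _ in range(n):
        while u > cum:             # advance to the particle covering u
            i += 1
            cum += weights[i]
        out.append(particles[i])
        u += step
    return out
```

Compared with independent multinomial draws, this keeps the particle set diverse when weights are nearly uniform, which matters when only one beacon constrains the pose.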
Such issues had to be solved one by one.

Localization GUI: This component of the project was extremely helpful when we had to debug the above-mentioned algorithms. The GUI was developed in Python, using the pygame module for graphics and TCP sockets for communication. It takes in particle information at any point, parses the data to make it displayable, and then shows all the particles along with the robot at the weighted average position. The particles are displayed as small lines whose orientation represents the hypothesized robot orientation and whose length represents the belief (probability) in that position.

5. EXPERIMENTS

The setup required for our project was quite simple: two Aibo ERS-7 robots, one orange ball and four beacons. Beacons are classified based on the order of their colors. The simulator represents the two colors of a beacon as two concentric circles, with the top color as the inner circle and the bottom color as the outer circle. We refer to the (pink on yellow) beacon as beacon 0, (yellow on pink) as beacon 1, (pink on blue) as beacon 2 and (blue on pink) as beacon 3. The particles belonging to the two robots are shown in two different colors - yellow and white - and the robots are represented by blue and red wedge-shaped figures on the GUI. Yellow particles correspond to the blue robot (henceforth robot A) and white particles correspond to the red robot (henceforth robot B). Note that the robots are placed at the weighted average positions of their particles. The dimensions of the whole field are 100 cm x 100 cm. The arena is shown in Figure 2.
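The weighted average position at which the GUI draws each robot can be computed as below. This is our own sketch, not the GUI code; note that the orientation must be averaged on the unit circle so that angles near +pi and -pi do not cancel out.

```python
import math

def weighted_mean_pose(particles):
    """Weighted average pose used to draw the robot; each particle is
    (x, y, theta, prob). Position is a plain weighted mean, orientation
    a circular mean via summed sines and cosines."""
    total = sum(p[3] for p in particles)
    x = sum(p[0] * p[3] for p in particles) / total
    y = sum(p[1] * p[3] for p in particles) / total
    s = sum(math.sin(p[2]) * p[3] for p in particles)
    c = sum(math.cos(p[2]) * p[3] for p in particles)
    return x, y, math.atan2(s, c)
```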
Figure 3: Beacon based localization of robots. Scenario: robot A sees beacon 1 and robot B sees beacon 0.

We implemented the algorithms discussed in the algorithm section. To ease our analysis, we consider three different scenarios and tackle each of them separately: localization based on a static beacon; localization based on a static beacon and a teammate robot in the field of view; and localization based on a static beacon and a dynamic object in the field of view.

Localization based on a static beacon: To start off, we implemented simple particle filter based probabilistic localization on all the Aibos in the environment. Since we consider only stationary robots in this project, we assume that each robot gets to see only one beacon. When these robots run particle filter based localization, they localize to an arc at a particular distance from the beacon which is visible to them. In the base case setup, we placed robot A at a distance of approximately 114 cm from beacon 1 and robot B at a distance of approximately 114 cm from beacon 0 and ran our particle filter algorithm. The result, as seen on the localization GUI, is displayed in Figure 3: robot A localized to an arc at approximately 114 cm from beacon 1, and robot B localized to an arc at approximately 114 cm from beacon 0.

Localization based on a static beacon and a teammate robot in the field of view: Once we had basic localization working on both robots, we constructed the next scenario, with robot A and robot B now facing beacon 2 and beacon 0 respectively. We placed them such that they faced their respective beacons at 0 bearing at approximately 114 cm, and faced their teammate Aibos at 0.4 radians bearing.

Figure 4: Localization of robots based on beacons and teammate robots. Scenario: robot A sees beacon 2 and robot B; robot B sees beacon 0 and robot A.

With such a setup, we first ran the single robot localization code.
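As context for the arcs described above: a particle set consistent with a single beacon sighting can be seeded on a circle around the beacon. The function and its parameters are our own illustration, not the project's reseed code; a range noise of 5 cm is an assumed value.

```python
import math
import random

def seed_arc(beacon, dist, n=100, noise=5.0):
    """Seed n particles at roughly 'dist' cm from a beacon at (bx, by),
    each facing the beacon (as a stationary robot seeing one beacon at
    0 bearing would), with equal initial probabilities."""
    bx, by = beacon
    particles = []
    for _ in range(n):
        a = random.uniform(0.0, 2.0 * math.pi)   # position on the circle
        r = dist + random.gauss(0.0, noise)      # noisy range estimate
        x, y = bx + r * math.cos(a), by + r * math.sin(a)
        theta = math.atan2(by - y, bx - x)       # face back toward the beacon
        particles.append((x, y, theta, 1.0 / n))
    return particles
```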
With this, they got localized as before. Once they were localized to arcs, we initialized the MRL module on both robots. Since both robots could see each other, they could localize themselves better. We captured the result in the simulator; it is displayed in Figure 4. Robot A localized to a small cluster on the left hand side of the arena, and robot B localized to a small cluster on the right hand side of the arena.

Localization based on a static beacon and a dynamic object in the field of view: Localization based on dynamic objects is an extension of the MRL discussed in the previous scenario. In this case, we place the robots such that robot A faces beacon 1 at 0 bearing and robot B faces beacon 0 at 0 bearing, both at approximately 114 cm from their beacons. Further, both robots can also see a ball (the dynamic object), which is placed at a distance of 60 cm from robot A and 20 cm from robot B. With such a setup, we first ran the single robot localization code, and the robots localized to arcs as before. Once localized, we initialized the MRL module on both robots. Since both robots could see the ball simultaneously, they could localize themselves better. We captured the result in the simulator; it is displayed in Figure 5. Robot A localized to a small cluster in the top center portion of the arena, and robot B localized to the right hand side of the arena.

Figure 5: Beacon based localization of robots. Scenario: robot A sees beacon 1 and robot B sees beacon 0.

6. RESULTS

In this section, I discuss the results of the above-mentioned experiments. We considered the three scenarios above and tested different parameters in each case. Due to space constraints, I cannot discuss all the results here, but one result of major interest was the standard deviation of the x values of the particles against time in the above three scenarios. We consider the first scenario as the base scenario and compare scenarios 2 and 3 against it to obtain the following results.

Figure 6: Comparing beacon based localization with dynamic object assisted MRL.

Figure 7: Comparing beacon based localization with teammate robot assisted MRL.

From figures 6 and 7, we can clearly see that in both experiments, MRL (localization with a beacon plus another robot or the ball) clearly outperforms beacon based single robot localization. After a certain duration, both localizations approach a constant value, and it is clearly visible that the final value for MRL is many times more accurate than the final value for single robot localization.

7. CONCLUSION

MRL improves the accuracy of localization considerably. Specifically, when robots are stationary (e.g. when they are getting ready for a kick), we would want to use MRL to get them localized more accurately; MRL helps a robot get a better estimate of where it is and hence of what it is supposed to do. We have two topics in mind for future work: deriving closed forms for the robot's extent of localization, and implementing negative observation based MRL. Though our aim was to solve the problem for the stationary robot case, we would also like to test our algorithm in the mobile case; we would take that up as another project moving forward.

8. ACKNOWLEDGEMENT

I acknowledge my project partner Piyush Khandelwal for his efforts during the project. I also thank Prof. Peter Stone and Todd Hester for their constant support in terms of resources and ideas throughout the project.
9. REFERENCES

[1] RoboCup, an international robotics competition whose aim is to develop autonomous soccer robots, with the intention of promoting research and education in the field of artificial intelligence. The name RoboCup is a contraction of the competition's full name, Robot Soccer World Cup, but the competition has many other stages, such as search and rescue and robot dancing.
[2] G. Dedeoglu and G. S. Sukhatme. Landmark-based matching algorithm for cooperative mapping by autonomous robots. Distributed Autonomous Robotic Systems, 4.
[3] F. Dellaert, D. Fox, W. Burgard, and S. Thrun. Monte Carlo localization for mobile robots. In IEEE International Conference on Robotics and Automation (ICRA99), May 1999.
[4] A. Elfes. Sonar-based real world mapping and navigation. IEEE Journal of Robotics and Automation, June.
[5] D. Fox, W. Burgard, H. Kruppa, and S. Thrun. A probabilistic approach to collaborative multi-robot localization. Autonomous Robots, 8, 2000.
[6] P. Goel, S. Roumeliotis, and G. Sukhatme. Robust localization using relative and absolute position estimates. In Proceedings of the 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 1999.
[7] L. Jetto, S. Longhi, and G. Venturini. Development and experimental validation of an adaptive extended Kalman filter for the localization of mobile robots. IEEE Transactions on Robotics and Automation, 15.
[8] R. Kurazume, S. Nagata, and S. Hirose. Multi-robot localization using relative observations. In Proceedings of the 1994 IEEE International Conference on Robotics and Automation, Los Alamitos, CA, May 1994.
[9] A. M. Ladd, K. E. Bekris, G. Marceau, A. Rudys, D. S. Wallach, and L. E. Kavraki. Using wireless Ethernet for localization. In Proceedings of the 2002 IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, September 2002.
[10] L. E. Parker. Current research in multirobot systems. Artificial Life and Robotics, 7(1-2):1-5.
[11] S. Roumeliotis and G. Bekey. Distributed multi-robot localization. In Proceedings of Distributed Autonomous Robotic Systems.
[12] M. Sridharan, G. Kuhlmann, and P. Stone. Practical vision-based Monte Carlo localization on a legged robot. In The International Conference on Robotics and Automation, 2005.
More informationAn Experimental Comparison of Path Planning Techniques for Teams of Mobile Robots
An Experimental Comparison of Path Planning Techniques for Teams of Mobile Robots Maren Bennewitz Wolfram Burgard Department of Computer Science, University of Freiburg, 7911 Freiburg, Germany maren,burgard
More informationZJUDancer Team Description Paper Humanoid Kid-Size League of Robocup 2014
ZJUDancer Team Description Paper Humanoid Kid-Size League of Robocup 2014 Yu DongDong, Xiang Chuan, Zhou Chunlin, and Xiong Rong State Key Lab. of Industrial Control Technology, Zhejiang University, Hangzhou,
More informationZJUDancer Team Description Paper Humanoid Kid-Size League of Robocup 2015
ZJUDancer Team Description Paper Humanoid Kid-Size League of Robocup 2015 Yu DongDong, Liu Yun, Zhou Chunlin, and Xiong Rong State Key Lab. of Industrial Control Technology, Zhejiang University, Hangzhou,
More informationA Probabilistic Method for Planning Collision-free Trajectories of Multiple Mobile Robots
A Probabilistic Method for Planning Collision-free Trajectories of Multiple Mobile Robots Maren Bennewitz Wolfram Burgard Department of Computer Science, University of Freiburg, 7911 Freiburg, Germany
More informationNuBot Team Description Paper 2008
NuBot Team Description Paper 2008 1 Hui Zhang, 1 Huimin Lu, 3 Xiangke Wang, 3 Fangyi Sun, 2 Xiucai Ji, 1 Dan Hai, 1 Fei Liu, 3 Lianhu Cui, 1 Zhiqiang Zheng College of Mechatronics and Automation National
More informationCommunications for cooperation: the RoboCup 4-legged passing challenge
Communications for cooperation: the RoboCup 4-legged passing challenge Carlos E. Agüero Durán, Vicente Matellán, José María Cañas, Francisco Martín Robotics Lab - GSyC DITTE - ESCET - URJC {caguero,vmo,jmplaza,fmartin}@gsyc.escet.urjc.es
More informationRobotic Systems ECE 401RB Fall 2007
The following notes are from: Robotic Systems ECE 401RB Fall 2007 Lecture 14: Cooperation among Multiple Robots Part 2 Chapter 12, George A. Bekey, Autonomous Robots: From Biological Inspiration to Implementation
More informationCORC 3303 Exploring Robotics. Why Teams?
Exploring Robotics Lecture F Robot Teams Topics: 1) Teamwork and Its Challenges 2) Coordination, Communication and Control 3) RoboCup Why Teams? It takes two (or more) Such as cooperative transportation:
More informationRobot Task-Level Programming Language and Simulation
Robot Task-Level Programming Language and Simulation M. Samaka Abstract This paper presents the development of a software application for Off-line robot task programming and simulation. Such application
More informationCS594, Section 30682:
CS594, Section 30682: Distributed Intelligence in Autonomous Robotics Spring 2003 Tuesday/Thursday 11:10 12:25 http://www.cs.utk.edu/~parker/courses/cs594-spring03 Instructor: Dr. Lynne E. Parker ½ TA:
More informationFuzzy-Heuristic Robot Navigation in a Simulated Environment
Fuzzy-Heuristic Robot Navigation in a Simulated Environment S. K. Deshpande, M. Blumenstein and B. Verma School of Information Technology, Griffith University-Gold Coast, PMB 50, GCMC, Bundall, QLD 9726,
More informationTightly-Coupled Navigation Assistance in Heterogeneous Multi-Robot Teams
Proc. of IEEE International Conference on Intelligent Robots and Systems (IROS), Sendai, Japan, 2004. Tightly-Coupled Navigation Assistance in Heterogeneous Multi-Robot Teams Lynne E. Parker, Balajee Kannan,
More informationDevelopment of a Simulator of Environment and Measurement for Autonomous Mobile Robots Considering Camera Characteristics
Development of a Simulator of Environment and Measurement for Autonomous Mobile Robots Considering Camera Characteristics Kazunori Asanuma 1, Kazunori Umeda 1, Ryuichi Ueda 2, and Tamio Arai 2 1 Chuo University,
More informationMonte Carlo Localization in Dense Multipath Environments Using UWB Ranging
Monte Carlo Localization in Dense Multipath Environments Using UWB Ranging Damien B. Jourdan, John J. Deyst, Jr., Moe Z. Win, Nicholas Roy Massachusetts Institute of Technology Laboratory for Information
More informationCS343 Introduction to Artificial Intelligence Spring 2010
CS343 Introduction to Artificial Intelligence Spring 2010 Prof: TA: Daniel Urieli Department of Computer Science The University of Texas at Austin Good Afternoon, Colleagues Welcome to a fun, but challenging
More informationIntelligent Vehicle Localization Using GPS, Compass, and Machine Vision
The 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems October 11-15, 2009 St. Louis, USA Intelligent Vehicle Localization Using GPS, Compass, and Machine Vision Somphop Limsoonthrakul,
More informationCS 599: Distributed Intelligence in Robotics
CS 599: Distributed Intelligence in Robotics Winter 2016 www.cpp.edu/~ftang/courses/cs599-di/ Dr. Daisy Tang All lecture notes are adapted from Dr. Lynne Parker s lecture notes on Distributed Intelligence
More informationDutch Nao Team. Team Description for Robocup Eindhoven, The Netherlands November 8, 2012
Dutch Nao Team Team Description for Robocup 2013 - Eindhoven, The Netherlands http://www.dutchnaoteam.nl November 8, 2012 Duncan ten Velthuis, Camiel Verschoor, Auke Wiggers, Hessel van der Molen, Tijmen
More informationCreating a 3D environment map from 2D camera images in robotics
Creating a 3D environment map from 2D camera images in robotics J.P. Niemantsverdriet jelle@niemantsverdriet.nl 4th June 2003 Timorstraat 6A 9715 LE Groningen student number: 0919462 internal advisor:
More informationRoboCup. Presented by Shane Murphy April 24, 2003
RoboCup Presented by Shane Murphy April 24, 2003 RoboCup: : Today and Tomorrow What we have learned Authors Minoru Asada (Osaka University, Japan), Hiroaki Kitano (Sony CS Labs, Japan), Itsuki Noda (Electrotechnical(
More informationGermanTeam The German National RoboCup Team
GermanTeam 2008 The German National RoboCup Team David Becker 2, Jörg Brose 2, Daniel Göhring 3, Matthias Jüngel 3, Max Risler 2, and Thomas Röfer 1 1 Deutsches Forschungszentrum für Künstliche Intelligenz,
More informationHumanoid Robot NAO: Developing Behaviors for Football Humanoid Robots
Humanoid Robot NAO: Developing Behaviors for Football Humanoid Robots State of the Art Presentation Luís Miranda Cruz Supervisors: Prof. Luis Paulo Reis Prof. Armando Sousa Outline 1. Context 1.1. Robocup
More informationTeam TH-MOS Abstract. Keywords. 1 Introduction 2 Hardware and Electronics
Team TH-MOS Pei Ben, Cheng Jiakai, Shi Xunlei, Zhang wenzhe, Liu xiaoming, Wu mian Department of Mechanical Engineering, Tsinghua University, Beijing, China Abstract. This paper describes the design of
More informationMRL Team Description Paper for Humanoid KidSize League of RoboCup 2017
MRL Team Description Paper for Humanoid KidSize League of RoboCup 2017 Meisam Teimouri 1, Amir Salimi, Ashkan Farhadi, Alireza Fatehi, Hamed Mahmoudi, Hamed Sharifi and Mohammad Hosseini Sefat Mechatronics
More informationFind Kick Play An Innate Behavior for the Aibo Robot
Find Kick Play An Innate Behavior for the Aibo Robot Ioana Butoi 05 Advisors: Prof. Douglas Blank and Prof. Geoffrey Towell Bryn Mawr College, Computer Science Department Senior Thesis Spring 2005 Abstract
More informationStergios I. Roumeliotis and George A. Bekey. Robotics Research Laboratories
Synergetic Localization for Groups of Mobile Robots Stergios I. Roumeliotis and George A. Bekey Robotics Research Laboratories University of Southern California Los Angeles, CA 90089-0781 stergiosjbekey@robotics.usc.edu
More informationTraffic Control for a Swarm of Robots: Avoiding Target Congestion
Traffic Control for a Swarm of Robots: Avoiding Target Congestion Leandro Soriano Marcolino and Luiz Chaimowicz Abstract One of the main problems in the navigation of robotic swarms is when several robots
More informationUsing Dynamic Capability Evaluation to Organize a Team of Cooperative, Autonomous Robots
Using Dynamic Capability Evaluation to Organize a Team of Cooperative, Autonomous Robots Eric Matson Scott DeLoach Multi-agent and Cooperative Robotics Laboratory Department of Computing and Information
More informationMulti-Agent Control Structure for a Vision Based Robot Soccer System
Multi- Control Structure for a Vision Based Robot Soccer System Yangmin Li, Wai Ip Lei, and Xiaoshan Li Department of Electromechanical Engineering Faculty of Science and Technology University of Macau
More informationSponsored by. Nisarg Kothari Carnegie Mellon University April 26, 2011
Sponsored by Nisarg Kothari Carnegie Mellon University April 26, 2011 Motivation Why indoor localization? Navigating malls, airports, office buildings Museum tours, context aware apps Augmented reality
More informationCS343 Introduction to Artificial Intelligence Spring 2012
CS343 Introduction to Artificial Intelligence Spring 2012 Prof: TA: Daniel Urieli Department of Computer Science The University of Texas at Austin Good Afternoon, Colleagues Welcome to a fun, but challenging
More informationTraffic Control for a Swarm of Robots: Avoiding Group Conflicts
Traffic Control for a Swarm of Robots: Avoiding Group Conflicts Leandro Soriano Marcolino and Luiz Chaimowicz Abstract A very common problem in the navigation of robotic swarms is when groups of robots
More informationHanuman KMUTT: Team Description Paper
Hanuman KMUTT: Team Description Paper Wisanu Jutharee, Sathit Wanitchaikit, Boonlert Maneechai, Natthapong Kaewlek, Thanniti Khunnithiwarawat, Pongsakorn Polchankajorn, Nakarin Suppakun, Narongsak Tirasuntarakul,
More informationEDUCATIONAL ROBOTICS' INTRODUCTORY COURSE
AESTIT EDUCATIONAL ROBOTICS' INTRODUCTORY COURSE Manuel Filipe P. C. M. Costa University of Minho Robotics in the classroom Robotics competitions The vast majority of students learn in a concrete manner
More informationMulti Robot Systems: The EagleKnights/RoboBulls Small- Size League RoboCup Architecture
Multi Robot Systems: The EagleKnights/RoboBulls Small- Size League RoboCup Architecture Alfredo Weitzenfeld University of South Florida Computer Science and Engineering Department Tampa, FL 33620-5399
More informationNao Devils Dortmund. Team Description for RoboCup Matthias Hofmann, Ingmar Schwarz, and Oliver Urbann
Nao Devils Dortmund Team Description for RoboCup 2014 Matthias Hofmann, Ingmar Schwarz, and Oliver Urbann Robotics Research Institute Section Information Technology TU Dortmund University 44221 Dortmund,
More informationShoichi MAEYAMA Akihisa OHYA and Shin'ichi YUTA. University of Tsukuba. Tsukuba, Ibaraki, 305 JAPAN
Long distance outdoor navigation of an autonomous mobile robot by playback of Perceived Route Map Shoichi MAEYAMA Akihisa OHYA and Shin'ichi YUTA Intelligent Robot Laboratory Institute of Information Science
More informationHigh Speed vslam Using System-on-Chip Based Vision. Jörgen Lidholm Mälardalen University Västerås, Sweden
High Speed vslam Using System-on-Chip Based Vision Jörgen Lidholm Mälardalen University Västerås, Sweden jorgen.lidholm@mdh.se February 28, 2007 1 The ChipVision Project Within the ChipVision project we
More informationThe UNSW RoboCup 2000 Sony Legged League Team
The UNSW RoboCup 2000 Sony Legged League Team Bernhard Hengst, Darren Ibbotson, Son Bao Pham, John Dalgliesh, Mike Lawther, Phil Preston, Claude Sammut School of Computer Science and Engineering University
More informationA Lego-Based Soccer-Playing Robot Competition For Teaching Design
Session 2620 A Lego-Based Soccer-Playing Robot Competition For Teaching Design Ronald A. Lessard Norwich University Abstract Course Objectives in the ME382 Instrumentation Laboratory at Norwich University
More informationMulti-robot Dynamic Coverage of a Planar Bounded Environment
Multi-robot Dynamic Coverage of a Planar Bounded Environment Maxim A. Batalin Gaurav S. Sukhatme Robotic Embedded Systems Laboratory, Robotics Research Laboratory, Computer Science Department University
More informationAn Incremental Deployment Algorithm for Mobile Robot Teams
An Incremental Deployment Algorithm for Mobile Robot Teams Andrew Howard, Maja J Matarić and Gaurav S Sukhatme Robotics Research Laboratory, Computer Science Department, University of Southern California
More informationMajor Project SSAD. Mentor : Raghudeep SSAD Mentor :Manish Jha Group : Group20 Members : Harshit Daga ( ) Aman Saxena ( )
Major Project SSAD Advisor : Dr. Kamalakar Karlapalem Mentor : Raghudeep SSAD Mentor :Manish Jha Group : Group20 Members : Harshit Daga (200801028) Aman Saxena (200801010) We were supposed to calculate
More informationAutonomous Robot Soccer Teams
Soccer-playing robots could lead to completely autonomous intelligent machines. Autonomous Robot Soccer Teams Manuela Veloso Manuela Veloso is professor of computer science at Carnegie Mellon University.
More informationThe Future of AI A Robotics Perspective
The Future of AI A Robotics Perspective Wolfram Burgard Autonomous Intelligent Systems Department of Computer Science University of Freiburg Germany The Future of AI My Robotics Perspective Wolfram Burgard
More informationIntelligent Humanoid Robot
Intelligent Humanoid Robot Prof. Mayez Al-Mouhamed 22-403, Fall 2007 http://www.ccse.kfupm,.edu.sa/~mayez Computer Engineering Department King Fahd University of Petroleum and Minerals 1 RoboCup : Goal
More informationUltrasound-Based Indoor Robot Localization Using Ambient Temperature Compensation
Acta Universitatis Sapientiae Electrical and Mechanical Engineering, 8 (2016) 19-28 DOI: 10.1515/auseme-2017-0002 Ultrasound-Based Indoor Robot Localization Using Ambient Temperature Compensation Csaba
More informationThe UPennalizers RoboCup Standard Platform League Team Description Paper 2017
The UPennalizers RoboCup Standard Platform League Team Description Paper 2017 Yongbo Qian, Xiang Deng, Alex Baucom and Daniel D. Lee GRASP Lab, University of Pennsylvania, Philadelphia PA 19104, USA, https://www.grasp.upenn.edu/
More informationRealistic Robot Simulator Nicolas Ward '05 Advisor: Prof. Maxwell
Realistic Robot Simulator Nicolas Ward '05 Advisor: Prof. Maxwell 2004.12.01 Abstract I propose to develop a comprehensive and physically realistic virtual world simulator for use with the Swarthmore Robotics
More informationA Vision Based System for Goal-Directed Obstacle Avoidance
ROBOCUP2004 SYMPOSIUM, Instituto Superior Técnico, Lisboa, Portugal, July 4-5, 2004. A Vision Based System for Goal-Directed Obstacle Avoidance Jan Hoffmann, Matthias Jüngel, and Martin Lötzsch Institut
More informationCooperative Behavior Acquisition in A Multiple Mobile Robot Environment by Co-evolution
Cooperative Behavior Acquisition in A Multiple Mobile Robot Environment by Co-evolution Eiji Uchibe, Masateru Nakamura, Minoru Asada Dept. of Adaptive Machine Systems, Graduate School of Eng., Osaka University,
More informationGlobal Variable Team Description Paper RoboCup 2018 Rescue Virtual Robot League
Global Variable Team Description Paper RoboCup 2018 Rescue Virtual Robot League Tahir Mehmood 1, Dereck Wonnacot 2, Arsalan Akhter 3, Ammar Ajmal 4, Zakka Ahmed 5, Ivan de Jesus Pereira Pinto 6,,Saad Ullah
More informationA modular real-time vision module for humanoid robots
A modular real-time vision module for humanoid robots Alina Trifan, António J. R. Neves, Nuno Lau, Bernardo Cunha IEETA/DETI Universidade de Aveiro, 3810 193 Aveiro, Portugal ABSTRACT Robotic vision is
More informationConfidence-Based Multi-Robot Learning from Demonstration
Int J Soc Robot (2010) 2: 195 215 DOI 10.1007/s12369-010-0060-0 Confidence-Based Multi-Robot Learning from Demonstration Sonia Chernova Manuela Veloso Accepted: 5 May 2010 / Published online: 19 May 2010
More informationMulti-Fidelity Robotic Behaviors: Acting With Variable State Information
From: AAAI-00 Proceedings. Copyright 2000, AAAI (www.aaai.org). All rights reserved. Multi-Fidelity Robotic Behaviors: Acting With Variable State Information Elly Winner and Manuela Veloso Computer Science
More informationCMDragons 2008 Team Description
CMDragons 2008 Team Description Stefan Zickler, Douglas Vail, Gabriel Levi, Philip Wasserman, James Bruce, Michael Licitra, and Manuela Veloso Carnegie Mellon University {szickler,dvail2,jbruce,mlicitra,mmv}@cs.cmu.edu
More information