
Multi Robot Localization assisted by Teammate Robots and Dynamic Objects

Anil Kumar Katti
Department of Computer Science
University of Texas at Austin
akatti@cs.utexas.edu

ABSTRACT

This paper discusses multi robot localization (MRL) assisted by teammate robots and dynamic objects. We relate our research specifically to the robot soccer environment. We use a particle filter based localization scheme and improve its accuracy by using information from teammate robots regarding their relative position and the relative position of dynamic objects in the environment. Localization of stationary robots is a particularly hard problem in robot soccer: a stationary robot usually finds it difficult to localize because it cannot view more than one beacon. In such scenarios, we improve localization accuracy by using information about the environment that is available from teammates.

General Terms: Multirobot Localization, Particle Filter, Dynamic Objects, Probabilistic Localization.

1. INTRODUCTION

Lynne Parker's paper [10] on general trends in multi robot systems motivated us to work on a problem related to multi robot systems. Localization is one problem which gets the maximum benefit out of the redundancy created by multiple robots in an environment, so we decided to approach this problem. Location information is of intrinsic interest in most robotic applications; in this paper we specifically consider the robot soccer [1] application. Location information helps a robot in its motion towards the ball, the goal and other robots, and location information from all the robots belonging to a team is used to decide the overall strategy for the game. This paper was part of my course (CS393R - Autonomous Robots) research project. The project used concepts from the 4th and 5th assignments along with a few ideas from MRL papers.

The Aibo ERS-7, a commercially available robotic dog, is used in the robot soccer league. This robot is equipped with a wide range of sensors such as a camera, IR sensors, ultrasonic sensors, accelerometers and gyroscopes. However, the camera is the most extensively used sensor in robot soccer because of its versatility. We use this robot to implement our algorithms.

Using sensed information from different sensors to localize a robot has been quite a challenge. A Kalman filter based approach was discussed in Jetto et al. [7]. A more powerful technique, which uses a particle filter in place of a Kalman filter, was proposed for localization in [3]; we base our project on this approach. Fox et al. discussed MRL with a probabilistic approach [5], which motivates our implementation of teammate robot based localization. Researchers have used different sensors to develop localization schemes: [6], [9], [4] and [12] discuss localization approaches based on different types of sensor data. We are particularly interested in vision based localization, which was specifically discussed in [12]. Beyond single robot localization, substantial research has been done on MRL and distributed robot localization. [8] and [11] discuss approaches that base the localization algorithm on relative sightings of objects in the environment, and [2] discusses a similar approach to landmark based MRL. We mainly borrow concepts from these papers to implement teammate robot and dynamic object based MRL.
2. MOTIVATION

Localization becomes relatively easy with the techniques described above when the robot under consideration is in constant motion on the field: it gets to see more of its environment (more beacons), and that helps it localize more accurately. On the other hand, when a robot is stationary (for example, when it is preparing for a kick), it has a limited field of view and hence a limited view of the environment. We consider such scenarios and develop algorithms which help the robot localize more accurately.

We take a three-phase approach to solve this problem. First, all the robots in the environment localize themselves by looking at the visible beacons in their environment. In the second phase, they localize based on the visible teammate robots in their environment. In the third phase, they localize based on visible dynamic objects in their environment. At the end of the three phases, the robots are localized more accurately than they would have been with the beacon based approach alone.

3. ALGORITHM

The approach we took for this project is captured by the following two procedures. Our MRL algorithm runs as both a client and a server on every robot in the environment.

Multi-robot-localization-client(Memory)
Input: Memory - set of vision and motor inputs.
Output: Particles - set of updated particles.

    for each beacon in Memory
        Single-robot-localization(beacon)
    for each teammate robot in Memory
        Send(particles, teammate_bearing)
        Wait-for-new-probabilities(new_prob)
        Update-particles(new_prob)
    for each dynamic object in Memory
        Send(particles, object_bearing, object_dist)
        Wait-for-new-probabilities(new_prob)
        Update-particles(new_prob)

Multi-robot-localization-server(Message)
Input: Message - information regarding MRL.
Outputs: Particles - set of updated particles; new_prob - set of updated probabilities (for the teammate).

    teammate_particles = Message.particles
    if Message is regarding a teammate robot
        teammate_bearing = Message.bearing
    else
        object_bearing = Message.bearing
        object_dist = Message.dist
    for each particle in Particles
        for each teammate_particle in teammate_particles
            s = similarity_func(particle, teammate_particle, bearing, dist)
            update particle.prob based on s
            update teammate_particle.prob based on s
    Send(teammate_particles.prob)

The three-phase approach is clearly visible in the client algorithm. Each robot initially starts off with simple single robot localization based on all the visible beacons. Once that is done, it checks for teammate robot entries in its memory object. If it finds entries corresponding to teammate robots, it sends its particles and the teammate bearing to the seen robot. Similarly, if it finds entries corresponding to dynamic objects, it sends its particles along with the object bearing and distance to all the robots in the environment.

On the other hand, the MRL server runs as a daemon which is invoked on receiving a message. This subroutine uses the received information to update its own probabilities and sends the updated set of probabilities back to the robot which sent the message. The probability update is based on a similarity function, which is run with the particles and the bearing and distance values obtained in the message. The similarity function iterates over all possible particle pairs and computes the likelihood of each pair; the computed likelihood is normalized and assigned to the probability component of the particle. The similarity function code is attached with this paper.

Note: the teammate bearing is the angle at which the teammate robot is seen; similarly, the object bearing and object distance are the angle and distance at which the dynamic object is seen. For Aibos we assume that we have access to the bearing alone, since it is hard to estimate the distance to an Aibo; for objects like the ball we assume that we have access to both bearing and distance.
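Since the attached similarity function code is not reproduced in this transcription, the listing below is a minimal C++ sketch of how the server-side pairwise update could look for the teammate-observation case. The Particle struct, the function names and the sigma values are illustrative assumptions, not the identifiers or tuning constants used in our actual module.

#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative particle representation; field names are assumptions.
struct Particle {
    double x, y, theta;   // pose hypothesis (cm, cm, radians)
    double prob;          // belief assigned to this hypothesis
};

const double kPi = 3.14159265358979323846;

// Wrap an angle difference into [-pi, pi].
static double angleDiff(double a, double b) {
    double d = a - b;
    while (d >  kPi) d -= 2.0 * kPi;
    while (d < -kPi) d += 2.0 * kPi;
    return d;
}

// Likelihood that an observer at pose "from" would see a robot at pose "to"
// at the reported bearing (and distance, when available).
// The sigmas are made-up tuning constants.
static double similarityFunc(const Particle& from, const Particle& to,
                             double bearing, double dist, bool haveDist) {
    const double sigmaBearing = 0.2;   // radians
    const double sigmaDist    = 15.0;  // cm
    double expBearing = angleDiff(std::atan2(to.y - from.y, to.x - from.x), from.theta);
    double s = std::exp(-std::pow(angleDiff(expBearing, bearing), 2) /
                        (2.0 * sigmaBearing * sigmaBearing));
    if (haveDist) {
        double expDist = std::hypot(to.x - from.x, to.y - from.y);
        s *= std::exp(-std::pow(expDist - dist, 2) / (2.0 * sigmaDist * sigmaDist));
    }
    return s;
}

// Server-side update: score every (own, teammate) particle pair against the
// observation, then renormalize both sets. "bearing"/"dist" come from the message.
void updateFromMessage(std::vector<Particle>& own,
                       std::vector<Particle>& teammate,
                       double bearing, double dist, bool haveDist) {
    std::vector<double> ownNew(own.size(), 0.0);
    std::vector<double> mateNew(teammate.size(), 0.0);
    for (std::size_t i = 0; i < own.size(); ++i)
        for (std::size_t j = 0; j < teammate.size(); ++j) {
            // teammate[j] is a hypothesis for the observer, own[i] for the observed robot.
            double s = similarityFunc(teammate[j], own[i], bearing, dist, haveDist);
            ownNew[i]  += teammate[j].prob * s;
            mateNew[j] += own[i].prob * s;
        }
    auto normalize = [](std::vector<Particle>& ps, const std::vector<double>& w) {
        double total = 0.0;
        for (double v : w) total += v;
        if (total <= 0.0) return;   // no compatible pair; leave beliefs unchanged
        for (std::size_t k = 0; k < ps.size(); ++k) ps[k].prob = w[k] / total;
    };
    normalize(own, ownNew);
    normalize(teammate, mateNew);
}

The dynamic-object case has the same structure - score every pair, accumulate, renormalize - except that both particle sets are related through the shared object position rather than through a direct robot-to-robot bearing.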
4. IMPLEMENTATION

Though the MRL algorithm by itself is simple to implement, we had to develop several other modules to get the whole project in place. We took a very modular approach with re-usability in mind; specific C++ modules taking care of individual operations were developed. Figure 1 shows the organization of the different modules.

Figure 1: Organization of different C++ modules.

Vision and Motion: We have interfaces for both vision and motion - VisionInterface and MotionInterface - and implemented Tekkotsu specific instances of these interfaces, TekkotsuVisionInterface and TekkotsuMotionInterface. Implementing vision was a challenging task since Tekkotsu by default does not detect beacons; we had to hack into the vision pipeline of Tekkotsu to make it detect beacons as per our definition. After detecting a beacon, we put the beacon information, such as its ID, bearing and distance, into the current instance of MemoryModule. We defined thresholds and heuristics, as in Assignment 5, to make sure that we detect true beacons. A few of those heuristics are described below.

Pixel density: When the Tekkotsu vision pipeline detects a blob, we first check its pixel density. If the pixel density exceeds a particular value, the blob is considered for beacon detection; otherwise it is discarded.

Gap between blobs: To detect a beacon, Tekkotsu must essentially have detected two blobs of different colors (among pink, yellow and blue) in the same frame. To check this, we consider all the blobs available at any given point in time and check their frame IDs. If a frame contains more than one colored blob, it is very likely to be a beacon. We then check the difference in height of the two blobs and reject the candidate if there is a gap of more than 5 pixels between them. Another criterion at this point is the difference between the x coordinates of the centers of the two blobs; we put a threshold on this value too.

Similarly, we define a pixel density threshold for the orange ball. Further, we wrapped the robots in green and used that color to detect robots. Every single time, we had to define thresholds using the EasyTrain application, and we ended up spending most of our time getting this module to work accurately. We did not implement motion for two important reasons: we wanted to tackle the stationary case specifically, since single robot localization becomes constrained in that case, and we also ran out of time to implement simple motion in order to test our algorithm.
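The blob checks above amount to a simple geometric filter. Below is a hedged C++ sketch of such a filter; the Blob struct and every threshold except the 5-pixel gap are illustrative assumptions, and do not correspond to Tekkotsu's actual region types or to our exact constants.

#include <cmath>
#include <cstdlib>

// Illustrative blob summary; Tekkotsu's real region/blob types differ.
struct Blob {
    int frameId;          // camera frame the blob was seen in
    int color;            // e.g. PINK, YELLOW, BLUE from the color table
    int area;             // number of classified pixels in the blob
    int boxArea;          // area of the bounding box
    int centerX, centerY; // bounding-box center
    int top, bottom;      // vertical extent of the bounding box
};

// Pixel-density test: reject sparse blobs that are unlikely to be a beacon band.
bool denseEnough(const Blob& b, double minDensity = 0.5) {
    return b.boxArea > 0 && static_cast<double>(b.area) / b.boxArea >= minDensity;
}

// Two blobs of different colors in the same frame form a beacon candidate
// if they are vertically adjacent (small gap) and horizontally aligned.
bool isBeaconPair(const Blob& upper, const Blob& lower,
                  int maxGapPx = 5, int maxCenterDxPx = 4) {
    if (upper.frameId != lower.frameId) return false;   // must come from the same frame
    if (upper.color == lower.color) return false;       // beacon bands have different colors
    if (!denseEnough(upper) || !denseEnough(lower)) return false;
    int gap = lower.top - upper.bottom;                  // vertical gap between the two bands
    if (std::abs(gap) > maxGapPx) return false;
    if (std::abs(upper.centerX - lower.centerX) > maxCenterDxPx) return false;
    return true;
}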

Memory and Communication: The Vision and Motion modules communicate through MemoryModule. At any given point in time, MemoryModule contains all the recent vision and motion updates; vision updates appear as entries in the world object array. LocalizationModule reads MemoryModule to get all the required updates. Any updates on beacons are immediately incorporated by running an instance of Single-robot-localization. If there are any updates on teammate robots or dynamic objects, suitable information is sent to CommunicationModule to be communicated to the corresponding robot; LocalizationModule then waits on CommunicationModule to obtain the updated probabilities. CommunicationModule also communicates particle information to the Localization GUI, which in turn displays all the particles. CommunicationModule uses the wireless communication library which is part of Tekkotsu; the communication is essentially over a TCP socket. We had a few challenges to tackle here: all the Aibos run in a NATed environment while the desktops are on the external network, which prevents communication initiated from an Aibo to a desktop. To overcome this, we instantiate a connection from the desktop to the Aibos and use the same connection to send information in the reverse direction.

Localization: The Localization module is invoked for every frame; a frame essentially consists of one set of motion and vision updates. We had two main subroutines in LocalizationModule - processframe and multirobotlocalization. processframe is invoked on every new frame; it reads the frame for new beacons and updates particle probabilities based on the beacon entries. Once that is done, it performs reseeding and resampling at the defined frequency. Implementing reseeding and resampling was another major challenge. We had to carefully evaluate different scenarios in which reseeding would either create excessive particles or create far fewer particles than required, and we had to come up with a generic decision factor for the reseed subroutine. To consider a specific instance, when particles were created due to MRL, we did not reseed them, because reseeding them led to clusters in the environment. Such issues had to be solved one by one.
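Neither our resampling routine nor our reseed decision factor is reproduced here; the sketch below shows one common way such a step can be implemented (systematic resampling plus an effective-sample-size test), assuming the same illustrative Particle struct as before. The createdByMRL flag mirrors the special case described above; the constants are placeholders, not our tuned values.

#include <cstddef>
#include <random>
#include <vector>

struct Particle { double x, y, theta, prob; };

// Systematic (low-variance) resampling: draws a new particle set whose
// multiplicity is proportional to the current beliefs. A standard technique,
// not necessarily the exact variant used in our module.
std::vector<Particle> resample(const std::vector<Particle>& in, std::mt19937& rng) {
    std::vector<Particle> out;
    out.reserve(in.size());
    double total = 0.0;
    for (const Particle& p : in) total += p.prob;
    if (total <= 0.0 || in.empty()) return in;            // nothing sensible to do
    double step = total / in.size();
    std::uniform_real_distribution<double> u(0.0, step);
    double pointer = u(rng);
    double cumulative = 0.0;
    std::size_t i = 0;
    for (std::size_t n = 0; n < in.size(); ++n) {
        double target = pointer + n * step;
        while (cumulative + in[i].prob < target && i + 1 < in.size()) {
            cumulative += in[i].prob;
            ++i;
        }
        Particle p = in[i];
        p.prob = 1.0 / in.size();                          // reset to uniform belief
        out.push_back(p);
    }
    return out;
}

// One possible reseed decision: only reinject random particles when the
// belief has collapsed (low effective sample size) AND the particles did not
// just come from an MRL update, which already concentrates them.
bool shouldReseed(const std::vector<Particle>& ps, bool createdByMRL,
                  double minEffectiveFraction = 0.1) {
    if (createdByMRL) return false;          // avoid re-clustering MRL-generated particles
    double sum = 0.0, sumSq = 0.0;
    for (const Particle& p : ps) { sum += p.prob; sumSq += p.prob * p.prob; }
    if (sumSq <= 0.0) return true;
    double nEff = (sum * sum) / sumSq;       // effective number of particles
    return nEff < minEffectiveFraction * ps.size();
}

The key design point, which matches our experience, is that the reseed test must look at where the particles came from and how concentrated the belief is, rather than firing on a fixed schedule.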
Localization GUI: This component of the project was extremely helpful when we had to debug the above-mentioned algorithms. The GUI was developed in Python using the pygame module for graphics and TCP sockets for communication. It takes in particle information at any point, parses the data to make it displayable, and then shows all the particles along with the robot at the weighted average position. The particles are displayed as small lines whose orientation represents the hypothesized robot orientation and whose length represents the belief (probability) in that position.

5. EXPERIMENTS

The setup required for our project was quite simple: two Aibo ERS-7 robots, one orange ball and four beacons. Beacons are classified based on the order of their colors. The simulator represents the two colors of a beacon as two concentric circles, with the top color as the inner circle and the bottom color as the outer circle. We refer to the (pink on yellow) beacon as beacon 0, (yellow on pink) as beacon 1, (pink on blue) as beacon 2 and (blue on pink) as beacon 3. The particles belonging to the two robots are shown in two different colors - yellow and white - and the robots themselves are represented by blue and red wedge-shaped figures on the GUI. Yellow particles correspond to the blue robot (henceforth referred to as robot A) and white particles correspond to the red robot (henceforth referred to as robot B). Please note that the robots are placed at the weighted average positions of their particles. The dimension of the whole field is 100 x 100 sq. cm. The arena is shown in Figure 2.

Figure 2: Setup for Multi Robot Localization.
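As noted above, each robot is drawn at the weighted average of its particles. A minimal sketch of one way to compute that pose follows, written in C++ for consistency with the other listings even though the GUI itself is in Python; the Particle struct is the same illustrative one as before, and the circular mean for the heading is our own choice here, not necessarily what the GUI does.

#include <cmath>
#include <vector>

struct Particle { double x, y, theta, prob; };
struct Pose { double x, y, theta; };

// Weighted average of the particle set, which is where the GUI draws the robot.
// The orientation is averaged on the unit circle so headings near +/-pi do not cancel.
Pose weightedAveragePose(const std::vector<Particle>& ps) {
    double sx = 0.0, sy = 0.0, sc = 0.0, ss = 0.0, total = 0.0;
    for (const Particle& p : ps) {
        sx += p.prob * p.x;
        sy += p.prob * p.y;
        sc += p.prob * std::cos(p.theta);
        ss += p.prob * std::sin(p.theta);
        total += p.prob;
    }
    Pose pose{0.0, 0.0, 0.0};
    if (total <= 0.0) return pose;       // degenerate particle set
    pose.x = sx / total;
    pose.y = sy / total;
    pose.theta = std::atan2(ss, sc);     // circular mean of the headings
    return pose;
}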

Figure 3: Beacon based localization of robots. Scenario: robot A sees beacon 1 and robot B sees beacon 0.

We implemented the algorithms discussed in the algorithm section. To ease our analysis, we consider three different scenarios and tackle each of them separately:

- Localization based on a static beacon.
- Localization based on a static beacon and a teammate robot in the field of view.
- Localization based on a static beacon and a dynamic object in the field of view.

Localization based on a static beacon: To start off, we implemented a simple particle filter based probabilistic localization on all the Aibos in the environment. Since we consider only stationary robots in this project, we assume that they get to see only one beacon. When these robots run particle filter based localization, they localize to an arc at a particular distance from the beacon which is visible to them. In the base case setup, we placed robot A at a distance of approximately 114 cm from beacon 1 and robot B at a distance of approximately 114 cm from beacon 0 and ran our particle filter algorithm. The result was seen on the localization GUI and is displayed in Figure 3: robot A localized to an arc at approximately 114 cm from beacon 1 and robot B localized to an arc at approximately 114 cm from beacon 0.

Localization based on a static beacon and a teammate robot in the field of view: Once we had the basic localization working on both robots, we constructed the next scenario, which consists of the same two robots, robot A and robot B, now facing beacon 2 and beacon 0 respectively. We placed them so that they faced their respective beacons with 0 bearing at approximately 114 cm and faced the teammate Aibo at a bearing of 0.4 radians. With such a setup, we first ran the single robot localization code and the robots localized to arcs as before. Once they were localized to arcs, we initialized the MRL module on both robots. Since both robots could see each other, they could localize themselves better. We captured the result in the simulator; it is displayed in Figure 4. Robot A localized to a small cluster on the left hand side of the arena and robot B localized to a small cluster on the right hand side of the arena.

Figure 4: Localization of robots based on beacons and teammate robots. Scenario: robot A sees beacon 2 and robot B; robot B sees beacon 0 and robot A.

Localization based on a static beacon and a dynamic object in the field of view: Localization based on dynamic objects is an extension of the MRL discussed in the previous scenario. In this case, we place the robots such that robot A faces beacon 1 with 0 bearing and robot B faces beacon 0 with 0 bearing; both are placed at approximately 114 cm from their beacons. Further, both robots can also see a ball (the dynamic object), which is placed at a distance of 60 cm from robot A and 20 cm from robot B. With such a setup, we first ran the single robot localization code and the robots localized to arcs as before. Once localized, we initialized the MRL module on both robots. Since both robots could see the ball simultaneously, they could localize themselves better. We captured the result in the simulator; it is displayed in Figure 5. Robot A localized to a small cluster in the top center portion of the arena and robot B localized to the right hand side of the arena.

Figure 5: Beacon based localization of robots. Scenario: robot 1 sees beacon 1 and robot 2 sees beacon 0.
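The arc behaviour in the first scenario follows directly from the observation model: a single beacon measurement constrains only a particle's distance to that beacon (and, through the bearing, its heading), not its position along the circle around the beacon. A hedged sketch of such a per-particle weight update follows, again with the illustrative Particle struct and made-up sigma values rather than our actual code.

#include <cmath>
#include <vector>

struct Particle { double x, y, theta, prob; };
struct Beacon { double x, y; };   // known, fixed beacon position on the field

const double kPi = 3.14159265358979323846;

static double angleDiff(double a, double b) {
    double d = a - b;
    while (d >  kPi) d -= 2.0 * kPi;
    while (d < -kPi) d += 2.0 * kPi;
    return d;
}

// Weight each particle by how well it explains seeing one beacon at the
// measured distance and bearing. Because a single range constrains only the
// distance to the beacon, the surviving particles form an arc around it;
// the bearing then pins down each particle's heading, not its position.
void singleBeaconUpdate(std::vector<Particle>& ps, const Beacon& b,
                        double measDist, double measBearing) {
    const double sigmaDist = 10.0;     // cm, illustrative tuning constant
    const double sigmaBearing = 0.2;   // radians, illustrative tuning constant
    double total = 0.0;
    for (Particle& p : ps) {
        double dx = b.x - p.x, dy = b.y - p.y;
        double expDist = std::hypot(dx, dy);
        double expBearing = angleDiff(std::atan2(dy, dx), p.theta);
        double w = std::exp(-std::pow(expDist - measDist, 2) / (2 * sigmaDist * sigmaDist))
                 * std::exp(-std::pow(angleDiff(expBearing, measBearing), 2) /
                            (2 * sigmaBearing * sigmaBearing));
        p.prob *= w;
        total += p.prob;
    }
    if (total > 0.0)
        for (Particle& p : ps) p.prob /= total;   // renormalize beliefs
}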
6. RESULTS

In this section, I discuss the results of the above-mentioned experiments. We considered the above three scenarios and tested different parameters in each case. Due to space constraints, I cannot discuss all the results here, but one result of major interest is the standard deviation of the x values of the particles against time in the three scenarios. We consider the first scenario as the base scenario and compare scenarios 2 and 3 against it to obtain the following results.
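The convergence metric itself is simple to log. A short sketch of a probability-weighted standard deviation of the particles' x coordinates, computed once per frame, is shown below; it uses the same illustrative Particle struct and is not the exact code used to produce the plots.

#include <cmath>
#include <vector>

struct Particle { double x, y, theta, prob; };

// Probability-weighted standard deviation of the particles' x coordinates.
// Logging this value every frame yields the convergence curves compared below.
double weightedStdDevX(const std::vector<Particle>& ps) {
    double total = 0.0, mean = 0.0;
    for (const Particle& p : ps) { total += p.prob; mean += p.prob * p.x; }
    if (total <= 0.0) return 0.0;
    mean /= total;
    double var = 0.0;
    for (const Particle& p : ps) var += p.prob * (p.x - mean) * (p.x - mean);
    return std::sqrt(var / total);
}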

Figure 6: Comparing beacon based localization with dynamic object assisted MRL.

Figure 7: Comparing beacon based localization with teammate robot assisted MRL.

From Figures 6 and 7, we can clearly see that in both experiments MRL (localization with a beacon plus another robot or the ball) clearly outperforms beacon based single robot localization. After a certain duration of time, both localizations approach a constant value, and it is clearly visible that the final value for MRL is many times more accurate than the final value for single robot localization.

7. CONCLUSION

MRL improves the accuracy of localization many fold. Specifically, when robots are stationary (for example, when they are getting ready for a kick), we would want to use MRL to get the robots localized more accurately. MRL fits this case well and helps the robot get a better estimate of where it is and hence of what it is supposed to do. We have two topics in mind which we plan to work on in the future: deriving closed forms for the robot's extent of localization, and implementing negative observation based MRL. Though our aim was to solve the problem for the stationary robot case, we would also like to test our algorithm in the mobile case. We surely would want to take that up as another project moving forward.

8. ACKNOWLEDGEMENT

I acknowledge my project partner Piyush Khandelwal for his efforts during the project. I also thank Prof. Peter Stone and Todd Hester for their constant support in terms of resources and ideas throughout the project.

9. REFERENCES

[1] RoboCup is an international robotics competition founded in 1993. The aim is to develop autonomous soccer robots with the intention of promoting research and education in the field of artificial intelligence. The name RoboCup is a contraction of the competition's full name, "Robot Soccer World Cup", but there are many other stages of the competition such as search and rescue and robot dancing [www.wikipedia.org]. Official site: http://www.robocup.org/.

[2] G. Dedeoglu and G. S. Sukhatme. Landmark-based matching algorithm for cooperative mapping by autonomous robots. Distributed Autonomous Robotic Systems, 4:251-260, 2000.

[3] F. Dellaert, D. Fox, W. Burgard, and S. Thrun. Monte Carlo localization for mobile robots. In IEEE International Conference on Robotics and Automation (ICRA'99), May 1999.

[4] A. Elfes. Sonar-based real-world mapping and navigation. IEEE Journal of Robotics and Automation, pages 249-265, June 1987.

[5] D. Fox, W. Burgard, H. Kruppa, and S. Thrun. A probabilistic approach to collaborative multi-robot localization. Autonomous Robots, 8:325-344, 2000.

[6] P. Goel, S. Roumeliotis, and G. Sukhatme. Robust localization using relative and absolute position estimates. In Proceedings of the 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 1999.

[7] L. Jetto, S. Longhi, and G. Venturini. Development and experimental validation of an adaptive extended Kalman filter for the localization of mobile robots. IEEE Transactions on Robotics and Automation, 15:219-229, 1999.

[8] R. Kurazume, S. Nagata, and S. Hirose. Multi-robot localization using relative observations. In Proceedings of the 1994 IEEE International Conference on Robotics and Automation, Los Alamitos, CA, pages 1250-1257, May 1994.

[9] A. M. Ladd, K. E. Bekris, G. Marceau, A. Rudys, D. S. Wallach, and L. E. Kavraki. Using wireless Ethernet for localization. In Proceedings of the 2002 IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, September 2002.

[10] L. E. Parker. Current research in multirobot systems. Artificial Life and Robotics, 7(1-2):1-5, August 2006.

[11] S. Roumeliotis and G. Bekey. Distributed multi-robot localization. In Proceedings of Distributed Autonomous Robotic Systems, pages 179-188, October 2000.

[12] M. Sridharan, G. Kuhlmann, and P. Stone. Practical vision-based Monte Carlo localization on a legged robot. In The International Conference on Robotics and Automation, 2005.