Autonomous Localization

Autonomous Localization
Jennifer Zheng, Maya Kothare-Arora

I. Abstract

This paper presents an autonomous localization service for the Building-Wide Intelligence (BWI) segbots at the University of Texas at Austin. The BWI segbots currently localize manually: a human must set a 2D pose estimate in the RViz GUI and then drive the robot so it can accumulate sensor data and determine its location. To localize more intelligently, we implemented a feature that uses the global localization service to begin the localization process and the ROS topic cmd_vel to drive the robot. We also explored different distances, locations, velocities, and time ranges to determine which values would produce the most accurate results.

II. Introduction

The Building-Wide Intelligence segbots currently localize either by requiring users to manually indicate the robot's location via the 2D Pose Estimate function in RViz or by having users teleoperate the robot while it attempts to localize via the global localization service. Our project allows the BWI segbots to localize with minimal human guidance, a step towards more autonomous robots. We used the global localization service in the ROS amcl package, along with the pre-existing map of the third floor, to implement our idea. The global localization service distributes particles across the map; as the robot moves around the area, it gathers data from its sensors and clusters the particles around the robot's predicted locations. To
drive the robot and accumulate data for the global localization service, we used the cmd_vel topic. Since the accuracy of localization is highly dependent on the global localization service, we also experimented with different values for variables such as speed, distance, and location to ensure that our implementation would provide the most accurate results possible.

III. Background and Related Works

Much research has been devoted to the autonomous localization problem. Some approaches frame it as the Kidnapped Robot Problem. The paper "Quick and Dirty Localization for a Lost Robot" by Uwe Gerecke and Noel Sharkey details a way in which a robot can determine its location when placed in a new environment. The robots create a cluster of locations on a single node and can use reference points to localize. The localization works in three steps. First, the SOM proposes several possible locations for the robot based on sensor input, and each of these locations is incremented by 1 in an evidence vector. Then the robot moves a small distance and reads in new sensor data, and the evidence vector is updated again (each possible location is incremented by 1). Lastly, evidence shifting is performed. This process is repeated iteratively [1]. A somewhat similar approach was taken in "A Near-tight Approximation Lower Bound and Algorithm for the Kidnapped Robot Problem" by Sven Koenig, Apurva Mudgal, and Craig Tovey. This approach splits the problem into two parts: hypothesis generation and hypothesis elimination. Sensor data helps create a set of hypotheses, and hypothesis elimination is necessary to narrow that set down to the exact location. In this
approach, hypothesis elimination takes place in stages, with the set of hypotheses being halved in each phase. This happens by classifying each hypothesis h in the initial set as either blocked or traversable [2]. Because we are using the global localization service, our code also works by first dealing with a large set of hypotheses and then limiting the set based on data from the sensors. Other works in autonomous localization include "Robot Pose Estimation in Unknown Environments by Matching 2D Range Scans," which introduces two algorithms for evaluating a robot's relative location. Its localization method involves using sensors and comparing sensor data to a map using the algorithms [3]. However, this approach depends on the accuracy of the sensors; to implement it on the BWI segbots, we would need to account for noise in the sensor data. "Mobile Robot Localization by Tracking Geometric Beacons" approaches localization using geometric beacons and an algorithm developed by the authors. The algorithm is based on an extended Kalman filter that matches beacons against a map, using geometry to pinpoint the location of the robot. There is also a validation gate that accounts for noise when localizing [4]. Unfortunately, we could not implement this exact method, as the robots in the paper use sonar rather than lasers. Finally, "Monocular Vision for Mobile Robot Localization and Autonomous Navigation" proposes a localization method using a camera and outdoor landmarks. This method involves recording a video sequence, building a 3D map from the sequence, and using the map to localize [5]. The approach involves a human initially driving the robot in order to record the video; however, we want the robot to localize with minimum human involvement. We also cannot implement our localization the same way the authors did
because our landmarks are more subject to change. Our project relies on the fact that the robot will be localizing in the lab, rather than in a big open space outside.

IV. Technical Approach

We first experimented with the accuracy of the global localization service. Using the v3 BWI segbot, we varied the min and max particle counts in the amcl.launch file and varied the speed at which we teleoperated the robot.

Min particles   Max particles   Speed (m/s)   Successful?
40,000          160,000         0.5           no
25,000          100,000         0.5           no
10,000          40,000          0.5           yes
10,000          40,000          0.44          yes
10,000          40,000          0.39          yes
10,000          40,000          0.25          no
5,000           20,000          0.5           no

We concluded that 10,000 min particles and 40,000 max particles gave the most accurate results when localizing with the global localization service. We also found that the robot's speed affects localization: 0.39 m/s is the slowest the robot could move while still localizing accurately.
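The particle bounds we tuned are ordinary amcl parameters. A minimal launch fragment with the values we settled on might look as follows; this is an illustrative sketch, not the actual BWI amcl.launch, which sets many more parameters:

```xml
<launch>
  <!-- amcl with the particle bounds that localized most accurately for us -->
  <node pkg="amcl" type="amcl" name="amcl">
    <param name="min_particles" value="10000"/>
    <param name="max_particles" value="40000"/>
  </node>
</launch>
```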

To implement our solution, we first attempted to use the move_base topic, setting goals to move the robot forward and spin so that it would accumulate sensor data. However, because we were setting goals before the robot was localized, the robot was unable to generate a path. To address this problem, we used the cmd_vel topic and set the linear speed in the x direction to move the robot forward. We then tried two different paths for the robot to determine which would produce the most accurate results. In the first, the robot moved at a linear velocity of 0.5 m/s in a straight line down the hallway outside the lab for 20 seconds. In the second, the robot moved straight at 0.5 m/s for 5 seconds and then rotated for 3 seconds, repeating this cycle for a total of 32 seconds.

V. Evaluation and Example Demonstration

The robot localized best around open areas with distinct barriers, such as the cubicles near the doors to the elevators. Generally, it would not localize while driving through the hallway, where it was surrounded by walls, but would localize within a few seconds of reaching the open space at the end of the hallway.
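The second driving pattern above can be sketched as a simple time-based velocity schedule. This is a minimal illustration, not the project's actual node: the 0.5 m/s forward speed, the 5 s/3 s cycle, and the 32 s total come from the text, while the 0.5 rad/s rotation rate is an assumed value (the paper does not give one). In the real system the returned pair would be copied into a geometry_msgs/Twist and published on cmd_vel after calling amcl's global localization service.

```python
def velocity_command(t, forward_speed=0.5, turn_rate=0.5,
                     forward_secs=5.0, rotate_secs=3.0, total_secs=32.0):
    """Return (linear_x, angular_z) for elapsed time t in seconds.

    Repeats a drive-then-rotate cycle: forward for 5 s at 0.5 m/s,
    then rotate in place for 3 s, until 32 s have passed.
    turn_rate is an assumed value; the paper does not specify it.
    """
    if t >= total_secs:
        return (0.0, 0.0)            # run finished: stop the robot
    phase = t % (forward_secs + rotate_secs)
    if phase < forward_secs:
        return (forward_speed, 0.0)  # drive straight
    return (0.0, turn_rate)          # rotate in place
```

In a rospy node this schedule would be sampled in a loop (e.g. at 10 Hz), with each pair written into twist.linear.x and twist.angular.z before publishing.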

Figure 1: The global localization service is called, and as the robot moves, the particles begin to cluster. Note that the robot did not localize accurately when the path ran from the BWI lab to the lounge.

We concluded that the robot would not localize until the end of the hallway because of the homogeneity of the sensor data gathered while driving through the hallway. Since the third floor has many hallways, it was hard for the robot to determine which one it was in. Once it reached the cubicles, however, the data gathered was distinct enough for the global localization particles to cluster at the correct location. The most favorable path ran from the lounge area near the lab to the doors leading to the elevators, as shown in Figure 2.

Figure 2: The robot successfully localized once it reached the cubicle area.
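Whether the particles have clustered can also be judged programmatically rather than by eye. A minimal sketch, assuming access to the (x, y) positions of amcl's particle cloud: treat localization as converged once the particles' standard deviation along both axes falls below a threshold. The 0.5 m threshold is an assumed value, not one used in our project.

```python
import math

def has_converged(particles, threshold=0.5):
    """Return True when the particle cloud has collapsed to a tight cluster.

    particles: list of (x, y) positions in metres.
    threshold: assumed maximum standard deviation (m) along either axis.
    """
    n = len(particles)
    mean_x = sum(p[0] for p in particles) / n
    mean_y = sum(p[1] for p in particles) / n
    var_x = sum((p[0] - mean_x) ** 2 for p in particles) / n
    var_y = sum((p[1] - mean_y) ** 2 for p in particles) / n
    return math.sqrt(var_x) < threshold and math.sqrt(var_y) < threshold
```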

We also discovered that having the robot rotate periodically as it advanced down the path, as opposed to just moving in a straight line, did not give better results. We deduced that this was because the v3 segbot has 360-degree sensors and therefore already accumulates data from the entire area surrounding it.

Demonstration: https://www.youtube.com/watch?v=gqgcwbj2h5i&feature=youtu.be
Code: https://github.com/jennifer-zheng/autonomous-localization

VI. Conclusion and Future Work

Our code was able to call the global localization service and move the robot down the hallway far enough to localize; however, the accuracy of localization was not consistent. Additionally, because we had to use the cmd_vel topic, the robot does not detect obstacles while running our code, and therefore it is not yet safe to run without a human supervising the robot. Ideally, we want the robot to be able to carry out this process without human supervision. In the future, we could fetch data from the map to estimate where obstacles are, making it safer for the robot to run our code even while publishing to the cmd_vel topic.

VII. References

[1] Gerecke, Uwe, and Noel Sharkey. "Quick and Dirty Localization for a Lost Robot." Computational Intelligence in Robotics and Automation (1999): 262-67. IEEE Xplore. Web. 18 Apr. 2017.

[2] Koenig, Sven, Apurva Mudgal, and Craig Tovey. "A Near-tight Approximation Lower Bound and Algorithm for the Kidnapped Robot Problem." SODA '06: Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms (2006): 133-42. ACM Digital Library. Web. 4 May 2017.

[3] Lu, Feng, and Evangelos E. Milios. "Robot Pose Estimation in Unknown Environments by Matching 2D Range Scans." Proceedings of the 1994 IEEE Conference on Computer Vision and Pattern Recognition (1994): 935-938.

[4] Leonard, John J., and Hugh F. Durrant-Whyte. "Mobile Robot Localization by Tracking Geometric Beacons." IEEE Transactions on Robotics and Automation 7.3 (1991): 376-382.

[5] Royer, E., Lhuillier, M., Dhome, M., et al. "Monocular Vision for Mobile Robot Localization and Autonomous Navigation." Int J Comput Vision 74 (2007): 237.