Artificial Beacons with RGB-D Environment Mapping for Indoor Mobile Robot Localization


Sensors and Materials, Vol. 28, No. 6 (2016) 695-705
MYU Tokyo, S & M 1227

Chun-Chi Lai and Kuo-Lan Su*

Department of Electrical Engineering, National Yunlin University of Science and Technology, 123 University Road, Section 3, Douliou, Yunlin 64002, Taiwan

(Received April 1, 2015; accepted April 20, 2016)

Keywords: SLAM, map construction, RGB-D mapping, indoor robot localization, particle filter

The most important issue in intelligent mobile robot development is the ability to navigate autonomously through the environment to complete certain tasks. The indoor localization problem of a mobile robot has therefore become a key component of real applications. In general, mobile robot localization techniques fall into two categories: robot pose tracking and robot global localization. In pose tracking problems, such as the simultaneous localization and mapping (SLAM) process, the robot has to find its new pose by associating observed landmarks or features with the knowledge base built from its previous observations. In global localization, the robot has no knowledge of its previous pose and must find its pose directly in the environment, for example by using a global positioning system (GPS). Artificial beacon-based localization techniques, such as those relying on the received signal strength indicator (RSSI), suffer from high pose uncertainty; nevertheless, an artificial beacon can provide a good initial reference for robot position mapping. The gyrocompass of a mobile robot is well suited to short-term dead reckoning, and the RGB-D camera of a mobile robot can record meaningful features or landmarks in 3D space. The purpose of this work is to fuse the advantages of these sensors, via strategy control with a particle filter, to enhance the estimation accuracy of indoor mobile robot localization.

1. Introduction

The problem of intelligent mobile robot self-localization can be divided into two categories: robot pose tracking and robot global localization. In the pose tracking problem, such as the simultaneous localization and mapping (SLAM) process, the robot has to find its new pose using knowledge of its previous pose, i.e., by associating the observed landmarks with those in its knowledge base. In the global localization problem, the robot has no knowledge of its previous pose and must find its pose directly in the environment, for example by using a global positioning system (GPS). Furthermore, in real mobile robot applications the kidnapping problem arises, which combines these two localization problems. For example, while the mobile robot is performing extended Kalman filter (EKF)-SLAM, it may suddenly be kidnapped by being moved to another pose, after which it can no longer associate the landmarks with its knowledge base. Localization then fails, which can be dangerous.

* Corresponding author: e-mail: sukl@yuntech.edu.tw

ISSN 0914-4935, MYU K.K.

In this work, a fused concept combining RGB-D visual landmark extraction, a gyro-odometer, and artificial beacons for mobile robot indoor localization is proposed, as shown in Fig. 1. First, a SLAM process with an RGB-D camera is executed to construct an environment map with visual features. Ideally, the RGB-D camera captures all significant features in the indoor environment, but in practice, visual feature extraction is constrained by the lighting, shapes, and colors of the environment. Hence, in real applications, we added artificial beacons, such as infrared (IR) or radio beacons, to the environment and marked them on the map to aid indoor mobile robot localization. The experimental results show that the proposed concept increases the accuracy of robot pose association and localization.

2. Related Work

2.1 Localization and mapping development of a mobile robot

An intelligent mobile robot is an artificial system that perceives the environment and its own status through sensors in order to navigate in an unknown environment and complete certain tasks. The SLAM problem, also known as concurrent mapping and localization (CML), is thus one of the fundamental challenges of intelligent mobile robot development. The SLAM problem deals with the uncertainty in the pose of a robot when the environment is only partially known or completely unknown: sensor measurements are used to estimate the robot pose and to construct an incremental environment map simultaneously. Many studies and technologies have been developed in this field. (1) To date, SLAM techniques have been classified into two broad families: feature-based SLAM (2) and graph-based SLAM. (3) Feature-based SLAM applies estimation methods from Bayesian probability, whereas graph-based SLAM uses globally optimal estimation techniques for alignment based on relative observations.

Fig. 1. (Color online) The mobile robot platform and proposed system architecture.

2.2 RGB-D environment SLAM

Visual recognition is a critical component of autonomous robot behavior. Since the RGB-D sensor of the Microsoft Kinect, (4) shown in Fig. 2, was announced for the Xbox 360 console in 2009, it has been possible to capture images and depth maps easily and at low cost, and therefore to use the Kinect for various visual recognition studies, including SLAM and the recognition of objects, human gestures, and actions. Figure 2 shows results from RGB-D SLAM.

3. SLAM with RGB-D Camera

3.1 Visual landmark and feature extraction

Numerous computer vision techniques have been proposed to model landmarks for a robot navigating in indoor environments. They all rely on two assumptions: (1) landmarks must be easily detected in the image signal; and (2) landmarks must be locally characterized so that they can be distinguished from other features. Most recent work uses points to define landmarks, taking advantage of powerful interest point detection and characterization algorithms such as the scale-invariant feature transform (SIFT), (5) shown in Fig. 3, which has emerged as an effective methodology in general object recognition as well as in other machine vision applications. An important aspect of this approach is that it generates, for each key point, a large number of local attributes, such as location, scale, rotation, magnitude, and orientation.

Fig. 2. (Color online) Microsoft Kinect RGB-D camera and the results of SLAM.

Fig. 3. (Color online) SIFT feature matching between two different views.
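The descriptor matching between two views (Fig. 3) is typically done by nearest-neighbor search with a ratio test: a match is accepted only when the best candidate is clearly closer than the runner-up. A minimal NumPy sketch on synthetic 128-dimensional descriptors (the descriptors, noise level, and ratio threshold here are illustrative assumptions, not the paper's data):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor matching of SIFT-like descriptors with a ratio test.

    desc_a: (N, 128) descriptors from view A
    desc_b: (M, 128) descriptors from view B
    Returns a list of (i, j) index pairs that pass the ratio test.
    """
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from descriptor i to every descriptor in view B
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Accept only if the best match is clearly better than the runner-up
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Toy example: view B contains noisy copies of view A's descriptors
rng = np.random.default_rng(0)
desc_a = rng.normal(size=(20, 128))
desc_b = desc_a + 0.05 * rng.normal(size=(20, 128))
pairs = match_descriptors(desc_a, desc_b)
```

The ratio test discards ambiguous matches, which is what makes point landmarks "locally characterized" enough for data association.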

3.2 Bayesian filtering SLAM

SLAM using Bayesian filtering involves finding appropriate representations for modeling both the observation and the motion, as shown in Fig. 4. The observation model describes the probability of making an observation z_k at time k when the vehicle's location x_k and the map m (including the landmark locations) are known. It is generally written as

  p(z_k \mid x_k, m).    (1)

The motion model describes the probability distribution of transitions in the robot's state as

  p(x_k \mid x_{k-1}, u_k).    (2)

The state transition is assumed to be a Markov process, in which the next pose x_k depends only on the previous state x_{k-1} and the applied control input u_k. The SLAM algorithm is implemented as a standard two-step recursive (sequential) prediction and correction:

Prediction:

  p(x_k, m \mid z_{0:k-1}, u_{0:k}, x_0) = \int p(x_k \mid x_{k-1}, u_k) \, p(x_{k-1}, m \mid z_{0:k-1}, u_{0:k-1}, x_0) \, dx_{k-1}    (3)

Correction:

  p(x_k, m \mid z_{0:k}, u_{0:k}, x_0) = \frac{p(z_k \mid x_k, m) \, p(x_k, m \mid z_{0:k-1}, u_{0:k}, x_0)}{p(z_k \mid z_{0:k-1}, u_{0:k})}    (4)

The solution requires a representation of both the motion model and the observation model that allows efficient and consistent computation of the prior and posterior distributions. The most popular of the state-of-the-art SLAM methods is the EKF. The EKF linearizes the nonlinear motion model at an estimated linearization point, uses a first-order approximation to represent the state, and involves a Jacobian matrix calculation. The SIFT RGB-D feature map can be constructed directly with EKF-SLAM, as shown in Fig. 5.

Fig. 4. SLAM process with observation and motion.

Fig. 5. (Color online) RGB-D map with SIFT features.
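The prediction-correction recursion of Eqs. (3) and (4) can be made concrete with a one-dimensional histogram filter, in which the integral of Eq. (3) becomes a sum over grid cells. The motion kernel and observation likelihood below are illustrative assumptions, not the paper's models:

```python
import numpy as np

def bayes_filter_step(belief, motion_kernel, likelihood):
    """One prediction-correction cycle of Eqs. (3) and (4) on a 1-D grid.

    belief:        p(x_{k-1} | z_{0:k-1}), shape (N,)
    motion_kernel: p(x_k | x_{k-1}, u_k), shape (N, N); row j is the
                   transition distribution out of cell j
    likelihood:    p(z_k | x_k), shape (N,)
    """
    # Prediction (Eq. 3): marginalize the motion model over the previous state
    predicted = motion_kernel.T @ belief
    # Correction (Eq. 4): weight by the observation likelihood; the
    # normalizer plays the role of p(z_k | z_{0:k-1}, u_{0:k})
    posterior = likelihood * predicted
    return posterior / posterior.sum()

# Toy example: 5 cells, the control moves the robot one cell right (wrapping)
N = 5
motion = np.roll(np.eye(N), 1, axis=1)              # x_k = x_{k-1} + 1
belief = np.full(N, 1.0 / N)                        # uniform prior
z_likelihood = np.array([0.1, 0.1, 0.7, 0.1, 0.1])  # sensor favors cell 2
belief = bayes_filter_step(belief, motion, z_likelihood)
```

The same two-step structure underlies both the EKF (Gaussian representation) and the particle filter used later in this paper (sample-based representation).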

4. System Implementation

4.1 EKF construction of an indoor environment and beacon post-processing

A number of SLAM application programming interfaces (APIs) exist for real mobile robot SLAM implementation, such as the Mobile Robot Programming Toolkit (MRPT). In this work, an EKF-SLAM toolbox using the classical EKF implementation was first applied to simulate a Kinect RGB-D camera and construct an environmental feature map. (6) Figure 6 shows the EKF-SLAM process with point landmarks in a global view, together with the point landmarks captured in the image of the RGB sensor on the mobile robot. Ideally, the RGB-D camera is expected to capture all significant features of the indoor environment, but in practice, visual feature extraction is constrained by lighting, shapes, and colors. Hence, in real applications, artificial beacons, such as IR or radio beacons, are added to the environment and marked on the map to aid indoor localization. The concept is shown in Fig. 7, where the diamond shapes on the map correspond to the visual features in the environment, and the blue circles are man-made beacons placed at corresponding positions in the environment.

4.2 Monte Carlo particle filter (PF)

A PF, also called CONDENSATION (conditional density propagation), (7) is based on Monte Carlo and Bayesian methods. The PF uses random sampling: each particle represents a hypothesis of the location (x, y) and orientation (θ) of the robot. For example, in Fig. 8, 1000 green particles are initialized with a uniform distribution; the z-axis represents the robot heading angle in radians. An advantage of the PF is its robustness to background noise. For mobile robot applications, two types of data are distinguished: perceptual data, such as landmark measurements, and odometer or control data, which carry information about robot motion.
The PF algorithm for mobile robot self-localization is shown in Table 1, in which x_t is the robot pose at time t, u_t is the robot motion command, and z_t is the robot's measurement.

Fig. 6. (Color online) EKF-SLAM toolbox for map construction: global mapping in EKF-SLAM and feature landmarks in the camera view.
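The algorithm of Table 1 can be sketched in Python. The Gaussian odometry and range-measurement models below are illustrative assumptions standing in for sample_motion_model and measurement_model; the weighted draw corresponds to the importance-resampling loop of the table. Note that with a single range-only landmark the particles converge to a circle of radius z around the landmark, not to a unique pose:

```python
import numpy as np

rng = np.random.default_rng(1)

def pf_step(particles, u, z, landmark, motion_noise=0.1, meas_noise=0.2):
    """One iteration of the PF of Table 1 for a planar pose (x, y, theta).

    particles: (M, 3) array of poses x_{t-1}^[m]
    u:         (dx, dy, dtheta) odometry command u_t
    z:         measured range z_t to a known landmark
    landmark:  (lx, ly) landmark position taken from the map m
    """
    M = len(particles)
    # 1. sample_motion_model: propagate each particle with noisy odometry
    moved = particles + np.asarray(u) + rng.normal(scale=motion_noise, size=(M, 3))
    # 2. measurement_model: weight by the likelihood of the observed range
    ranges = np.linalg.norm(moved[:, :2] - landmark, axis=1)
    w = np.exp(-0.5 * ((ranges - z) / meas_noise) ** 2)
    w /= w.sum()
    # 3. importance resampling: draw index i with probability w^[i]
    idx = rng.choice(M, size=M, p=w)
    return moved[idx]

# Toy run: 1000 particles initialized uniformly on a 10 m x 10 m map
parts = rng.uniform([0, 0, -np.pi], [10, 10, np.pi], size=(1000, 3))
true_pose = np.array([2.0, 3.0, 0.0])
landmark = np.array([5.0, 5.0])
for _ in range(30):
    z = np.linalg.norm(true_pose[:2] - landmark)  # noise-free range for brevity
    parts = pf_step(parts, u=(0.0, 0.0, 0.0), z=z, landmark=landmark)
```

In practice, several landmarks (or the beacon constraint of Sect. 4.3) are needed to break this range-only ambiguity.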

Fig. 7. (Color online) Visual features/landmarks and IR or radio beacons on the map.

Table 1
PF algorithm for mobile robot localization.

  Input: X_{t-1}, u_t, z_t, m
  Output: X_t
  X̄_t = X_t = ∅
  for m = 1 to M do
      1. x_t^[m] = sample_motion_model(u_t, x_{t-1}^[m])
      2. w_t^[m] = measurement_model(z_t, x_t^[m], m)
      3. X̄_t = X̄_t + <x_t^[m], w_t^[m]>
  endfor
  for m = 1 to M do
      1. draw i with probability ∝ w_t^[i]
      2. add x_t^[i] to X_t
  endfor
  return X_t

Fig. 8. (Color online) PF initialization with a uniform distribution.

4.3 Strategy control with beacons

The strategy is to control the sampling field when the PF is initialized for robot pose estimation. The main idea is that a wireless beacon's radio signal strength, or an infrared beacon's transmission scope, can be pre-measured or pre-determined as a circular field. Figure 9 shows the green particles sampled uniformly over the entire map, because there are no cues from which to guess the robot's initial pose for PF initialization. However, if a beacon is mounted on the ceiling above the robot, as also shown in Fig. 9, then all the green particles can be constrained to a uniform distribution inside the circular boundary, which speeds up PF initialization. Furthermore, for a consistent PF estimate, we extended this concept as follows: when the standard deviation of all particles in the robot pose estimate exceeds a threshold, e.g., half the radius of the beacon's scope, as shown in Fig. 10, and the nearest beacon can be identified, the particles are re-sampled within that beacon's scope with their new mean position set to the last mean position. The complete flow diagram for re-sampling control is shown in Fig. 10(c).
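The beacon strategy of Sect. 4.3 amounts to two rules: initialize the particles uniformly inside the beacon's circular field rather than over the whole map, and re-sample around the last mean position when the particle standard deviation exceeds the threshold of half the beacon radius. A sketch under those assumptions (the beacon position and 2 m radius below are illustrative values):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_in_circle(center, radius, n):
    """Uniform (x, y) samples inside a beacon's circular field."""
    ang = rng.uniform(0, 2 * np.pi, n)
    # sqrt on the radial draw gives an area-uniform distribution
    rad = radius * np.sqrt(rng.uniform(0, 1, n))
    return np.column_stack([center[0] + rad * np.cos(ang),
                            center[1] + rad * np.sin(ang)])

def resample_control(particles, radius):
    """Re-sample when the spread exceeds radius/2, keeping the last mean
    position as the new mean (the Fig. 10 strategy)."""
    spread = particles.std(axis=0).max()
    if spread > radius / 2:
        return sample_in_circle(particles.mean(axis=0), radius, len(particles))
    return particles

# Initialization constrained to a beacon field at (4, 6) with a 2 m radius
beacon = np.array([4.0, 6.0])
parts = sample_in_circle(beacon, 2.0, 1000)
```

Constraining the initial sample field this way is what lets the PF start from a few hundred useful hypotheses instead of a map-wide uniform cloud.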

Fig. 9. (Color online) PF initialization with/without a beacon: uniform sampling over the entire map, and uniform sampling within the beacon's field.

Fig. 10. (Color online) Consistent particle re-sampling control: the estimated deviation increases, re-sampling around the last mean position, and (c) flow diagram of particle re-sampling.

5. Experimental Results

5.1 Comparison of the number of landmarks for PF localization

The first experiment indicated that the number of landmarks/features and the number of particles affect the accuracy of the estimated pose. With 100 landmarks and 2000 particles on the map for mobile robot self-localization, as shown in Fig. 11, the red line represents the PF estimate and the blue line the actual trajectory of the robot over a 2500-time-step (250 s) simulation. Figure 11 also shows the standard deviation of the robot pose estimate (blue, x; green, y; red, θ); the PF achieved excellent pose estimation over the time steps. Figure 11(c) shows a simulation with 100 landmarks and 1000 particles, where the initial estimate could not locate the actual robot pose, but after 80 time steps the particles converged to the robot's actual trajectory. Figure 11(d) shows a simulation with 30 landmarks and 1000 particles, where estimating the mobile robot's trajectory becomes difficult.

Fig. 11. (Color online) Comparison of estimates for different numbers of landmarks and particles; the RGB-D sensor observation is limited to a 6 m range and −45 to 45 deg, with a deviation of 0.008 m and 1 deg; the odometer deviation is 0.005 m and 2 deg. landmarks = 100, particles = 2000; the pose standard deviation at each time step; (c) landmarks = 100, particles = 1000; and (d) landmarks = 30, particles = 1000.

5.2 Estimation with an initial beacon

The experiment in Sect. 5.1 showed that the number of landmarks is key to increasing the accuracy of the PF estimate. In real applications, however, visual landmarks are constrained by the lighting, shapes, and colors of the environment. Thus, artificial beacons were added to assist robot self-localization. The concept and comparison are shown in Figs. 9 and 12. Without artificial beacons, as shown in Fig. 9, the initial estimate of the robot position is a uniform distribution over the map, and it is difficult to converge to the actual pose in the beginning phase when few visual landmarks are observed. When more landmarks were observed in the middle time steps, the PF converged to the actual robot trajectory, as shown in Fig. 12. When an artificial beacon was added, as shown in Fig. 9, it restricted the initial particle sampling field, and Fig. 12 shows an acceptable estimate matching the actual trajectory.

5.3 Multiple beacons with strategy control

In the experiment in Sect. 5.2, an initial beacon restricted the initial particle sampling field enough to reach an acceptable estimate matching the actual trajectory. In this experiment, multiple beacons were simulated with the strategy control to assist mobile robot localization in an environment with fewer visual landmarks (landmarks = 20). Figure 13 shows the comparisons for landmarks = 20 and particles = 800. Without artificial beacons to assist localization, the mean position error was 3.5510 m; with artificial beacons, the mean position error was 0.4248 m, clearly a better result than that without beacons.

Fig. 12. (Color online) PF comparisons with/without an initial beacon: the initial estimates are wrong, and an acceptable estimate is obtained with an initial beacon.

Fig. 13. (Color online) PF comparisons with/without multiple beacons: landmarks = 20, particles = 800. Without beacon control, the position estimation mean error was 3.5510 m, and the standard deviation of the estimate increased when no landmarks were observed; with multiple-beacon control, the position estimation mean error was 0.4248 m, and the standard deviation of the estimate was maintained within the threshold when the beacon signal was received.

Figure 14 shows a repeat of the experiment with particles = 2000. This experiment indicated that more particles enhance the estimation accuracy: without beacon control, the mean position error decreased to 1.0791 m, and with beacon control, to 0.2589 m. All the comparisons are summarized in Table 2.

Fig. 14. (Color online) PF comparisons with/without multiple beacons: landmarks = 20, particles = 2000. Without beacon control, the position estimation mean error was 1.0791 m, and the standard deviation of the estimate increased when no landmarks were observed; with multiple-beacon control, the position estimation mean error was 0.2589 m, and the standard deviation of the estimate was maintained within the threshold when the beacon signal was received.

Table 2
Position mean error comparison with/without beacons (20 landmarks).

  With beacons    800 particles    2000 particles
  No              3.5510 m         1.0791 m
  Yes             0.4248 m         0.2589 m
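The position mean errors of Table 2 are, presumably, the average Euclidean distance between the estimated and true (x, y) trajectories over all time steps. A small helper makes the metric explicit (the trajectories below are synthetic, not the paper's data):

```python
import numpy as np

def position_mean_error(estimated, actual):
    """Mean Euclidean distance between estimated and true (x, y) trajectories,
    both given as (T, 2) arrays over T time steps."""
    return float(np.linalg.norm(estimated - actual, axis=1).mean())

# Synthetic check: a straight-line trajectory with a constant 0.3 m offset
t = np.linspace(0, 250, 2500)
actual = np.column_stack([t / 25.0, np.zeros_like(t)])
estimated = actual + np.array([0.0, 0.3])
err = position_mean_error(estimated, actual)
```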

6. Conclusions

In this work, a fused concept of RGB-D visual landmarks, a gyro-odometer, and artificial beacons was proposed to increase the accuracy of mobile robot indoor localization. First, an EKF-SLAM process using an RGB-D camera was carried out to construct an environment map with visual features. Ideally, the RGB-D camera would capture all significant features in the indoor environment, but in practice, visual feature extraction was constrained by the lighting, shapes, and colors of the environment. Hence, in real applications, we added artificial beacons, such as IR or radio beacons, to the environment and marked them on the map to aid indoor mobile robot self-localization. In addition, a re-sampling control strategy using the PF was proposed for consistent estimation of the robot trajectory. Finally, the experimental results show that the proposed method increases the accuracy of mobile robot association and localization when few visual features are present in the environment.

References

1 T. Bailey and H. Durrant-Whyte: IEEE Rob. Autom. Mag. 13 (2006) 108.
2 M. R. Walter, R. M. Eustice, and J. J. Leonard: Int. J. Rob. Res. 26 (2007) 335.
3 G. Grisetti, R. Kummerle, C. Stachniss, and W. Burgard: IEEE Intell. Transp. Syst. Mag. 2 (2010) 31.
4 Z. Zhang: IEEE Multimedia 19 (2012) 4.
5 P. Scovanner, S. Ali, and M. Shah: Proc. 15th Int. Conf. Multimedia (2007) p. 357.
6 J. Sola: IEEE Int. Conf. Rob. Autom. (2010) p. 3513.
7 C. Rougier, J. Meunier, A. St-Arnaud, and J. Rousseau: IEEE Int. Conf. Med. Biol. (2006) p. 6384.