Mobile Robot Localization Using Fusion of Object Recognition and Range Information


2007 IEEE International Conference on Robotics and Automation, Roma, Italy, 10-14 April 2007. FrB1.3

Byung-Doo Yim, Yong-Ju Lee, Jae-Bok Song and Woojin Chung

This research was conducted by the Intelligent Robotics Development Program, one of the 21st Century Frontier R&D Programs funded by the Ministry of Commerce, Industry and Energy of Korea. Byung-Doo Yim is with the Dept. of Mechatronics, Korea University, Seoul, Korea (e-mail: ybd101@korea.ac.kr). Yong-Ju Lee is with the Dept. of Mechanical Eng., Korea University, Seoul, Korea (e-mail: yongju_lee@korea.ac.kr). Jae-Bok Song is a Professor in the Dept. of Mechanical Eng., Korea University, Seoul, Korea (tel.: +82 2 3290 3363; fax: +82 2 3290 3757; e-mail: jbsong@korea.ac.kr). Woojin Chung is an Assistant Professor in the Dept. of Mechanical Eng., Korea University, Seoul, Korea (tel.: +82 2 3290 3375; e-mail: smartrobot@korea.ac.kr).

Abstract - Most present localization algorithms are either range-based or vision-based. In many environments, a single type of sensor often cannot ensure successful localization; furthermore, using low-priced range sensors instead of expensive but accurate laser scanners often leads to poor performance. This paper proposes an MCL-based localization method that robustly estimates the robot pose by fusing the range information from a low-cost IR scanner with the SIFT-based visual information gathered by a mono camera. With sensor fusion, the rough pose estimate obtained from the range sensor is compensated by the vision sensor, and slow object recognition is overcome by the frequent update of the range information. To synchronize the two sensors, which have different bandwidths, the encoder information gathered during object recognition is exploited. This paper also suggests a method for evaluating localization performance based on the normalized probability of a vision sensor model. Various experiments show that the proposed algorithm can estimate the robot pose reasonably well and can accurately evaluate the localization performance.

I. INTRODUCTION

Localization is the task of estimating the pose of a robot from an environmental map and information from sensors mounted on the robot. It is a fundamental and important capability for an autonomous mobile robot. Range sensors, such as laser and IR scanners, have been extensively used for global localization. However, when only range sensors are employed, the estimation error of the robot's pose increases in dynamic or cluttered environments. It is also difficult to find an accurate robot pose when the robot is placed in a geometrically simple environment such as a hallway. On the other hand, a vision sensor usually provides more information than a range sensor, and it provides good performance at a low cost. Therefore, localization using vision sensors has drawn much attention in recent years. However, most vision-based localization algorithms suffer from the shortcoming that their execution takes longer than range-based localization because of the computation time needed to extract feature information from the camera image.

In the past decade, substantial effort has been directed toward the development of vision-based global localization. In topological Markov localization [1], the input image was compared to the images stored at each node of a topological map, and the node at which the robot was located was found by Markov localization. With this method, however, it is difficult to obtain an accurate robot pose if the robot is between nodes. Markov localization using ceiling information [2], [3] led to successful localization even when people surrounded the robot.
However, in an environment with a low ceiling, the robot could not collect sufficient information for localization from the camera image because the image covered only a small portion of the ceiling. A vision-based SLAM approach was proposed that used the Scale Invariant Feature Transform (SIFT) algorithm [4] with stereo vision [5], [6]. This approach requires a large amount of memory because it must store all keypoints of the entire environment, and it uses three cameras (a mono camera and a stereo camera).

Localization using relatively cheap sensors is important from a practical point of view, but localization with low-priced sensors seldom provides good performance in various environments because of inaccurate sensor measurements. Neither the range-based nor the vision-based scheme alone can overcome these sensor limitations; therefore, sensor-fusion-based localization should be implemented to compensate for the shortcomings of each sensor. This paper proposes a global localization algorithm based on the fusion of the range information from a low-cost IR scanner and the visual information from a mono camera. The proposed localization scheme is mainly based on the Monte Carlo Localization (MCL) algorithm [7]. Dependable navigation becomes possible because the relatively poor range accuracy of an IR scanner can be compensated through vision-based localization, and slow object recognition can be overcome by the frequent update of the range information.

One problem involved in the fusion of range and visual data is their different processing times. That is, the range information has a higher update rate than the visual information because object recognition based on SIFT feature extraction requires a long computation time, especially when the object has many features. In this paper, the data from the two sensors are synchronized by compensating for the time delay caused by the slow vision-based localization using the encoder information.

Another issue in global localization is the evaluation of localization performance. The capability of detecting and recovering from localization failure is essential for autonomous navigation, because no localization algorithm guarantees success at all times. The MCL algorithm uses random samples to cope with the kidnapped robot problem and localization failure [8], but if the number of random samples is not sufficient, the time required to detect localization failure may be quite long. This paper proposes a scheme that evaluates the localization performance based on the normalized probability of the vision sensor model. If the localization result is judged to be a failure, the proposed scheme can recover the robot pose using a recognized object.

The remainder of this paper is organized as follows. Section II presents the vision sensor model and the range sensor model. Section III presents the fusion of the two sensor models for MCL. Section IV describes the detection of and recovery from localization failure. Finally, conclusions are outlined in Section V.

II. SENSOR MODELS

In this research, the range and vision sensors are fused for improved localization of a mobile robot. Instead of a laser scanner, which is very accurate but expensive, an IR scanner is used as the main range sensor. The IR scanner generates a vector of 121 range values with a resolution of 1.8°. An inexpensive mono camera is employed as the main vision sensor instead of a stereo camera, which is expensive but can provide range information. Objects are recognized by the well-known SIFT algorithm to extract the visual features. A sensor model for each sensor is required to update the probability of the random samples (i.e., the candidates for the robot pose) used in MCL.

A. Range sensor model

In the range sensor model, the probability of the samples is updated according to the difference between the range data measured by the IR scanner and those computed from the sample pose on the map, as shown in Fig. 1. That is, if the robot pose at time t is denoted as x_t, the probability of sample i (i = 1, ..., N) is updated by

p_ir(z_t | x_t^(i)) = 1 / ( Σ_{k=1}^{k_t} | z_t(k) − d_t^(i)(k) | )    (1)

where z_t(k) represents the k-th value of the range data measured at time t (k = 1, ..., 121), and d_t^(i)(k) is the k-th value of the range data computed from sample i on the map. Although the IR scanner provides a total of 121 range readings, only the readings less than 4 m (k_t is the total number of such readings) are used in Eq. (1), because readings exceeding 4 m are found to be unreliable.

Fig. 1 Range data; (a) measured by the actual sensor, and (b) computed from a candidate robot pose.
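As an illustration, the following Python sketch shows one plausible reading of the range sensor model in Eq. (1): the likelihood of a sample is the reciprocal of the summed absolute difference between measured and ray-cast ranges over the valid beams. The array shapes, the beam count, and the small epsilon are illustrative assumptions, not part of the paper.

```python
import numpy as np

def range_likelihood(z_measured, z_expected, max_range=4.0):
    """Eq. (1) sketch: likelihood of one sample pose given an IR scan.

    z_measured: ranges from the IR scanner for the current scan (one value per beam)
    z_expected: ranges ray-cast from the sample pose on the grid map (same length)
    Beams measuring max_range (4 m) or more are discarded, as in the paper.
    """
    z_measured = np.asarray(z_measured, dtype=float)
    z_expected = np.asarray(z_expected, dtype=float)
    valid = z_measured < max_range                        # keep only the k_t beams under 4 m
    error = np.abs(z_measured[valid] - z_expected[valid]).sum()
    return 1.0 / (error + 1e-6)                           # epsilon avoids division by zero
```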
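A minimal sketch of how the system in Eq. (3) can be solved in practice, assuming the matched SIFT keypoints are available as coordinate pairs; numpy's least-squares routine applies the pseudo-inverse when more than three pairs are given. Function and variable names are hypothetical.

```python
import numpy as np

def fit_affine(db_points, img_points):
    """Solve Eq. (3) for the affine parameters of Eq. (2).

    db_points : iterable of (x_i, y_i) keypoints from the database image
    img_points: iterable of matched (u_i, v_i) keypoints from the current image
    Returns M = [[m1, m2], [m3, m4]] and the translation t = [dx, dy].
    """
    rows, rhs = [], []
    for (x, y), (u, v) in zip(db_points, img_points):
        rows.append([x, y, 0.0, 0.0, 1.0, 0.0])   # row generating u_i
        rows.append([0.0, 0.0, x, y, 0.0, 1.0])   # row generating v_i
        rhs.extend([u, v])
    A = np.asarray(rows)
    b = np.asarray(rhs)
    # Exactly 3 matches give an invertible 6x6 system; more matches are solved
    # in the least-squares sense, equivalent to applying the pseudo-inverse.
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    m1, m2, m3, m4, dx, dy = params
    return np.array([[m1, m2], [m3, m4]]), np.array([dx, dy])

def project_center(M, t, center_db):
    """Map the database center point (x_c, y_c) into the current image via Eq. (2)."""
    return M @ np.asarray(center_db, dtype=float) + t
```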

To obtain the relative angle to the recognized object from the robot, the extracted center point of the object must be transformed from the image plane into the robot frame. This angle at time t, denoted θ_t, is given by

θ_t = tan⁻¹[ (w_image/2 − u_c) · tan(fov/2) / (w_image/2) ]    (4)

where (x_r, y_r) in Fig. 3(a) represents the robot frame, u_c is the u coordinate of the object's center point in the image frame, w_image is the number of pixels of the image plane along the u axis (e.g., 320 pixels), and fov is the camera's field of view.

Fig. 3 Extraction of visual feature; (a) the center point of the object in the image, (b) detection range of the IR scanner.

In this research, the IR scanner is used to compensate for the mono camera's inability to provide range information. As shown in Fig. 3(b), the camera's field of view is always included in the scanning range of the IR scanner. Therefore, the range reading corresponding to the angle θ_t is used as the range to the object's center point. A more accurate range value can be obtained by interpolating between the two adjacent range readings, which are separated by 1.8°, the angular resolution of the IR scanner.

Fig. 4 Range and relative angle to the recognized object; (a) measured from the robot, and (b) computed from sample i.
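The bearing extraction of Eq. (4) and the interpolation of the IR range at that bearing might look as follows. The default camera parameters (image width, field of view) and the beam-indexing convention are assumptions made purely for illustration.

```python
import math

def bearing_from_pixel(u_c, w_image=320, fov_deg=60.0):
    """Eq. (4) sketch: relative angle (deg) to the object's center point.

    u_c is the center's u coordinate in pixels; w_image and fov_deg are
    hypothetical camera parameters (image width and horizontal field of view).
    """
    half_w = w_image / 2.0
    half_fov = math.radians(fov_deg) / 2.0
    return math.degrees(math.atan((half_w - u_c) * math.tan(half_fov) / half_w))

def range_at_bearing(scan, bearing_deg, resolution_deg=1.8):
    """Interpolate the IR range at the bearing of the object's center point,
    assuming beam k lies at angle k * resolution_deg from the first beam."""
    idx = bearing_deg / resolution_deg
    k = max(0, min(len(scan) - 2, int(math.floor(idx))))
    frac = min(1.0, max(0.0, idx - k))
    return (1.0 - frac) * scan[k] + frac * scan[k + 1]
```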
In the vision sensor model shown in Fig. 4, the range d_t and relative angle θ_t to the recognized object, measured from the robot, are compared with d_t^(i) and θ_t^(i), the range and relative angle to the object computed from sample i on the map. Based on the difference between the measured and computed values, the probabilities are updated as follows:

p_d(z_t | x_t^(i)) = η_r · (1 / (√(2π) σ_d̂)) · exp( −(d_t − d_t^(i))² / (2 σ_d̂²) )    (5)

p_θ(z_t | x_t^(i)) = η_θ · (1 / (√(2π) σ_θ̂)) · exp( −(θ_t − θ_t^(i))² / (2 σ_θ̂²) )    (6)

where p_d(z_t | x_t^(i)) and p_θ(z_t | x_t^(i)) are the probabilities associated with the range and the relative angle, and η_r and η_θ are the corresponding normalizing constants. Each sensor model is a Gaussian with mean d_t (or θ_t) and standard deviation σ_d̂ (or σ_θ̂). Combining the two models, the overall vision sensor model is given by

p_v(z_t | x_t^(i)) = p_d(z_t | x_t^(i)) · p_θ(z_t | x_t^(i))    (7)
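A small sketch of the vision sensor model in Eqs. (5)-(7), assuming the normalizing constants η_r and η_θ are folded into a later weight normalization; the default sigmas echo the base uncertainties quoted in Section III (0.2 m and 3°), before the inflation of Eq. (10).

```python
import math

def gaussian(x, mean, sigma):
    """Gaussian density used by Eqs. (5) and (6)."""
    return math.exp(-((x - mean) ** 2) / (2.0 * sigma ** 2)) / (math.sqrt(2.0 * math.pi) * sigma)

def vision_likelihood(d_meas, theta_meas, d_sample, theta_sample,
                      sigma_d=0.2, sigma_theta=3.0):
    """Eqs. (5)-(7) sketch: product of the range and angle Gaussians for one sample.
    The normalizing constants are omitted on the assumption that the sample
    weights are renormalized afterwards anyway."""
    p_d = gaussian(d_meas, d_sample, sigma_d)                    # Eq. (5)
    p_theta = gaussian(theta_meas, theta_sample, sigma_theta)    # Eq. (6)
    return p_d * p_theta                                         # Eq. (7)
```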

III. FUSION OF RANGE AND VISION SENSORS

Vision-based localization can generally give better performance than range-based localization, provided there are many objects with visual features in the environment. However, because only a small number of objects can serve as visual features in normal indoor environments, vision-based localization alone cannot provide satisfactory performance in most environments. Thus, if no recognized object is available at the current robot pose, only the range sensor model is used to update the probability of the samples:

p(z_t | x_t^(i)) = p_ir(z_t | x_t^(i))    (8)

where p_ir(z_t | x_t^(i)) is the range sensor model given by Eq. (1). If the vision sensor recognizes an object, the range sensor model and the vision sensor model are fused to update the probability of the samples:

p(z_t | x_t^(i)) = p_v(z_t | x_t^(i)) · p_ir(z_t | x_t^(i))    (9)

where p_v(z_t | x_t^(i)) is the vision sensor model given by Eq. (7).
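Combining the two models into the MCL weight update of Eqs. (8) and (9) can be sketched as below; the final renormalization is an implementation detail assumed here rather than stated in the paper.

```python
def update_weights(weights, p_ir, p_vision=None):
    """Eq. (8)/(9) sketch: multiply each sample weight by the range likelihood,
    and by the vision likelihood as well whenever an object has been recognized.
    weights, p_ir and p_vision are equal-length lists (one entry per sample)."""
    new = []
    for i, w in enumerate(weights):
        likelihood = p_ir[i] if p_vision is None else p_vision[i] * p_ir[i]
        new.append(w * likelihood)
    total = sum(new) or 1.0          # guard against an all-zero update
    return [w / total for w in new]
```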

It is important that the data from the IR scanner and the vision sensor be fused in a synchronous fashion. However, in contrast to the relatively fast response of an IR scanner, vision-based object recognition often requires a rather long processing time. Because of this delay, the information obtained upon completion of object recognition actually reflects the state of the environment at the beginning of object recognition. For synchronization, the range data measured at the start of object recognition would have to be fused with the visual data available at the end of object recognition, as shown in Fig. 5. As the processing time for object recognition increases, several sets of recent range data must then be discarded for synchronization with the vision data. Thus, the overall update rate of the sample probability becomes low, which leads to an increasing failure rate of localization due to the lack of the most recent environment information.

Fig. 5 Sensor fusion with loss of range information.

To cope with this problem, the range data and the vision data are used separately in this research. That is, the range data continue to be used to update the probability of the samples while object recognition is in progress. Sensor fusion is conducted only when object recognition is completed and the visual data are available, as illustrated in Fig. 8.

Fig. 8 Sensor fusion without loss of sensor information.

As shown in Fig. 6, the observation (d_t, θ_t) obtained at the end of object recognition is actually based on the previous robot pose, because the robot keeps moving while object recognition is in progress. Therefore, this observation must be compensated to reflect the current robot pose. The compensation can be performed using the encoder data, by assuming that a change in the robot pose estimated by the encoder data over a short period of time is relatively accurate. The compensation based on the encoder data is applied to all samples, as shown in Fig. 7(d).

Fig. 6 Relation between robot pose and object when object recognition starts and finishes.

Fig. 7 Prediction of the relation between robot pose and object using encoder data at the end of object recognition.

The uncertainty in the encoder data should be considered in the computation of the compensated observation (d̂_t, θ̂_t) as follows:

σ_d̂ = σ_d + α1·Δd + α2·Δθ
σ_θ̂ = σ_θ + α3·Δd + α4·Δθ    (10)

where σ_d and σ_θ are the respective uncertainties in the range and relative angle measured by the vision sensor, and Δd and Δθ are the translational and rotational motion of the robot during object recognition. The parameters α1 and α2, associated with σ_d̂, and α3 and α4, associated with σ_θ̂, depend on the characteristics of the robot. In the experiments, σ_d and σ_θ were set to 0.2 m and 3°, and the parameters α1, α2, α3 and α4 were set to 0.1, 0.2 m/deg, 0.5°/m and 2, respectively. Note that the uncertainties σ_d̂ and σ_θ̂ grow as Δd and Δθ increase.
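A sketch of the uncertainty inflation of Eq. (10), taking the encoder-estimated translation and rotation accumulated during object recognition as inputs. The default constants echo the values quoted above and should be treated as robot-dependent tuning parameters, not fixed properties of the method.

```python
def inflated_uncertainty(delta_d, delta_theta,
                         sigma_d=0.2, sigma_theta=3.0,
                         a1=0.1, a2=0.2, a3=0.5, a4=2.0):
    """Eq. (10) sketch: grow the vision-model standard deviations with the
    translation delta_d (m) and rotation delta_theta (deg) that the encoders
    accumulated while object recognition was running."""
    sigma_d_hat = sigma_d + a1 * abs(delta_d) + a2 * abs(delta_theta)
    sigma_theta_hat = sigma_theta + a3 * abs(delta_d) + a4 * abs(delta_theta)
    return sigma_d_hat, sigma_theta_hat
```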
Various experiments were performed using a robot equipped with an IR scanner (Hokuyo PBS-03JN) and a mono camera (a normal web camera). As shown in Fig. 9(a), the experimental environment was 15 m x 80 m and consisted of a long hallway and several doors. The grid size of the grid map was 10 cm. Figure 9(b) illustrates the objects used as visual landmarks for localization; their positions are shown as red dots in Fig. 9(a).

Fig. 9 (a) Global map of the experimental environment, and (b) objects used as visual landmarks.

Eleven visual landmarks were used, and the experiments on global localization were carried out in a room and in a hallway. When the robot navigates through a hallway, the information from the range sensor is not sufficient for successful localization, which often leads to slow convergence of the samples. 5,000 samples were initially distributed throughout the entire environment, and these samples converged to a small local area as their probabilities were continuously updated with the sensor information. Localization was regarded as complete once the sample variances became smaller than predetermined thresholds.

Fig. 10 Variance of the sample position along the x and y axes during localization; (a) room, (b) hallway.

Suppose the robot was placed in a room at the beginning of MCL. As shown in Fig. 10(a), the sample variances converge to zero, and the estimated robot pose tracks the actual pose reasonably well, because enough environmental information can be obtained from the range sensor alone when the robot is in a room. However, if the vision data are fused with the IR scanner data, the sample variances converge to zero more rapidly than with the range data alone. In fusion-based localization, fast convergence is achieved once objects are visually recognized. Note that the variance associated with the y-axis (i.e., along the hallway) is much larger than that associated with the x-axis, because the range data in the y direction are quite uncertain due to the limited measuring range of the sensor (4 m in this experiment).

If the robot starts MCL in a hallway, the localization performance is generally worse than when the robot starts in a room. First, few geometric features can be collected by the range sensor because the geometric information in the hallway is quite similar from place to place. Furthermore, the IR scanner has a relatively short measuring range, so the sample variances associated with the y-axis do not converge satisfactorily and sometimes even diverge, as shown in Fig. 10(b). Even in this case, however, if sensor fusion is used for localization, better convergence of the sample variances can be achieved, resulting in successful localization.

In the proposed method of sensor fusion, the probability update of the samples becomes more efficient in that all the data from the range and vision sensors can be used without any loss of data. In addition, the data from the vision sensor are the range and relative angle to the object from the robot. Therefore, if only the vision sensor were used to find the robot pose, at least two objects would have to be observed. With fusion of the vision sensor and the range sensor, however, it is possible to find the robot pose even when only one object is recognized.

IV. RECOVERY FROM LOCALIZATION FAILURE

If the gap between the given environment map and the sensor information becomes large during localization, the random samples in MCL can converge to incorrect poses or diverge, resulting in localization failure. Other localization failures are caused by inaccurate encoder data due to slippage during navigation and by the kidnapped robot problem. In case of localization failure, it is important to detect the failure and recover from it for dependable navigation.

To evaluate localization performance, the probabilistic vision sensor model explained in Section II is adopted in this research. The probability associated with the vision sensor model depends on the uncertainties σ_d̂ and σ_θ̂ given by Eq. (10). It is difficult to select a fixed probability threshold for deciding whether localization is successful, because these uncertainties change at the end of each iteration of MCL. Therefore, a new criterion for evaluating the localization performance is proposed in this research.

Figures 11(a) and (c) depict the probability distribution associated with the range to the visual object. Both distributions are Gaussian with a mean of 2 m, but their standard deviations (representing the uncertainty) are set to 0.2 m for Fig. 11(a) and 0.4 m for Fig. 11(c). Figures 11(e) and (g) depict the probability distribution associated with the relative angle. Both are Gaussian with a mean of 0°, but their standard deviations are 3° for Fig. 11(e) and 5° for Fig. 11(g). Figures 11(b), (d), (f) and (h) show the probability distributions normalized by their respective maximum probabilities (indicated as a red dot in Figs. 11(a), (c), (e) and (g)).

The evaluation of localization performance is illustrated in Figs. 11(b), (d), (f) and (h). Localization performance, or quality, is classified into three cases. Case A, corresponding to the upper 30% of the normalized probability, is regarded as successful localization; case C, the lower 30%, is regarded as localization failure; and the remainder, case B, is classified as a warning. For instance, suppose the computed range and angle are 2.3 m and 10° under the sensor models of Figs. 11(c) and 11(g), which correspond to normalized probabilities of 0.75 and 0.13, respectively. In this situation, localization is judged as a failure, because the normalized probability associated with the angle falls into region C even though the probability associated with the range is in region A.
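The performance check can be sketched as below: each observation is scored by its probability divided by the peak of its Gaussian, i.e. exp(-err²/(2σ²)), which lies in [0, 1], and the worse of the two scores decides the outcome. The exact thresholds and the worst-case combination rule are assumptions that are consistent with, but not spelled out in, the text.

```python
import math

def normalized_probability(error, sigma):
    """Probability of the observation divided by the Gaussian's peak value."""
    return math.exp(-(error ** 2) / (2.0 * sigma ** 2))

def localization_status(d_error, theta_error, sigma_d_hat, sigma_theta_hat):
    """Classify localization quality as in Fig. 11: A (success) for the upper 30%
    of the normalized probability, C (failure) for the lower 30%, B (warning)
    otherwise. The worse of the range and angle scores decides the outcome."""
    def band(score):
        if score >= 0.7:
            return "A"
        if score <= 0.3:
            return "C"
        return "B"
    bands = (band(normalized_probability(d_error, sigma_d_hat)),
             band(normalized_probability(theta_error, sigma_theta_hat)))
    if "C" in bands:
        return "C"
    if "B" in bands:
        return "B"
    return "A"
```

With the numbers of the example above (range error 0.3 m with σ = 0.4 m, angle error 10° with σ = 5°), localization_status(0.3, 10.0, 0.4, 5.0) returns "C", matching the judgment in the text.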

Fig. 11 Decision of the localization performance based on vision with uncertainty: (a), (c), (e) and (g) are the probability distributions of a visual feature; (b), (d), (f) and (h) are the normalized probabilities and the localization performance. A: successful localization, B: warning, C: localization failure.

Judgment of localization failure requires that the normalized probability fall in region C three times in a row, in order to avoid misjudgment due to false matching in the SIFT-based object recognition. Also, if the normalized probability falls into region B ten times in a row, localization is considered to have failed.

The proposed algorithm for vision-based recovery from localization failure was investigated through various experiments. Figure 12 shows recovery from localization failure using the normalized probability. Once the robot is aware of a localization failure, it wanders to collect visual data while range-based localization remains in progress. If an object is recognized by the vision sensor, it is unnecessary to distribute the random samples for MCL over the entire free area of the environment, because the position of the recognized object is known from the map. As shown in Fig. 12(b), samples are mainly drawn near the circle whose radius is the measured range and whose center is the recognized object. This sample distribution is clearly more efficient for localization than a uniform distribution covering the entire environment. Although visual features are rare in some environments, vision-based recovery of the robot pose is efficient and robust provided that visual features are available.

Fig. 12 Recovery from localization failure; (a) detection of localization failure, (b) distribution of samples near the recognized visual feature, and (c) recovery of the robot pose.
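The recovery sampling of Fig. 12(b) might be sketched as follows: new pose hypotheses are drawn near the circle whose radius is the measured range to the recognized object and whose center is the object's map position. The sample count, the noise level and the heading rule are illustrative assumptions.

```python
import math
import random

def draw_recovery_samples(obj_x, obj_y, measured_range, n=500, range_noise=0.3):
    """Fig. 12(b) sketch: draw pose hypotheses near the circle of radius
    measured_range centred on the recognized object's map position, instead of
    spreading them uniformly over the whole environment."""
    samples = []
    for _ in range(n):
        direction = random.uniform(0.0, 2.0 * math.pi)    # object-to-robot direction
        r = random.gauss(measured_range, range_noise)     # stay close to the measured range
        x = obj_x + r * math.cos(direction)
        y = obj_y + r * math.sin(direction)
        heading = math.atan2(obj_y - y, obj_x - x)         # each sample faces the object
        samples.append((x, y, heading))
    return samples
```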
V. CONCLUSIONS

This paper proposed an efficient sensor-fusion-based localization algorithm in which an IR scanner and a cheap web camera are used. From this research, the following conclusions have been drawn.

1) The sensor-fusion-based localization proposed here enables the samples in MCL to converge to the actual robot pose faster than either range-based or vision-based localization alone.

2) Although object recognition takes a long processing time and is not periodic, the probability of the samples can be updated at the speed of the range sensor using the proposed method.

3) The proposed algorithm for evaluating localization performance based on the vision sensor model works well for detecting localization failure and recovering from it.

Currently, research on improving the accuracy and speed of the object recognition algorithm is under way.

REFERENCES

[1] J. Kosecka and F. Li, "Vision based topological Markov localization," Proc. of IEEE Int'l Conf. on Robotics and Automation, vol. 2, pp. 1481-1486, 2004.
[2] S. Thrun et al., "Minerva: a second-generation museum tour-guide robot," Proc. of IEEE Int'l Conf. on Robotics and Automation, vol. 2, pp. 1999-2005, May 1999.
[3] W.-Y. Jeong and K.-M. Lee, "CV-SLAM: a new ceiling vision-based SLAM technique," Proc. of IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 3195-3200, Aug. 2005.
[4] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int'l Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
[5] S. Se, D. Lowe and J. Little, "Mobile robot localization and mapping with uncertainty using scale-invariant visual landmarks," Int'l Journal of Robotics Research, vol. 21, no. 8, pp. 735-758, Aug. 2002.
[6] D. G. Lowe and S. Se, "Vision-based global localization and mapping for mobile robots," IEEE Transactions on Robotics, vol. 21, pp. 217-226, June 2005.
[7] T.-B. Kwon, J.-H. Yang, J.-B. Song and W. Chung, "Efficiency improvement in Monte Carlo localization through topological information," Proc. of IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Oct. 2006.
[8] S. Thrun, W. Burgard and D. Fox, Probabilistic Robotics, MIT Press, 2005, ch. 8.