Distributed Search and Rescue with Robot and Sensor Teams

The 4th International Conference on Field and Service Robotics, July 14-16, 2003

Distributed Search and Rescue with Robot and Sensor Teams

G. Kantor and S. Singh (Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA; ssingh@cmu.edu)
R. Peterson and D. Rus (Department of Computer Science, Dartmouth College, Hanover, NH, USA; rus@cs.dartmouth.edu)
A. Das, V. Kumar, G. Pereira and J. Spletzer (GRASP Laboratory, University of Pennsylvania, Philadelphia, PA, USA; kumar@cis.upenn.edu)

Abstract

We develop a network of distributed mobile sensor systems as a solution to the emergency response problem. The mobile sensors are inside a building and form a connected ad-hoc network. We discuss cooperative localization algorithms for these nodes. The sensors collect temperature data and run a distributed algorithm to assemble a temperature gradient. The mobile nodes are controlled to navigate using this temperature gradient. We also discuss how such networks can assist human users in finding an exit. We conducted an experiment at a facility used to train firefighters in order to understand the environment and to test component technology. Results from experiments at this facility, as well as simulations, are presented here.

1 Motivation

We consider search and rescue applications in which heterogeneous groups of agents (humans, robots, static and mobile sensors) enter an unknown building and disperse while following gradients in temperature and toxin concentration, looking for immobile humans. The agents deploy the static sensors and maintain line-of-sight visibility and communication connectivity whenever possible. Since different agents have different sensors and therefore different pieces of information, communication is necessary for tasking the network, sharing information, and control.

An ad-hoc network is formed by a group of mobile hosts over a wireless local network interface. It is a temporary network formed without the aid of any established infrastructure or centralized administration. A sensor network consists of a collection of sensors distributed over some area that form an ad-hoc network. Our heterogeneous teams of agents (sensors, robots, and humans) constitute distributed adaptive sensor networks and are well suited for tasks in extreme environments, especially when the environmental model and the task specifications are uncertain and the system has to adapt. Applications of this work include search and rescue for first responders, monitoring and surveillance, and infrastructure protection.

We combine networking, sensing, and control to manage the flow of information in search and rescue in unknown environments. Specifically, this research examines (1) localization in an environment with no infrastructure, such as a burning building (for both sensors and robots); (2) information flow across a sensor network that can localize on the fly, for delivering the most relevant and current information to its consumer, maintaining current maps, and automating localization; (3) using feedback from the sensor network to control the autonomous robots for placing sensors, collecting data from sensors, and locating targets; and (4) delivering the information gathered from the sensor network (integrated as a global picture) to human users. The paper details our technical results in these four areas and describes an integrated experiment for navigation in burning buildings.
2 Localization

Localization in dynamic environments, such as those posed by search and rescue operations, is difficult because no infrastructure can be presumed and because simple assumptions, such as line of sight to known features, cannot be guaranteed. We have been investigating the use of low-cost radio beacons that can be placed in the environment by rescue personnel or carried by robots. In this paradigm, as the robot moves, it periodically sends out a query, and any tags within range respond by sending a reply. The robot can then estimate the distance to each responding tag from the time elapsed between sending the query and receiving the response.
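To make the query-response ranging concrete, here is a minimal sketch of turning a round-trip time into a distance estimate. The fixed tag turnaround delay and RF propagation at the speed of light are our illustrative assumptions, not details of the actual ranging hardware:

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def estimate_range(round_trip_s, tag_turnaround_s):
    """Convert a query->reply round-trip time into a one-way distance.

    round_trip_s:     time between sending the query and hearing the reply
    tag_turnaround_s: the tag's fixed processing delay (assumed known)
    """
    time_of_flight = max(round_trip_s - tag_turnaround_s, 0.0) / 2.0
    return time_of_flight * SPEED_OF_LIGHT_M_S

# Example: a 250 ns round trip with a 50 ns tag turnaround -> ~30 m.
print(estimate_range(250e-9, 50e-9))  # 29.98 m
```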

Figure 1: (Left) An ad-hoc network of robots and Mote sensors deployed in a burning building at the Allegheny Fire Academy, Aug 23, 2002 (from an experimental exercise involving CMU, Dartmouth, and U. Penn). (Right) The temperature gradient graph collected using an ad-hoc network of Mote sensors.

The advantage of such a method is that it does not require line of sight between the tags and the mobile robot, making it useful in many environmental conditions where optical methods fail. Note that, since each tag transmits a unique ID number, distance readings are automatically associated with the appropriate tags; the data association problem, a difficult issue especially in environments that can be visually obscured, is solved trivially.

Figure 2: A radio tag, approximately 12 x 9 cm in size, with which a robot can communicate to obtain range data. Such tags can be scattered into a burning building as firefighters move about, or even be deployed by robots themselves. Since these tags are placed without careful survey, their positions must be calculated along with the positions of the mobile agents themselves.

Since the positions of the tags are unknown to start and can potentially change during operation, it is necessary to localize both the receiver and the beacons simultaneously. This problem is known as Simultaneous Localization and Mapping (SLAM). Although it is generally assumed that a receiver can measure both range and bearing to "features", we can assume only that the range to tags is known and that this measurement may be very noisy. We have adapted the well-known estimation techniques of Kalman filtering, Markov methods, and Monte Carlo localization to solve the problem of robot localization from range-only measurements [KS02] [SKS02]. All three of these methods estimate robot position as a distribution of probabilities over the space of possible robot positions. In the same work we presented an algorithm capable of solving SLAM in cases where approximate a priori estimates of robot and landmark locations exist.

The primary difficulty stems from the annular distribution of potential relative locations that results from a range-only measurement. Since this distribution is highly non-Gaussian, SLAM solutions based on Kalman filtering falter. In theory, Markov methods (probability grids) and Monte Carlo methods (particle filtering) have the flexibility to handle annular distributions. Unfortunately, the scaling properties of these methods severely limit the number of landmarks that can be mapped. In truth, Markov and Monte Carlo methods have much more flexibility than we need: they can represent arbitrary distributions, while we need only deal with very well structured annular distributions. What is needed is a compact way to represent annular distributions, together with a computationally efficient way of combining annular distributions with each other and with Gaussian distributions. In most cases, we expect the results of these combinations to be well approximated by mixtures of Gaussians, so that standard techniques such as Kalman filtering or multiple hypothesis tracking could be applied to solve the remaining estimation problem.
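For illustration, a minimal particle-filter measurement update for a single range-only reading looks like this (our own sketch, not the authors' code; the Gaussian range-noise model, the default sigma, and the resampling threshold are assumptions). It shows why the likelihood is annular: it peaks on a circle around the tag, which no single Gaussian over position can represent.

```python
import numpy as np

def range_only_update(particles, weights, tag_xy, measured_range, sigma=1.0):
    """Reweight robot-position particles with one range-only reading.

    particles: (N, 2) candidate robot positions; tag_xy: (2,) tag position.
    The likelihood peaks on the circle of radius measured_range around
    the tag -- the annular shape a single Gaussian cannot represent.
    """
    predicted = np.linalg.norm(particles - tag_xy, axis=1)
    weights = weights * np.exp(-0.5 * ((predicted - measured_range) / sigma) ** 2)
    weights /= weights.sum()
    # Systematic resampling once the effective sample size collapses.
    n = len(weights)
    if 1.0 / np.sum(weights ** 2) < n / 2:
        u = (np.arange(n) + np.random.rand()) / n
        idx = np.searchsorted(np.cumsum(weights), u)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```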
We have also extended these results to handle the case when the tag locations are initially unknown, using a geometrically inspired batch processing method. The basic idea is to store the robot locations and measured ranges the first few times a landmark is encountered, and then obtain an estimate of the landmark position by intersecting circles on the plane. Once an estimate of a new landmark is produced, the landmark is added to the Kalman filter, where its estimate is then improved along with the estimates of the other (previously seen) landmarks. Because the method takes advantage of the special structure of the problem, the resulting approach is less computationally cumbersome and avoids the local maxima problems associated with standard batch optimization techniques.
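A common concrete realization of this circle-intersection step is linear least-squares trilateration over the stored (position, range) pairs; the sketch below is our own illustration of that idea, not the paper's exact geometric method:

```python
import numpy as np

def trilaterate(robot_xy, ranges):
    """Least-squares landmark estimate from three or more
    (robot position, measured range) pairs to the same landmark.

    Subtracting the first circle equation from the others cancels the
    quadratic terms, leaving a linear system A p = b in the unknown
    landmark position p.
    """
    robot_xy = np.asarray(robot_xy, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    p0, r0 = robot_xy[0], ranges[0]
    A = 2.0 * (robot_xy[1:] - p0)
    b = (np.sum(robot_xy[1:] ** 2, axis=1) - np.sum(p0 ** 2)
         + r0 ** 2 - ranges[1:] ** 2)
    landmark, *_ = np.linalg.lstsq(A, b, rcond=None)
    return landmark

# Example: ranges taken from three poses to a landmark at (2, 1).
poses = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
true = np.array([2.0, 1.0])
r = [np.linalg.norm(true - p) for p in poses]
print(trilaterate(poses, r))  # ~[2. 1.]
```

With noisy ranges the circles rarely meet in a single point, which is why the over-determined least-squares form is preferable to intersecting circle pairs directly.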

Figure 3: 13 RF tags in known locations are used to localize a robot moving in an open area, in conjunction with a wheel encoder and gyro. While individual range readings have a standard deviation of as much as 1.3 m, it is possible to localize the robot to within 0.3 m of the robot's true location. Tag locations are denoted by "o". The true path is denoted by the green line and the estimated path by a red line.

Figure 4: In the case that the tag locations are unknown to start, they can be determined approximately using a batch scheme; these approximate locations are then used in a Kalman filter to continuously update the position of each tag along with the position of the robot.

To collect data for this experiment, we used an instrumented autonomous robot that has highly accurate (2 cm) positioning for ground truth using RTK GPS receivers, as well as a fiber-optic gyro and wheel encoders. Position is updated at 100 Hz. We equipped this robot with an RF ranging system (PinPoint from RF Technologies) that has four antennae pointing in four directions and a computer to control the tag queries and process responses. For each tag response, the system produces a time-stamped distance estimate to the responding tag, along with the unique ID number for that tag. The distance estimate is simply an integer estimate of the distance between the robot and the tag.

The localization experiment was conducted in a flat area about 30 meters by 40 meters in size. We distributed 13 RF tags throughout the area, then programmed the robot to drive in a repeating path among the tags. With this setup, we collected three kinds of data: the ground-truth path of the robot from GPS and inertial sensors, the dead-reckoning estimated path of the robot from inertial sensors only, and the range measurements to the RF tags. Results from our experiments are shown in Figures 3 and 4. Greater detail can be found in [KKS03].

3 Information Flow

Sensors detect information about the area they cover. They can store this information locally or forward it to a base station for further analysis and use. Sensors can also use communication to integrate their sensed values with the rest of the sensor landscape. Users of the network (robots or people) can use this information as they traverse the network. We have developed distributed protocols for navigation tasks in which a distributed sensor field guides a user across the field [LdRR03]. We use the localization techniques presented above to compute environmental maps and sensor maps, such as temperature gradients. These maps are then used for human and robot navigation to a target, while avoiding danger (hot areas).

Figure 1 (right) shows the layout of a room in which a fire was started. We collected a temperature gradient map during the fire burning experiment, as shown in Figure 1. The Mote sensors [1] were deployed by hand at the locations marked in the figure. The sensors computed multi-hop communication paths to a base station placed at the door (a sketch of this kind of hop-count routing follows the footnote below). Data was sent to the base station over a period of 30 minutes.

[1] Each Mote sensor (http://today.cs.berkeley.edu/tos/) consists of an Atmel ATMega128 microcontroller (4 MHz 8-bit CPU, 128 KB flash program space, 4 KB RAM, 4 KB EEPROM), a 916 MHz RF transceiver (50 Kbits/sec, 100 ft range), a UART, and a 4 Mbit serial flash. A Mote runs for approximately one month on two AA batteries. It includes light, sound, and temperature sensors, but other types of sensors may be added. Each Mote runs the TinyOS operating system.
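A standard way to compute such multi-hop paths is hop-count flooding from the base station; the following is a minimal sketch under our own assumptions about the network model (the paper does not spell out its routing protocol):

```python
from collections import deque

def build_hop_gradient(neighbors, base_id):
    """Breadth-first hop counts from the base station outward.

    neighbors: dict mapping node id -> iterable of radio-neighbor ids.
    Returns dict node id -> hop count; each node forwards sensor data
    to any neighbor with a strictly smaller hop count.
    """
    hops = {base_id: 0}
    frontier = deque([base_id])
    while frontier:
        node = frontier.popleft()
        for nbr in neighbors[node]:
            if nbr not in hops:            # first visit = shortest path
                hops[nbr] = hops[node] + 1
                frontier.append(nbr)
    return hops

# Example: a chain of Motes 0-1-2 with the base station at node 0.
print(build_hop_gradient({0: [1], 1: [0, 2], 2: [1]}, base_id=0))
# {0: 0, 1: 1, 2: 2}
```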

Figure 5: (Left) The floor map for the directional guidance experiment; arrows indicate the correct direction to be detected by the Flashlight. (Right) The same floor map with the Flashlight feedback directions marked on it.

3.1 Directional Guidance

We used the structure of the data we collected during the fire burning exercise to develop a navigation guidance algorithm designed to guide a user to the door in a hop-by-hop fashion. We deployed 12 Mote sensors along the corridors of our building to guide a human user out of the building. Using an interactive device called a Flashlight [PR02], which can transmit directional feedback, a human user was directed across the field. The Flashlight prototype we designed and built is shown in Figure 6 (left). This device can be carried by a human user or placed on a mobile robot (or flying robot) to interact with a sensor field. The "beam" of the Flashlight consists of sensor-to-sensor, multi-hop routed RF messages that send or return information.

The Flashlight consists of an analog compass, an alert LED, a pager vibrator, a 3-position mode switch, a power switch, a range potentiometer, some power-conditioning circuitry, and a microcontroller-based CPU/RF transceiver. The processing and RF communication components of the Flashlight and the sensor network are Berkeley Motes, shown in Figure 6 (center, right). A switch selects the sensor type (light, sound, temperature, etc.). When the user points the Flashlight in a direction, if sensor reports of the selected type are received from any sensors in that direction, a silent vibrating alarm activates. The vibration amplitude can be used to encode how far away (in number of hops) the triggering sensor was. The potentiometer is used to set the detection range (calibrated in the number of network hops from sensor to sensor). The electronic compass supplies heading data indicating the pointed direction of the device.

Figure 6: The left figure shows the Flashlight prototype. The center figure shows a Mote board. The right figure shows the Mote sensor board.

The Flashlight uses one Berkeley Mote (http://today.cs.berkeley.edu/tos/) as its main processor and sensor board; the hardware is the same Mote platform described in the footnote above, running the TinyOS operating system. The Mote handles data processing tasks, A/D conversion of sensor output, RF transmission and reception, and user interface I/O.

A moving Flashlight interacts with a wireless sensor network consisting of Mote sensors. Using the 12 deployed Motes and the communication infrastructure presented here, we guided a human user out of the building; Figure 5 shows the map. The Flashlight interacted with sensors to compute the next direction of movement towards the exit. For each interaction, the user performed a rotation scan until the Flashlight pointed in the direction computed from the sensor data.
The user then walked in that direction to the next sensor. Each time, we recorded the correct direction and the direction detected by the Flashlight. The directional error was 8% (about 30 degrees) on average. However, because the corridors and office doorways are wide and the sensors sufficiently dense, the exit was identified successfully; the user was never directed towards a blocked or wrong configuration. An interesting question is how dense the sensors should be, given the feedback accuracy.
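As a rough illustration of the Flashlight's alert logic as we read it, the decision might look as follows; the report format (bearing and hop count per sensor), the 30-degree beam half-width, and the amplitude encoding are our assumptions:

```python
def vibration_strength(heading_deg, reports, max_hops, half_beam_deg=30.0):
    """Return vibration amplitude in [0, 1] for the current pointing
    direction, or 0.0 if no selected-type sensor lies in the beam.

    reports: iterable of (bearing_deg, hop_count) pairs, where bearing_deg
    is the direction from the user to the reporting sensor and hop_count
    is its network distance (both assumed available to the device).
    """
    strength = 0.0
    for bearing_deg, hops in reports:
        # Smallest unsigned angle between pointing direction and report.
        off_axis = abs((bearing_deg - heading_deg + 180.0) % 360.0 - 180.0)
        if off_axis <= half_beam_deg and hops <= max_hops:
            strength = max(strength, 1.0 - (hops - 1) / max_hops)
    return strength

# Example: pointing at 90 deg, one report from 100 deg, 2 hops away,
# with the range potentiometer set to 3 hops -> alarm at 2/3 strength.
print(vibration_strength(90.0, [(100.0, 2)], max_hops=3))
```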

4 Control of a Network of Robots

Robots augment the surveillance capabilities of a sensor network with their mobility. Each robot must use partial state information, derived from its own sensors and from the communication network, to control the distribution of robots and the motion of the team in cooperation with the other robots. We treat this as a problem of formation control, in which the motion of the team is modeled as an element of a Lie group while the shape of the formation is a point in shape space. We seek abstractions and control laws that allow partial state information to be used effectively and in a scalable manner.

Our platforms are car-like robots equipped with omnidirectional cameras as their primary sensors. Communication among the robots relies on IEEE 802.11 networking. Using information from its camera system, each robot can only estimate its distance and bearing to its teammates. However, if two robots exchange the bearings they measure to each other, they can also estimate their relative orientations [SDF+01]; a small sketch of this computation follows. We use this idea to combine the information of a group of two or more robots in order to improve the group's knowledge of their relative positions.
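The geometric idea credited to [SDF+01] can be illustrated in a few lines; the sign conventions (bearings measured counterclockwise in each robot's body frame) are our assumptions, and this is not their implementation:

```python
import math

def relative_orientation(bearing_i_to_j, bearing_j_to_i):
    """Heading of robot j's frame expressed in robot i's frame, from the
    two mutually measured bearings (radians, counterclockwise, each in
    that robot's own body frame).

    The line of sight from j back to i is the reverse of the line from
    i to j, giving: theta = bearing_i_to_j - bearing_j_to_i + pi.
    """
    theta = bearing_i_to_j - bearing_j_to_i + math.pi
    return math.atan2(math.sin(theta), math.cos(theta))  # wrap to (-pi, pi]

# Example: two robots facing each other see each other dead ahead
# (both bearings zero), so their frames differ by a half turn.
assert math.isclose(abs(relative_orientation(0.0, 0.0)), math.pi)
```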
We have developed control protocols for using such a team of robots, in connection with a sensor network, to explore a known building. We assume that a network of Mote sensors previously deployed in the environment guides the robots towards the source of heat. The robots can modify their trajectories and still find the building exit. The robots can also switch between the potential fields (or temperature gradients) computed and stored in the sensor network (see Figure 7). The first switch occurs automatically when the first robot encounters a Mote sensor at a given location. The robots move toward the fire and stop at a safe distance (given by the temperature gradient). They stay there until they are asked to evacuate the building, at which point they use the original potential field to find the exit.

Figure 7: Three robots switching motion plans in real time in order to get information from the hottest spot of the building. In (b) a gradient of temperature is obtained from a network of Mote sensors distributed on the ground.

5 User Feedback

When robots or people interact with the sensor network, it becomes an extension of their capabilities, essentially extending their sensory systems and their ability to act over a much larger range. We have developed software that allows an intuitive, immersive display of environments. Using panoramic imaging sensors that can be carried by small robots into the heart of a damaged structure, the display can be coupled to head-mounted, head-tracking sensors that enable a remote operator to look around in the environment without the delay associated with mechanical pan-and-tilt mechanisms. The data collected from imaging systems such as visible cameras and IR cameras are displayed on a wearable computer to give the responder the most accurate and current information. Distributed protocols collect data from the geographically dispersed sensor network and integrate it into a global map, such as a temperature gradient, which can also be displayed to the user on a wearable computer.

6 Discussion

The three groups met on August 23, 2002, at the Allegheny County firefighting training facility to conduct preliminary experiments involving a search and rescue exercise in a burning building (see Figure 1). A Mote sensor network was deployed manually in the building to collect temperature data and deliver it to an outside point. A network of robots navigated the space.

A network of cameras took panoramic images and IR images that were subsequently used to localize the robots. A network of radio tags was also used for localization. Although these modules were not integrated, the data collected during this exercise was used off-site to test the algorithms described in this paper. The firefighters who assisted us expressed great eagerness for the kinds of support our vision provides.

Figure 8: 360-degree panorama of a room in the burning building, taken by a catadioptric system. The image can be viewed using a combination of head tracker and head-mounted display, enabling a digital pan and tilt with very low latency.

References

[KKS03] D. Kurth, G. Kantor, and S. Singh. Experimental results in range-only localization with radio. Submitted to the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems, 2003.
[KS02] G. Kantor and S. Singh. Preliminary results in range only localization and mapping. In Proc. IEEE Intl. Conf. on Robotics and Automation, pages 1819-1825, 2002.
[LdRR03] Q. Li, M. de Rosa, and D. Rus. Distributed algorithms for guiding navigation across a sensor net. In Proc. of Mobicom, 2003.
[PR02] R. Peterson and D. Rus. Interacting with a sensor network. In Proc. of the Australian Conf. on Robotics and Automation, 2002.
[SDF+01] J. Spletzer, A. K. Das, R. Fierro, C. J. Taylor, V. Kumar, and J. P. Ostrowski. Cooperative localization and control for multi-robot manipulation. In Proc. IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems, 2001.
[SKS02] S. Singh, G. Kantor, and D. Strelow. Recent results in extensions to simultaneous localization and mapping. In Proc. of the International Symposium on Experimental Robotics, 2002.