CO600 Group Project: Collaborative Exploration by Autonomous Robotic Rovers


Collaborative Exploration by Autonomous Robotic Rovers

Thomas Benwell-Mortimer, University of Kent, tbb2@kent.ac.uk
Andrew Garner, University of Kent, ag64@kent.ac.uk
Nicholas Griffiths, University of Kent, nbg2@kent.ac.uk
Stephen Jackson, University of Kent, sj27@kent.ac.uk

Abstract

A key problem in robotics is maintaining awareness of current position and orientation. Internal odometry cannot be relied upon, due to mechanical problems and external impacts (e.g. wheel slip). In unknown environments (such as the seabed), no dependence on external beacons is possible either. It has recently been proposed that accurate autonomous exploration might be feasible using a team of at least four robots exploring from a known starting point and equipped with a means of relative position determination. This report details how we used the Lego Mindstorms hardware platform to build robots capable of autonomously exploring an area. Starting with initial experiments in moving the robots and using the sensors, we explain intermediary stages such as hardware calibration and path traversal. Finally, we describe the development of an exploration algorithm and the use of a PC to process the information gathered.

1 Introduction

The field of robotics is a complex one: when writing the instructions that govern a machine, one must carefully consider all the limitations of the hardware and any problems that may arise from the external environment. Much of the research is found in space exploration, where it is imperative to have robust code and to obtain accurate results from hostile, unknown terrain. In that setting comprehensive testing is vital, because a single attempt can cost enormous amounts of money and time, with a high chance of error or even failure.

Using the Lego Mindstorms RCX platform, we undertook the task of programming a small team of robots to explore an unknown area as autonomously as possible. The task exposes the robots to unpredictable environmental conditions, which should be handled in software as far as possible. Our project was to write code that could cope with these conditions and still produce meaningful results: useful feedback about an area of unknown terrain that helps plot further exploration.

This document details how we explored these goals as a group, starting with simple tests of the hardware's capabilities and limitations and following through to the design and implementation of exploratory algorithms. Our investigation loosely followed five iterations, each representing a change in the immediate focus of our work. In each iteration we set goals, worked towards them, and finally evaluated what we had accomplished.

2 Background

2.1 Lego Mindstorms

The Lego Mindstorms Robotics Invention System (RIS) is a product manufactured by the Lego Group, consisting of a programmable microcontroller (the RCX brick, see Fig. 1), motors, touch sensors, a light sensor and hundreds of other Lego components [1]. The RCX brick was designed and manufactured by the Lego Group, based on the Programmable Brick research undertaken at MIT [2]. The RIS also comes with PC software that gives users a graphical interface for programming the robots. Programs are downloaded to the RCX brick via infra-red, using an IR tower connected to the PC and the IR port on the front of the RCX.
With no previous knowledge of robotics, a person can follow the instructions to quickly build and program a robot to perform novel tasks.

Figure 1: An RCX brick, with its input ports, output ports, infra-red IO port and LCD display labelled.

Externally, the RCX has three input ports for sensors and three output ports for motors and lights. Internally, the heart of the RCX is a Hitachi H8 microcontroller, which controls the input/output ports and the infra-red transceiver. The RCX has 32 KiB of external RAM and a 16 KiB ROM containing the on-chip driver [3]. To extend this driver, the Lego Group provides a 16 KiB firmware image that is downloaded to the RCX before programming. By constantly draining a small amount of power, the RCX keeps the firmware in RAM even after it has been switched off. A maximum of five user-written programs can be downloaded to the RCX in byte code, which is interpreted by the firmware.

Since Mindstorms was released, many books and websites have been written by enthusiasts, covering everything from the inner workings of the RCX brick to complex projects involving thousands of Lego components. Kekoa Proudfoot [4], for example, produced a comprehensive guide to the internal components of the RCX and a complete list of the opcodes interpreted by the RCX firmware. Dave Baum [5] produced the original description of the IR protocol, which was later expanded upon. Thanks in part to the efforts of Proudfoot, Baum and many others, the RCX can be programmed in a wide variety of languages, the most publicised of which are Visual Basic, LeJOS [6] and NQC [7] (Not Quite C). The Lego Group provides an ActiveX control [8] (SPIRIT.OCX) as part of its Mindstorms SDK that can be used to program in Visual Basic. LeJOS replaces the RCX firmware with a small Java virtual machine and provides the programmer with a Java API. We decided to use NQC because of its relatively lightweight implementation (compared to LeJOS), its adequate API and the wide range of relevant books and websites available [9]. NQC was originally written by Dave Baum. It is syntactically similar to C and provides a subset of its features, but it also provides additional, RCX-firmware-specific features, such as tasks.
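As a brief illustration of the language, here is a minimal sketch of an NQC program using two concurrent tasks; the port assignments, threshold and timings are invented for the example rather than taken from our corpus.

    // Minimal NQC sketch: C-like syntax plus RCX-specific tasks.
    // The threshold and ports are illustrative assumptions.
    #define DARK_THRESHOLD 40        // hypothetical light-sensor percentage

    task watcher()
    {
        // Runs concurrently with main: stop the motors on a dark reading.
        until (SENSOR_2 < DARK_THRESHOLD);
        Off(OUT_A + OUT_C);
    }

    task main()
    {
        SetSensor(SENSOR_2, SENSOR_LIGHT);   // light sensor on input 2
        start watcher;                       // spawn the concurrent task
        OnFwd(OUT_A + OUT_C);                // drive both wheel motors
        Wait(1000);                          // run for 10 s (units of 10 ms)
        stop watcher;
        Off(OUT_A + OUT_C);
    }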
2.2 Exploration

The use of robots to explore and map unknown terrain is an already widely researched field; NASA's Mars rover robots are a famous example. Often a lone robot will carry all the tools and skills necessary to accomplish its tasks. However, there are also many examples of multiple robots working together as a team [10][11]. Millibots [12] are a small team of centimetre-scale robots that work together to explore and map an area. They are built on a modular architecture, allowing each robot to specialise in a particular task, so the function of each team member can be decided at runtime to suit the current mission. This concept of modular design is obviously well suited to the Lego Mindstorms platform.

Another important aspect of robotics briefly discussed in the Millibots work is position determination. To accurately explore an area and build maps, it is fundamentally important that a robot know its position within that area at any given time. This issue is discussed in great detail in Where Am I? [13], which covers techniques ranging from the simple, such as dead reckoning and internal odometry, to more complex systems such as GPS tracking. Of particular relevance to our project, it discusses the use of infra-red beacons to triangulate a robot's position [14].

There are other systems that provide far more accurate position estimates, such as the ultrasonic pulse trilateration method used by Millibots. However, due to size, hardware and budget constraints, these were not something we could use for our project.

3 Assumptions

Our aims, and the extent to which we could achieve them, were based on the following assumptions:

- The terrain that the robots are to explore is not too challenging, i.e. it is feasible to traverse with the given hardware.
- The robots cannot rely on internal odometry to measure the distance travelled.
- The robots cannot rely on any external beacons as points of reference to aid exploration, other than the robots themselves.
- The robots' hardware is capable of performing basic tasks as expected.

4 Aims

The overall aim of our project is to make a group of robots explore an unknown area of terrain in a collaborative and autonomous manner. Within this high-level aim we defined several sub-aims that must be met in order to reach the main objective:

- The robots should be able to explore relatively large areas of different unknown terrains, which aren't necessarily homogeneous.
- The robots should explore in a robust manner, coping with obstacles and adapting to difficult areas of terrain encountered during exploration.
- The robots should be able to communicate with each other in order to deal with the situations that arise and to further exploration.
- Develop a system in which the robots maintain awareness of their current position without the use of external beacons.
- Develop an algorithm that demonstrates accurate autonomous exploration from a known starting point using relative position determination.

5 Iteration One: Experimenting with NQC

5.1 Aims

Our first iteration was mainly concerned with familiarising ourselves with the Lego Mindstorms hardware and NQC. We began with the goal of writing code that would make the robots move around with some purpose and perform actions based on light sensor readings. We then extended this to investigate how we might use the rotation and touch sensors to aid our exploration. We also decided to test the IR communications, as these were the obvious choice for inter-robot communication. By experimenting in this way we hoped to shed some light on the feasibility of our ideas.

5.2 Overview

We began by writing programs to experiment with the light sensors and motors, with the aim of making the robots perform actions based on the light sensor's readings. First we wrote a program that used the light sensor to detect obstacles and avoid crashing into them [15], and then programs that used a floor-facing light sensor to make the robot follow a black line [16][17]. For the first program we attached a forward-facing light sensor to the top of a robot and used it to scan for dark surfaces. When the light sensor reading crossed our designated dark-threshold value, the robot stopped, reversed, turned around and then continued its forward motion.

The line-following programs were successfully implemented in two different ways. One used equilibrium values between light and dark to stay constantly on the edge of the black tape, which led to smooth but unreliable performance. The other detected a dark surface and turned towards it using only one wheel at a time; this resulted in a jerkier but more efficient performance.
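The second follower reduces to only a few lines of NQC. The sketch below is our reconstruction of the idea rather than the corpus program itself; the threshold value and motor ports are assumptions.

    // One-wheel-at-a-time line follower (sketch). Readings below
    // LINE_THRESHOLD are treated as the black tape.
    #define LINE_THRESHOLD 45        // placeholder for a measured value

    task main()
    {
        SetSensor(SENSOR_2, SENSOR_LIGHT);   // floor-facing light sensor
        while (true)
        {
            if (SENSOR_2 < LINE_THRESHOLD)
            {
                Off(OUT_C);                  // over the tape: pivot one way
                OnFwd(OUT_A);
            }
            else
            {
                Off(OUT_A);                  // off the tape: pivot back
                OnFwd(OUT_C);
            }
        }
    }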
The next area of investigation was IR communication [18], in which we wanted a robot to perform actions based on the IR messages it received. From this we hoped to improve our understanding of how IR communication could assist our exploration. This involved two robots: one sent commands while the other performed actions, such as speeding up or stopping, based on the content of these messages.

We also implemented techniques from our research to help with position determination and obstacle detection. We found details of a method called incremental encoding in the Where Am I? [19] document, which outlines how counting wheel rotations can be used for position determination, as opposed to internal odometry (based on time), which, as our supervisor advised, could be less reliable. To detect collisions, we attached a touch sensor to a robot. Dave Baum's book details how to use touch sensors as part of a front bumper to detect obstacles, which inspired us to try a similar approach. For descriptions of the programs from this section, please refer to Corpus of Materials: Experimentations and coding sessions/Iteration 1 - Experimenting with NQC.
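In NQC, the incremental encoding described above amounts to clearing the rotation sensor and polling its tick count (the sensor reports 16 ticks per axle revolution). A sketch, with ports and the target distance as assumptions:

    // Drive a fixed distance by counting rotation-sensor ticks rather
    // than by timing the motors.
    #define TICKS_PER_REV 16         // resolution of the rotation sensor

    task main()
    {
        SetSensor(SENSOR_1, SENSOR_ROTATION);
        ClearSensor(SENSOR_1);               // zero the tick count

        OnFwd(OUT_A + OUT_C);
        until (SENSOR_1 >= 2 * TICKS_PER_REV);  // two wheel revolutions
        Off(OUT_A + OUT_C);                  // overshoot remains possible (see 5.3)
    }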

5.3 Problems Encountered

During these sessions we became very aware of the need for a more dynamic way of setting the light threshold, and/or a calibration function: as the amount of ambient light changed, the code became less reliable and we had to amend our threshold values for it to keep working. At this stage the thresholds were chosen as values that, relative to current light readings, indicated significantly darker or brighter light.

We discovered that when using rotation sensors the robot would often stop past the single-revolution limit, either because it could not stop quickly enough or because the code did not execute before more turning had taken place. In an attempt to overcome this we investigated using gears and a differential [20]. Gears provided more accurate readings, but the resulting robot was too wide to be manoeuvred effectively, took too long to construct, and the implementation also took up one of the three sensor inputs.

The touch sensor itself had far too small a surface area and sensitivity to detect objects reliably. The touch sensor/bumper solution created similar problems to the rotation sensor, in that the final robot was rather large and time-consuming to build. We decided to search for simpler, software-based ways of detecting collisions.

5.4 Learning Outcomes

This iteration led us to understand the importance of the wait() function in NQC, which we started using to close the gap between the vast speed at which a program runs and the response time of the attached components. We now have a better understanding of the limitations of the different pieces of hardware and what can be achieved with them. We also produced coding standards specific to NQC and our project [21].

6 Iteration Two: Implementing the Stuck in the Mud Concept

6.1 Introduction

In the interest of getting the robots to perform a useful task collectively, we came up with an adaptation of the game stuck in the mud [22]. We discussed this concept with our project supervisor, who suggested adapting the idea to that of becoming stuck while exploring an unknown, perhaps complex, area and introducing a recovery procedure for this [23].

6.2 Aims

To realise this concept we identified, in collaboration with our supervisor, the goals that would need to be attained to build up a solution:

- Identifying and locating a stuck robot
- Moving towards the stuck robot
- Detection of a stuck robot
- Exploration of an area

6.3 Overview

To identify and locate a stuck robot, we set up a scenario with two robots: the stuck robot, with a light source to indicate its location, and an exploring robot equipped with a light sensor to detect it. The explorer traverses a bounded area randomly until it receives an IR message from the stuck robot. It then looks for the light source and homes in on the stuck robot's position, using an algorithm based on the intensity of the detected light.

We also implemented a means of dynamically calibrating the light sensor to the surrounding ambient light by introducing an initialise function (sketched below). This function is called at the start of a program; it takes in light readings and calculates an average light level from them. To find a light source, the program then simply looks for a reading a certain threshold above this average, removing the need to hard-code light readings.

The second half of this problem is for a robot to detect when it has become stuck. In a meeting with our supervisor we discussed ways of doing this; for example, if a robot has fallen over, its wheels would be turning much faster and the readings from its light sensor would be fairly static. To implement this we began writing software that used a rotation sensor attached to a wheel to determine how fast it was moving (which would also let us judge distance more accurately).

From this point on we decided to concentrate our efforts on the exploration side of the project. A key aspect of exploration would be a sense of position determination and the ability for any two robots to perform the same task when given the same instruction.
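A sketch of the initialise idea mentioned above; the sample count and margin are invented values, not our calibrated ones.

    // Average the ambient light at start-up and derive the beacon
    // threshold from it, rather than hard-coding readings.
    #define SAMPLES 10
    #define MARGIN  8                // how far above ambient counts as "beacon"

    int ambient = 0;
    int beacon_threshold = 0;

    void initialise()
    {
        int sum = 0;
        repeat (SAMPLES)
        {
            sum += SENSOR_2;
            Wait(10);                // 100 ms between samples
        }
        ambient = sum / SAMPLES;
        beacon_threshold = ambient + MARGIN;
    }

    task main()
    {
        SetSensor(SENSOR_2, SENSOR_LIGHT);
        initialise();
        // ...exploration code can now test SENSOR_2 > beacon_threshold...
    }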
Straight line and turning movement - To test our assumption that, given the same code, two identically set-up robots travel the same path, we set both to go forwards for a set amount of time, expecting them to cover the same distance. We then examined how effective it would be to use time as the parameter for turning through a specific angle: the robot was told to turn for a certain amount of time at a fixed speed, and its eventual angle was measured. Using these results we tried to instruct the robot to turn 90 degrees once, and then multiple times to perform a complete rotation.

Castor wheel - Many of the movement test results proved inconsistent. One factor was that the gap between the floor and the front and back of the robot allows a lot of rocking. Research into problems of this nature suggested a castor wheel as a possible way to stabilise the rocking. For descriptions of the programs from this section, please refer to Corpus of Materials: Experimentations and coding sessions/Iteration 2 - Implementing the stuck in the mud concept.

6.4 Problems Encountered

In our initial attempt to identify and locate a stuck robot, the explorer traversed the area until it detected the light source from the stuck robot, which was turned on as a cry for help, and then homed in on its location. The problem with this method was that the explorer only became alerted to the stuck robot when in close proximity, due to the limited brightness of the standard light source. To overcome this, we decided to use IR messaging to signal that a robot has become stuck, because IR has a greater range and can bounce off walls.

Using the rotation sensors to detect whether a robot was stuck was abandoned when we could not easily construct a manoeuvrable robot capable of travelling in a straight line with one attached. Over the coming weeks we decided to research other approaches.

Whilst testing the straight-line movement of two robots, we found that their stopping points differed considerably, and the distance travelled by each robot varied by between 5 and 10% per run [24]. On top of this, neither robot would travel in a straight line: both veered by a relatively large angle one way or the other. We also encountered significant variation when turning ninety degrees, which grew when turning multiple times. We believe part of this variation arose because, as one fraction of a turn completed, momentum carried through into the next turn, causing the robot to turn too far. The amount of time needed to complete these turns also varied from robot to robot.

The gap between the floor and the front and back of the robot allows a lot of rocking, and this caused a couple of problems; we discovered that the resulting path differs depending on whether the robot's front is up or down. The castor wheel did not solve this, for numerous reasons. One was that after the robot turned, the castor wheel was still facing the previous direction, so when the robot tried to go forwards the castor acted as a rudder, pulling the robot off course.

6.5 Learning Outcomes

We deduced that the factors affecting movement variation included the amount of battery power, differences in the surface of the terrain (since the robots travel different paths, they encounter different areas), and subtle differences between the motors themselves. We discovered that the rocking problem could be resolved by adding an extra piece of Lego to the bottom of the robot [25]. With our current resources it was proving difficult to determine when a robot was stuck; due to time constraints, we decided to pursue this area of the concept at a later date.

7 Iteration Three: Implementing a Means of Position Determination

7.1 Introduction

To progress the stuck in the mud idea further, we decided that the robots should explore from a fixed point of reference, the origin, so that a distressed robot could tell the other robots its position relative to this point [22]. To implement this, the robots would internally need to keep track of their movements, in order to communicate them to the other robots and to return to the origin themselves. For the other robots to successfully locate the stuck robot or the origin, we needed to standardise their movement in some way.

7.2 Aims

- Introduce a mechanism for tracking the robots' movements and position relative to a fixed point.
- Standardise robot movement.
- Make the robots capable of following instructions accurately.
7.3 Overview

The Back Home Algorithm implemented functions to move a robot forwards and to turn it ninety degrees; we used combinations of these functions to build up a path for the robot to explore. The robot keeps track of its X and Y co-ordinates as it follows this path, using its starting point as the origin. Depending on the compass direction in which it is heading, it increments or decrements its X and Y position accordingly (N.B. a robot always assumes it starts facing north). When the robot has reached its destination, it evaluates the shortest path back to its starting point and follows it, using forward movement and ninety-degree turns [26]. It does this by taking the appropriate number of steps east or west and north or south to restore its X and Y co-ordinates to zero, i.e. the origin. (In the accompanying diagram, a robot travels along the yellow path and then works out the quickest path back to the starting point, shown in green.) After following this algorithm the robot should theoretically be back at its starting point; however, this was rarely the case, due to our aforementioned difficulties with moving straight and turning accurately.
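The bookkeeping behind the algorithm is compact. Below is a condensed sketch: step_forward() and turn_right_90() stand in for our calibrated movement functions (here crude timed placeholders, refined in the calibration work below), headings are numbered 0=north, 1=east, 2=south, 3=west, and the robot starts facing north.

    int x = 0;
    int y = 0;
    int heading = 0;                 // 0=N, 1=E, 2=S, 3=W

    void step_forward()
    {
        OnFwd(OUT_A + OUT_C);
        Wait(100);                   // placeholder for one calibrated step
        Off(OUT_A + OUT_C);
        if (heading == 0) y += 1;    // update the co-ordinate bookkeeping
        if (heading == 1) x += 1;
        if (heading == 2) y -= 1;
        if (heading == 3) x -= 1;
    }

    void turn_right_90()
    {
        OnFwd(OUT_A);
        OnRev(OUT_C);
        Wait(50);                    // placeholder for a calibrated turn
        Off(OUT_A + OUT_C);
        heading = (heading + 1) % 4;
    }

    void face(int target)
    {
        // Always turning right keeps the sketch simple.
        while (heading != target) turn_right_90();
    }

    void go_home()
    {
        // Cancel the X offset, then the Y offset, one step at a time.
        while (x > 0) { face(3); step_forward(); }
        while (x < 0) { face(1); step_forward(); }
        while (y > 0) { face(2); step_forward(); }
        while (y < 0) { face(0); step_forward(); }
    }

    task main()
    {
        // Example path: two steps north, one step east, then return.
        step_forward();
        step_forward();
        face(1);
        step_forward();
        go_home();
    }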

Before we could use our algorithm effectively, we needed to be able to move the robots reliably.

Calibrating the robots - Our initial attempts to increase the robots' movement accuracy involved experimenting with different hardware configurations [27]. We concluded that the best way to solve this issue was to rebuild the robots using specific pairs of motors with similar speeds, and then calibrate each robot in software. To do this we tested the maximum speed of each motor by measuring the number of rotations over a fixed period of time [28]. This dramatically improved each robot's ability to move forwards in a straight line. To improve turning accuracy, we modified both turning functions to apply a series of 10 ms bursts of full power to each wheel, so that no substantial momentum could build up (sketched below). Each robot now had two variables to calibrate - the number of bursts required to turn left and right through ninety degrees (two variables were required because the motors run at different speeds forwards and backwards).

Enhanced relative position - As a result of calibrating the robots and pairing up better-matched motors, the robots now follow a path with greater accuracy, leading to runs that end with the robot much closer to the origin.

Introducing header files - To improve the modularisation and readability of our code, we put the calibration values for each robot's standardised movement, along with the value for the average ambient light, into a header file. As a result the robots performed tasks more reliably overall, because the code was tailored to their individual eccentricities. Later on, we turned regularly used pieces of code into functions and built up a library of more complicated functions in another header file [29]. With this library we were able to write programs that do quite complicated things in only a few lines of code. For descriptions of the programs from this section, please refer to Corpus of Materials: Experimentations and coding sessions/Iteration 3 - Implementing a means of position determination.
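A sketch of such a pulsed turn; the burst count is a made-up example of the per-robot value we kept in each robot's header file, and the settle delay is an assumption.

    // Turn by a series of short full-power bursts so momentum cannot
    // build up between steps. LEFT_STEPS would come from the robot's
    // own header file rather than being defined here.
    #define LEFT_STEPS 14            // per-robot calibration value

    void turn_left_90()
    {
        SetPower(OUT_A + OUT_C, OUT_FULL);
        repeat (LEFT_STEPS)
        {
            OnRev(OUT_A);            // spin on the spot...
            OnFwd(OUT_C);
            Wait(1);                 // ...for one 10 ms burst
            Off(OUT_A + OUT_C);
            Wait(5);                 // let the robot settle between bursts
        }
    }

    task main()
    {
        turn_left_90();              // one calibrated ninety-degree turn
    }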
7.4 Problems Encountered

The standard movements were still not carried out with 100% accuracy, and on top of this, slight variations in terrain caused large discrepancies that our program did not yet handle. This level of accuracy did not meet our expectations, and we decided that future programs should be capable of recording and recovering from this error. Once these inaccuracies are taken into account, a robot returning to the origin cannot be assumed to be pointing exactly towards any one compass direction; the more successive paths the robot traverses, the more the directions it faces will deviate from the original compass points.

7.5 Learning Outcomes

Using a calibration program to standardise the robots' movements results in different robots performing tasks and following paths in a similar fashion. However, we were still not able to fully overcome the limitations of the hardware. Using compass points enables the robots to return to the absolute starting position in a more efficient manner.

8 Iteration Four: Returning the Exploring Robot to Its Starting Position

8.1 Introduction

We were satisfied that we could get a robot to plot a path back to the origin from an explored point, but this did not take into account any error introduced by unexpected terrain or by the hardware. To cope with this, we discussed the idea of guiding the robot home by placing a beacon (light source) at the origin. Earlier in the project we had developed code to home in on a light source [30]; when tested, it was reasonably successful on perfectly smooth terrain, such as a table top. However, we had used the lowest power setting for the motors, and as a result the robot would not move on carpeted surfaces. We tried increasing the power settings to allow the robot to move on a greater range of surfaces, but the robot then performed its sweeping search too fast and too wide, which resulted in the robot crashing into the beacon much of the time.

8.2 Aims

- Enhance the home light program to move towards a beacon in a more efficient and robust manner.
- Enable a robot to recover from an erroneous path and return to the origin.
- Automate repositioning of the robot at the origin to prepare for the next exploration.

8.3 Overview

Enhanced Home Light - To overcome the limitations of the first piece of homing code, we decided to locate the light in a similar fashion, but then move towards it in a straight line rather than in a sweeping motion. We hoped this would overcome the limitations of the previous approach and allow us to run the wheels at full power.
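A sketch of this enhanced behaviour, reusing the ambient-light threshold idea from iteration two; the fixed threshold and ports are assumptions standing in for the value computed by an initialise() routine.

    // Spin on the spot until the front sensor sees the beacon, then
    // approach in a straight line at full power.
    #define BEACON_THRESHOLD 55      // assumed: ambient average + margin

    task main()
    {
        SetSensor(SENSOR_2, SENSOR_LIGHT);

        // Phase 1: rotate on the axis until the beacon is in view.
        OnFwd(OUT_A);
        OnRev(OUT_C);
        until (SENSOR_2 > BEACON_THRESHOLD);
        Off(OUT_A + OUT_C);

        // Phase 2: drive straight while the beacon stays in view.
        SetPower(OUT_A + OUT_C, OUT_FULL);
        OnFwd(OUT_A + OUT_C);
        while (SENSOR_2 > BEACON_THRESHOLD) Wait(5);
        Off(OUT_A + OUT_C);          // light lost: stop and rescan
    }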

However, this code still did not solve all of our problems. The light sources we were using had a very large angle of projection, with the result that the robot would head towards the edge of the light cone rather than its centre; this led to the robot either hitting the beacon or going past it. To resolve this issue, we took inspiration from the way we calibrated the light sensor in order to make sure the robot faces the centre of the light source before moving towards it.

Two Beacon Algorithm - We were still facing difficulties in re-identifying which direction was north after returning to the starting position. We spoke to our supervisor about this conundrum, discussing a collaborative solution involving not just one beacon robot but multiple beacons as reference points to help realign the explorer after it returns to the origin. Our first interpretation of this suggestion was to place two beacons in close proximity to the origin: one beacon just south of the origin facing north, and the other off to the side of the origin facing back towards it. In theory, the explorer could now home in on the origin and then only have to turn to align with the east or west beacon.

Array path - To move towards a more explorative project, we decided it was necessary to be able to alter exploration paths dynamically and to pass these paths from robot to robot and from PC to robot. We began by writing code that used an array of integers to represent the exploration path; the robot dereferences these integers one by one, turning them into movements by calling the respective functions we had already defined in the header file [31]. For descriptions of the programs from this section, please refer to Corpus of Materials: Experimentations and coding sessions/Iteration 4 - Returning the exploring robot to its starting position.
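A sketch of that dispatch loop; the move codes are invented, and the movement functions are taken to be those in our utils.nqh library [29], whose exact names are assumed here.

    // Follow a path given as an array of integer move codes.
    #include "utils.nqh"             // assumed: step_forward(),
                                     // turn_left_90(), turn_right_90()
    #define MOVE_FWD   0             // code values are assumptions
    #define MOVE_LEFT  1
    #define MOVE_RIGHT 2

    int path[6];                     // NQC arrays cannot be initialised
                                     // at declaration, so main() fills it

    void follow_path(int length)
    {
        int i;
        for (i = 0; i < length; i++)
        {
            if (path[i] == MOVE_FWD)   step_forward();
            if (path[i] == MOVE_LEFT)  turn_left_90();
            if (path[i] == MOVE_RIGHT) turn_right_90();
        }
    }

    task main()
    {
        path[0] = MOVE_FWD;  path[1] = MOVE_FWD;  path[2] = MOVE_RIGHT;
        path[3] = MOVE_FWD;  path[4] = MOVE_LEFT; path[5] = MOVE_FWD;
        follow_path(6);
    }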
8.4 Problems Encountered

The standard light sources (small incandescent light bulbs) were still causing problems: they were not very bright, and our code works on the assumption that the beacon's light will be brighter than the ambient light. In an attempt to resolve this, we had some custom light sources built that were brighter and had a more focused cone of light. In implementing the two-beacon algorithm, we encountered great difficulties in developing a suitable protocol for instructing the relevant robot to turn its light on or off, so that the correct beacon was being used to home in on or straighten up the explorer's position. For these reasons we deemed it necessary to think of a new approach.

8.5 Learning Outcomes

We realised that using at least two beacon robots would be essential to return an explorer to its exact starting point. We also introduced the idea that the error on a path, due to factors such as bad terrain, could be measured by the amount of time or the number of steps the home light function needs to return the robot to its starting position. We found arrays to be an invaluable tool for defining paths for the robot to follow.

9 Iteration Five: The Baseline Exploration Algorithm

9.1 Introduction

As a group we decided to focus on solving the problems inherent in exploring unknown terrain in an autonomous and collaborative fashion, since we believed this to be the more relevant direction. To this end, we worked simultaneously on two aspects of an autonomous exploration system: an updated exploration algorithm building on our previous work, and a PC application that communicates with the explorer to determine the best path to explore next, based on how erroneous the previous paths were.

9.2 Aims

- Update the Back Home algorithm to follow a path in reverse, as opposed to taking the most direct route.
- Define an algorithm to explore a given path, evaluate how safe that path is, and communicate the information to a PC.
- Write a PC application capable of communicating with a robot to send and receive paths.

9.3 The Refined Back Home Algorithm

As described, our Back Home algorithm was originally designed to evaluate the quickest path back to the starting point. However, we concluded that this method could not be used for exploration, because it introduced a potential problem: if a robot traversed a path, returned to its starting position, and encountered an error along the way, it would be impossible to tell on which path the error occurred. For this reason we re-wrote the algorithm to use the same path to travel to and from its destination.
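Retracing is mechanical: about-face, then replay the recorded moves in reverse order with left and right swapped. A sketch, using the move codes and the assumed library functions from the array-path sketch above:

    // Replay a recorded path backwards to retrace the outward journey.
    #include "utils.nqh"             // assumed: step_forward(),
                                     // turn_left_90(), turn_right_90()
    #define MOVE_FWD   0
    #define MOVE_LEFT  1
    #define MOVE_RIGHT 2

    int path[4];

    void retrace_path(int length)
    {
        int i;
        turn_left_90();              // two left turns: about-face
        turn_left_90();
        for (i = length - 1; i >= 0; i--)
        {
            if (path[i] == MOVE_FWD)   step_forward();
            if (path[i] == MOVE_LEFT)  turn_right_90();  // mirror the turns
            if (path[i] == MOVE_RIGHT) turn_left_90();
        }
    }

    task main()
    {
        path[0] = MOVE_FWD; path[1] = MOVE_RIGHT;
        path[2] = MOVE_FWD; path[3] = MOVE_FWD;
        // ...outward traversal of the path would happen here...
        retrace_path(4);
    }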

9.4 The Baseline Algorithm [32]

For the initial version of our baseline algorithm, we decided to use three robots and give each one a specific function. Taking inspiration from our Millibots research, we use one explorer robot with front and rear light sensors, and two beacon robots with forward-facing lights [33]. The explorer starts between the two beacons and takes a reading from each using its front and rear light sensors. It then turns ninety degrees, traverses a pre-determined path (see the Exploration Control Program section) and then follows the same path back to its starting position, using the Back Home algorithm. However, due to inconsistencies in the explored terrain (e.g. obstacles, uneven surfaces), the explorer is likely to finish some distance away from its starting position. To return there, it scans for the closest beacon using its front light sensor and moves towards it, using the find light code. When it is close enough, it turns on its axis until its rear light sensor finds light from the other beacon, then homes in on that. The robot repeats these two steps until the readings from its sensors are very close to the readings taken when it started (see Appendix A for a diagram).

To further improve the baseline algorithm, we replaced the standard incandescent lights with custom-built infra-red lights. We also made the beacons distinguishable from one another by placing one light on top of the RCX brick and the other below, and we changed the light sensor positions on the explorer robot to match. This way, the strongest light readings can only be attained when the explorer is pointed at the relevant beacon, i.e. facing the same beacon it started facing [34].

The final stage in the baseline algorithm is to generate an error value for each path the robot explores. Once the explorer has traversed a path and returned to what it believes to be its starting position, if its front and rear light sensor readings do not match those taken at the start, the number of forward steps required to home in on the lower beacon is counted and stored as the error value. Once the robot has manoeuvred itself back to the starting position, it sends this error value to a PC via infra-red, as a measure of how consistent the terrain is, and awaits the next path.
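The robot's side of that exchange can be sketched as below; it anticipates the retry-until-acknowledged protocol discussed in the learning outcomes. The message codes, retry interval and path-message convention are all invented for illustration (RCX messages are restricted to the values 1-255).

    // Report the path's error value to the PC, retrying until an
    // acknowledgement arrives (IR delivery is not guaranteed), then
    // block until the next path message is received.
    #define MSG_ACK        1         // assumed acknowledgement code
    #define MSG_ERROR_BASE 10        // assumed: error sent as 10 + steps
    #define MSG_PATH_BASE  100       // assumed: path data arrives as 100+

    int error_steps;                 // forward steps needed to re-home

    void report_error()
    {
        ClearMessage();
        while (Message() != MSG_ACK)
        {
            SendMessage(MSG_ERROR_BASE + error_steps);
            Wait(50);                // retry every half second
        }
    }

    task main()
    {
        error_steps = 3;             // would be counted during homing
        report_error();
        ClearMessage();
        until (Message() >= MSG_PATH_BASE);  // wait for the next path
        // ...decode and follow the new path...
    }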
9.5 Exploration Control Program

This program was written in Perl and ran on a Linux machine. Its role was to send the robot a path, receive the path's error from the robot after a run, and calculate a new path based on the error, the currently unexplored paths, and the final destination [35]. It accomplished this by progressively sending a route that was one step closer to the final destination each run, and applying the resulting error to a look-up grid; in this way it built up a knowledge base of difficult areas in the terrain. This system took advantage of the fact that we only explore one piece of unexplored terrain per journey: building up a path by adding one node at a time, we know that the error introduced by the single new node is the total error of the path minus the error of all the other nodes traversed on that journey. When enough information had been gathered, a breadth-first search could be applied to find the route to the destination with the least error. The program could then send a message signifying either the end of the search or the advancement of the baseline for a new exploration. For descriptions of the programs from this section, please refer to Corpus of Materials: Experimentations and coding sessions/Iteration 5 - The baseline exploration algorithm.

9.6 Problems Encountered

We had difficulties in distinguishing between the two light beacons until we repositioned the light sources and sensors. The custom-built blue and green lights did not meet our expectations, because the light sensors are less sensitive to that part of the visible spectrum. Our system of exploration limits us to exploring only one new node per journey, making the whole exploration much slower. And because not all of the problems arising from the hardware and the external environment can be dealt with, the nodes are only a rough approximation of an area.

9.7 Learning Outcomes

By interfacing the robot with the PC, we discovered that it was essential to develop a standard protocol. This involved repeatedly sending messages until the device received an acknowledgement, as we could not otherwise guarantee that a message had been received. Since the program keeps a record of all the error values, printing out a map of the terrain, with high error values representing obstacles, would be a feasible extension. We learnt methods for calculating shortest paths and avoiding hazardous terrain. Ultimately, we learned the underlying difficulties of applying advanced robotic concepts to the Lego Mindstorms platform.

10 Conclusion

We set out with the goal of writing software that would enable Lego Mindstorms robots to collaboratively and autonomously explore an area of unknown terrain. Given the limitations of the hardware, we are satisfied with the complexity and the extent of what we have achieved.

Autonomous Exploration - The robots are able to successfully explore an area by following a communicated path and to return to their starting position with reasonable accuracy; once there, they are capable of repositioning themselves and exploring the next path. The only external interaction is with a program that itself generates relevant paths autonomously; by using other robots as points of reference, we have avoided using external beacons. However, as we did not have time to implement our stuck in the mud idea, should the explorer become unable to locate or return to the beacons, external interaction would be required to resolve the problem.

Collaborative Exploration - Our program enables the robots to effectively use each other as points of reference whilst exploring; one robot is used to analyse whether an area can be easily negotiated, to help plot further exploration for all the robots.

Overcoming Limitations - We have overcome many of the platform's inherent problems, such as differences between supposedly identical components, inaccurate internal odometry, and the limited capability of the provided sensors. For the most part this was dealt with by the software adapting to each robot's individual eccentricities, and we are confident this approach could be applied to other forms of robotic exploration where the environment interferes with basic motion.

Extensions - Given more time, there are a number of things we feel could be accomplished to further the project. We believe we were one step away from moving the baseline forward to begin exploring a new area once the primary area had been explored. We would also have liked to increase the number of beacons to three, to gain more accurate triangulation for position determination; due to budget and time constraints we were unable to pursue this idea, which would have required a radial emitter rather than the directional emitters we were using. We could have used more advanced hardware, for example the NXT [36] Mindstorms robots, which have much more accurate timekeeping, ultrasonic distance sensors and Bluetooth communication; the latter would have allowed much easier communication, independent of the direction the robots were facing. Enhancements we could feasibly have carried out in the near future include a working prototype of the stuck in the mud recovery concept and software-based analysis of the error, with map generation [37]. We believe our project has introduced an innovative method for analysing terrain using error values, and an interesting error recovery protocol in the stuck in the mud proposal. Ultimately, we feel the biggest achievement of our work was to take unreliable hardware and still produce meaningful output.

Acknowledgements

We would like to thank our project supervisor Ian Marshall, and Mark Price for the custom-built light sources. Within the constraints of the given hardware we were forced to extend the project by constructing brighter LEDs; we could possibly have built other, more functional sensors, but this would have shifted the emphasis of the project from a software challenge to a robotics construction challenge.

Appendix A

Diagram of the baseline algorithm (figure not reproduced in this transcription).

Bibliography and References

[1] Lego Mindstorms: http://mindstorms.lego.com/eng/products/ris/risdetails.asp. Last accessed 20/03/2007.
[2] Lifelong Kindergarten: http://llk.media.mit.edu/projects.php?id=135. Last accessed 15/03/2007.
[3] RCX Internals: http://graphics.stanford.edu/~kekoa/rcx/#overview. Last accessed 20/03/2007.
[4] Kekoa Proudfoot: http://graphics.stanford.edu/~kekoa/rcx/. Last accessed 15/03/2007.
[5] Dave Baum's Definitive Guide to Lego Mindstorms, Dave Baum, Apress, 2000.
[6] LeJOS: http://lejos.sourceforge.net/. Last accessed 15/03/2007.
[7] NQC: http://bricxcc.sourceforge.net/nqc/. Last accessed 15/03/2007.
[8] The Lego Group: http://mindstorms.lego.com/sdk2/?domainredir=www.legomindstorms.com. Last accessed 15/03/2007.
[9] Corpus of Materials: Coding Standards and Quality Assurance Procedures.doc
[10] Schmidt, D.; Luksch, T.; Wettach, J.; Berns, K.: Autonomous behavior-based exploration of office environments. In Proceedings of the 3rd International Conference on Informatics in Control, Automation and Robotics, 2006.
[11] Giannetti, L.; Valigi, P.: Collaboration among members of a team: a heuristic strategy for multi-robot exploration. In Proceedings of the 14th Mediterranean Conference on Control and Automation, 2006.
[12] Luis E. Navarro-Serment, Robert Grabowski, Christiaan J. J. Paredis and Pradeep K. Khosla: Millibots: a Distributed Heterogeneous Robot Team. Carnegie Mellon University.
[13] J. Borenstein, H. R. Everett and L. Feng: Where Am I? Sensors and Methods for Mobile Robot Positioning. University of Michigan, 1996.
[14] Where Am I?, page 152.
[15] Corpus of Materials: bumpngrind.nqc
[16] Corpus of Materials: LINE FOLLOW.nqc
[17] Corpus of Materials: Follow the Line.nqc
[18] Corpus of Materials: IRmessaging.nqc
[19] Where Am I?, page 138.
[20] Jonathan Knudsen, O'Reilly Network: http://www.oreillynet.com/pub/a/network/2000/05/22/LegoMindstorms.html. Last accessed 15/03/07.
[21] Corpus of Materials: Coding Standards and Quality Assurance Procedures.doc
[22] Corpus of Materials: The Stuck in the Mud concept.doc
[23] Corpus of Materials: Minutes 2006-10-06 (with Ian).doc
[24] Corpus of Materials: movement_test.xls
[25] Corpus of Materials: Standard Robot Build.doc, section 6.1
[26] Corpus of Materials: turningmeasure compass2
[27] Corpus of Materials: Standard Robot Build.doc
[28] Corpus of Materials: Motor Calibration.doc
[29] Corpus of Materials: utils.nqh
[30] Corpus of Materials: home_light_v3.nqc
[31] Corpus of Materials: array_test_2.7.nqc, utils.nqh
[32] Corpus of Materials: Baseline Algorithm Description.doc
[33] Corpus of Materials: Standard Robot Build.doc, section 6.1
[34] Corpus of Materials: Standard Robot Build.doc, section 6.3
[35] Corpus of Materials: Exploration Control program description.doc
[36] The Lego Group: http://mindstorms.lego.com/overview/. Last accessed 15/03/07.
[37] Corpus of Materials: Exploration Control program description.doc