CO600 Group Project. Collaborative Exploration by Autonomous Robotic Rovers

CO600 Group Project
Collaborative Exploration by Autonomous Robotic Rovers

Thomas Benwell-Mortimer
Nicholas Griffiths
Andrew Garner
Stephen Jackson

Collaborative Exploration by Autonomous Robotic Rovers

Thomas Benwell-Mortimer, University of Kent
Andrew Garner, University of Kent
Nicholas Griffiths, University of Kent
Stephen Jackson, University of Kent

Abstract

A key problem in robotics is maintaining awareness of current position and orientation. Internal odometry cannot be relied upon, due to mechanical problems and external impacts (e.g. wheel slip). In unknown environments (such as the seabed), no dependence on external beacons is possible either. Recently it has been proposed that accurate autonomous exploration might be feasible using a team of at least four robots exploring from a known starting point and equipped with a means of relative position determination. This report details how we used the Lego Mindstorms hardware platform to build robots capable of autonomously exploring an area. Starting with initial experiments in moving the robots and using the sensors, we proceed through intermediate stages such as hardware calibration and path traversal. Finally, we describe the development of an exploration algorithm and the use of a PC to process the information gathered.

1 Introduction

The field of robotics is a complex one: one must carefully consider all the limitations of the hardware, and any problems that may arise from the external environment, when writing the instructions that govern the machine. Much of the research is found in space exploration, where it is imperative to have robust code and to obtain accurate results from hostile, unknown terrain. In that setting comprehensive testing is vital, because a single attempt costs enormous amounts of money and time, with a high chance of error or outright failure.

Using the Lego Mindstorms RCX platform, we undertook the task of programming a small team of robots with the aim of exploring an unknown area as autonomously as possible.
The nature of the task involves unpredictable environmental conditions that the robots must cope with, handled in software as far as possible. Our project was to write code that could cope with all these conditions and still produce meaningful results: useful feedback on an area of unknown terrain that would help us plot further exploration.

This document details how, as a group, we pursued these goals, starting from simple testing of the hardware's capabilities and limitations and following through to the design and implementation of exploratory algorithms. Our investigations loosely followed five iterations, each representing a change in the immediate focus of our work. In each iteration we set goals, worked towards them and finally evaluated what we had accomplished.

2 Background

2.1 Lego Mindstorms

The Lego Mindstorms Robotics Invention System (RIS) is a product manufactured by the Lego Group, consisting of a programmable microcontroller (the RCX brick, see Fig. 1), motors, touch sensors, a light sensor and hundreds of other Lego components [1]. The RCX brick was designed and manufactured by the Lego Group, based on the Programmable Brick research undertaken at MIT [2]. The RIS also comes with PC software that provides users with a graphical interface for programming the robots. Programs are downloaded to the RCX brick via infra-red, using an IR tower connected to the PC and the IR port on the front of the RCX. With no previous knowledge of robotics, a person can follow the instructions to quickly build and program a robot to perform novel tasks.

[Figure 1: An RCX brick, showing the three input ports, the infra-red I/O port, the LCD display and the three output ports.]

Externally, the RCX has three input ports for sensors and three output ports for motors and lights. Internally, the heart of the RCX is a Hitachi H8 microcontroller, which controls the input/output ports and the infra-red transceiver. The RCX has 32 KiB of external RAM and a 16 KiB ROM containing the on-chip driver [3]. To extend this driver, the Lego Group provides a 16 KiB firmware image that is downloaded to the RCX before programming. By constantly draining a small amount of power, the RCX keeps the firmware in RAM even after it has been switched off. A maximum of five user-written programs can be downloaded to the RCX as byte code, which is interpreted by the firmware.

Since Mindstorms was released, many books and websites have been written by enthusiasts, covering everything from the inner workings of the RCX brick to complex projects involving thousands of Lego components. Kekoa Proudfoot [4], for example, produced a comprehensive guide to the internal components of the RCX and a complete list of the opcodes interpreted by the RCX firmware. Dave Baum [5] produced the original description of the IR protocol, which was later expanded upon. Thanks in part to the efforts of Proudfoot, Baum and many others, the RCX can be programmed in a wide variety of languages, the most publicised of which are Visual Basic, LeJOS [6] and NQC [7] (Not Quite C). The Lego Group provides an ActiveX control [8] (SPIRIT.OCX) as part of its Mindstorms SDK that can be used to program in Visual Basic. LeJOS replaces the RCX firmware with a small Java virtual machine and provides the programmer with a Java API. We decided to use NQC because of its relatively lightweight implementation (compared to LeJOS), adequate API and the wide range of relevant books and websites available [9]. NQC was originally written by Dave Baum. It is syntactically similar to C and provides a subset of its features, but also provides additional, RCX-firmware-specific features, such as tasks.

2.2 Exploration

The use of robots to explore and map unknown terrain is already a widely researched field; NASA's Mars rover robots are a famous example. Often, a lone robot will carry all the tools and skills necessary to accomplish its tasks. However, there are also many examples of multiple robots working together as a team. Millibots [12] are a small team of centimetre-scale robots that work together to explore and produce a map of an area. They are built using a modular architecture, allowing each robot to specialise in performing a particular task; the function of each team member can thus be decided at run time to suit the current mission. This concept of modular design is obviously well suited to the Lego Mindstorms platform.

Another important aspect of robotics briefly discussed in the Millibots work is position determination. In order to accurately explore an area and build maps, it is fundamentally important that a robot knows its position within that area at any given time. This issue is discussed in great detail in Where Am I? [13]. The topics covered there range from simple techniques, such as dead reckoning and internal odometry, to more complex systems such as GPS tracking. Of particular relevance to our project, that document discusses the use of infra-red beacons to triangulate a robot's position [14]. Other systems provide far more accurate position estimates, such as the ultrasonic pulse trilateration method used by Millibots; however, due to size, hardware and budget constraints, these were not something we could use for our project.

3 Assumptions

Our aims, and the extent to which we could achieve them, were based on some assumptions:

- The terrain that the robots are to explore should not be too challenging, i.e. it is feasible to traverse with the given hardware.
- The robots cannot rely on internal odometry to measure the distance travelled.
- The robots cannot rely on any external beacons as points of reference to aid exploration, other than the robots themselves.
- The robots' hardware should be capable of performing basic tasks as expected.

4 Aims

The overall aim of our project is to make a group of robots explore an unknown area of terrain in a collaborative and autonomous manner. Within this high-level aim we defined several sub-aims necessary for obtaining our main objective:

- The robots should be able to explore relatively large areas of different unknown terrains, which aren't necessarily homogeneous.
- The robots should explore in a robust manner, coping with obstacles and adapting to difficult areas of terrain encountered during exploration.
- The robots should be able to communicate with each other in order to deal with the situations that arise and to further exploration.
- Develop a system whereby the robots maintain awareness of their current position without the use of external beacons.
- Develop an algorithm demonstrating accurate autonomous exploration from a known starting point using relative position determination.

5 Iteration One: Experimenting with NQC

5.1 Aims

Our first iteration was mainly concerned with familiarising ourselves with the Lego Mindstorms hardware and NQC. We began with the goal of writing code that would make the robots move around with some purpose and perform actions based on light sensor readings. We then extended this to investigate how we might use the rotation and touch sensors to aid our exploration. We also decided to test the functionality of the IR communications, as this was the obvious choice for inter-robot communication. We hoped that experimenting in this way would shed some light on the feasibility of our ideas.

5.2 Overview

We began by writing programs to experiment with the light sensors and the motors, with the aim of making the robots perform actions based on the light sensor's readings.
Firstly, we wrote a program that used the light sensor to detect obstacles and avoid crashing into them [15], and then programs utilising a floor-facing light sensor to make the robot follow a black line. For the first program we attached a forward-facing light sensor to the top of a robot and used it to scan for dark surfaces. When the light sensor detected a reading above our designated dark threshold value, the robot stopped, reversed, turned around and then continued its forward motion.

The line-following programs were successfully implemented in different ways. One used equilibrium values between light and dark to keep the robot constantly on the edge of the black tape, which led to smooth but unreliable performance. The other detected a dark surface and turned towards it using only one wheel at a time; this resulted in jerkier but more efficient performance.

The next area of investigation was IR communication [18], in which we wanted a robot to perform actions based on the IR messages it received. From this we hoped to improve our understanding of how the IR communications could be used to assist our exploration. It involved two robots: one sent commands and the other performed actions, such as speeding up or stopping, based on the content of those messages.

We also implemented techniques from our research to help with position determination and obstacle detection. We found details of a method called incremental encoding in the Where Am I? [19] document. This outlines how counting wheel rotations can be used for position determination, as opposed to internal odometry (based on time) which, as advised by our supervisor, could be less reliable. To detect collisions, we attached a touch sensor to a robot. Dave Baum's book details how to use touch sensors as part of a front bumper to detect obstacles, which inspired us to try a similar approach.
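The two line-following strategies above can be sketched as decision logic. This is an illustrative Python version only; the real programs were written in NQC, and the threshold and sensor values here are hypothetical.

```python
# Illustrative sketch of the two line-following strategies. Higher
# readings are assumed to mean brighter surfaces; the threshold of 40
# is a made-up value, not one from our calibration.

DARK_THRESHOLD = 40  # assumed reading at or below which a surface is "dark"

def edge_follower(reading, light_avg, dark_avg):
    """Equilibrium strategy: steer to hold the reading at the light/dark
    midpoint, i.e. ride the edge of the black tape."""
    midpoint = (light_avg + dark_avg) / 2
    if reading > midpoint:
        return "steer_toward_tape"   # too bright: drifted off the tape
    if reading < midpoint:
        return "steer_off_tape"      # too dark: too far onto the tape
    return "straight"

def pivot_follower(reading):
    """One-wheel strategy: pivot towards the tape whenever a dark surface
    is detected, otherwise pivot back until it is found again."""
    return "pivot_toward_tape" if reading <= DARK_THRESHOLD else "pivot_back"
```

The equilibrium version corrects continuously around a set-point, which explains its smoother motion; the pivot version only ever reacts to a binary dark/not-dark test, hence the jerkier path.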
For descriptions of the programs from this section, please refer to Corpus of Materials: Experimentations and coding sessions/Iteration 1 - Experimenting with NQC.

5.3 Problems Encountered

During these sessions we became very aware of the need for a more dynamic way of changing the light threshold and/or a calibration function: as the amount of ambient light changed, the code became less reliable and we had to amend our light threshold values for it to keep working. These thresholds were being chosen as values which, based on current light readings, indicated significantly darker or brighter light.

We discovered that when using rotation sensors, the robot would often stop past the single-revolution limit, either because it could not stop quickly enough or because the code did not execute before more turning had taken place. In an attempt to overcome this, we investigated using gears and a differential [20]. Gears provided more accurate readings, but the robot became too wide to be effectively manoeuvred, took too long to construct, and the implementation also took up one of the three sensor inputs.

The touch sensor itself had far too small a surface area and sensitivity to detect objects reliably. The touch sensor/bumper solution created similar problems to the rotation sensor, in that the final robot was rather large and time-consuming to build. We decided to search for simpler, software-based ways of detecting collisions.

5.4 Learning Outcomes

This iteration led us to understand the importance of the wait() function in NQC, which we started using to close the gap between the vast speed at which a program runs and the response time of the attached components. We now have a better understanding of the limitations of the different pieces of hardware and what can be achieved with them. We also produced coding standards specific to NQC and our project.

6 Iteration Two: Implementing the Stuck in the Mud Concept

6.1 Introduction

In the interest of getting the robots to perform a useful task collectively, we came up with an adaptation of the playground game stuck in the mud [22]. We discussed this concept with our project supervisor, who suggested we adapt the idea to that of becoming stuck during the exploration of an unknown, perhaps complex, area, and introducing a recovery procedure for this.

6.2 Aims

To realise this concept we identified, in collaboration with our supervisor, the goals that would need to be attained in order to build up a solution:

- Identifying and locating a stuck robot
- Moving towards the stuck robot
- Detection of a stuck robot
- Exploration of an area

6.3 Overview

To identify and locate a stuck robot, we set up a scenario with two robots: the stuck robot, with a light source to indicate its location, and an exploring robot equipped with a light sensor to detect it. The explorer traverses a bounded area randomly until it receives an IR message from the stuck robot.
It then looks for the light source and homes in on the stuck robot's position, using an algorithm based on the intensity of light detected. We also implemented a means of dynamically calibrating the light sensor to the surrounding ambient light by introducing an initialise function. This function is called at the start of a program; it takes in light readings and calculates an average from them. To find a light source, the program simply looks for a light which is a certain threshold value above this average, removing the need to hard-code light readings.

The second half of this problem is for a robot to detect when it has become stuck. In a meeting with our supervisor we discussed ways of doing this; for example, if a robot has fallen over, the wheels would be turning much faster and the readings from the light sensor would be fairly static. To implement this we began writing software that would use a rotation sensor attached to a wheel to determine how fast it was moving (and also to judge distance more accurately).

From this point on we decided to concentrate our efforts on the exploration side of the project. A key aspect of exploration is a sense of position determination, and the ability for any two robots to perform the same task when given the same instruction.

Straight line and turning movement - To test our assumption that, given the same code, two identically set-up robots should travel the same path, we set them both to go forwards for a set amount of time, expecting them to travel the same distance. We then decided to see how effective it would be to use time as a parameter for turning through a specific angle. The robot was told to turn for a certain amount of time at the same speed and its eventual angle measured. Using these results, we tried to instruct the robot to turn 90 degrees once, and then multiple times to perform a complete rotation.
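The dynamic calibration described above can be summarised in a short sketch. The real initialise function was written in NQC; this Python version only illustrates the idea, and the threshold offset of 10 is a placeholder rather than the value we actually used.

```python
# Sketch of dynamic light calibration: average several ambient samples
# at start-up, then treat anything a fixed offset above that average as
# a light source. The offset value is illustrative.

def initialise(samples, offset=10):
    """Average ambient-light samples taken at start-up and return the
    detection cutoff: ambient average plus a threshold offset."""
    ambient = sum(samples) / len(samples)
    return ambient + offset

def sees_light_source(reading, cutoff):
    """A beacon counts as detected once a reading exceeds the cutoff,
    so no light values need to be hard-coded."""
    return reading > cutoff
```

Because the cutoff is recomputed at every start-up, the same program keeps working as the ambient light changes, which was exactly the problem with the earlier hard-coded thresholds.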
Castor wheel - Many of the movement test results proved inconsistent. One factor was determined to be the gap between the front and back of the robot and the floor, which allows a lot of rocking. Research into problems of this nature suggested a castor wheel as a possible way to stabilise the rocking.

For descriptions of the programs from this section, please refer to Corpus of Materials: Experimentations and coding sessions/Iteration 2 - Implementing the stuck in the mud concept.

6.4 Problems Encountered

Initially, to identify and locate a stuck robot, the explorer traversed the area until it detected the light source from the stuck robot, which was turned on as a cry for help, and then homed in on its location. We identified a problem with this method: the explorer only became alerted to the robot being stuck when in close proximity, due to the limited brightness of the standard light source. To overcome this, we decided to use IR messaging to identify when a robot has become stuck, because IR has a greater range and can bounce off walls.

Using the rotation sensors to detect whether a robot was stuck was abandoned when a manoeuvrable robot capable of travelling in a straight line could not easily be constructed. Over the coming weeks it was decided that we should research other approaches.

Whilst testing the straight-line movement of two robots, it was found that the stopping point differed considerably between them, and the distance travelled by each robot would vary by between 5 and 10% per run [24]. On top of this, neither robot would travel in a straight line: both would veer by a relatively large angle one way or the other. We also encountered significant variation when turning ninety degrees, which greatly increased when turning multiple times. We believe some of this variation arose because, as one fraction of a turn completed, momentum carried through into the next, causing the robot to turn too much. The amount of time needed to complete these turns also varied from robot to robot.

The gap between the front and back of the robot and the floor allows a lot of rocking, and this caused a couple of problems. We discovered that the resulting path differs depending on whether the robot's front is up or down. The castor wheel solution did not work, for numerous reasons; one being that after the robot had turned, the castor wheel still faced the previous direction, so when the robot tried to go forwards it acted as a rudder, pulling the robot off course.
6.5 Learning Outcomes

We deduced that factors affecting movement variation included the amount of battery power, differences in the surface of the terrain (since the robots travel different paths, they encounter different areas), and the fact that the motors themselves are subtly different from one another. We discovered that the rocking problem could be resolved by adding an extra piece of Lego to the bottom of the robot [25]. With our current resources it was proving difficult to determine when a robot was stuck; due to time constraints, we decided to pursue this area of the concept at a later date.

7 Iteration Three: Implementing a Means of Position Determination

7.1 Introduction

To progress the stuck in the mud idea further, we decided that the robots should explore from a fixed point of reference, the origin, so that a distressed robot could tell the other robots its position relative to this point [22]. To implement this, each robot would need to keep track of its movements internally, in order to communicate them to the other robots and to return to the origin itself. For the other robots to successfully locate the stuck robot or the origin, we needed to standardise their movement in some way.

7.2 Aims

- Introduce a mechanism for tracking robots' movements/position relative to a fixed point.
- Standardise robot movement.
- Make robots capable of following instructions accurately.

7.3 Overview

The Back Home Algorithm - We implemented functions to move a robot forwards and to turn it ninety degrees, and used a combination of these to build up a path for the robot to explore. The robot keeps track of its X and Y co-ordinates as it follows this path, using its starting point as the origin. Depending on the compass direction the robot is heading, it increments or decrements its X and Y position accordingly (N.B. a robot always assumes it starts facing north).
When the robot reaches its destination, it evaluates the shortest path back to its starting point and follows it (using forward movement and ninety-degree turns) [26]. It does this by taking the appropriate number of steps east or west and north or south to restore the X and Y co-ordinates to zero, i.e. the origin.

[Figure: A robot travels along the yellow path, then works out the quickest path back to the starting point (green).]

After following this algorithm, the robot should theoretically be back at its starting point. However, this was rarely the case, due to our aforementioned difficulties in moving straight and turning accurately.
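The bookkeeping behind the Back Home algorithm can be sketched as follows. The robots themselves ran NQC; this Python version illustrates the co-ordinate tracking, and the move encoding ('F' for one forward step, 'L'/'R' for ninety-degree turns) is an assumption for the sketch.

```python
# Sketch of Back Home bookkeeping: track (x, y) and heading while
# following a path, then compute the axis-aligned route back to the
# origin. Every robot is assumed to start at (0, 0) facing north.

HEADINGS = ["N", "E", "S", "W"]
STEP = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}

def follow_path(moves):
    """Replay a move sequence and return the final (x, y, heading)."""
    x, y, h = 0, 0, 0
    for m in moves:
        if m == "F":
            dx, dy = STEP[HEADINGS[h]]
            x, y = x + dx, y + dy
        elif m == "R":
            h = (h + 1) % 4
        elif m == "L":
            h = (h - 1) % 4
    return x, y, HEADINGS[h]

def steps_home(x, y):
    """Shortest axis-aligned route back: |x| steps east or west and |y|
    steps north or south restore both co-ordinates to zero."""
    ew = "W" * x if x > 0 else "E" * (-x)
    ns = "S" * y if y > 0 else "N" * (-y)
    return ew + ns
```

For example, a robot that drives two steps north, turns right and drives one step east ends at (1, 2) facing east, and the route home is one step west followed by two steps south.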

Before we could use our algorithm effectively, we needed to be able to move the robots reliably.

Calibrating the robots - Our initial attempts to increase movement accuracy involved experimenting with different hardware configurations [27]. We concluded that the best approach was to rebuild the robots using specific pairs of motors with similar speeds, and then calibrate each robot in software. To do this we tested the maximum speed of each motor by measuring the number of rotations over a fixed period of time [28]. This dramatically improved each robot's ability to move forwards in a straight line. To improve turning accuracy, we modified both turning functions to apply a series of 10 ms bursts of full power to each wheel, so that no substantial momentum could build up. Each robot now had two variables to calibrate: the number of steps required to turn left and right through ninety degrees (two variables were required because the motors run at different speeds forwards and backwards).

Enhanced relative position - As a result of calibrating the robots and pairing up better-matched motors, the robots now followed a path with greater accuracy, leading to runs that ended with the robot much closer to the origin.

Introducing header files - To improve the modularisation and readability of our code, we put the calibration values for each robot's standardised movement, along with the value for the average ambient light, into a header file. As a result, the robots performed tasks more reliably overall, because the code was tailored to their individual eccentricities. Later on, we turned regularly used pieces of code into functions and built up a library of more complicated functions in another header file [29]. With this library we could write programs that did quite complicated things in only a few lines of code.
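The burst-turning scheme above can be sketched briefly. This is a Python illustration of NQC-side logic; the per-robot step counts are invented examples of the kind of calibration values we kept in each robot's header file.

```python
# Sketch of burst turning: rather than one long powered turn, each
# ninety-degree turn is a calibrated number of short 10 ms full-power
# pulses, so momentum never builds up. Step counts are illustrative.

CALIBRATION = {
    "robot_a": {"left": 14, "right": 16},  # steps per ninety-degree turn
    "robot_b": {"left": 13, "right": 15},
}

def turn_ninety(robot, direction):
    """Return the burst schedule for a ninety-degree turn: one 10 ms
    full-power pulse per calibrated step for this robot."""
    return [("burst_10ms", direction)] * CALIBRATION[robot][direction]
```

Left and right counts differ because the motors run at different speeds forwards and backwards, which is why each robot carries two calibration variables.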
For descriptions of the programs from this section, please refer to Corpus of Materials: Experimentations and coding sessions/Iteration 3 - Implementing a means of position determination.

7.4 Problems Encountered

The standard movements were still not carried out with 100% accuracy, and on top of this, slight variations in terrain caused large discrepancies that our program did not yet handle. This level of accuracy did not meet our expectations, and we decided that future programs should be capable of recovering from, and recording, this error. Taking these inaccuracies into account, once the robot returns to the origin we cannot say for sure that it is pointing exactly towards any one compass direction; the more successive paths the robot traverses, the more the directions it faces will deviate from the original compass points.

7.5 Learning Outcomes

Using a calibration program to standardise the robots' movements results in different robots performing tasks and following paths in a similar fashion. However, we were still not able to fully overcome the limitations of the hardware. Using compass points enables the robots to return to the absolute starting position in a more efficient manner.

8 Iteration Four: Returning the Exploring Robot to Its Starting Position

8.1 Introduction

We were satisfied that we could get a robot to plot a path back to the origin from an explored point, but this did not take into account any error that had accrued from encountering unexpected terrain or from the hardware. To cope with this, we discussed guiding the robot back home by placing a beacon (light source) at the origin. Earlier in the project we had developed code to home in on a light source [30]; when tested, it was reasonably successful on perfectly smooth terrain, such as a table top. However, we had used the lowest power setting for the motors, and as a result the robot would not move on carpeted surfaces.
We tried increasing the power settings to allow the robot to move on a greater range of surfaces, but the robot then performed the sweeping search too fast and too wide, and as a result it crashed into the beacon a lot of the time.

8.2 Aims

- Enhance the home light program to move towards a beacon in a more efficient and robust manner.
- Enable a robot to recover from an erroneous path and return to the origin.
- Automate repositioning of the robot at the origin to prepare for the next exploration.

8.3 Overview

Enhanced Home Light - To overcome the limitations of the first piece of code that homed in on a light, we decided to locate the light in a similar fashion, but then move towards it in a straight line rather than in a sweeping motion. We hoped this would overcome the limitations of the previous approach and allow us to run the wheels at full power.
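The scan-then-drive idea can be sketched in a few lines. This is an illustrative Python fragment, not our NQC code, and the readings are hypothetical sensor values.

```python
# Sketch of the enhanced home-light search: sample the light at each
# step of a full rotation, then face the step with the strongest
# reading and drive towards it in a straight line at full power.

def brightest_step(scan):
    """Return the index of the turn step with the strongest reading."""
    return max(range(len(scan)), key=lambda i: scan[i])
```

Separating the search (a slow rotation in place) from the approach (a straight drive) is what lets the wheels run at full power without the wide, fast sweeps that previously caused crashes.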

However, even this enhanced code did not solve all of our problems. The light sources we were using had a very large angle of projection, so the robot would head towards the edge of this cone rather than its centre; this led to the robot either hitting the beacon or going past it. To resolve this, we took inspiration from the way we calibrated the light sensor, making sure that the robot is facing the centre of the light source before it moves towards it.

Two Beacon Algorithm - We were still facing difficulties in re-identifying which direction was north after returning to the starting position. We spoke to our supervisor about this conundrum, discussing a collaborative solution involving not just one beacon robot but multiple beacons as reference points, to assist in realigning the explorer after it returns to the origin. Our first interpretation of this suggestion was to place two beacons in close proximity to the origin: one beacon just south of the origin facing north, and the other off to the side of the origin facing back towards it. In theory, the explorer could now home in on the origin and then only have to turn to align with the east or west beacon.

Array path - To move towards a more explorative project, we decided it was necessary to be able to alter exploration paths dynamically and to pass paths from robot to robot and from PC to robot. Firstly, we wrote code that used an array of integers to represent the exploration path; the robot dereferences these integers one by one, turning them into movements by calling the respective functions we had already defined in the header file [31].
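The array-path mechanism amounts to a small dispatch table. The sketch below is Python rather than NQC, and the opcode numbering is an assumption made for illustration.

```python
# Sketch of the array-path mechanism: a path is an array of small
# integers, each dereferenced into a call to one of the movement
# functions from the header file.

FORWARD, LEFT, RIGHT = 0, 1, 2

def run_path(path, forward, left, right):
    """Dispatch each opcode in `path` to its movement function."""
    dispatch = {FORWARD: forward, LEFT: left, RIGHT: right}
    for op in path:
        dispatch[op]()

# Record the movements instead of driving motors, to show the dispatch:
log = []
run_path([FORWARD, FORWARD, RIGHT, FORWARD],
         lambda: log.append("F"), lambda: log.append("L"),
         lambda: log.append("R"))
```

Because a path is just an array of integers, it is cheap to transmit over IR, which is what makes passing paths from PC to robot and robot to robot practical.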
For descriptions of the programs from this section, please refer to Corpus of Materials: Experimentations and coding sessions/Iteration 4 - Returning the exploring robot to its starting position.

8.4 Problems Encountered

The standard light sources (small incandescent light bulbs) were still causing problems: they were not very bright, and our code works on the assumption that the beacon's light will be brighter than the ambient light. In an attempt to resolve this, we had some custom light sources built that were brighter and had a more focused cone of light. In implementing the two-beacon algorithm, we encountered great difficulties in developing a suitable protocol for instructing the relevant robot to turn its light on or off, so that the correct beacon was used for homing in or straightening up the explorer. For these reasons we deemed it necessary to think of a new approach.

8.5 Learning Outcomes

We realised that using at least two beacon robots would be essential to return an explorer to its exact starting point. We also introduced the idea that the error on a path, due to factors such as bad terrain, could be measured by the amount of time or the number of steps needed by the home light function to return the robot to its starting position. We found that arrays were an invaluable tool for defining paths for the robot to follow.

9 Iteration Five: The Baseline Exploration Algorithm

9.1 Introduction

As a group we decided to focus on solving the problems inherent in exploring unknown terrain in an autonomous and collaborative fashion, since we believed this to be a more relevant direction. To this end, we worked simultaneously on two aspects of an autonomous exploration system: an updated exploration algorithm building on our previous work, and a PC application that communicates with the explorer to determine the best path to explore next, based on how erroneous the previous paths were.
9.2 Aims

- Update the Back Home algorithm to follow a path in reverse, as opposed to taking the most direct route.
- Define an algorithm to explore a given path, evaluate how safe that path is and communicate the information to a PC.
- Write a PC application capable of communicating with a robot to send and receive paths.

9.3 The Refined Back Home Algorithm

As described, our Back Home algorithm was originally designed to evaluate the quickest path back to the starting point. However, we concluded that this method could not be used for exploration, because it introduced a potential problem: if a robot traversed a path, returned to its starting position and encountered an error along the way, it would be impossible to tell on which path the error was encountered. For this reason we re-wrote the algorithm to use the same path to travel to and from its destination.
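Retracing the same path can be sketched as a simple transformation of the outbound move list. This is an illustrative Python fragment under the assumptions that the robot makes a 180-degree about-turn at the destination and that moves are encoded as 'F' (forward step), 'L' and 'R' (ninety-degree turns).

```python
# Sketch of the refined Back Home algorithm: replay the outbound moves
# in reverse order with left and right turns swapped, so any error
# measured stays tied to the path on which it occurred.

def retrace(outbound):
    """Return the homeward move sequence for an outbound one."""
    invert = {"F": "F", "L": "R", "R": "L"}
    return [invert[m] for m in reversed(outbound)]
```

The quickest-route version mixed error from many grid cells into one measurement; retracing the identical path keeps the error attributable to that path alone.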

9.4 The Baseline Algorithm [32]

For the initial version of our baseline algorithm, we decided to use three robots, giving each one a specific function. Taking inspiration from our Millibots research, we use one explorer robot, with a front and a rear light sensor, and two beacon robots, each with a forward-facing light [33].

The explorer robot starts between the two beacon robots and takes a reading from each beacon using its front and rear light sensors. It then turns ninety degrees, traverses a pre-determined path (see the Exploration Control Program section) and follows the same path back to its starting position, using the Back Home algorithm. However, due to inconsistencies in the explored terrain (e.g. obstacles, uneven surfaces), the explorer is likely to finish some distance away from its starting position. To return there, it scans for the closest beacon using its front light sensor and moves towards it, using the find light code. When it is close enough, it turns on its axis until its rear light sensor finds light from the other beacon, then homes in on that. The robot repeats the previous two steps until the light readings from its sensors are very close to the readings taken when it started (see Appendix A for a diagram).

To further improve the baseline algorithm, we replaced the standard incandescent lights with custom-built infra-red lights. We also made the beacons distinguishable from one another by placing one light on top of the RCX brick and the other below it, and we changed the light sensor positions on the explorer robot to match. This way, the strongest light readings can only be attained when the robot is pointed at the relevant beacon, i.e. facing the same beacon that it started facing [34].

The final stage in the baseline algorithm is to generate an error value for each path that the robot explores.
Once the explorer has traversed a path and returned to what it believes to be its starting position, and its front and rear light sensor readings do not match those taken at the start, the number of forward steps required to home in on the lower beacon is counted and stored as the error value. Once the robot has manoeuvred itself back to the starting position, it sends this error value to a PC via infra-red as a measure of how consistent the terrain is, and awaits the next path.

9.4 Exploration Control Program

This program was written in Perl and ran on a Linux machine. Its aim was to send the robot a path, receive the path's error from the robot after a run, and calculate a new path based on that error, the currently unexplored paths and the final destination [35]. It accomplished this by progressively sending a route which was one step closer to the final destination each run, and then applying the resulting error to a look-up grid. In this way it built up a knowledge base of difficult areas in the terrain. This system took advantage of the fact that we would only explore one piece of unexplored terrain per journey: building up a path by adding one node at a time, we know that the error introduced by the single new node is the total error of that path minus the errors of all the other nodes traversed on that journey. When enough information had been gathered from exploring, a breadth-first search algorithm could be applied to find the route with the least error to the destination. The program could then send a message signifying either the end of the search or the advancement of the baseline for a new exploration.

For descriptions of the programs from this section please refer to Corpus of Materials: Experimentations and coding sessions/iteration5 The baseline exploration algorithm.
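The control program's bookkeeping can be sketched as follows. The original program was written in Perl; the grid representation and names here are illustrative assumptions. The first function attributes a journey's residual error to the single new node; the second finds the least-error route once the grid is populated, shown here as a uniform-cost search, one way of realising the least-error variant of the search step described above.

```python
import heapq

def new_node_error(journey, total_error, error_grid):
    """Error attributable to the one unexplored node on this journey:
    the journey's total error minus the known errors of the nodes
    already recorded in the look-up grid."""
    known = sum(error_grid[node] for node in journey if node in error_grid)
    return total_error - known

def least_error_route(error_grid, start, goal, neighbours):
    """Find the route from start to goal whose summed error is
    smallest.  `neighbours(node)` yields adjacent grid nodes."""
    frontier = [(0, start, [start])]   # (accumulated error, node, path so far)
    settled = {}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        if settled.get(node, float("inf")) <= cost:
            continue                   # already reached this node more cheaply
        settled[node] = cost
        for nxt in neighbours(node):
            heapq.heappush(frontier, (cost + error_grid[nxt], nxt, path + [nxt]))
    return None, float("inf")
```

On a small grid where one cell carries a high error value (an obstacle, in the report's terms), the search routes around that cell.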
9.5 Problems Encountered

We had difficulties distinguishing between the two light beacons until we repositioned the light sources and sensors. The custom-built blue and green lights did not meet our expectations because the light sensors are less sensitive to the visible light spectrum. Our system of exploration limits us to exploring only one new node per journey, making the whole exploration much slower. Because we cannot deal with every problem that may arise from the hardware and the external environment, the nodes are only a rough approximation of an area.

9.6 Learning Outcomes

By interfacing the robot with the PC we discovered that it was essential to develop a standard protocol. This involved repeatedly sending messages until the device received an acknowledgement, as we could not otherwise guarantee that a message had arrived. Because the program keeps a record of all the error values, it would be feasible, as an extension, to print out a map of the terrain, with high error values representing obstacles. We have learnt methods for calculating shortest paths and avoiding hazardous terrain. Ultimately, we learned the underlying difficulties in applying advanced robotic concepts to the Lego Mindstorms platform.

10 Conclusion

We set out with the goal of writing software that would enable Lego Mindstorms robots to collaboratively and autonomously explore an area of

unknown terrain. Given the limitations of the hardware, we are satisfied with the complexity and the extent of what we have achieved.

Autonomous Exploration. The robots are able to successfully explore an area by following a communicated path and return to their starting position with reasonable accuracy; once there, they are capable of repositioning themselves and exploring the next path. The only external interaction is with a program which itself generates relevant paths autonomously; by using other robots as points of reference we have avoided using external beacons. However, as we did not have time to implement our "stuck in the mud" idea, external interaction would be required should the explorer become unable to locate or return to the beacons.

Collaborative Exploration. Our program enables the robots to effectively use each other as points of reference whilst exploring; one robot is used to analyse whether an area can be easily negotiated, which helps plot further exploration for all the robots.

Overcoming Limitations. We have overcome many of the platform's inherent problems, such as differences between nominally identical components, inaccurate internal odometry, and the limited capability of the provided sensors. For the most part this was dealt with by the software adapting to each robot's individual eccentricities; we are confident this approach could be applied to other forms of robotic exploration where the environment interferes with basic motion.

Extensions. Given more time, there are a number of things we feel could be accomplished to further the project. We believe we were one step away from moving the baseline forward to begin exploring a new area once the primary area had been explored.
We would also like to have increased the number of beacons to three, to enable more accurate triangulation techniques for position determination; due to budget and time restraints we were unable to pursue this idea, which would have required a radial emitter rather than the directional emitters we were using. We could also have used more advanced hardware, for example the NXT Mindstorms robots [36], which have much more accurate timekeeping, ultrasonic distance sensors and Bluetooth communication technology; the latter would have allowed much easier communication, independent of the direction the robots were facing. Feasible near-future enhancements would include a working prototype of the "stuck in the mud" recovery concept and software-based analysis of the error with map generation [37]. We believe our project has introduced an innovative method for analysing terrain using error values, and an interesting error recovery protocol in the "stuck in the mud" proposal. Ultimately, we feel the biggest achievement of our work was to take unreliable hardware and still produce a meaningful output.

Acknowledgements

We would like to give our thanks to our project supervisor Ian Marshall and to Mark Price for the custom-built light sources. With the constraints of the given hardware we were forced to extend the project by constructing brighter LEDs; we could possibly have built other, more functional sensors, but this would have shifted the emphasis of the project from a software challenge to a robotics construction challenge.

Appendix A

Bibliography and References

[1] Lego Mindstorms: asp Last accessed 20/03/2007
[2] Lifelong Kindergarten: Last accessed 15/03/2007
[3] RCX Internals: Last accessed 20/03/2007
[4] Kekoa Proudfoot: Last accessed 15/03/2007
[5] Dave Baum's Definitive Guide to Lego Mindstorms, Dave Baum, Apress.
[6] LeJOS: Last accessed 15/03/2007
[7] NQC: Last accessed 15/03/2007
[8] The Lego Group: egomindstorms.com Last accessed 15/03/2007
[9] Corpus of Materials: Coding Standards and Quality Assurance Procedures.doc
[10] Schmidt, D.; Luksch, T.; Wettach, J.; Berns, K.: Autonomous behavior-based exploration of office environments. In Proceedings of the 3rd International Conference on Informatics in Control, Automation and Robotics, 2006.
[11] Giannetti, L.; Valigi, P.: Collaboration among members of a team: a heuristic strategy for multi-robot exploration. In Proceedings of the 14th Mediterranean Conference on Control and Automation, 2006.
[12] Luis E. Navarro-Serment, Robert Grabowski, Christiaan J. J. Paredis and Pradeep K. Khosla: Millibots: a Distributed Heterogeneous Robot Team, Carnegie Mellon University.
[13] J. Borenstein, H. R. Everett, and L. Feng: Where am I? Sensors and Methods for Mobile Robot Positioning, University of Michigan, 1996.
[14] Where am I?, page 152
[15] Corpus of Materials: bumpngrind.nqc
[16] Corpus of Materials: LINE FOLLOW.nqc
[17] Corpus of Materials: Follow the Line.nqc
[18] Corpus of Materials: IRmessaging.nqc
[19] Where am I?, page 138
[20] Jonathan Knudsen, O'Reilly Network: LegoMindstorms.html Last accessed 15/03/07
[21] Corpus of Materials: Coding Standards and Quality Assurance Procedures.doc
[22] Corpus of Materials: The Stuck in the Mud concept.doc
[23] Corpus of Materials: Minutes (with Ian).doc
[24] Corpus of Materials: movement_test.xls
[25] Corpus of Materials: Standard Robot Build.doc, section

[26] Corpus of Materials: turningmeasure compass2
[27] Corpus of Materials: Standard Robot Build.doc
[28] Corpus of Materials: Motor Calibration.doc
[29] Corpus of Materials: utils.nqh
[30] Corpus of Materials: home_light_v3.nqc
[31] Corpus of Materials: array_test_2.7.nqc, utils.nqh
[32] Corpus of Materials: Baseline Algorithm Description.doc
[33] Corpus of Materials: Standard Robot Build.doc, section 6.1
[34] Corpus of Materials: Standard Robot Build.doc, section 6.3
[35] Corpus of Materials: Exploration Control program description.doc
[36] The Lego Group: Last accessed 15/03/07
[37] Corpus of Materials: Exploration Control program description.doc


More information

Levels of Description: A Role for Robots in Cognitive Science Education

Levels of Description: A Role for Robots in Cognitive Science Education Levels of Description: A Role for Robots in Cognitive Science Education Terry Stewart 1 and Robert West 2 1 Department of Cognitive Science 2 Department of Psychology Carleton University In this paper,

More information

Using sound levels for location tracking

Using sound levels for location tracking Using sound levels for location tracking Sasha Ames sasha@cs.ucsc.edu CMPE250 Multimedia Systems University of California, Santa Cruz Abstract We present an experiemnt to attempt to track the location

More information

Moving Obstacle Avoidance for Mobile Robot Moving on Designated Path

Moving Obstacle Avoidance for Mobile Robot Moving on Designated Path Moving Obstacle Avoidance for Mobile Robot Moving on Designated Path Taichi Yamada 1, Yeow Li Sa 1 and Akihisa Ohya 1 1 Graduate School of Systems and Information Engineering, University of Tsukuba, 1-1-1,

More information

Lego Mindstorms Robotic Football John Russell Dowson Computer Science 2002/2003

Lego Mindstorms Robotic Football John Russell Dowson Computer Science 2002/2003 Lego Mindstorms Robotic Football John Russell Dowson Computer Science 2002/2003 The candidate confirms that the work submitted is their own and the appropriate credit has been given where reference has

More information

Team Autono-Mo. Jacobia. Department of Computer Science and Engineering The University of Texas at Arlington

Team Autono-Mo. Jacobia. Department of Computer Science and Engineering The University of Texas at Arlington Department of Computer Science and Engineering The University of Texas at Arlington Team Autono-Mo Jacobia Architecture Design Specification Team Members: Bill Butts Darius Salemizadeh Lance Storey Yunesh

More information

Lab 7: Introduction to Webots and Sensor Modeling

Lab 7: Introduction to Webots and Sensor Modeling Lab 7: Introduction to Webots and Sensor Modeling This laboratory requires the following software: Webots simulator C development tools (gcc, make, etc.) The laboratory duration is approximately two hours.

More information

Solar Powered Obstacle Avoiding Robot

Solar Powered Obstacle Avoiding Robot Solar Powered Obstacle Avoiding Robot S.S. Subashka Ramesh 1, Tarun Keshri 2, Sakshi Singh 3, Aastha Sharma 4 1 Asst. professor, SRM University, Chennai, Tamil Nadu, India. 2, 3, 4 B.Tech Student, SRM

More information

AUTOMATED BEARING WEAR DETECTION. Alan Friedman

AUTOMATED BEARING WEAR DETECTION. Alan Friedman AUTOMATED BEARING WEAR DETECTION Alan Friedman DLI Engineering 253 Winslow Way W Bainbridge Island, WA 98110 PH (206)-842-7656 - FAX (206)-842-7667 info@dliengineering.com Published in Vibration Institute

More information

Ev3 Robotics Programming 101

Ev3 Robotics Programming 101 Ev3 Robotics Programming 101 1. EV3 main components and use 2. Programming environment overview 3. Connecting your Robot wirelessly via bluetooth 4. Starting and understanding the EV3 programming environment

More information

Preliminary Proposal Accessible Manufacturing Equipment Team 2 10/22/2010 Felix Adisaputra Jonathan Brouker Nick Neumann Ralph Prewett Li Tian

Preliminary Proposal Accessible Manufacturing Equipment Team 2 10/22/2010 Felix Adisaputra Jonathan Brouker Nick Neumann Ralph Prewett Li Tian Preliminary Proposal Accessible Manufacturing Equipment Team 2 10/22/2010 Felix Adisaputra Jonathan Brouker Nick Neumann Ralph Prewett Li Tian Under the supervision of Dr. Fang Peng Sponsored by Resource

More information

More Info at Open Access Database by S. Dutta and T. Schmidt

More Info at Open Access Database  by S. Dutta and T. Schmidt More Info at Open Access Database www.ndt.net/?id=17657 New concept for higher Robot position accuracy during thermography measurement to be implemented with the existing prototype automated thermography

More information

Learning and Using Models of Kicking Motions for Legged Robots

Learning and Using Models of Kicking Motions for Legged Robots Learning and Using Models of Kicking Motions for Legged Robots Sonia Chernova and Manuela Veloso Computer Science Department Carnegie Mellon University Pittsburgh, PA 15213 {soniac, mmv}@cs.cmu.edu Abstract

More information

MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT

MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT F. TIECHE, C. FACCHINETTI and H. HUGLI Institute of Microtechnology, University of Neuchâtel, Rue de Tivoli 28, CH-2003

More information

I.1 Smart Machines. Unit Overview:

I.1 Smart Machines. Unit Overview: I Smart Machines I.1 Smart Machines Unit Overview: This unit introduces students to Sensors and Programming with VEX IQ. VEX IQ Sensors allow for autonomous and hybrid control of VEX IQ robots and other

More information

Inspiring Creative Fun Ysbrydoledig Creadigol Hwyl. LEGO Bowling Workbook

Inspiring Creative Fun Ysbrydoledig Creadigol Hwyl. LEGO Bowling Workbook Inspiring Creative Fun Ysbrydoledig Creadigol Hwyl LEGO Bowling Workbook Robots are devices, sometimes they run basic instructions via electric circuitry or on most occasions they can be programmable.

More information

Robotic teaching for Malaysian gifted enrichment program

Robotic teaching for Malaysian gifted enrichment program Available online at www.sciencedirect.com Procedia Social and Behavioral Sciences 15 (2011) 2528 2532 WCES-2011 Robotic teaching for Malaysian gifted enrichment program Rizauddin Ramli a *, Melor Md Yunus

More information

Introduction to the VEX Robotics Platform and ROBOTC Software

Introduction to the VEX Robotics Platform and ROBOTC Software Introduction to the VEX Robotics Platform and ROBOTC Software Computer Integrated Manufacturing 2013 Project Lead The Way, Inc. VEX Robotics Platform: Testbed for Learning Programming VEX Structure Subsystem

More information

Erik Von Burg Mesa Public Schools Gifted and Talented Program Johnson Elementary School

Erik Von Burg Mesa Public Schools Gifted and Talented Program Johnson Elementary School Erik Von Burg Mesa Public Schools Gifted and Talented Program Johnson Elementary School elvonbur@mpsaz.org Water Sabers (2008)* High Heelers (2009)* Helmeteers (2009)* Cyber Sleuths (2009)* LEGO All Stars

More information

NAVIGATION OF MOBILE ROBOTS

NAVIGATION OF MOBILE ROBOTS MOBILE ROBOTICS course NAVIGATION OF MOBILE ROBOTS Maria Isabel Ribeiro Pedro Lima mir@isr.ist.utl.pt pal@isr.ist.utl.pt Instituto Superior Técnico (IST) Instituto de Sistemas e Robótica (ISR) Av.Rovisco

More information

Frequency Hopping Pattern Recognition Algorithms for Wireless Sensor Networks

Frequency Hopping Pattern Recognition Algorithms for Wireless Sensor Networks Frequency Hopping Pattern Recognition Algorithms for Wireless Sensor Networks Min Song, Trent Allison Department of Electrical and Computer Engineering Old Dominion University Norfolk, VA 23529, USA Abstract

More information

Traffic Control for a Swarm of Robots: Avoiding Group Conflicts

Traffic Control for a Swarm of Robots: Avoiding Group Conflicts Traffic Control for a Swarm of Robots: Avoiding Group Conflicts Leandro Soriano Marcolino and Luiz Chaimowicz Abstract A very common problem in the navigation of robotic swarms is when groups of robots

More information

Figure 1 HDR image fusion example

Figure 1 HDR image fusion example TN-0903 Date: 10/06/09 Using image fusion to capture high-dynamic range (hdr) scenes High dynamic range (HDR) refers to the ability to distinguish details in scenes containing both very bright and relatively

More information

Agent-based/Robotics Programming Lab II

Agent-based/Robotics Programming Lab II cis3.5, spring 2009, lab IV.3 / prof sklar. Agent-based/Robotics Programming Lab II For this lab, you will need a LEGO robot kit, a USB communications tower and a LEGO light sensor. 1 start up RoboLab

More information

International Journal of Informative & Futuristic Research ISSN (Online):

International Journal of Informative & Futuristic Research ISSN (Online): Reviewed Paper Volume 2 Issue 4 December 2014 International Journal of Informative & Futuristic Research ISSN (Online): 2347-1697 A Survey On Simultaneous Localization And Mapping Paper ID IJIFR/ V2/ E4/

More information

Mission Reliability Estimation for Repairable Robot Teams

Mission Reliability Estimation for Repairable Robot Teams Carnegie Mellon University Research Showcase @ CMU Robotics Institute School of Computer Science 2005 Mission Reliability Estimation for Repairable Robot Teams Stephen B. Stancliff Carnegie Mellon University

More information

Introduction. Introduction ROBUST SENSOR POSITIONING IN WIRELESS AD HOC SENSOR NETWORKS. Smart Wireless Sensor Systems 1

Introduction. Introduction ROBUST SENSOR POSITIONING IN WIRELESS AD HOC SENSOR NETWORKS. Smart Wireless Sensor Systems 1 ROBUST SENSOR POSITIONING IN WIRELESS AD HOC SENSOR NETWORKS Xiang Ji and Hongyuan Zha Material taken from Sensor Network Operations by Shashi Phoa, Thomas La Porta and Christopher Griffin, John Wiley,

More information