Simulation and control of distributed robot search teams


Computers and Electrical Engineering 29 (2003) 625-642

Robert L. Dollarhide a,1, Arvin Agah b,*

a Signal Exploitation and Geolocation Division, Southwest Research Institute, San Antonio, TX 78238, USA
b Department of Electrical Engineering and Computer Science, The University of Kansas, Lawrence, KS 66045, USA

Received in revised form 14 April 1999; accepted 9 September 1999

Abstract

This article describes the simulation of distributed autonomous robots for search and rescue operations. The simulation system is utilized to perform experiments with various control strategies for the robot team and team organizations, evaluating the comparative performance of the strategies and organizations. The objective of the robot team, once deployed in an environment (floorplan) with multiple rooms, is to cover as many rooms as possible. The simulated robots are capable of navigating through the environment and can communicate using simple messages. The simulator maintains the world, provides each robot with sensory information, and carries out the actions of the robots. The simulator keeps track of the rooms visited by the robots and the elapsed time in order to evaluate the performance of the robot teams. The robot teams are composed of homogeneous robots, i.e., identical control strategies are used to generate the behavior of each robot in the team. The ability to deploy autonomous robots, as opposed to humans, in hazardous search and rescue missions could provide immeasurable benefits. © 2003 Elsevier Science Ltd. All rights reserved.

Keywords: Evolutionary robotics; Distributed robotics; Robot search teams; Multi-agent systems

1. Introduction

It was not long ago when the images and public perception of robots were limited to the extreme visions created by science fiction writers and the entertainment industry. However, today it is not uncommon to read an interesting article about recent advances in robotics, watch a robot search the surface of Mars [15] on the nightly news, or even possibly encounter one in the workplace [2]. As robots make such inroads into our daily lives, it becomes increasingly apparent how they can benefit society.

* Corresponding author. E-mail address: agah@ku.edu (A. Agah).
1 Work performed while Robert L. Dollarhide was at the University of Kansas.

Nowhere is this more evident than in situations where one or more robots could replace humans in a dangerous situation. One area of study, which has recently piqued the interest of the robotics community, is the use of robots in search and rescue operations. Search and rescue operations require a massive effort by rescuers in very dangerous environments. Collapsed and unstable buildings, leaking gas lines, and fire are only a few of the things that pose a threat to the lives of human rescue teams. The ability to deploy autonomous robots, as opposed to humans, into this type of environment to search for survivors provides immeasurable benefits. The potential for using robots in place of humans requires addressing questions such as: (1) What type of robot can move effectively in an unknown and dynamic environment? (2) How many robots should be used to efficiently cover the most area in a search operation? (3) How should the robots be controlled? These and related issues can be addressed through simulation experiments where teams of deployed robots are implemented, tested, evaluated, and improved upon.

This paper presents the design and implementation of a computer simulation for area coverage by a team of robots in a search and rescue mission [9]. A graphical computer simulation program was developed as part of this project using OpenGL [18] and programmed in C++ [8]. The simulator is provided with a variety of parameters to specify the scope of the experiment and then displays the robots as they work their way through a specified floorplan. In each time step, a robot uses its sensors to detect objects, and the sensory data is then mapped to the rule sets (situation-action pairs), resulting in the associated action. The performance of a robot team is rated by the percentage of total rooms (enclosed areas) entered within a pre-specified amount of time. Teams reaching complete coverage (100%) are also rated upon how quickly total coverage was achieved. A snapshot of the simulated search robots, deployed in a multi-room floorplan, is shown in Fig. 1.

Fig. 1. The simulated robot search team deployed in a specific floorplan.
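As an illustration of this per-time-step cycle, the following C++ sketch shows one simulation step for a homogeneous team sharing a single rule set. The type and function names are illustrative assumptions rather than the simulator's actual identifiers, and the placeholder bodies stand in for the sensing, rule-matching, and world-update code described in Sections 3 and 4.

#include <vector>

// Illustrative types; the simulator's actual data structures are not shown here.
struct SensedData { int distance[8]; int entity[8]; };   // one entry per sensor
struct Rule { };                                         // distance bounds, entity, state, action
enum class Action { Forward, TurnLeft, TurnRight, TurnRed, TurnGreen, TurnBlue };

// Placeholder hooks for sensing, rule matching, and carrying out an action.
SensedData sense(int /*robotId*/) { return SensedData{}; }
Action matchRules(const SensedData&, const std::vector<Rule>&) { return Action::Forward; }
void applyAction(int /*robotId*/, Action /*a*/) { /* move the robot or change its state color */ }

// One simulation time step (0.1 s) for a team of identical robots.
void step(int teamSize, const std::vector<Rule>& ruleSet) {
    for (int robot = 0; robot < teamSize; ++robot) {
        SensedData sensed = sense(robot);             // read the environment
        Action action = matchRules(sensed, ruleSet);  // situation-action mapping
        applyAction(robot, action);                   // simulator carries out the action
    }
}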

2. Robotics

Robotics is an extremely broad field encompassing a variety of applications and research interests. From the robots used on factory assembly lines [2] to those conducting Mars exploration for NASA [15], there are seemingly endless possibilities relating to the visions and uses of robotics. The definition of the word robot is itself dependent upon both who is defining it and its intended context. For our purposes, an intelligent robot is defined as "a machine able to extract information from its environment and use knowledge about its world to move safely in a meaningful and purposive manner" [4].

Most researchers agree that the methods used for controlling autonomous robots can be divided into three general categories: deliberative, reactive, and hybrid systems. The deliberative approach (also referred to as planner-based) is a strategy where intelligent tasks can be implemented by a reasoning process operating on an internal model of the world [12]. This approach dominated the artificial intelligence community for years, resulting in the development of a standard architecture by the US Government in the 1980s, which reflected the deliberative model [4]. Rodney Brooks, Director of the Massachusetts Institute of Technology's Artificial Intelligence Laboratory, refers to deliberative architectures as a sense-model-plan-act (SMPA) framework [6]. In contrast, reactive systems do not maintain an internal model of the world and apply some form of a simple mapping between stimuli and responses [13]. Ronald Arkin of the Georgia Institute of Technology gives the following precise definition of the approach: "Simply put, reactive control is a technique for tightly coupling perception and action, typically in the context of motor behaviors, to produce timely robotic response in dynamic and unstructured worlds" [4].

The robotics community began to take interest in reactive systems in the mid 1980s, as many of the shortcomings of deliberative control for mobile robots became apparent. Specifically, deliberative autonomous systems displayed a number of deficiencies, such as brittleness, inflexibility, and slow response times, when operating in complex and dynamic environments [12]. Speed of response was a key weakness of deliberative systems. Maja Mataric of the University of Southern California suggests that the primary division between reactive and deliberative strategies can be drawn based on the type and amount of computation performed at run-time [13]. In 1985, Brooks presented a new and fundamentally different architecture for controlling mobile robots using a purely reactive approach [7]. His architecture consisted of a number of asynchronous modules that were layered into a hierarchy for controlling a robot. Each control module was computationally simple and could run in parallel with other modules. To accomplish complex tasks, higher-level modules could subsume lower-level modules, temporarily suspending their operations. A large amount of research using the subsumption (reactive) architecture followed Brooks' initial study.

Hybrid architectures for controlling mobile robots have inevitably arisen, incorporating the benefits of both deliberative and reactive systems. Most hybrid architectures utilize a reactive approach to handle real-time issues concerning the robot's performance within its real-world environment, while a deliberative approach is used for higher-level planning and complex computation [6]. These approaches frequently divide the system into a low-level (reactive) layer, a high-level (deliberative) layer, and an interface layer between the two.

Reactive architectures and behavior-based architectures are most often considered identical. However, extremes exist regarding how basic or how complex a system can become while still being classified as reactive or behavior-based.

Mataric contends that there is a fundamental difference between reactive and behavior-based systems [13]. She suggests that though behavior-based systems contain properties or even components of a purely reactive system, their computation need not be as limited. In this way, behavior-based systems can store various forms of state and can implement different representations. Furthermore, she suggests that behaviors are more time-extended than the reflexive actions of a reactive system.

As interest in reactive systems grew, researchers inevitably attempted to mimic biological systems using machinery and computational systems for the purpose of accomplishing a desired task. Arkin describes how neuroscience provides a basis for understanding and modeling the underlying circuitry of biological behavior [4]. He points out that a number of psychological schools of thought have inspired robotics researchers over the years. In particular, the study of behaviorism (which originated in the early 1910s) has secured a solid foundation within robotics. This method of study is based upon observation only, in which everything is considered in terms of stimulus and response [4]. In this context, numerous research efforts have examined biological behaviors in hopes of imitating complex animal behaviors in autonomous mobile robots. Arkin makes a correlation between animal and artificial intelligence by defining intelligence as a system's ability to improve its likelihood of survival within the real world and, where appropriate, to compete or cooperate successfully with other agents to do so [4].

Recently, this study of biological behavior has extended into the world of multi-agent systems. Sociobiological behaviors have been studied and emulated using groups of mobile robots. Arvin Agah of the University of Kansas examined individual and collective robot learning using a Tropism System Cognitive Architecture based on the likes and dislikes of the robot agents [1]. Some of the work in this area has relied upon observations of ant and bee colonies and their ability to carry out global tasks using the limited local information of individual agents within the system. Within the last decade, researchers have begun to focus on robotic systems consisting of multiple robots, either homogeneous or heterogeneous, to accomplish one or more tasks. Some of the advantages of distributed robotics include robustness, flexibility, the distributed nature of the system, and simpler individual robots. Examples of such work include [3,5,10,14,16,17].

3. The simulator

The simulator program developed for this study was created using the GNU C++ compiler version 2.7. The simulation consists of four elements: parameters, testing environments, robots, and robot control (rule sets), each of which is described in this section.

3.1. Simulator parameters

Before launching a run for one or more robots, the user is prompted for a number of parameters to define the scope of the experiment. The user is first asked whether they would like to use the graphics mode. The simulator can be run using either a graphics mode, which displays the robots as they maneuver through an area, or a 'no graphics' mode. Primarily, the no graphics mode is used when utilizing methodologies for generating controllers, i.e., rule sets, since the amount of time it takes to complete a run is greatly reduced.

Both modes are identical in every aspect except for the presence of the graphics display. The display was created using OpenGL version 1.2 [18]. An instance of no graphics mode is when the simulator utilizes genetic algorithms to identify near-optimal rule sets for governing the actions of the robot teams. A genetic algorithm is a search algorithm based upon the mechanics of natural selection and natural genetics [11].

Next, the user must specify the number of robots they wish to deploy and the desired sensing range for each robot in the team. The user can select from one to 20 robots. Additionally, the user can select one of three sensing ranges. Sensing range 1 uses a more immediate sensing area around the robot, while sensing ranges 2 and 3 enlarge the range to encompass an increasingly larger area. Robot team members are homogeneous and therefore all have an identical sensing range. Once the robot parameters are specified, the user selects an environment (a floorplan) through which the robots will maneuver. There are three environments to select from: (1) a home floorplan, (2) an office floorplan, and (3) a hotel or apartment complex floorplan. The floorplans consist of 6, 12, and 24 rooms, respectively (these are described in depth later in this section).

The simulator then asks the user to specify a time limit for the robots to cover the selected floorplan. This is specified in units of seconds; one simulation time unit represents a tenth of a second. The robots will only have the specified amount of time to move through the area and will have their performance evaluated upon completion of the time period. Performance is measured by the percentage of rooms entered. For instance, a team that enters 55% of the rooms in an environment receives an initial performance score of 55. If the robot team enters 100% of the rooms, performance is also measured by how quickly the team accomplished total coverage, by adding points to the robot team's initial score. The less time it takes to achieve 100% coverage, the more points are added to the initial score.
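As an illustration, the parameters gathered so far might be collected in a structure along the following lines; the field names and the validation helper are assumptions for illustration, not the simulator's actual code.

#include <stdexcept>

struct SimulationParameters {
    bool useGraphics  = false;  // graphics mode vs. 'no graphics' mode
    int  numRobots    = 1;      // 1 to 20 robots
    int  sensingRange = 1;      // 1, 2, or 3 (increasingly larger sensing area)
    int  floorplan    = 1;      // 1 = home, 2 = office, 3 = hotel/apartment complex
    int  timeLimitSec = 0;      // coverage time limit supplied by the user; one time unit = 0.1 s
};

// Reject values outside the ranges described above.
void validate(const SimulationParameters& p) {
    if (p.numRobots < 1 || p.numRobots > 20) throw std::invalid_argument("robots: 1 to 20");
    if (p.sensingRange < 1 || p.sensingRange > 3) throw std::invalid_argument("sensing range: 1 to 3");
    if (p.floorplan < 1 || p.floorplan > 3) throw std::invalid_argument("floorplan: 1 to 3");
}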
Once these parameters have been specified, the program asks the user whether they would like to use the genetic algorithm. By answering 'yes', the user will ultimately generate a rule set through genetic evolution. A 'no' response indicates that the user will test a specific rule set that has already been developed or evolved. Whether using the simulator to test a developed rule set or to evolve a set of rules using the genetic algorithm, it is necessary to specify the number of rules that exist within a set. A rule set can contain from a minimum of one to a maximum of 50 rules. If testing a developed rule set, the number input by the user must match the number of rules in the actual set. When evolving rule sets with the genetic algorithm, the rules are initially randomly generated for the user. The rule set format is described later.

If the user does not choose to use the genetic algorithm, the program launches the simulation as specified by the user parameters. If the user decides to use the genetic algorithm, the size of the population and the number of generations must be specified. For a population of size n, the program randomly generates n sets of rules. Once created, the program initiates the simulation for the first member of the population using the first randomly generated rule set. When the user-defined time limit for the simulation is reached, the performance of the robot team is evaluated and a new simulation is launched for the next member of the population using its associated randomly generated rule set. This process repeats until all members of the population have completed simulations using their rule sets and have been evaluated. Once this first generation finishes, the genetic algorithm creates a new population for the second generation from the population members of the first, based on their fitness. This process repeats until all generations have completed their simulations for all population members.
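A sketch of this evaluate-and-breed cycle is given below. The names runSimulation and breedNextGeneration are placeholders standing in for the full timed simulation run and for the reproduction, crossover, and mutation steps described in Section 5; they are assumptions for illustration.

#include <cstddef>
#include <vector>

using RuleSet = std::vector<bool>;   // 141 * n bits per population member

// Placeholder: run one timed simulation with this rule set and return its
// fitness (room coverage, plus a bonus when 100% coverage is reached quickly).
double runSimulation(const RuleSet& /*rules*/) { return 0.0; }

// Placeholder: build the next generation from the scored population.
std::vector<RuleSet> breedNextGeneration(const std::vector<RuleSet>& population,
                                         const std::vector<double>& /*fitness*/) {
    return population;
}

// Evaluate every member of every generation; return the best member of the
// final generation (which the simulator writes out as a rule set file).
RuleSet evolve(std::vector<RuleSet> population, int generations) {
    std::vector<double> fitness(population.size(), 0.0);
    for (int g = 0; g < generations; ++g) {
        for (std::size_t i = 0; i < population.size(); ++i)
            fitness[i] = runSimulation(population[i]);
        if (g + 1 < generations)                 // keep the final generation's scores
            population = breedNextGeneration(population, fitness);
    }
    std::size_t best = 0;
    for (std::size_t i = 1; i < fitness.size(); ++i)
        if (fitness[i] > fitness[best]) best = i;
    return population[best];
}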

The genetic algorithm approach creates a rule set file from the best performing population member of the final generation, which can be used for future testing.

3.2. Testing environments

There are three testing environments (floorplans) through which the robots maneuver. These are the home floorplan (floorplan 1), the office floorplan (floorplan 2), and the hotel or apartment complex floorplan (floorplan 3), consisting of 6, 12, and 24 rooms, respectively. The floorplans are shown in Fig. 2.

Fig. 2. Floorplans: (a) home floorplan 1, (b) office floorplan 2, and (c) hotel/apartment complex floorplan 3.

A testing environment is defined within a grid of discrete units. In graphics mode, this grid is displayed within a window in which each unit corresponds to one pixel; the viewing size can be adjusted. A white border defines the perimeter of each testing environment. The interior of the space consists of simple white walls over a black background (in a pattern similar to a maze). Yellow semi-circle trip wires are placed at the openings to rooms and are used for evaluating team performance (not for controlling robot actions). An open area in the lower left corner of each floorplan is the location where the robot teams start at the beginning of every simulation.

3.3. Robots

Robots are represented in the simulation by colored circles with a black line from the center to the outer edge indicating the robot's current forward direction. The robots can change between any one of three states (red, green, and blue) throughout the course of a simulation. The colors are intended to represent state communication, which can be detected by other robots. Each robot senses in eight directions (every 45° about the robot's center). These directions are identical to the standard directions of a compass (N, NW, W, SW, S, SE, E, NE), with North being the position of the robot's current forward direction (Fig. 3).

3.4. Robot control

Robot control is implemented using rule sets, which are comprised of a predetermined number (n) of rules (specified by the user). Each rule is represented by a 141-bit string, making the rule set a bit string 141 × n bits in length. The string format (Fig. 4) defines two distance values and an entity value (a DDE) for each of the robot's eight sensors. In addition, a state value and an action value are defined for mapping a robot's sensed world to a specific action. Each of the value types is described below.

Fig. 3. Sensing directions of a robot.

Fig. 4. Rule format for robot control (eight DDEs, each holding two distance values and an entity value, followed by a state value and an action value).

The 16 distance values within the rule bit string (two for each sensor) represent the range boundaries within which an entity specified by the entity value is detected. Each distance value is seven bits in length, providing a range from zero to 128 units. One distance unit is equivalent to one element of the array used to represent the testing grounds. If the simulation were run using graphics mode, the robot sensors would have the capability of sensing a distance of up to 128 pixels from the robot center. Each of the eight entity values within a rule string requires three bits and is associated with two distance values. Two distance values and one entity value define one of the eight DDEs contained within a rule (one DDE for each sensor). Entity values range from zero to four, indicating the following: the absence of any detected object, a red robot, a green robot, a blue robot, or a wall.

The state value requires two bits of the rule bit string and indicates the current state of a robot. State values range from zero to two, representing a red, green, or blue state, respectively. Should a rule's state value match a robot's current state, it is more likely that the action associated with that rule will be selected. Mapping a robot's sensors to a particular rule is addressed in depth later in this section. The action value is three bits and represents the action that is to be carried out by the robot should the sensed world match the associated rule. Action values range from zero to five, allowing for the following six actions: forward, turn left, turn right, turn red, turn green, or turn blue, respectively. In total, each rule therefore occupies 8 × (7 + 7 + 3) + 2 + 3 = 141 bits.
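The 141-bit rule layout described above might be decoded into a structure along these lines; the names, the most-significant-bit-first ordering, and the bit-unpacking helper are illustrative assumptions rather than the simulator's actual code.

#include <cstddef>
#include <vector>

struct DDE {
    int nearDistance; // 7 bits, lower range boundary
    int farDistance;  // 7 bits, upper range boundary
    int entity;       // 3 bits: 0 = nothing, 1 = red, 2 = green, 3 = blue robot, 4 = wall
};

struct Rule {
    DDE sensors[8];   // one DDE per sensing direction
    int state;        // 2 bits: 0 = red, 1 = green, 2 = blue
    int action;       // 3 bits: 0..5 = forward, left, right, turn red/green/blue
};

// Read `width` bits starting at `pos` (most significant bit first) and advance `pos`.
static int readBits(const std::vector<bool>& bits, std::size_t& pos, int width) {
    int value = 0;
    for (int i = 0; i < width; ++i) value = (value << 1) | (bits[pos++] ? 1 : 0);
    return value;
}

// Decode the rule that starts at bit offset 141 * ruleIndex of the rule set string.
Rule decodeRule(const std::vector<bool>& ruleSetBits, std::size_t ruleIndex) {
    std::size_t pos = ruleIndex * 141;
    Rule rule{};
    for (DDE& d : rule.sensors) {
        d.nearDistance = readBits(ruleSetBits, pos, 7);
        d.farDistance  = readBits(ruleSetBits, pos, 7);
        d.entity       = readBits(ruleSetBits, pos, 3);
    }
    rule.state  = readBits(ruleSetBits, pos, 2);
    rule.action = readBits(ruleSetBits, pos, 3);
    return rule;
}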

Rule sets are read from a text file when running a simulation without the genetic algorithm. The format of the text file (Table 1) is identical to that of the file created following rule set evolution using the genetic algorithm. The format consists of nine rows and three columns. Each of the first eight rows contains the two distance values and the entity value for one of the robot sensors. The ninth row contains the state value, an irrelevant zero value (used to fill the second column of the ninth row), and the action value. Every row begins with a tag character that is used as an identifier when parsing the rule set text file. Table 1 illustrates a rule in the text file format.

Table 1. Sample rule in text format

Sensor                  Dist. Val. 1   Dist. Val. 2   Entity Val.
Forward                       0              0             0
Right Forward                10             40             4
Right                         0              0             0
Right Back                   10             40             4
Back                          0              0             0
Left Back                     0              0             0
Left                          0              0             0
Left Forward                  0              0             0
State/Null/Action Val.        1              0             2

This rule (one of n rules in a rule set) can be interpreted as follows: when a green (state value of one) robot's Right Forward and Right Back sensors detect an entity value of 4 (a wall) at a range between 10 and 40 units and all other sensors detect nothing, the robot should turn right (action value of two). A user might create a rule such as the one in Table 1 based upon the following reasoning: if the robot detects a wall to the Front Right and Back Right, but not to the immediate Right, it has most likely encountered an opening and should turn right to investigate.

4. Simulator implementation

4.1. The world

The entire simulation environment is maintained within a two-dimensional array of integer values. Array elements can contain one of nine possible values, zero through eight, indicating that the position within the array is empty; contains a trip wire; is white (a wall); is red, green, or blue (a robot over an empty space); or is red, green, or blue over a trip wire, respectively. The array can be thought of as a large grid where each square (an element of the array) can contain a color (a value) used to 'draw' the robots' world. Table 2 lists the nine possible array values.

Table 2. Element values in the environment array

Value   Represents
0       Empty
1       Trip wire
2       Wall
3       Red (over empty)
4       Green (over empty)
5       Blue (over empty)
6       Red (over trip wire)
7       Green (over trip wire)
8       Blue (over trip wire)

At the beginning of every simulation, the specified floorplan is 'drawn' into the array. All walls and trip wires are first placed in the array. Next, the robots are positioned over the existing array. In most cases, robots are placed over array elements containing zero values (empty). However, if a trip wire already occupies the position, the robot value inserted into the array element is between six and eight, indicating that the space contained a trip wire (this assists in replacing the proper trip wire and empty values after a robot passes through a portion of the array).
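A minimal sketch of the cell values in Table 2 and of placing and removing a robot while preserving any trip wire underneath is shown below; the enum and function names are assumptions for illustration.

enum Cell {
    Empty = 0, TripWire = 1, Wall = 2,
    Red = 3, Green = 4, Blue = 5,                  // robot over an empty cell
    RedOnTrip = 6, GreenOnTrip = 7, BlueOnTrip = 8 // robot over a trip wire
};

// Place a robot of the given color (Red, Green, or Blue) at a cell, remembering
// whether a trip wire was there by using the 6-8 range of values.
int placeRobot(int cellValue, Cell robotColor) {
    return (cellValue == TripWire) ? robotColor + 3 : robotColor;
}

// Restore a cell (previously holding a robot, values 3-8) after the robot moves
// away: 6-8 become TripWire again, 3-5 become Empty.
int removeRobot(int cellValue) {
    return (cellValue >= RedOnTrip) ? TripWire : Empty;
}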

Fig. 5 illustrates a graphical example of how these values are assigned when a robot is placed on, and later removed from, a portion of the array.

Fig. 5. Assigning values to environment array elements.

4.2. Collision detection and control

Collisions between two robots, or between a robot and a wall, are detected by checking the values of the environment array at 16 positions around the perimeter of a robot. With every unit of time that passes, the simulator checks an array element at a distance of the robot's radius + 1 for every 22.5° about the robot's center. Should the array element contain a wall or a robot value, a collision flag is set within the primary sensing function of the robot object in the simulation program. If a collision is detected within 90° of a robot's forward direction, a redirection value is established. Collisions detected to the left of the robot's forward direction result in a redirection value of intended_direction + ((angle_of_collision + 90) mod 360). Collisions detected to the right of the robot's forward direction result in a redirection value of intended_direction + ((angle_of_collision - 90) mod 360). Fig. 6 illustrates how redirection values are set.

Fig. 6. Robot redirection: a collision detected 45° from the intended direction (at 315°) yields an angle of redirection of 0 + ((315 + 90) mod 360) = 45°.
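The redirection formulas above can be read as in the following sketch, where angleOfCollision is the angle of the detected collision relative to the robot's intended (forward) direction, as in Fig. 6, and all angles are in degrees. This is an illustrative reading of the quoted formulas, not the simulator's actual code.

#include <iostream>

int mod360(int a) { return ((a % 360) + 360) % 360; }   // mathematical mod

// Collisions within 90 degrees of the forward direction trigger a redirection.
// Left-side collisions use (angle + 90); right-side collisions use (angle - 90).
int redirection(int intendedDirection, int angleOfCollision) {
    if (angleOfCollision >= 270)                       // to the left of forward
        return mod360(intendedDirection + mod360(angleOfCollision + 90));
    if (angleOfCollision <= 90)                        // to the right of forward
        return mod360(intendedDirection + mod360(angleOfCollision - 90));
    return intendedDirection;                          // behind the robot: no change
}

int main() {
    // Worked example from Fig. 6: a collision 45 degrees to the left of the
    // intended direction (at 315 degrees) yields a redirection of 45 degrees.
    std::cout << redirection(0, 315) << "\n";          // prints 45
}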

If a redirection value is set as a robot attempts to move straight ahead, the robot's forward direction value remains unchanged, while its true direction is adjusted by (intended_direction + redirection) mod 360.

4.3. Robot sensing

The method used for robot sensing is similar to that applied to collision detection. Sensing is accomplished by checking the values of the environment array in eight directions around the center of a robot. With every unit of time that passes, the simulator checks array elements within a specified range for every 45° about the robot's center, starting at the angle of the forward direction (Fig. 7). For instance, if the sensing range is 50 units, the simulator checks every array element between radius + 1 and 50 units along the lines at 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315° about the robot's center (if the forward direction happened to be 0°). In order to give the sensors a slightly wider spread, every fourth array element along the lines 5° to either side of the primary sensor direction is also checked. For example, when sensing down a line 90° from the robot's forward direction, every array element along that line is checked, as is every fourth element along the lines at 85° and 95° from the forward direction.

Sensing begins at the element radius + 1 from the center of a robot and continues down the sensing line until a robot or wall is detected or the end of the sensing range is reached. If an entity is detected, the distance to the entity and the associated entity value are placed into an array for the sensed data (Table 3). Once an entity is detected or the end of the sensing range is reached, the robot senses along the line of the next sensor. This continues until all sensing lines have been checked. Once complete, the sensed data array represents the robot's sensed world, which can then be mapped to a specific rule to determine the robot's action.

Fig. 7. Simulation of robot sensing: environment array elements are checked between radius + 1 and the sensing range n along each sensing line.
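Sensing along one line might be sketched as below: step outward through the environment array from radius + 1 to the sensing range and stop at the first wall or robot. The names are assumptions, cellAt stands in for reading the environment array, and the ±5° secondary checks described above are omitted for brevity.

#include <cmath>
#include <utility>

// Placeholder standing in for reading the environment array at a position.
int cellAt(double /*x*/, double /*y*/) { return 0; }

// Map an environment array value (Table 2) to the entity values used by rules:
// 0 = nothing, 1 = red robot, 2 = green robot, 3 = blue robot, 4 = wall.
int toEntity(int cell) {
    if (cell == 2) return 4;                                // wall
    if (cell >= 3 && cell <= 8) return (cell - 3) % 3 + 1;  // red/green/blue robot
    return 0;                                               // empty or trip wire
}

// Sense along one line (an angle in degrees). Returns {distance, entity},
// or {0, 0} if nothing is detected within range.
std::pair<int, int> senseLine(double cx, double cy, double radius,
                              int lineAngleDeg, int range) {
    const double kPi = 3.14159265358979323846;
    double rad = lineAngleDeg * kPi / 180.0;
    for (int d = static_cast<int>(radius) + 1; d <= range; ++d) {
        int entity = toEntity(cellAt(cx + d * std::cos(rad), cy + d * std::sin(rad)));
        if (entity != 0) return {d, entity};                // first wall or robot found
    }
    return {0, 0};
}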

Table 3. Sample sensed data array

Sensor position   Distance to entity   Entity type
Front                     0                 0
Right Front               0                 0
Right                     0                 0
Right Back               14                 2
Back                      0                 0
Left Back                 0                 0
Left                     26                 4
Left Front                0                 0

4.4. Matching rules

A robot's action is selected after matching its sensed data array to a specific rule in its rule set. This mapping is analogous to playing the classic children's game Battleship. The sensed data can be thought of as a player's guesses as to where the opponent's ships lie. These guesses are then placed over the opponent's playing field (a rule) and hits are scored. This is done for every rule in the rule set, and the rule that takes the most hits is determined to be the match. Rule scores are maintained in a separate scoring array. For each distance and entity entry in the sensed data array (one of each for all eight sensors), the matching function determines whether (1) the distance falls within the range specified by the distance values for the associated sensor in the rule, and (2) the entity value in the sensed data array matches the entity value in the rule. If both criteria are met, the rule score is increased by one; otherwise, the rule score is decreased by one. The following example illustrates the initial scoring: the Front sensor of the sensed data shows a distance of 20 and an entity value of four (a wall); the Front distance values for Rule X are 10 and 30, and its entity value is four; the rule score for Rule X is therefore increased by one, because 20 is between 10 and 30 and the entity values match.

After matching each of the sensed data entries to the rule data, the current state of the robot is compared to the state value specified in the rule. If the current state matches the rule's state value, the score is increased by five. Greater points are awarded for matching state values in order to promote a divergence of robot roles through genetic evolution. By weighting the scoring, there is a greater probability that a red robot maps to a rule with a red state value and that a green robot maps to a rule with a green state value.

Rule scoring is a linear task, starting with rule one and ending with rule n. After scoring a rule, its score is compared to the score of the current winner. If a rule's score is larger than the current winner's, it becomes the new winner. Once all of the rules have been scored, the action associated with the winning rule is selected.
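The scoring scheme just described might be sketched as follows: +1 for each sensor whose sensed distance falls inside the rule's range with a matching entity value, -1 otherwise, and +5 when the robot's state matches the rule's state value. The struct layouts are assumptions consistent with the rule format given earlier.

struct DDE { int nearDistance, farDistance, entity; };
struct Rule { DDE sensors[8]; int state; int action; };
struct SensedData { int distance[8]; int entity[8]; };

int scoreRule(const Rule& rule, const SensedData& sensed, int robotState) {
    int score = 0;
    for (int s = 0; s < 8; ++s) {
        bool inRange = sensed.distance[s] >= rule.sensors[s].nearDistance &&
                       sensed.distance[s] <= rule.sensors[s].farDistance;
        bool entityMatches = sensed.entity[s] == rule.sensors[s].entity;
        score += (inRange && entityMatches) ? 1 : -1;
    }
    if (robotState == rule.state) score += 5;   // weighted to encourage role divergence
    return score;
}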

4.5. Bias and instincts

In case a rule receives the same score as the current winner, an action bias is imposed to determine the final action for the robot. If the rule and the current winner contain the same action value, there is no need to utilize the bias; however, should the two have different action values, the bias is used to determine the final action for selection. The bias is an integer variable that holds the value of the last action taken by the robot. In case of a tie where a rule's action value differs from the current winner's, the rule's action value is compared to the bias. If the two values are equal, the rule becomes the new winner; otherwise, the current winner remains unchanged. This method of tie resolution favors the repetition of the most recent robot action. The reasoning for this is to impose a limited amount of continuity upon a robot's actions. For instance, if a robot were in the process of turning left towards an opening based upon the action of Rule A, but Rule B (with the same score as Rule A) conflicted with this action, the turn would continue, uninterrupted by Rule B, because of the robot bias.

Throughout the course of a simulation, robots may become disabled for a variety of reasons, such as switching back and forth between two actions indefinitely, repeatedly attempting to move forward when obstructed, or indefinitely repeating the same action. To counter problems like these, two instincts were given to the robots that can override the action value defined within a winning rule. The first instinct maintains a timer that keeps track of the amount of time that a robot's coordinate (position) within the environment array remains unchanged. If this time exceeds 10 s, the robot's forward direction is adjusted one degree clockwise before selecting an action. If the robot were attempting to repeatedly move forward while obstructed, it would begin to slowly turn right after 10 s (attempting to free itself from immobility). The second instinct maintains a timer that keeps track of the duration for which a robot repeats an action. If this time exceeds 15 s, the robot turns off its sensors for one second and relies solely upon collision control for guidance. This momentarily allows for a free roam in the forward direction (with possible redirection through the collision handling method previously described). This approach has proven useful when a robot becomes deadlocked by continuously changing to the same state color or spinning. When the redirection method triggered by the first instinct is ineffective at countering immobility, the free roam activated by the second instinct allows the robot to relocate before sensing again (where new sensed data could map to a rule with a more effective action value). The value of 15 s was selected to allow for at least one full rotation with adequate redirection (approximately 110° from the robot's forward direction when it first started to repeat an action) before triggering the instinct.
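The two instinct timers could be bookkept roughly as follows. The thresholds come from the text above (one time unit being 0.1 s); the structure, names, and the sign of the one-degree adjustment are assumptions for illustration.

struct InstinctState {
    int unitsWithoutMovement = 0;   // time units spent at the same array coordinate
    int unitsRepeatingAction = 0;   // time units spent repeating the same action
};

const int STUCK_LIMIT  = 100;  // 10 s = 100 time units
const int REPEAT_LIMIT = 150;  // 15 s = 150 time units
const int FREE_ROAM    = 10;   //  1 s of sensor-off "free roam"

// Returns the number of time units to run with sensors off (0 in the normal case)
// and nudges the forward direction when the first instinct fires.
int applyInstincts(InstinctState& s, bool moved, bool sameActionAsLast,
                   int& forwardDirectionDeg) {
    s.unitsWithoutMovement = moved ? 0 : s.unitsWithoutMovement + 1;
    s.unitsRepeatingAction = sameActionAsLast ? s.unitsRepeatingAction + 1 : 0;

    if (s.unitsWithoutMovement > STUCK_LIMIT)
        forwardDirectionDeg = (forwardDirectionDeg + 1) % 360;  // one degree "clockwise" (sign assumed)

    if (s.unitsRepeatingAction > REPEAT_LIMIT) {
        s.unitsRepeatingAction = 0;
        return FREE_ROAM;   // rely on collision control only for the next second
    }
    return 0;
}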

4.6. Evaluation

The performance of a robot team is evaluated when the user-defined time limit of a simulation is reached. Performance is measured by the percentage of rooms entered in the environment (floorplan). Additionally, should the robots enter 100% of the rooms, the time it took to complete the total coverage is considered. These values are used to determine the performance of a robot search team and are also used as fitness levels for rule sets when using the genetic algorithm. The simulator maintains a list of the rooms that each robot has entered. This data is only used for team evaluation and not for assisting the robots during a simulation. Once the time limit is reached, the room data from each robot is checked to determine the percentage of coverage for the run. If two or more robots enter the same room, the simulator displays a list of the rooms where this occurred; however, duplicated effort does not impact the evaluation of a robot team.

Robots identify rooms by continually checking the environment array element found at their own x and y coordinates to see if a trip wire value is present (described previously). For instance, if the center of a robot is positioned in the 35th element of the 20th row within the environment array, that element is checked for a trip wire value. If a trip wire value is detected, the simulator compares the robot's coordinates to a global list indicating the locations of the trip wires within the testing environment and the room number associated with each trip wire. The simulator stores the room number corresponding to the detected trip wire.

5. Simulation experiments

In a series of simulation experiments, genetic algorithms were used to evolve controllers (rule sets) for teams of search robots. A genetic algorithm basically consists of the three steps of reproduction, crossover, and mutation [11]. Reproduction is the process in which chromosomal strings that represent individuals are copied according to their fitness levels to be used in the creation of a new population of strings. Population members with greater fitness values are more likely to be used for creating strings for the following generation. The crossover process consists of selecting two strings and exchanging string segments between the two. The final step of a genetic algorithm is mutation, the random alteration of the value of a string position. In the reported experiments, rule sets are converted into binary chromosomal strings, and the fitness of the robots is computed to favor the rule sets of robots achieving more coverage of the environment in less time.

Three important variables in the experiments were the number of robots in each team, the selected floorplan, and the number of rules in a rule set. The sets of charts in Figs. 8 and 9 display the evolved results for robot teams with sizes of 4 and 14, respectively. The charts in each figure represent the results for the three different floorplans used for the evolution of the robot teams. The evolution simulation experiments conducted showed that the genetic algorithm clearly improved robot team performance over the course of 100 generations. The simulation is being used to investigate the effects of team size, environment, controller size, and other factors influencing the performance of robot search teams.
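For reference, the three genetic-algorithm steps named above might be realized over the binary rule set strings as in the following sketch. Fitness-proportionate selection, single-point crossover, and per-bit mutation are common operator choices assumed here for illustration; the specific operator variants used are not detailed in the text.

#include <random>
#include <vector>

using Chromosome = std::vector<bool>;  // 141 * n bits encoding one rule set

std::mt19937 rng{42};

// Reproduction: pick a parent with probability proportional to its fitness.
const Chromosome& select(const std::vector<Chromosome>& pop,
                         const std::vector<double>& fitness) {
    std::discrete_distribution<std::size_t> pick(fitness.begin(), fitness.end());
    return pop[pick(rng)];
}

// Crossover: exchange the tails of two parents at a random cut point.
Chromosome crossover(const Chromosome& a, const Chromosome& b) {
    std::uniform_int_distribution<std::size_t> cut(1, a.size() - 1);
    std::size_t c = cut(rng);
    Chromosome child(a.begin(), a.begin() + c);
    child.insert(child.end(), b.begin() + c, b.end());
    return child;
}

// Mutation: randomly flip each bit with a small probability.
void mutate(Chromosome& c, double rate = 0.01) {
    std::bernoulli_distribution flip(rate);
    for (std::size_t i = 0; i < c.size(); ++i)
        if (flip(rng)) c[i] = !c[i];
}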

Fig. 8. Team of 4 robots: (a) floorplan 1, (b) floorplan 2, and (c) floorplan 3. Each chart plots average fitness versus generation for 10-, 20-, and 30-rule rule sets.

Fig. 9. Team of 14 robots: (a) floorplan 1, (b) floorplan 2, and (c) floorplan 3. Each chart plots average fitness versus generation for 10-, 20-, and 30-rule rule sets.

6. Conclusion

The presented simulation program has been used to conduct over 300 simulation experiments. The simulator provides an efficient tool for testing different robot behaviors and numerous compositions of robot search teams. The initial results from the evolution experiments show that robot team performance improves as the team size increases; however, this improvement begins to plateau once teams reach medium to large sizes. With smaller team sizes, the limited time allotted for area coverage seems to have a significant impact on how many rooms can be entered by team members. Furthermore, as team size increases, team interaction promoting dispersion among team members results in larger fitness payoffs, the effects of which may level off with medium to large sized teams. Results from cross-testing rules evolved for a specific team size using different sized teams show that, in all cases, robot teams utilizing rules evolved for a different team size never outperform teams using rules evolved for their own size. This does not imply that rule sets evolved for teams of different sizes are completely ineffective. The impact of rule set size on robot team performance is currently inconclusive and requires further investigation.

The planned future work of this simulation project is to increase the number of environments in which robot teams can be evolved and tested. Future studies should include floorplans with rooms of different shapes and sizes. Furthermore, throughout the course of evolution, the robot teams and their rule sets could be migrated from one environment to another, or the environment could even be randomly selected for each new generation. This could result in teams with greater flexibility when encountering new environments. The eventual goal of this simulation system is to be utilized as a tool for better design, development, and deployment of robot search teams to assist in search and rescue missions.

Acknowledgements

This work was sponsored in part by the National Science Foundation under grant EIA.

References

[1] Agah A, Bekey G. Phylogenetic and ontogenetic learning in a colony of interacting robots. Autonom Robots 1997.
[2] The American Society of Mechanical Engineers (ASME), Cobots Page. current/features/cobots/cobots.html.
[3] Arkin RC, Balch T. Cooperative multiagent robotic systems. Available from: ftp://ftp.cc.gatech.edu/pub/people/arkin/web-papers/coop.ps.z.
[4] Arkin RC. Behavior-based robotics. Cambridge, MA: MIT Press.
[5] Asama H, Ozaki K, Ishida Y, Yokota K, Matsumoto A, Kaetsu H, Endo I. Collaborative team organization using communication in a decentralized robotic system. IROS 1994.
[6] Brooks RA. Intelligence without reason. MIT AI Lab Memo 1293.
[7] Brooks RA. A robust layered control system for a mobile robot. MIT AI Lab Memo 864.
[8] Deitel HM, Deitel PJ. C++ how to program. Upper Saddle River, NJ: Prentice Hall; 1998.

[9] Dollarhide RL. Evolving behavior-based brains for robot search teams. M.S. Thesis, Department of Electrical Engineering and Computer Science, The University of Kansas.
[10] Fontan MS, Mataric MJ. Territorial multi-robot task division. IEEE Trans Robotics Automat 1998;14(5).
[11] Goldberg DE. Genetic algorithms in search, optimization, and machine learning. Reading, MA: Addison-Wesley.
[12] Maes P. Situated agents can have goals. Robotics Autonom Syst 1990.
[13] Mataric MJ. Behavior-based control: examples from navigation, learning, and group behavior. In: Hexmoor, Horswill, Kortenkamp, editors. J Experimental Theoret Artificial Intellig, Special Issue on Software Architectures for Physical Agents 1997;9(2-3).
[14] McLurkin J. Using cooperative robots for explosive ordnance disposal. MIT Artificial Intelligence Laboratory.
[15] National Aeronautics and Space Administration (NASA), Rover Sojourner Home Page. mirrors/jpl/pathfinder/rover/sojourner.html.
[16] Parker LE. Cooperative robotics for multi-target observation. Intelligent Automation and Soft Computing, Robotics Research at Oak Ridge National Laboratory 1999;5(1):5-19.
[17] Wang J. On sign-board based inter-robot communication in distributed robotic systems. In: Proceedings of the 1994 IEEE International Conference on Robotics and Automation, San Diego, CA.
[18] Woo M, Neider J, Davis T, Schreiner D. OpenGL programming guide: the official guide to learning OpenGL, 3rd ed. Reading, MA: Addison-Wesley; 1999.


More information

NASA Swarmathon Team ABC (Artificial Bee Colony)

NASA Swarmathon Team ABC (Artificial Bee Colony) NASA Swarmathon Team ABC (Artificial Bee Colony) Cheylianie Rivera Maldonado, Kevin Rolón Domena, José Peña Pérez, Aníbal Robles, Jonathan Oquendo, Javier Olmo Martínez University of Puerto Rico at Arecibo

More information

Distributed Control of Multi-Robot Teams: Cooperative Baton Passing Task

Distributed Control of Multi-Robot Teams: Cooperative Baton Passing Task Appeared in Proceedings of the 4 th International Conference on Information Systems Analysis and Synthesis (ISAS 98), vol. 3, pages 89-94. Distributed Control of Multi- Teams: Cooperative Baton Passing

More information

Fuzzy Logic Based Robot Navigation In Uncertain Environments By Multisensor Integration

Fuzzy Logic Based Robot Navigation In Uncertain Environments By Multisensor Integration Proceedings of the 1994 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MF1 94) Las Vega, NV Oct. 2-5, 1994 Fuzzy Logic Based Robot Navigation In Uncertain

More information

GPU Computing for Cognitive Robotics

GPU Computing for Cognitive Robotics GPU Computing for Cognitive Robotics Martin Peniak, Davide Marocco, Angelo Cangelosi GPU Technology Conference, San Jose, California, 25 March, 2014 Acknowledgements This study was financed by: EU Integrating

More information

Evolutions of communication

Evolutions of communication Evolutions of communication Alex Bell, Andrew Pace, and Raul Santos May 12, 2009 Abstract In this paper a experiment is presented in which two simulated robots evolved a form of communication to allow

More information

Adaptive Neuro-Fuzzy Controler With Genetic Training For Mobile Robot Control

Adaptive Neuro-Fuzzy Controler With Genetic Training For Mobile Robot Control Int. J. of Computers, Communications & Control, ISSN 1841-9836, E-ISSN 1841-9844 Vol. VII (2012), No. 1 (March), pp. 135-146 Adaptive Neuro-Fuzzy Controler With Genetic Training For Mobile Robot Control

More information

Evolving Digital Logic Circuits on Xilinx 6000 Family FPGAs

Evolving Digital Logic Circuits on Xilinx 6000 Family FPGAs Evolving Digital Logic Circuits on Xilinx 6000 Family FPGAs T. C. Fogarty 1, J. F. Miller 1, P. Thomson 1 1 Department of Computer Studies Napier University, 219 Colinton Road, Edinburgh t.fogarty@dcs.napier.ac.uk

More information

SECTOR SYNTHESIS OF ANTENNA ARRAY USING GENETIC ALGORITHM

SECTOR SYNTHESIS OF ANTENNA ARRAY USING GENETIC ALGORITHM 2005-2008 JATIT. All rights reserved. SECTOR SYNTHESIS OF ANTENNA ARRAY USING GENETIC ALGORITHM 1 Abdelaziz A. Abdelaziz and 2 Hanan A. Kamal 1 Assoc. Prof., Department of Electrical Engineering, Faculty

More information

Efficient Evaluation Functions for Multi-Rover Systems

Efficient Evaluation Functions for Multi-Rover Systems Efficient Evaluation Functions for Multi-Rover Systems Adrian Agogino 1 and Kagan Tumer 2 1 University of California Santa Cruz, NASA Ames Research Center, Mailstop 269-3, Moffett Field CA 94035, USA,

More information

Review of Soft Computing Techniques used in Robotics Application

Review of Soft Computing Techniques used in Robotics Application International Journal of Information and Computation Technology. ISSN 0974-2239 Volume 3, Number 3 (2013), pp. 101-106 International Research Publications House http://www. irphouse.com /ijict.htm Review

More information

Solving and Analyzing Sudokus with Cultural Algorithms 5/30/2008. Timo Mantere & Janne Koljonen

Solving and Analyzing Sudokus with Cultural Algorithms 5/30/2008. Timo Mantere & Janne Koljonen with Cultural Algorithms Timo Mantere & Janne Koljonen University of Vaasa Department of Electrical Engineering and Automation P.O. Box, FIN- Vaasa, Finland timan@uwasa.fi & jako@uwasa.fi www.uwasa.fi/~timan/sudoku

More information

Keywords Multi-Agent, Distributed, Cooperation, Fuzzy, Multi-Robot, Communication Protocol. Fig. 1. Architecture of the Robots.

Keywords Multi-Agent, Distributed, Cooperation, Fuzzy, Multi-Robot, Communication Protocol. Fig. 1. Architecture of the Robots. 1 José Manuel Molina, Vicente Matellán, Lorenzo Sommaruga Laboratorio de Agentes Inteligentes (LAI) Departamento de Informática Avd. Butarque 15, Leganés-Madrid, SPAIN Phone: +34 1 624 94 31 Fax +34 1

More information

Semi-Autonomous Parking for Enhanced Safety and Efficiency

Semi-Autonomous Parking for Enhanced Safety and Efficiency Technical Report 105 Semi-Autonomous Parking for Enhanced Safety and Efficiency Sriram Vishwanath WNCG June 2017 Data-Supported Transportation Operations & Planning Center (D-STOP) A Tier 1 USDOT University

More information

Co-evolution for Communication: An EHW Approach

Co-evolution for Communication: An EHW Approach Journal of Universal Computer Science, vol. 13, no. 9 (2007), 1300-1308 submitted: 12/6/06, accepted: 24/10/06, appeared: 28/9/07 J.UCS Co-evolution for Communication: An EHW Approach Yasser Baleghi Damavandi,

More information

PSYCO 457 Week 9: Collective Intelligence and Embodiment

PSYCO 457 Week 9: Collective Intelligence and Embodiment PSYCO 457 Week 9: Collective Intelligence and Embodiment Intelligent Collectives Cooperative Transport Robot Embodiment and Stigmergy Robots as Insects Emergence The world is full of examples of intelligence

More information

ENHANCED HUMAN-AGENT INTERACTION: AUGMENTING INTERACTION MODELS WITH EMBODIED AGENTS BY SERAFIN BENTO. MASTER OF SCIENCE in INFORMATION SYSTEMS

ENHANCED HUMAN-AGENT INTERACTION: AUGMENTING INTERACTION MODELS WITH EMBODIED AGENTS BY SERAFIN BENTO. MASTER OF SCIENCE in INFORMATION SYSTEMS BY SERAFIN BENTO MASTER OF SCIENCE in INFORMATION SYSTEMS Edmonton, Alberta September, 2015 ABSTRACT The popularity of software agents demands for more comprehensive HAI design processes. The outcome of

More information

Multi-robot Dynamic Coverage of a Planar Bounded Environment

Multi-robot Dynamic Coverage of a Planar Bounded Environment Multi-robot Dynamic Coverage of a Planar Bounded Environment Maxim A. Batalin Gaurav S. Sukhatme Robotic Embedded Systems Laboratory, Robotics Research Laboratory, Computer Science Department University

More information

Implementation of FPGA based Decision Making Engine and Genetic Algorithm (GA) for Control of Wireless Parameters

Implementation of FPGA based Decision Making Engine and Genetic Algorithm (GA) for Control of Wireless Parameters Advances in Computational Sciences and Technology ISSN 0973-6107 Volume 11, Number 1 (2018) pp. 15-21 Research India Publications http://www.ripublication.com Implementation of FPGA based Decision Making

More information

Path Planning for Mobile Robots Based on Hybrid Architecture Platform

Path Planning for Mobile Robots Based on Hybrid Architecture Platform Path Planning for Mobile Robots Based on Hybrid Architecture Platform Ting Zhou, Xiaoping Fan & Shengyue Yang Laboratory of Networked Systems, Central South University, Changsha 410075, China Zhihua Qu

More information

A Divide-and-Conquer Approach to Evolvable Hardware

A Divide-and-Conquer Approach to Evolvable Hardware A Divide-and-Conquer Approach to Evolvable Hardware Jim Torresen Department of Informatics, University of Oslo, PO Box 1080 Blindern N-0316 Oslo, Norway E-mail: jimtoer@idi.ntnu.no Abstract. Evolvable

More information

Evolving CAM-Brain to control a mobile robot

Evolving CAM-Brain to control a mobile robot Applied Mathematics and Computation 111 (2000) 147±162 www.elsevier.nl/locate/amc Evolving CAM-Brain to control a mobile robot Sung-Bae Cho *, Geum-Beom Song Department of Computer Science, Yonsei University,

More information

Structural Analysis of Agent Oriented Methodologies

Structural Analysis of Agent Oriented Methodologies International Journal of Information & Computation Technology. ISSN 0974-2239 Volume 4, Number 6 (2014), pp. 613-618 International Research Publications House http://www. irphouse.com Structural Analysis

More information

The Application of Multi-Level Genetic Algorithms in Assembly Planning

The Application of Multi-Level Genetic Algorithms in Assembly Planning Volume 17, Number 4 - August 2001 to October 2001 The Application of Multi-Level Genetic Algorithms in Assembly Planning By Dr. Shana Shiang-Fong Smith (Shiang-Fong Chen) and Mr. Yong-Jin Liu KEYWORD SEARCH

More information

Image Extraction using Image Mining Technique

Image Extraction using Image Mining Technique IOSR Journal of Engineering (IOSRJEN) e-issn: 2250-3021, p-issn: 2278-8719 Vol. 3, Issue 9 (September. 2013), V2 PP 36-42 Image Extraction using Image Mining Technique Prof. Samir Kumar Bandyopadhyay,

More information

Using Dynamic Capability Evaluation to Organize a Team of Cooperative, Autonomous Robots

Using Dynamic Capability Evaluation to Organize a Team of Cooperative, Autonomous Robots Using Dynamic Capability Evaluation to Organize a Team of Cooperative, Autonomous Robots Eric Matson Scott DeLoach Multi-agent and Cooperative Robotics Laboratory Department of Computing and Information

More information

Artificial Intelligence and Mobile Robots: Successes and Challenges

Artificial Intelligence and Mobile Robots: Successes and Challenges Artificial Intelligence and Mobile Robots: Successes and Challenges David Kortenkamp NASA Johnson Space Center Metrica Inc./TRACLabs Houton TX 77058 kortenkamp@jsc.nasa.gov http://www.traclabs.com/~korten

More information

Artificial Intelligence and Asymmetric Information Theory. Tshilidzi Marwala and Evan Hurwitz. University of Johannesburg.

Artificial Intelligence and Asymmetric Information Theory. Tshilidzi Marwala and Evan Hurwitz. University of Johannesburg. Artificial Intelligence and Asymmetric Information Theory Tshilidzi Marwala and Evan Hurwitz University of Johannesburg Abstract When human agents come together to make decisions it is often the case that

More information

Randomized Motion Planning for Groups of Nonholonomic Robots

Randomized Motion Planning for Groups of Nonholonomic Robots Randomized Motion Planning for Groups of Nonholonomic Robots Christopher M Clark chrisc@sun-valleystanfordedu Stephen Rock rock@sun-valleystanfordedu Department of Aeronautics & Astronautics Stanford University

More information

A comparison of a genetic algorithm and a depth first search algorithm applied to Japanese nonograms

A comparison of a genetic algorithm and a depth first search algorithm applied to Japanese nonograms A comparison of a genetic algorithm and a depth first search algorithm applied to Japanese nonograms Wouter Wiggers Faculty of EECMS, University of Twente w.a.wiggers@student.utwente.nl ABSTRACT In this

More information

SPQR RoboCup 2016 Standard Platform League Qualification Report

SPQR RoboCup 2016 Standard Platform League Qualification Report SPQR RoboCup 2016 Standard Platform League Qualification Report V. Suriani, F. Riccio, L. Iocchi, D. Nardi Dipartimento di Ingegneria Informatica, Automatica e Gestionale Antonio Ruberti Sapienza Università

More information

CPE/CSC 580: Intelligent Agents

CPE/CSC 580: Intelligent Agents CPE/CSC 580: Intelligent Agents Franz J. Kurfess Computer Science Department California Polytechnic State University San Luis Obispo, CA, U.S.A. 1 Course Overview Introduction Intelligent Agent, Multi-Agent

More information

Traffic Control for a Swarm of Robots: Avoiding Target Congestion

Traffic Control for a Swarm of Robots: Avoiding Target Congestion Traffic Control for a Swarm of Robots: Avoiding Target Congestion Leandro Soriano Marcolino and Luiz Chaimowicz Abstract One of the main problems in the navigation of robotic swarms is when several robots

More information

Situated Robotics INTRODUCTION TYPES OF ROBOT CONTROL. Maja J Matarić, University of Southern California, Los Angeles, CA, USA

Situated Robotics INTRODUCTION TYPES OF ROBOT CONTROL. Maja J Matarić, University of Southern California, Los Angeles, CA, USA This article appears in the Encyclopedia of Cognitive Science, Nature Publishers Group, Macmillian Reference Ltd., 2002. Situated Robotics Level 2 Maja J Matarić, University of Southern California, Los

More information

BIEB 143 Spring 2018 Weeks 8-10 Game Theory Lab

BIEB 143 Spring 2018 Weeks 8-10 Game Theory Lab BIEB 143 Spring 2018 Weeks 8-10 Game Theory Lab Please read and follow this handout. Read a section or paragraph completely before proceeding to writing code. It is important that you understand exactly

More information

How the Body Shapes the Way We Think

How the Body Shapes the Way We Think How the Body Shapes the Way We Think A New View of Intelligence Rolf Pfeifer and Josh Bongard with a contribution by Simon Grand Foreword by Rodney Brooks Illustrations by Shun Iwasawa A Bradford Book

More information

M ous experience and knowledge to aid problem solving

M ous experience and knowledge to aid problem solving Adding Memory to the Evolutionary Planner/Navigat or Krzysztof Trojanowski*, Zbigniew Michalewicz"*, Jing Xiao" Abslract-The integration of evolutionary approaches with adaptive memory processes is emerging

More information

Unit 1: Introduction to Autonomous Robotics

Unit 1: Introduction to Autonomous Robotics Unit 1: Introduction to Autonomous Robotics Computer Science 6912 Andrew Vardy Department of Computer Science Memorial University of Newfoundland May 13, 2016 COMP 6912 (MUN) Course Introduction May 13,

More information

The Necessity of Average Rewards in Cooperative Multirobot Learning

The Necessity of Average Rewards in Cooperative Multirobot Learning Carnegie Mellon University Research Showcase @ CMU Institute for Software Research School of Computer Science 2002 The Necessity of Average Rewards in Cooperative Multirobot Learning Poj Tangamchit Carnegie

More information

Localization (Position Estimation) Problem in WSN

Localization (Position Estimation) Problem in WSN Localization (Position Estimation) Problem in WSN [1] Convex Position Estimation in Wireless Sensor Networks by L. Doherty, K.S.J. Pister, and L.E. Ghaoui [2] Semidefinite Programming for Ad Hoc Wireless

More information

Evolving High-Dimensional, Adaptive Camera-Based Speed Sensors

Evolving High-Dimensional, Adaptive Camera-Based Speed Sensors In: M.H. Hamza (ed.), Proceedings of the 21st IASTED Conference on Applied Informatics, pp. 1278-128. Held February, 1-1, 2, Insbruck, Austria Evolving High-Dimensional, Adaptive Camera-Based Speed Sensors

More information

An Experimental Comparison of Path Planning Techniques for Teams of Mobile Robots

An Experimental Comparison of Path Planning Techniques for Teams of Mobile Robots An Experimental Comparison of Path Planning Techniques for Teams of Mobile Robots Maren Bennewitz Wolfram Burgard Department of Computer Science, University of Freiburg, 7911 Freiburg, Germany maren,burgard

More information

Using Cyclic Genetic Algorithms to Evolve Multi-Loop Control Programs

Using Cyclic Genetic Algorithms to Evolve Multi-Loop Control Programs Using Cyclic Genetic Algorithms to Evolve Multi-Loop Control Programs Gary B. Parker Computer Science Connecticut College New London, CT 0630, USA parker@conncoll.edu Ramona A. Georgescu Electrical and

More information

Multi-Platform Soccer Robot Development System

Multi-Platform Soccer Robot Development System Multi-Platform Soccer Robot Development System Hui Wang, Han Wang, Chunmiao Wang, William Y. C. Soh Division of Control & Instrumentation, School of EEE Nanyang Technological University Nanyang Avenue,

More information

Raster Based Region Growing

Raster Based Region Growing 6th New Zealand Image Processing Workshop (August 99) Raster Based Region Growing Donald G. Bailey Image Analysis Unit Massey University Palmerston North ABSTRACT In some image segmentation applications,

More information

Cognitive robots and emotional intelligence Cloud robotics Ethical, legal and social issues of robotic Construction robots Human activities in many

Cognitive robots and emotional intelligence Cloud robotics Ethical, legal and social issues of robotic Construction robots Human activities in many Preface The jubilee 25th International Conference on Robotics in Alpe-Adria-Danube Region, RAAD 2016 was held in the conference centre of the Best Western Hotel M, Belgrade, Serbia, from 30 June to 2 July

More information

Control Arbitration. Oct 12, 2005 RSS II Una-May O Reilly

Control Arbitration. Oct 12, 2005 RSS II Una-May O Reilly Control Arbitration Oct 12, 2005 RSS II Una-May O Reilly Agenda I. Subsumption Architecture as an example of a behavior-based architecture. Focus in terms of how control is arbitrated II. Arbiters and

More information

Adaptive Multi-Robot Behavior via Learning Momentum

Adaptive Multi-Robot Behavior via Learning Momentum Adaptive Multi-Robot Behavior via Learning Momentum J. Brian Lee (blee@cc.gatech.edu) Ronald C. Arkin (arkin@cc.gatech.edu) Mobile Robot Laboratory College of Computing Georgia Institute of Technology

More information