Coevolving Influence Maps for Spatial Team Tactics in a RTS Game


Phillipa Avery
University of Nevada, Reno
Department of Computer Science and Engineering
Nevada, USA
pippa@cse.unr.edu

Sushil Louis
University of Nevada, Reno
Department of Computer Science and Engineering
Nevada, USA
sushil@cse.unr.edu

ABSTRACT

Real Time Strategy (RTS) games provide a representation of spatial tactics with group behaviour. Tactics often involve using groups of entities to attack from different directions and at different times of the game, using coordinated techniques. Our goal in this research is to learn tactics that are challenging for human players. The method we apply to learn these tactics is a coevolutionary system designed to generate effective team behaviour. To do this, we present a unique Influence Map representation, with a coevolutionary technique that evolves the maps together for a group of entities. This allows the creation of autonomous entities that can move in a coordinated manner. We apply this technique to a naval RTS island scenario, and present the successful creation of strategies demonstrating complex tactics.

Categories and Subject Descriptors: I.2.8 [ARTIFICIAL INTELLIGENCE]: Problem Solving, Control Methods, and Search: plan execution, formation, and generation

General Terms: Experimentation

Keywords: Influence Maps, Coevolution, Real Time Strategy

1. INTRODUCTION

This paper investigates the evolution of boat group tactics in a real-time capture-the-flag game. We present a method of evolving group tactics using spatial information to specify the relative positions of entities, and manoeuvring of entities to achieve a set of objectives. We demonstrate how the technique can be used to create adaptive tactics for a world environment representing a naval island scenario. In this research we are interested in generating new and interesting tactics and eventually applying these tactics to playing against human players. The ability to automatically generate spatially resolved group tactics has applications in computer gaming, robotics, multi-agent systems, and war-gaming.

Our prior work in co-evolving real-time strategy game players pointed out the importance of tactical superiority when strategies and resources are balanced among competing players [10]. Real-time strategy games emphasize strategy, but achieving strategic objectives depends on tactics. Bad tactical planning, an inability to adapt quickly, and unskilled manoeuvring can ruin good strategy. Tactical planning thus plays an important role in many situations. Although we have discussed tactics and strategy, we have not yet defined these terms. For our work, we find it useful to define strategic planning as planning that uses all available resources to achieve a goal such as winning the game. Tactical planning uses a subset of available resources to achieve an objective that supports the over-arching strategic objective of winning the game.
In this paper, we use a coevolutionary algorithm to evolve tactical control for small groups of boats as we work towards tactics that involve deception, sacrifice, feints, flanking, splitting enemy fire-power, and hopefully others that will surprise us. We want the game entities (boats) to function autonomously, but still work with the rest of their group in achieving an objective. The autonomous behaviour allows a boat some independence to move and attack on its own, while the coordination provides an overall tactic for the attacks. We present a representation for generating adaptive tactics using coevolved Influence Maps (IMs). As the IM changes with the game state, this representation allows adaptive tactic generation. In our research we use an IM for each entity in the game, to assign independent movement for that entity. By assigning each entity its own IM, an entity is able to move autonomously and generate paths independently of the other team entities. To coordinate tactics, we evolve all the IM weights for a team together, creating an evolutionary individual that defines the individual IMs for an entire team. The coevolution can thus find IMs that work together to create coordinated tactics. There is thus no team-level IM needed to move the entities as a group, as this can be achieved by evolving individual IMs that coordinate the movement.

Each entity's IM has weights corresponding to all the game entities, and the entity then moves according to the current IM. The movement is performed by assigning a goal represented by the strongest attracting weight on the IM, and then calculating the A* path to the goal using the IM. This paper demonstrates the effective use of the coevolved IM technique to generate tactics for a two-team game of capture-the-flag in an island terrain scenario. We show that the technique is able to create complex tactics using evolved triggers represented in the IM mechanisms.

The paper is organized as follows. Section 2 provides background, including relevant subject matter and other research; we also describe the previous work done by the authors on this research. Section 3 describes the complexity added to the previous work in the form of land mass, and Section 4 provides the methodology of how this was implemented. Section 5 gives the experiments and observations for the research. Finally, Section 6 concludes the paper with a discussion of the findings and future work.

2. BACKGROUND

An Influence Map (IM) is a graph or grid placed over the game map, representing points of interest and breaking the game map into discrete regions. Each node or cell of the IM is then assigned a value by some function. Once computed, an influence map can encode spatial features or some spatially related concept. IMs can identify boundaries of control, choke points, and other tactically interesting spatial features. Influence maps evolved out of work done on spatial reasoning within the game of Go [19], and have since been occasionally used in commercial video games such as Age of Empires, and in the AI and CI in games communities [16] for games such as Pac-Man [17], real-time strategy games [9, 10, 12-14], and turn-based games [3]. Typically, the use of an IM in RTS-style games involves generating the IM at each time step in the game (using the IM function), and then using the assigned weights for path finding. As the game state changes, e.g. resources are used up or game entities move or are destroyed, the IM function that assigns the weights will create a new IM reflecting these changes.

While there are many possible representations for spatial tactics, such as artificial neural networks and knowledge-based approaches, we chose an IM representation for a number of reasons. IMs give an intuitive representation of spatial characteristics, and as a result have been used extensively for spatial reasoning in games, as discussed above. IMs also allow us to easily scale for complexity with the addition of new game entities. Finally, IMs allow us easy visualization and interpretation of the evolved tactics.

A coevolutionary algorithm was chosen to generate new tactics through self-discovery. The use of coevolution has a successful history in self-learning strategies for game playing [1, 4, 7]. We want to use this ability of coevolution to generate new and interesting tactics for naval RTS games. Eventually we also wish to use the adaptive nature of coevolution to generate tactics that can adapt to a human player.

2.1 Related work

Prior work evolved a single influence map tree to generate objectives for the LagoonCraft RTS game [11]. These objectives were then achieved by a genetic algorithm generating a near-optimal allocation of resources [10]. The current work differs in that we are evolving co-operative influence maps for a group of entities that coordinate to complete a task.
Each entity uses an individually assigned IM, but the fitness evaluation assigns one fitness to the entire set of influence maps evolved. In other words, if we have five boats in our attack group, each individual in the population contains parameters that generate five influence maps.

There is other related work using IMs for game playing, which we summarize here. Sweetser and Wiles [15] used IMs for game agents, where the IM was used to create reactive agent behaviour in response to environmental changes. While we are looking at group tactics rather than individual agent behaviour, this research demonstrates the benefits of IMs in agent decision-making. Bergsma and Spronck also use influence maps to generate tactics for a turn-based strategy game [3]. A neural network layers several IMs to generate attack and move values for each cell. Like Miles' work, Bergsma and Spronck combine IMs to generate tactical or strategic objectives. Unlike Miles' work, they use neural networks (whose weights are evolved) to combine the influence maps. Similar work has been done by Jang and Cho, who also use layered IMs with a neural network for a real-time strategy game [8]. While we are researching games, specifically real-time strategy games, it is worth noting that the use of IMs for spatial decision making also has benefits in areas such as robotics [18]. Danielsiek et al. [5] use IMs in conjunction with flocking to determine the movement of groups in RTS games, with the goal of cohesively moving groups of units. Our goal differs in that we are attempting to generate autonomous tactics for each entity, where the entities are evolved to coordinate their tactics. The prior work cited in this section seems most relevant, but there is much other work using influence maps for spatial reasoning, organizational behaviour, and other non-game related fields. To the best of our knowledge, this is the first attempt to evolve and co-evolve a set of co-operative influence maps together and use these co-adapted plans to generate competent, adaptive tactics for attack and defense.

2.2 Previous work

This paper continues the work done in [2], which demonstrated the use of evolving and coevolving Influence Maps (IMs) for a team of entities in a real-time strategy scenario. Each entity was assigned its own IM, and used it to move autonomously. By evolving the individual IMs of a team together as a single individual in the coevolution, coordinated tactics were also possible. In our experiments, we used a capture-the-flag game (described further in the next section), with two teams of five boats (entities). One team (the defenders) had a flag to defend, with the goal of protecting the flag for the given time limit. The other team's goal (the attackers) was to reach the opponent's flag in the time limit, with the sub-goals of destroying as many opponent boats as possible, and doing it as quickly as possible. We performed two types of experiments: first evolving the attacking team against unmoving defenders placed around the flag; second, coevolving both the attackers and the defenders, where the defenders could move around to attack and defend. In the second scenario, the goals of both teams became the same as those of the attackers mentioned above.

The evolutionary representation was a chromosome of evolved weights used by the IM function to generate an IM for each game entity every time step. A single chromosome represented the IMs for an entire team.

Figure 1: Example chromosome structure

Each entity in the team has a set of genes (weights) representing each team mate, each opponent entity, and the flag. Figure 1 depicts how the chromosome is used for the first scenario described above. The chromosome at the top of the figure displays the set of weights for entity 3, which is given the label f3 (friendly 3). The weights are assigned as f0-f4 for the friendly team entities, e0-e4 for the enemy team entities, and flg for the flag. The IM function takes the weights, along with the current game state map, and outputs the IMs for each entity in the team. This step is depicted in Figure 1, where the game map on the left is used to generate the IM for entity 3 shown on the right. The current game state map represents where in the world map all the entities are at the current time step. The IM function then takes the game map, and generates the IMs for all the entities represented in its team. The algorithm used for generating the IMs for the team of m entities is as follows:

    for i in m:
        for x in maxX:
            for y in maxY:
                if not isOccupied(map[x][y]):
                    IM[i][x][y] = 0
                if flag(map[x][y]):
                    genFlagInfluence(IM, i, x, y)
                if friendly(map[x][y]):
                    genFriendlyInfluence(IM, i, x, y, map[x][y])
                if enemy(map[x][y]):
                    genEnemyInfluence(IM, i, x, y, map[x][y])

genFriendlyInfluence directly assigns to the IM cell at x, y the chromosome allele for the entity represented at map[x][y], taken from the gene set for entity i. For example, referring to Figure 1 (i = 3), if map[x][y] == f1, then the corresponding gene on the chromosome for entity 3 would be the value represented by the strong green colour (a repulsive influence). This value is then directly added to the IM cell at location x, y. The same form of assignment is done for the flag value with genFlagInfluence. genEnemyInfluence differs slightly in the assignment. It sets the evolved parameter value (v_i) at location x, y as per genFriendlyInfluence, as well as setting reducing values for all cells neighbouring location x, y in range r. The values are assigned as below:

    IM[i][x'][y'] += v_i - (v_i / r) * max(x' - x, y' - y)

where x' and y' are the current row and column in the ranges (x, x+r] and (y, y+r] respectively, v_i is the evolved parameter value for the enemy entity, and r is a fixed constant representing the enemy weapon's firing range (in experiments r = 4). The max function simply ensures that the larger of the row and column offsets is used to calculate the value, thus maintaining the maximum distance. Since it is possible for multiple entities to be in the same map/IM cell, or close enough for their influences to overlap, the final IM cell value is the sum of all influences on that cell, hence the += notation in the formula above.

Each entity then used its generated IM to assign a goal and movement towards the goal. The goal was assigned as the most attracting weight on the IM (in the range [-50, 50], with -50 the most attracting, and 50 the most repulsing). The A* path finding algorithm was then used to find a path to the goal. By using the A* algorithm, a path can be found taking into account the IM influences, and the entities can avoid potential conflicts, or change the path depending on the IM influences at the current game state.
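To make the influence assignment concrete, the following Python sketch implements the enemy influence falloff described above. The names (gen_enemy_influence, im, v) are our own illustrative choices rather than the authors' implementation, and we assume the falloff is applied symmetrically around the enemy's cell:

    def gen_enemy_influence(im, i, x, y, v, size=50, r=4):
        """Add enemy influence v to entity i's IM around cell (x, y).

        v is the evolved weight for this enemy in entity i's gene set;
        the influence decays linearly from v at the enemy's cell to zero
        at the firing range r (r = 4 in the experiments).
        """
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                cx, cy = x + dx, y + dy
                if not (0 <= cx < size and 0 <= cy < size):
                    continue  # ignore cells outside the 50x50 map
                d = max(abs(dx), abs(dy))  # larger of the row/column offsets
                im[i][cx][cy] += v - (v / r) * d  # influences sum, as in the paper

Because influences are added onto the cell, overlapping friendly and enemy contributions sum to a single net value, which is what later allows one influence to cancel another.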
Our previous experiments using the above representation in an evolutionary and coevolutionary system showed potential for discovering coordinated team tactics. The entities were able to assign weights that allowed them to follow friends, attack enemies and capture the flag. Tactics were evolved that assigned consecutive weights as goals. This meant that when a friend being followed or an enemy being attacked was no longer on the battlefield (was destroyed), the entity was able to assign a new goal and change tactics. This allowed the creation of tactics such as coordinated attacks on a group of enemy entities, followed by a charge on the flag when the game state changed. The representation needed further testing, however, on more complex scenarios such as harder game maps.

The coevolution from the previous research did not yield overly positive results. The game scenario proved too simple and unbalanced for many interesting tactics to develop. The open-ocean scenario used needed further complexity to produce correspondingly complex tactics. Additionally, the coevolution exposed an imbalance in the game mechanism of a team of defenders versus a team of attackers. The attackers had an optimal tactic that the defenders were not able to overcome: all entities charging straight for the flag. We attempted to evolve against this tactic, but were not able to find any defender tactics that could counter the attackers. From analysis and game play, there is no tactic that the defenders could apply that would stop this, given the game in its current state.

2.3 The capture-the-flag game

The game used for this research was a simple game of capture-the-flag. This game was chosen as it is a well known example of a simple game with complex tactics, and it provides a good test-bed for our research. We use the following version of the game. There are two teams of players, each with five boats and a flag to defend. The goal is to capture the opposition flag in the least time possible, with sub-goals of sinking as many of the enemy on the way as possible, while retaining as many of one's own players as possible. The sub-goals had a minimal weight compared to the main goal of reaching the flag. The end-of-game scenarios are: either player team reaching the opposition flag and winning, or the game timer running out, in which case a time-out occurs and neither player wins.
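The end-of-game conditions above reduce to a simple check each game tick. The sketch below is a minimal rendering of our own (the function and argument names are hypothetical), not the game's actual code:

    def check_game_over(team1_at_flag, team2_at_flag, elapsed, time_limit):
        """Return 'team1', 'team2', 'timeout', or None while play continues."""
        if team1_at_flag:           # team 1 reached the opposition flag
            return 'team1'
        if team2_at_flag:           # team 2 reached the opposition flag
            return 'team2'
        if elapsed >= time_limit:   # timer ran out: neither player wins
            return 'timeout'
        return None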

Figure 2: The world map used

Figure 3: The land mass map

The mechanics of the game were kept very simple. The boats have an auto-firing mechanism that begins firing when an opposition boat is within a certain distance. If multiple boats enter the firing range at the same time, the closest boat is targeted first. This firing mechanism is very simple and not ideal; in the future, we will implement a more sophisticated method of deciding when and whom to fire on. The game entities use boat physics which model a turning circle and acceleration/deceleration.

3. INCREASING THE COMPLEXITY - USING LAND IN TACTICS

Our first step in increasing the complexity was to address the imbalance in the game seen in the previous experiments with coevolution. We found that the attacker and defender style of game, where one team defends a flag and the other team has to reach the flag, was hard to balance. The defender team was unable to defend the flag against a tactic of all attacker units charging the flag. For this experiment, we implemented a more symmetric scenario, where both teams were of equal strength and had their own flag to defend. In this version, the first team to capture the flag within the given time frame won; otherwise a loss was recorded.

To further increase the complexity of the tactics being evolved and gain more out of the coevolutionary process, we also implemented land mass. The world map used can be seen in Figure 2. To provide navigation around the land masses, we used the image shown in Figure 3 to define the land areas for the IM. The image in Figure 3 shows a bitmap that is the same size as the IM (in this case, 50x50 bits). The bits marked with black are then given a strong repulsing value in the corresponding IM position. The path finding algorithm also marks these points as unvisitable.
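As an illustration of how the bitmap feeds the IM, the sketch below marks black bits as strongly repulsing and collects them as unvisitable cells for the path finder. All names are ours, and we assume the repulsing value sits at the top of the [-50, 50] weight range used elsewhere in the paper:

    LAND_REPULSION = 50  # assumed: 'strong repulsing value' at the range maximum

    def apply_land_mask(im, land, size=50):
        """Add land repulsion to one entity's 2D IM; return unvisitable cells.

        land[x][y] is True where the corresponding bitmap bit is black.
        """
        blocked = set()
        for x in range(size):
            for y in range(size):
                if land[x][y]:
                    im[x][y] += LAND_REPULSION  # repulse paths near land
                    blocked.add((x, y))         # A* may not visit these cells
        return blocked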
Using the described evolved IM technique, the assigned path goals are flags, team-mates or enemies. In this scenario, the quickest way (with an A* heuristic set to this) to get to goals representing an enemy or their flag is through the channel to the right of the largest land mass. As a result, the direct application of the evolved IMs in this scenario does not create much change in tactic generation, apart from a narrowing of the available navigation area. To take into account other routes, and possibly incorporate the land masses into tactic generation, the evolved individuals must be given information about the land masses in the map. This information can then be used to determine goal/path points, leading to autonomous and coordinated tactics. The methods used for this are described in the next section.

4. METHODOLOGY

To provide the land mass information to the coevolved individuals, we must determine what information to provide, and how to provide it. We also want the solution to be scalable to larger scenarios with more land mass. We decided to provide the information by extending the evolved IM weights to include weights for the land points. Giving the individuals information about the land mass allows use of that information in designing spatial tactics. For example, they could use the provided land points to define a path that might not have as much enemy traffic. Applying the added land information, the IM for each entity now includes information on the team entities, the opposition entities, the two flags, and the land points provided. As a result, the gene set for each entity in the chromosome is also expanded to include this information. The same technique described in Section 2.2 for genFriendlyInfluence is used to assign the evolved weights to the corresponding point on the IM.

The next step was deciding what land points to include. Providing the entire land mass perimeter would be ideal but infeasible, due to the number of inputs required. This would detrimentally increase the size of the chromosome, rendering the evolution effectively useless at evolving useful tactics in the given time frame. This becomes increasingly important when the size and complexity of the world map is up-scaled. Instead, we chose to sample the land mass perimeter, and provide limited information to the individual. For this stage of the research, we settled on providing four points of information for each land mass: the furthest upper, lower, left and right bound locations. In future research, however, we would like to expand this to include possibly interesting points on the land, such as large concave or convex points, and to allow more land points for larger masses.

An algorithm to find these points of interest around the land was required. We implemented a flood-fill algorithm to find the perimeter of each land mass. The furthest north, south, east and west points of each land mass were then added as points on the influence map. Each entity can then evolve weights for these points the same as it evolves weights for other game entities. This design allows for adaptive implementation, where any world map could be used, and the entities can adapt to the terrain and use it in their tactics.
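The following sketch shows one way to implement the land-point extraction just described: a flood fill labels each connected land mass on the 50x50 bitmap, and the extreme cells of each mass in the four compass directions become candidate influence points. The code and its names are ours, written under the assumption of 4-connectivity and of y growing southward; the authors' implementation may differ:

    from collections import deque

    def land_points(land, size=50):
        """land[x][y] is True for land cells; return the four extreme
        cells (N, S, W, E) of every connected land mass."""
        seen = [[False] * size for _ in range(size)]
        masses = []
        for sx in range(size):
            for sy in range(size):
                if not land[sx][sy] or seen[sx][sy]:
                    continue
                cells, queue = [], deque([(sx, sy)])  # flood-fill one mass
                seen[sx][sy] = True
                while queue:
                    x, y = queue.popleft()
                    cells.append((x, y))
                    for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                        if (0 <= nx < size and 0 <= ny < size
                                and land[nx][ny] and not seen[nx][ny]):
                            seen[nx][ny] = True
                            queue.append((nx, ny))
                masses.append((min(cells, key=lambda c: c[1]),   # north
                               max(cells, key=lambda c: c[1]),   # south
                               min(cells, key=lambda c: c[0]),   # west
                               max(cells, key=lambda c: c[0])))  # east
        return masses

Each extreme point would then be offset one square away from its land mass, and dropped if it falls on another land mass or outside the map, matching the placement described for Figure 4 below.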

Figure 4: The land mass IM

Figure 4 shows the IM for an entity. For this figure, the weights of all entities and flags have been set to neutral (0), while the land points are all assigned strong attracting influences (-50). As mentioned before, the land masses are all given strong repulsing weights, which can be seen as the lighter green squares over the land. There are four influence points given for each land mass, unless the point is on another land mass or outside the map. Each point is set one square away from its assigned land mass, to allow navigation without colliding into land, and these can be seen as the dark squares.

The method of path generation used so far, where the lowest IM weight is assigned as the goal and A* is used to calculate a path to the goal, is not sufficient for utilizing the land as part of the tactics. For example, if the flag is assigned the lowest weight, the A* algorithm will find the shortest path, which in this case will always be through the center of the grouping of islands. To give the entities further options on how to use the land, we changed the way goals are assigned. Instead of assigning a single end goal, we broke the movement into sub-goals on the way to the end goal. The end goal with the lowest weight was found, then the closest attracting sub-goal to the entity on the way to the end goal was found, and a path to this sub-goal created. Once this sub-goal had been reached, the next closest attracting sub-goal was found, and so on until the end goal was reached. This allows the creation of tactics that use land points to move around the map, taking routes that may not be the quickest way to the goal, but are more tactically advantageous. The next section discusses the coevolutionary system used to evolve tactics for the entities.

4.1 The coevolution

To evolve individuals with interesting coordinated behaviour, we used a two population coevolutionary system. Each population comprised individuals representing a team in the game. The two teams started in the positions shown in Figure 2, one team at the top of the large land mass, and one at the bottom. Each team had its own flag to defend, and an opposition flag to capture. The population of individuals starting at the bottom of the map was labelled population 1, and the individuals starting at the top were population 2.

The representation of the individual was the same as shown in Figure 1, with the addition of land points in each entity's gene set. In our case, there are eight land masses found by the flood-fill algorithm, so each entity in the team has an extra 8 x 4 genes added (there are 4 points per land mass). The total chromosome length for team 1 is:

    Team1Num * (Team1Num + Team2Num + FlagNum + LandNum)

where Team1Num and Team2Num are the number of entities in teams 1 and 2 respectively, FlagNum is the number of flags, and LandNum is the number of land points. In our experiments Team1Num and Team2Num are both 5, FlagNum is 2 and, as discussed above, LandNum is 32, for a total of 220 genes in the chromosome. This leads to a very large chromosome, and in future research we would like to investigate the impact of this, and possible ways to limit the size. The gene values use integer encoding in the range [-50, 50].
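To make the encoding concrete, the sketch below computes this layout and decodes a flat chromosome into one weight set per entity. The grouping order within a gene set is our assumption for illustration only:

    TEAM1, TEAM2, FLAGS, LAND = 5, 5, 2, 32           # counts from the experiments
    GENES_PER_ENTITY = TEAM1 + TEAM2 + FLAGS + LAND   # 44 weights per entity
    CHROMOSOME_LEN = TEAM1 * GENES_PER_ENTITY         # 5 * 44 = 220 genes

    def decode(chromosome):
        """Split 220 integers in [-50, 50] into per-entity weight sets."""
        assert len(chromosome) == CHROMOSOME_LEN
        entities = []
        for i in range(TEAM1):
            g = chromosome[i * GENES_PER_ENTITY:(i + 1) * GENES_PER_ENTITY]
            entities.append({
                'friendly': g[:TEAM1],                    # f0..f4
                'enemy': g[TEAM1:TEAM1 + TEAM2],          # e0..e4
                'flags': g[TEAM1 + TEAM2:TEAM1 + TEAM2 + FLAGS],
                'land': g[TEAM1 + TEAM2 + FLAGS:],        # 8 masses x 4 points
            })
        return entities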
The fitness of an individual is found by evaluating the individual against a number of individuals from the opposing population. The evaluation occurs by playing a game using the evolved IMs to determine the movement of the entities in the game. The evaluation function is run after an individual has completed a game: when one team has captured the flag or time has run out. The results of the game are recorded, including the win/loss result, the number of player and opposition entities remaining, and the time remaining. These results are then used to compute the fitness of the individual, with the following fitness function:

    fitness = winRatio * winWeight
            + fWeight * friendlyCount
            - eWeight * enemyCount
            + time
            + win * oppWinRatio * oppWeight

where winRatio is the percentage of games won over all games played by the individual. winWeight assigns the weight of the winRatio on the fitness, and was set to 500. friendlyCount is the number of friendly units left after the encounter; similarly, enemyCount is the number of enemies left. time is the amount of time left if the individual won, with respect to the given time limit. For example, if the time limit was 60 seconds, and the individual reached the flag in 40 seconds, then time would equal 20. oppWinRatio is the number of wins per games played for the opposition, which only came into play if win was 1 (true). oppWeight determines the weight oppWinRatio has on the fitness, and was set to 200. The use of oppWinRatio allows individuals that won against harder opponents to score a better fitness. This fitness calculation is a very simple one, used to test the effectiveness of the coevolutionary system. In future research we would like to investigate the effect the fitness function has with regard to the goals and weights used.

The selection method used was rank selection, which was chosen as the most effective after experiments with rank, tournament and roulette selection. Elitism was applied to a percentage of the population, and the variation operators were applied to non-elite individuals. The variation operators used were two point crossover and mutation, where mutation occurred after crossover. Two point crossover was chosen as a simple implementation solution. It is not the most efficient mechanism to use, however, particularly given that the integer encoding creates a fixed mapping of the gene values, unlike binary encoding. In future research we would like to investigate the use of uniform crossover, or include a more sophisticated crossover mechanism that incorporates a disruption to the values surrounding the crossover points, allowing the crossover to become a search operator similar to binary two point crossover. To do this, we are interested in applying a similar technique to that described by Deb and Agrawal for real variable values [6]. The mutation combined small and large parameter perturbations, with a smaller chance of a large mutation occurring. The small mutation added or subtracted 1 from the parameter; the large mutation picked a new random integer in the [-50, 50] range of the parameters.
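A minimal sketch of the mutation operator as described, assuming the stated 1% rate is tested per gene and assuming a probability split between small and large perturbations (the paper does not give the split):

    import random

    def mutate(chromosome, rate=0.01, large_prob=0.1):  # large_prob is assumed
        """Mutate integer genes in the range [-50, 50] in place."""
        for i in range(len(chromosome)):
            if random.random() >= rate:
                continue
            if random.random() < large_prob:
                chromosome[i] = random.randint(-50, 50)   # large: new random value
            else:
                chromosome[i] += random.choice((-1, 1))   # small: +/- 1
                chromosome[i] = max(-50, min(50, chromosome[i]))  # keep in range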

The next section provides the details of the experimental setting used, and the results. We describe the scenario and details for each experiment, and then give a summary of our observations and findings.

5. EXPERIMENT AND RESULTS

We implemented the described coevolutionary system on a parallel cluster to test the effectiveness of the IM representation with terrain. The evaluation for each individual was performed on separate nodes. For initial testing, a population size of 20 with 100 generations was used, and 5 evaluation games were played for each individual against random unique individuals from the other population. This meant that each individual played at least 5 games, with a possible maximum of 20, due to the random selection in the other population's evaluation. After initial successes, we then experimented with decreasing the number of games and increasing the population size, which resulted in equivalent or higher fitness being achieved.

The experiment parameters for the results described below are as follows. The population size was 30, run for 200 generations. The crossover rate was set to 50%, with a mutation rate of 1%. The elitism ratio was set to 5%. Each individual played a minimum of 3 games against unique individuals from the opposing population, with a possible maximum of 30 (all opposition individuals). The coevolution was run 10 times, with the best, worst and average fitness of the individuals for each generation recorded. The average over all runs of the best individuals at each generation, in both populations (overlaid), can be seen in Figure 5, with a bezier curve used to smooth the results for comparison. Figure 6 then displays the average results of the runs for the maximum, minimum and average fitness of each generation for each population.

Figure 5: The average of the best individuals from both populations at each generation

Figure 6: Average best, minimum and maximum results of each generation in the populations ((a) Population 1, (b) Population 2)

The results show a clear steady increase in the fitness of the populations, with population 1 appearing to have a slight advantage over population 2. This could be due to the asymmetrical land mass providing a slight advantage. When viewing the strategies being evolved, some complex tactics became apparent. The entities were acting as coordinated groups. Boats would follow each other to a point, then split with separate goals, or even wait for some trigger to occur. An example of this is where a boat would head to a point and attempt to draw enemy boats to that point. When the enemies came within the range of the boat, it would retreat back to where a group of peers waited.
As the boat is initially drawn to an attracting weight for the waiting point, when an enemy with a repulsive influence approaches, the attracting weight of the waiting point is cancelled (due to the summation of weights). One of the friendly reinforcement boats then has the next highest attracting force, and thus the approaching boats effectively trigger the retreat; a numeric illustration is given below. This sort of strategy is highly dependent on circumstance, however, and did not prove a good overall strategy: if no enemies were drawn to the boat, then it is essentially a wasted force.

The land points were also being used in other ways, both as ways to move around the landscape and as part of the tactics. For example, an entity would set course for one of the outlying islands, and then wait for the game state to change, such as an enemy moving away, before moving to attack or heading to the flag. The use of the sub-goals also came into play, with travel via different paths being used to time attacks. For example, if an entity travels to a further point (such as an outlying island) on its way to the flag, it has more opportunity to avoid enemy entities that are moving in the other direction.

Generally, the better tactics evolved in the later stages of evolution involved the team playing a cat and mouse game with the opposition. They would stay at the starting point to begin with, to see what the opposition was going to do, and then move accordingly. One interesting example of this can be seen in Figure 7, where in this run the best individuals from both populations evolved similar cat and mouse tactics. In Figure 7(a) the starting state can be seen, where both teams send a single boat to head towards the opposition's flag. After heading off, however, they turn around when the remaining boats start forming a group around their flags. The rest of the game involves a stalemate, where both teams continue to try to draw the enemy away from the flag, then retreat when unsuccessful.
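A small worked example of the influence-summation trigger described above, with weights of our own choosing: suppose the waiting point carries an evolved attraction of -40, a waiting reinforcement boat carries -20, and an approaching enemy adds a repulsion of +45 onto the waiting point's cell:

    # Illustrative numbers only; not taken from the paper.
    cells = {
        'waiting_point': -40 + 45,  # attraction cancelled by enemy influence: +5
        'reinforcement': -20,       # a waiting friendly boat
    }
    goal = min(cells, key=cells.get)  # the most attracting (lowest) cell wins
    # goal == 'reinforcement': the enemy's approach itself triggers the retreat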

Figure 7: Example evolved tactic ((a) stage 1, (b) stage 2)

Another typical tactic involved one or two entities heading around the large land mass one way, while the rest either waited by the flag or headed around the other way. There would be one entity whose job was to reach the opponent flag, while the others kept the opponents busy or guarded the flag. The entity with the flag as a goal often had repulsive influences around the opposition, and would go out of its way to avoid conflict. If it was sunk, however, there was often no back-up plan evolved where another entity would take over the goal of reaching the flag. This is something we would like to see being evolved in the future. One mechanism to achieve this is to hard code or evolve an extra IM that can be assigned as a key strategy for all the entities. For example, to avoid the stalemate witnessed, if there are no boats remaining that assign the flag as final goal, the key strategy can be put into play.

Due to the nature of coevolution, it is difficult to determine the exact nature of the growth. To attempt to provide a baseline method of measurement, we implemented a simple tactic that we could evaluate the best individual from each generation against. The tactic involved sending the farthest left boat to the west of the large land mass, with the enemy flag as a goal. The second boat followed the first boat, with a secondary goal of reaching the flag if the first boat was sunk. The remaining three boats set a course directly to the flag, which took them to the east of the large land mass. The average of the results from all the runs of population 2 against this tactic can be seen in Figure 8.

Figure 8: Best individuals per generation vs. the baseline tactic

These results are very promising, as they show that even with no knowledge of this simple human-devised tactic, the coevolutionary process was able to repeatedly come up with solutions that beat the tactic. The evolved tactics from the coevolution with land proved complex compared to previous research, involving both timing and placement of game entities. However, there is still much room for improvement. Even after 200 generations, the individuals were still comparatively inflexible when compared with human players. The individuals have a tendency to get stuck, where they have obtained all their sub-goals and goals, and no longer adapt to the scenario except where the IM changes. This can be detrimental if there are only one or two entities left, and none have the opposition flag as a goal, causing a stalemate. It may be possible to hard code some end game tactics into the individuals, but ideally we would like this to be self-discovered. In future work we would like to make changes to the variation operators, and leave the coevolution running for longer with a larger population size, to observe whether this helps the matter.

6. CONCLUSIONS AND FUTURE WORK

We have shown that our method of coevolving Influence Maps for coordinated tactic generation can be applied to more complex environments. The ability of the evolution to generate tactics that essentially represented feints and complex manoeuvres showed that the technique actually worked better in more complex environments. In previous research, as expected, the evolved tactics were as simple as the problem required, often using brute force tactics such as charging the flag.
In this research, we have witnessed more complex tactics being evolved, as a result of the new mechanisms implemented to include land. This demonstrates the adaptability of this technique, and we are excited to see further interesting tactics being evolved in even more complex scenarios.

Following on from this research, we would like to include further information for the entities to use when deciding their movement with the IM. This research included the additional information of land mass, and in the future we would like to add information such as speed and health. For speed, we are considering including another layer of influence maps that provides the speed ranges an entity should adopt at spatial points on the map, such as near land and other entities. This allows the entity to make decisions on how fast or slow it wishes to go in relation to other entities on the map. Controlling speed should allow the creation of more complex tactics, such as more elaborate feinting and surprise attacks.

Including information about health in the influence range for enemy entities should allow the creation of more complex sneak and avoidance/attack measures. We can change the IM function so that it uses an evolved range determined by the health of each enemy entity, instead of the set range of 4 used in this research. This allows the evolution to determine the range of influence for each entity. We could also then use this range to determine when an entity should start firing on an opponent.

We would also like to extend the current implementation. We want to experiment with different forms of crossover, and find one that is more suited to our problem. Ideally we would like to preserve the diversity of the population for longer, and implement a longer search for more complex solutions. To extend the implementation of the land masses, we want to investigate an algorithm that determines interesting points on the land mass that the entity can use. This includes the identification of convex and concave points on the land, with a cut-off weight on how many points each land mass can have. We would like to evolve this weight and let the evolution decide how much information it needs.

We also want to extend the research through the implementation of a hierarchy of tactics. The tactics described in this research have been very low level: who to attack and where individual boats should move. Eventually we want to implement layers of a strategic/tactical hierarchy, representing different planning levels. To do this we would use an individual IM as described in this work, but add a higher level IM for each group. Instead of specifying what to attack and where to move, the group IM could provide tactics such as an area of the map that has higher importance for a particular group. By combining the group IM with the individual's IM, this could lead to high level strategic planning. This could also help to scale the IM solution, as the coevolution for individual teams could be run separately.

7. ACKNOWLEDGMENTS

This research is supported by ONR grants N and N .

REFERENCES

[1] P. Avery, G. Greenwood, and Z. Michalewicz. Coevolving strategic intelligence. In Proceedings of the IEEE Congress on Evolutionary Computation, Hong Kong.
[2] P. M. Avery, S. J. Louis, and B. Avery. Evolving coordinated spatial tactics for autonomous agents using influence maps. In Proceedings of the 2009 IEEE Symposium on Computational Intelligence in Games. IEEE Press.
[3] M. Bergsma and P. Spronck. Adaptive spatial reasoning for turn-based strategy games. In Proceedings of the Fourth Artificial Intelligence and Interactive Digital Entertainment Conference. AAAI.
[4] S. Chong, D. Ku, H. Lim, M. Tan, and J. White. Evolved neural networks learning Othello strategies. In The 2003 Congress on Evolutionary Computation, volume 3, Newport Beach, California, USA.
[5] H. Danielsiek, R. Stüer, A. Thom, N. Beume, B. Naujoks, and M. Preuss. Intelligent moving of groups in real-time strategy games. In Proceedings of the 2008 IEEE Symposium on Computational Intelligence in Games, pages 63-70.
[6] K. Deb and R. B. Agrawal. Simulated binary crossover for continuous search space. Complex Systems, volume 9.
[7] D. B. Fogel. Blondie24: Playing at the Edge of AI. Morgan Kaufmann, San Francisco, CA.
[8] S. Jang and S. Cho. Evolving neural NPCs with layered influence map in the real-time simulation game 'Conqueror'. In Proceedings of the 2008 IEEE Symposium on Computational Intelligence in Games. IEEE Press.
[9] S. J. Louis and C. Miles. Playing to learn: Case-injected genetic algorithms for learning to play computer games. IEEE Transactions on Evolutionary Computation.
[10] C. Miles. Co-Evolving Real-Time Strategy Game Players. PhD thesis.
[11] C. Miles. LagoonCraft.
[12] C. Miles and S. J. Louis. Co-evolving real-time strategy game playing influence map trees with genetic algorithms. In Proceedings of the International Congress on Evolutionary Computation, Portland, Oregon. IEEE Press.
[13] C. Miles and S. J. Louis. Towards the co-evolution of influence map tree based strategy game players. In Proceedings of the 2006 IEEE Symposium on Computational Intelligence in Games. IEEE Press.
[14] C. Miles and S. J. Louis. Co-evolving influence map tree based strategy game players. In Proceedings of the 2007 IEEE Symposium on Computational Intelligence in Games. IEEE Press.
[15] P. Sweetser and J. Wiles. Combining influence maps and cellular automata for reactive game agents. In 6th International Conference on Intelligent Data Engineering and Automated Learning (IDEAL 2005).
[16] P. Tozour. Influence mapping. In M. DeLoura, editor, Game Programming Gems 2. Charles River Media.
[17] N. Wirth and M. Gallagher. An influence map model for playing Ms. Pac-Man. In Proceedings of the IEEE Symposium on Computational Intelligence in Games. IEEE Press.
[18] P. Zigoris, J. Siu, O. Wang, and A. Hayes. Balancing automated behavior and human control in multi-agent systems: a case study in RoboFlag. In American Control Conference, volume 1.
[19] A. L. Zobrist. A model of visual organization for the game of GO. In AFIPS Conference Proceedings, volume 34.


More information

Evolutionary Othello Players Boosted by Opening Knowledge

Evolutionary Othello Players Boosted by Opening Knowledge 26 IEEE Congress on Evolutionary Computation Sheraton Vancouver Wall Centre Hotel, Vancouver, BC, Canada July 16-21, 26 Evolutionary Othello Players Boosted by Opening Knowledge Kyung-Joong Kim and Sung-Bae

More information

Using Coevolution to Understand and Validate Game Balance in Continuous Games

Using Coevolution to Understand and Validate Game Balance in Continuous Games Using Coevolution to Understand and Validate Game Balance in Continuous Games Ryan Leigh University of Nevada, Reno Reno, Nevada, United States leigh@cse.unr.edu Justin Schonfeld University of Nevada,

More information

CMSC 671 Project Report- Google AI Challenge: Planet Wars

CMSC 671 Project Report- Google AI Challenge: Planet Wars 1. Introduction Purpose The purpose of the project is to apply relevant AI techniques learned during the course with a view to develop an intelligent game playing bot for the game of Planet Wars. Planet

More information

Electronic Research Archive of Blekinge Institute of Technology

Electronic Research Archive of Blekinge Institute of Technology Electronic Research Archive of Blekinge Institute of Technology http://www.bth.se/fou/ This is an author produced version of a conference paper. The paper has been peer-reviewed but may not include the

More information

SWORDS & WIZARDRY ATTACK TABLE Consult this table whenever an attack is made. Find the name of the attacking piece in the left hand column, the name

SWORDS & WIZARDRY ATTACK TABLE Consult this table whenever an attack is made. Find the name of the attacking piece in the left hand column, the name SWORDS & WIZARDRY ATTACK TABLE Consult this table whenever an attack is made. Find the name of the attacking piece in the left hand column, the name of the defending piece along the top of the table and

More information

Strategic and Tactical Reasoning with Waypoints Lars Lidén Valve Software

Strategic and Tactical Reasoning with Waypoints Lars Lidén Valve Software Strategic and Tactical Reasoning with Waypoints Lars Lidén Valve Software lars@valvesoftware.com For the behavior of computer controlled characters to become more sophisticated, efficient algorithms are

More information

Capturing and Adapting Traces for Character Control in Computer Role Playing Games

Capturing and Adapting Traces for Character Control in Computer Role Playing Games Capturing and Adapting Traces for Character Control in Computer Role Playing Games Jonathan Rubin and Ashwin Ram Palo Alto Research Center 3333 Coyote Hill Road, Palo Alto, CA 94304 USA Jonathan.Rubin@parc.com,

More information

Multi-Robot Coordination. Chapter 11

Multi-Robot Coordination. Chapter 11 Multi-Robot Coordination Chapter 11 Objectives To understand some of the problems being studied with multiple robots To understand the challenges involved with coordinating robots To investigate a simple

More information

Approaching The Royal Game of Ur with Genetic Algorithms and ExpectiMax

Approaching The Royal Game of Ur with Genetic Algorithms and ExpectiMax Approaching The Royal Game of Ur with Genetic Algorithms and ExpectiMax Tang, Marco Kwan Ho (20306981) Tse, Wai Ho (20355528) Zhao, Vincent Ruidong (20233835) Yap, Alistair Yun Hee (20306450) Introduction

More information

Evolution of Sensor Suites for Complex Environments

Evolution of Sensor Suites for Complex Environments Evolution of Sensor Suites for Complex Environments Annie S. Wu, Ayse S. Yilmaz, and John C. Sciortino, Jr. Abstract We present a genetic algorithm (GA) based decision tool for the design and configuration

More information

Implementation and Comparison the Dynamic Pathfinding Algorithm and Two Modified A* Pathfinding Algorithms in a Car Racing Game

Implementation and Comparison the Dynamic Pathfinding Algorithm and Two Modified A* Pathfinding Algorithms in a Car Racing Game Implementation and Comparison the Dynamic Pathfinding Algorithm and Two Modified A* Pathfinding Algorithms in a Car Racing Game Jung-Ying Wang and Yong-Bin Lin Abstract For a car racing game, the most

More information

Biologically Inspired Embodied Evolution of Survival

Biologically Inspired Embodied Evolution of Survival Biologically Inspired Embodied Evolution of Survival Stefan Elfwing 1,2 Eiji Uchibe 2 Kenji Doya 2 Henrik I. Christensen 1 1 Centre for Autonomous Systems, Numerical Analysis and Computer Science, Royal

More information

Hierarchical Controller for Robotic Soccer

Hierarchical Controller for Robotic Soccer Hierarchical Controller for Robotic Soccer Byron Knoll Cognitive Systems 402 April 13, 2008 ABSTRACT RoboCup is an initiative aimed at advancing Artificial Intelligence (AI) and robotics research. This

More information

AI Plays Yun Nie (yunn), Wenqi Hou (wenqihou), Yicheng An (yicheng)

AI Plays Yun Nie (yunn), Wenqi Hou (wenqihou), Yicheng An (yicheng) AI Plays 2048 Yun Nie (yunn), Wenqi Hou (wenqihou), Yicheng An (yicheng) Abstract The strategy game 2048 gained great popularity quickly. Although it is easy to play, people cannot win the game easily,

More information

A Genetic Algorithm for Solving Beehive Hidato Puzzles

A Genetic Algorithm for Solving Beehive Hidato Puzzles A Genetic Algorithm for Solving Beehive Hidato Puzzles Matheus Müller Pereira da Silva and Camila Silva de Magalhães Universidade Federal do Rio de Janeiro - UFRJ, Campus Xerém, Duque de Caxias, RJ 25245-390,

More information

Lightweight Decentralized Algorithm for Localizing Reactive Jammers in Wireless Sensor Network

Lightweight Decentralized Algorithm for Localizing Reactive Jammers in Wireless Sensor Network International Journal Of Computational Engineering Research (ijceronline.com) Vol. 3 Issue. 3 Lightweight Decentralized Algorithm for Localizing Reactive Jammers in Wireless Sensor Network 1, Vinothkumar.G,

More information

AI Approaches to Ultimate Tic-Tac-Toe

AI Approaches to Ultimate Tic-Tac-Toe AI Approaches to Ultimate Tic-Tac-Toe Eytan Lifshitz CS Department Hebrew University of Jerusalem, Israel David Tsurel CS Department Hebrew University of Jerusalem, Israel I. INTRODUCTION This report is

More information

Fuzzy-Heuristic Robot Navigation in a Simulated Environment

Fuzzy-Heuristic Robot Navigation in a Simulated Environment Fuzzy-Heuristic Robot Navigation in a Simulated Environment S. K. Deshpande, M. Blumenstein and B. Verma School of Information Technology, Griffith University-Gold Coast, PMB 50, GCMC, Bundall, QLD 9726,

More information

Monte Carlo based battleship agent

Monte Carlo based battleship agent Monte Carlo based battleship agent Written by: Omer Haber, 313302010; Dror Sharf, 315357319 Introduction The game of battleship is a guessing game for two players which has been around for almost a century.

More information

An Artificially Intelligent Ludo Player

An Artificially Intelligent Ludo Player An Artificially Intelligent Ludo Player Andres Calderon Jaramillo and Deepak Aravindakshan Colorado State University {andrescj, deepakar}@cs.colostate.edu Abstract This project replicates results reported

More information

Understanding Coevolution

Understanding Coevolution Understanding Coevolution Theory and Analysis of Coevolutionary Algorithms R. Paul Wiegand Kenneth A. De Jong paul@tesseract.org kdejong@.gmu.edu ECLab Department of Computer Science George Mason University

More information

Extending the STRADA Framework to Design an AI for ORTS

Extending the STRADA Framework to Design an AI for ORTS Extending the STRADA Framework to Design an AI for ORTS Laurent Navarro and Vincent Corruble Laboratoire d Informatique de Paris 6 Université Pierre et Marie Curie (Paris 6) CNRS 4, Place Jussieu 75252

More information

Comparing Methods for Solving Kuromasu Puzzles

Comparing Methods for Solving Kuromasu Puzzles Comparing Methods for Solving Kuromasu Puzzles Leiden Institute of Advanced Computer Science Bachelor Project Report Tim van Meurs Abstract The goal of this bachelor thesis is to examine different methods

More information

Swarm AI: A Solution to Soccer

Swarm AI: A Solution to Soccer Swarm AI: A Solution to Soccer Alex Kutsenok Advisor: Michael Wollowski Senior Thesis Rose-Hulman Institute of Technology Department of Computer Science and Software Engineering May 10th, 2004 Definition

More information

A Robotic Simulator Tool for Mobile Robots

A Robotic Simulator Tool for Mobile Robots 2016 Published in 4th International Symposium on Innovative Technologies in Engineering and Science 3-5 November 2016 (ISITES2016 Alanya/Antalya - Turkey) A Robotic Simulator Tool for Mobile Robots 1 Mehmet

More information

Evolving Behaviour Trees for the Commercial Game DEFCON

Evolving Behaviour Trees for the Commercial Game DEFCON Evolving Behaviour Trees for the Commercial Game DEFCON Chong-U Lim, Robin Baumgarten and Simon Colton Computational Creativity Group Department of Computing, Imperial College, London www.doc.ic.ac.uk/ccg

More information

Using Artificial intelligent to solve the game of 2048

Using Artificial intelligent to solve the game of 2048 Using Artificial intelligent to solve the game of 2048 Ho Shing Hin (20343288) WONG, Ngo Yin (20355097) Lam Ka Wing (20280151) Abstract The report presents the solver of the game 2048 base on artificial

More information

Retaining Learned Behavior During Real-Time Neuroevolution

Retaining Learned Behavior During Real-Time Neuroevolution Retaining Learned Behavior During Real-Time Neuroevolution Thomas D Silva, Roy Janik, Michael Chrien, Kenneth O. Stanley and Risto Miikkulainen Department of Computer Sciences University of Texas at Austin

More information

Artificial Intelligence ( CS 365 ) IMPLEMENTATION OF AI SCRIPT GENERATOR USING DYNAMIC SCRIPTING FOR AOE2 GAME

Artificial Intelligence ( CS 365 ) IMPLEMENTATION OF AI SCRIPT GENERATOR USING DYNAMIC SCRIPTING FOR AOE2 GAME Artificial Intelligence ( CS 365 ) IMPLEMENTATION OF AI SCRIPT GENERATOR USING DYNAMIC SCRIPTING FOR AOE2 GAME Author: Saurabh Chatterjee Guided by: Dr. Amitabha Mukherjee Abstract: I have implemented

More information

STRATEGO EXPERT SYSTEM SHELL

STRATEGO EXPERT SYSTEM SHELL STRATEGO EXPERT SYSTEM SHELL Casper Treijtel and Leon Rothkrantz Faculty of Information Technology and Systems Delft University of Technology Mekelweg 4 2628 CD Delft University of Technology E-mail: L.J.M.Rothkrantz@cs.tudelft.nl

More information

Game Artificial Intelligence ( CS 4731/7632 )

Game Artificial Intelligence ( CS 4731/7632 ) Game Artificial Intelligence ( CS 4731/7632 ) Instructor: Stephen Lee-Urban http://www.cc.gatech.edu/~surban6/2018-gameai/ (soon) Piazza T-square What s this all about? Industry standard approaches to

More information

2048: An Autonomous Solver

2048: An Autonomous Solver 2048: An Autonomous Solver Final Project in Introduction to Artificial Intelligence ABSTRACT. Our goal in this project was to create an automatic solver for the wellknown game 2048 and to analyze how different

More information

Programming an Othello AI Michael An (man4), Evan Liang (liange)

Programming an Othello AI Michael An (man4), Evan Liang (liange) Programming an Othello AI Michael An (man4), Evan Liang (liange) 1 Introduction Othello is a two player board game played on an 8 8 grid. Players take turns placing stones with their assigned color (black

More information

Chapter 14 Optimization of AI Tactic in Action-RPG Game

Chapter 14 Optimization of AI Tactic in Action-RPG Game Chapter 14 Optimization of AI Tactic in Action-RPG Game Kristo Radion Purba Abstract In an Action RPG game, usually there is one or more player character. Also, there are many enemies and bosses. Player

More information

An Intelligent Othello Player Combining Machine Learning and Game Specific Heuristics

An Intelligent Othello Player Combining Machine Learning and Game Specific Heuristics An Intelligent Othello Player Combining Machine Learning and Game Specific Heuristics Kevin Cherry and Jianhua Chen Department of Computer Science, Louisiana State University, Baton Rouge, Louisiana, U.S.A.

More information

Cooperative Behavior Acquisition in A Multiple Mobile Robot Environment by Co-evolution

Cooperative Behavior Acquisition in A Multiple Mobile Robot Environment by Co-evolution Cooperative Behavior Acquisition in A Multiple Mobile Robot Environment by Co-evolution Eiji Uchibe, Masateru Nakamura, Minoru Asada Dept. of Adaptive Machine Systems, Graduate School of Eng., Osaka University,

More information

High-Level Representations for Game-Tree Search in RTS Games

High-Level Representations for Game-Tree Search in RTS Games Artificial Intelligence in Adversarial Real-Time Games: Papers from the AIIDE Workshop High-Level Representations for Game-Tree Search in RTS Games Alberto Uriarte and Santiago Ontañón Computer Science

More information

Comparison of Monte Carlo Tree Search Methods in the Imperfect Information Card Game Cribbage

Comparison of Monte Carlo Tree Search Methods in the Imperfect Information Card Game Cribbage Comparison of Monte Carlo Tree Search Methods in the Imperfect Information Card Game Cribbage Richard Kelly and David Churchill Computer Science Faculty of Science Memorial University {richard.kelly, dchurchill}@mun.ca

More information

Artificial Intelligence for Games

Artificial Intelligence for Games Artificial Intelligence for Games CSC404: Video Game Design Elias Adum Let s talk about AI Artificial Intelligence AI is the field of creating intelligent behaviour in machines. Intelligence understood

More information

An analysis of Cannon By Keith Carter

An analysis of Cannon By Keith Carter An analysis of Cannon By Keith Carter 1.0 Deploying for Battle Town Location The initial placement of the towns, the relative position to their own soldiers, enemy soldiers, and each other effects the

More information

AI in Computer Games. AI in Computer Games. Goals. Game A(I?) History Game categories

AI in Computer Games. AI in Computer Games. Goals. Game A(I?) History Game categories AI in Computer Games why, where and how AI in Computer Games Goals Game categories History Common issues and methods Issues in various game categories Goals Games are entertainment! Important that things

More information

An Influence Map Model for Playing Ms. Pac-Man

An Influence Map Model for Playing Ms. Pac-Man An Influence Map Model for Playing Ms. Pac-Man Nathan Wirth and Marcus Gallagher, Member, IEEE Abstract In this paper we develop a Ms. Pac-Man playing agent based on an influence map model. The proposed

More information

Utilization-Aware Adaptive Back-Pressure Traffic Signal Control

Utilization-Aware Adaptive Back-Pressure Traffic Signal Control Utilization-Aware Adaptive Back-Pressure Traffic Signal Control Wanli Chang, Samarjit Chakraborty and Anuradha Annaswamy Abstract Back-pressure control of traffic signal, which computes the control phase

More information

Mehrdad Amirghasemi a* Reza Zamani a

Mehrdad Amirghasemi a* Reza Zamani a The roles of evolutionary computation, fitness landscape, constructive methods and local searches in the development of adaptive systems for infrastructure planning Mehrdad Amirghasemi a* Reza Zamani a

More information

CS 441/541 Artificial Intelligence Fall, Homework 6: Genetic Algorithms. Due Monday Nov. 24.

CS 441/541 Artificial Intelligence Fall, Homework 6: Genetic Algorithms. Due Monday Nov. 24. CS 441/541 Artificial Intelligence Fall, 2008 Homework 6: Genetic Algorithms Due Monday Nov. 24. In this assignment you will code and experiment with a genetic algorithm as a method for evolving control

More information

An Experimental Comparison of Path Planning Techniques for Teams of Mobile Robots

An Experimental Comparison of Path Planning Techniques for Teams of Mobile Robots An Experimental Comparison of Path Planning Techniques for Teams of Mobile Robots Maren Bennewitz Wolfram Burgard Department of Computer Science, University of Freiburg, 7911 Freiburg, Germany maren,burgard

More information

A Novel Approach to Solving N-Queens Problem

A Novel Approach to Solving N-Queens Problem A Novel Approach to Solving N-ueens Problem Md. Golam KAOSAR Department of Computer Engineering King Fahd University of Petroleum and Minerals Dhahran, KSA and Mohammad SHORFUZZAMAN and Sayed AHMED Department

More information

the question of whether computers can think is like the question of whether submarines can swim -- Dijkstra

the question of whether computers can think is like the question of whether submarines can swim -- Dijkstra the question of whether computers can think is like the question of whether submarines can swim -- Dijkstra Game AI: The set of algorithms, representations, tools, and tricks that support the creation

More information

Experiments on Alternatives to Minimax

Experiments on Alternatives to Minimax Experiments on Alternatives to Minimax Dana Nau University of Maryland Paul Purdom Indiana University April 23, 1993 Chun-Hung Tzeng Ball State University Abstract In the field of Artificial Intelligence,

More information

Enhancing Embodied Evolution with Punctuated Anytime Learning

Enhancing Embodied Evolution with Punctuated Anytime Learning Enhancing Embodied Evolution with Punctuated Anytime Learning Gary B. Parker, Member IEEE, and Gregory E. Fedynyshyn Abstract This paper discusses a new implementation of embodied evolution that uses the

More information

Who am I? AI in Computer Games. Goals. AI in Computer Games. History Game A(I?)

Who am I? AI in Computer Games. Goals. AI in Computer Games. History Game A(I?) Who am I? AI in Computer Games why, where and how Lecturer at Uppsala University, Dept. of information technology AI, machine learning and natural computation Gamer since 1980 Olle Gällmo AI in Computer

More information

Red Shadow. FPGA Trax Design Competition

Red Shadow. FPGA Trax Design Competition Design Competition placing: Red Shadow (Qing Lu, Bruce Chiu-Wing Sham, Francis C.M. Lau) for coming third equal place in the FPGA Trax Design Competition International Conference on Field Programmable

More information