A CBR Module for a Strategy Videogame


Rubén Sánchez-Pelegrín 1, Marco Antonio Gómez-Martín 2, Belén Díaz-Agudo 2
1 CES Felipe II, Aranjuez, Madrid
2 Dep. Sistemas Informáticos y Programación, Universidad Complutense de Madrid, Spain
rsanchez@cesfelipesegundo.com, {marcoa,belend}@sip.ucm.es

Abstract. C-evo is a non-commercial, free, open-source game based on Civilization, one of the most popular turn-based strategy games. One of the most important goals in the development of C-evo is to support the creation of Artificial Intelligence (AI) modules. We have developed one such AI module based on Case-Based Reasoning (CBR). The project is at an early stage. In this paper we describe the main ideas behind our approach and certain open tasks and extensions that we want to tackle in the near future. One of the extensions we have considered is the use of TIELT to compare different AI modules.

1 Introduction

In the last few years, AI researchers have found games to be an interesting testbed for their techniques. In spite of the importance of CBR in the evolution of AI, the CBR community has not made much effort to apply it to games, especially commercial games. The work presented in this paper consists of the design and implementation of an AI module that uses CBR. We have designed this module in the context of a game engine called C-evo, but the ideas underlying our approach can be applied to other games. We are especially concerned with strategy games, where many decisions are taken using a large number of variables that describe a given situation, and where situations tend to recur. C-evo is an open-source game project based on the famous Sid Meier's Civilization II by Microprose and has many basic ideas in common with it. C-evo is a turn-based empire-building game about the rise of human civilization, from antiquity into the future. One of the most important goals in the development of C-evo is to support the creation of AI modules.
There are other projects similar to C-evo. Probably the most popular is FreeCiv [5, 12], also an open-source game based on Civilization. We chose C-evo because it is more oriented towards AI development. Many different AI modules have been developed for C-evo, which makes it a good testbed for evaluating the performance of our module.

Supported by the Spanish Committee of Science & Technology (TIC )

Fig. 1. C-evo screenshot

However, comparison between different AI modules had to be done by hand in C-evo. To avoid this, we have analysed TIELT [1], a Testbed for Investigating and Evaluating Learning Techniques, which we will use to compare different CBR solutions in the near future. The rest of this paper runs as follows: Section 2 describes our AI module, which uses CBR to select a military unit's behaviour, its current state, and our current lines of work. Section 3 describes our analysis of TIELT and how it can be used to streamline the comparison of several modules. Finally, Section 4 concludes this paper.

2 AI project for C-evo

C-evo provides an interface with the game engine that enables developers to create an AI module for managing a civilization. The game has a special architecture, based on exchangeable competitor modules, that allows the AI of every single player to be exchanged. Thus anyone can develop their own AI algorithm in a DLL, separate from the game, in the language of their choice, and watch it play against humans, against the standard AI, or against other AI modules. AI modules can gather exactly the same information that a human player can and perform exactly the same actions. Indeed, the player interface is just another module that uses the same interface to communicate with the game core. This way AI modules cannot cheat, a very typical way of simulating intelligence in video games. Our goal was to develop an AI module using advanced AI techniques. In particular, we used Case-Based Reasoning (CBR) [7, 6] to solve some of the decision problems presented in the management of an empire. There are low-level problems (tactical decisions) and high-level ones (the global strategy of the empire).

The documentation to create an artificial intelligence module for C-evo is available at

Examples of low-level problems are: choosing the next advance to develop, selecting the government type, setting tax rates, diplomacy, city production, or the behaviour of a unit. In this first stage we have focused on a single problem: the selection of a military unit's behaviour, a tactical problem concerning action selection.

2.1 A concrete problem: military unit behaviour

We have focused on a low-level problem: the control of military units. We chose this as the first problem to work on because we think it has a big impact on the result of the game. At this time all other decisions are taken by hand-written code. We have not put much effort into developing that code, so we cannot expect our AI module to be really competitive with others. Our AI module uses CBR to assign missions to military units. A new mission is assigned to a unit upon its creation and every time it completes its previously assigned mission. We have created three missions for military units:

Exploring unknown tiles around a given one. The unit goes to the selected tile and moves to the adjacent tile with the most unknown tiles around it. If none of its adjacent tiles has unknown adjacent tiles, the mission concludes.

Defending a given city. The unit goes to the city tile and stays there to defend the position.

Attacking a given city. The unit goes to the city tile and attacks any enemy defending it.

Our cases are stored in an XML file, and each one is composed of:

A description of the unit's features and of the state of the game relevant to the problem. This description gathers information about the military unit and about its environment. There are features that store the values for attack, defence, movement and experience of the unit. From the environment, we are interested in information about our own cities, enemy cities and unknown tiles. For every city (own or enemy) there are features that store its distance to the unit and the defence value of the unit defending the city tile.
For our own cities, there is also a feature for the number of units in the city tile. (The AI interface does not provide this figure for enemy cities.) For every known tile adjacent to an unknown tile, there is a feature that stores its distance to the unit.

The solution of the problem, that is, the mission assigned to the unit: attacking one of the enemy cities, defending one of our own cities, or exploring from one of the known tiles adjacent to unknown tiles.

The result of the case, a description of the outcome of the mission, composed of figures extracted from the game engine. It provides features that store the duration of the mission in turns, the number of attacks won, the number of

defences won, the number of enemy cities conquered, the number of own cities lost during the mission, the number of explored tiles, and whether the unit died during the mission.

When a new problem comes up, we select the solution we are going to take based on the cases stored in the case base. We begin with a new query that describes the problem we are dealing with. The description features are obtained from the game engine. We propose a retrieval method that filters an initial case base in several steps:

1. Filter the whole case base using a similarity function that compares the query with the descriptions of the stored cases. We retrieve from the case base the 50 cases most similar to the query problem.
2. Group the 50 retrieved cases by the mission they have assigned, i.e., by the solution of the cases.
3. Compute the expected profit of each mission, based on the expected profits of each of the cases for that mission. The profit of a case is computed using its result component.
4. Select the mission with the highest expected profit.

Next, we describe how we solve each of these steps.

A similarity function takes two case descriptions and returns a measure of how similar they are. The similarity is a combination of local similarities of individual features and of combinations of two or more of them. For example, we not only compare attack with attack and defence with defence, but also the proportion between attack and defence. This is because units with similar attack-defence ratios are usually used in a similar way, even if the total amounts of attack and defence are very different. The similarity between individual features is the proportion between them:

sim(x, y) = min(x, y) / max(x, y)

To determine the similarity between two problem descriptions, we calculate two partial similarities: the similarity between unit features, S_u, and the similarity between world situations, S_s.
The similarity between unit features is obtained as a weighted average of the local similarities. We consider the attack, defence and experience features less relevant than the others:

S_u = (S_a + S_d + S_e + 2·S_m + 2·S_ar + 2·S_dr) / 9

Where:
S_a: similarity between the attack features.
S_d: similarity between the defence features.
S_e: similarity between the experience features.
S_m: similarity between the movement features.
S_ar, S_dr: the similarities of the ratios between attack and defence.
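As an illustrative sketch, the local similarity and the weighted unit similarity above could be implemented as follows; the dictionary representation and feature names are our own assumptions, not the module's actual data structures:

```python
def local_sim(x, y):
    """Ratio similarity between two non-negative feature values: min/max."""
    if x == y:
        return 1.0  # also covers x == y == 0
    return min(x, y) / max(x, y)

def unit_similarity(u, v):
    """Weighted average of local similarities between two unit descriptions.
    Movement and the attack/defence ratios get double weight; attack,
    defence and experience get single weight (9 = 1 + 1 + 1 + 2 + 2 + 2)."""
    s_a = local_sim(u["attack"], v["attack"])
    s_d = local_sim(u["defence"], v["defence"])
    s_e = local_sim(u["experience"], v["experience"])
    s_m = local_sim(u["movement"], v["movement"])
    # similarity of the attack/defence ratio and of its inverse
    s_ar = local_sim(u["attack"] / u["defence"], v["attack"] / v["defence"])
    s_dr = local_sim(u["defence"] / u["attack"], v["defence"] / v["attack"])
    return (s_a + s_d + s_e + 2 * s_m + 2 * s_ar + 2 * s_dr) / 9.0
```

Note that with this min/max local similarity, comparing a ratio and comparing its inverse yield the same value, so S_ar and S_dr coincide in this sketch.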

The similarity between world situations is calculated as an arithmetic average of several features and combinations of features:

S_s = (S_dnu + S_dne + S_dno + S_dwe + S_dwo + S_an + S_aw + S_dn + S_dw + S_nn + S_nw) / 11

Where:
S_dnu, S_dne, S_dno: similarity between the nearest distances to an unknown tile, an enemy city and an own city, respectively.
S_dwe, S_dwo: similarity between the distances to the weakest enemy city and the weakest own city, respectively.
S_an, S_aw: similarity between the ratios of the unit's attack and the defence of the nearest enemy city, and of the unit's attack and the defence of the weakest enemy city, respectively.
S_dn, S_dw: similarity between the ratios of the unit's defence and the defence of the nearest own city, and of the unit's defence and the defence of the weakest own city, respectively.
S_nn, S_nw: similarity between the numbers of units in the nearest own city and in the weakest own city, respectively.

Finally, the global similarity S is the product of the partial similarities:

S = S_u · S_s

To calculate the profit P obtained in each of the retrieved cases, we use a function that aggregates the case result features, returning a single value that measures the success of the mission. The case result features are the number of explored tiles (t), the number of attacks won (a), the number of defences won (d), the number of conquered enemy cities (c_w), the number of own cities lost during the mission (c_l), the number of turns that the mission took, and a boolean feature (e) that indicates whether the unit died (e = 1) or not (e = 0) during the mission:

P = (t + 5a + 10d + 50c_w − 100c_l) · e

We then select the set of possible missions we can assign to the unit. We consider only up to five possible missions: exploring from the nearest known tile with an adjacent unknown tile, defending the nearest own city, defending the weakest own city, attacking the nearest enemy city, or attacking the weakest enemy city.
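A minimal sketch of the global similarity and of the profit function, assuming mission results are stored as plain records (the field names are ours, and the survival flag e multiplies the score exactly as the text's formula states):

```python
def global_similarity(s_unit, s_situation):
    """Global similarity S is the product of the two partial similarities."""
    return s_unit * s_situation

def profit(result):
    """Aggregate a mission result into a single score:
    P = (t + 5a + 10d + 50*c_w - 100*c_l) * e,
    with e the boolean died-flag as given in the text."""
    base = (result["explored"]
            + 5 * result["attacks_won"]
            + 10 * result["defences_won"]
            + 50 * result["cities_conquered"]
            - 100 * result["cities_lost"])
    return base * result["e"]
```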
For each of these missions we select, from among the retrieved cases, those which have that mission as their solution. With these cases, we calculate an expected profit E for the mission as an average of the profits of those cases, weighted by the similarity of each case to the problem case:

E = (Σ_i S_i P_i) / (Σ_i S_i)

We also calculate an uncertainty measure U for each of the possible missions. It depends on the number of cases used for calculating the expected profit, the variance of their profits, and their similarity to the problem case. This

way, if we have obtained similar expected profits from many cases, the real profit of the mission will probably be very close to the expected profit, and the uncertainty is low. On the other hand, if we have calculated the expected profit from few cases, or from cases with very different profits, the uncertainty will be high:

U = (Σ_i S_i (P_i − E)²) / (Σ_i S_i)

We randomly modify the expected profit of each of the possible missions. The magnitude of the random modification depends on the uncertainty: the higher the uncertainty, the larger the randomness. This way we prevent the algorithm from being too conservative. If we did not use this random factor, the algorithm would discard some solutions based on a single case that gave a bad result. Given the complex nature of the game, such a bad result could be mere bad luck, and it is good to give solutions a second chance even if they once worked badly:

E′ = E + rand(0, U)

We select the mission with the highest modified expected profit, assign it to the unit, and store it as the solution of the problem case. The problem case is not yet stored in the case base. The mission is tracked until it is completed, in order to obtain the result of the case. When the mission is completed, the query problem becomes a full case, with its result obtained from the game engine. Then this new case is stored in the case base.

Our approach to retrieving cases is quite general and applicable to other domains, especially when there are many cases described by a large number of features. However, the success of the algorithm strongly depends on the representation of the description, solution and result of the cases, and on the functions used for calculating similarity, profit, expected profit, uncertainty, and the randomness modifying the expected profit. All these functions have been hand-coded and are specific to this game. They were written based on our knowledge of the game. In Section 2.3 we discuss how learning could be used to improve them.
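The expected profit, the uncertainty measure, and the randomized selection can be sketched as follows, under the assumption that each retrieved case has been reduced to a (similarity, profit) pair:

```python
import random

def expected_profit(cases):
    """cases: list of (similarity, profit) pairs for the retrieved cases
    whose solution is the mission under consideration.
    Returns the similarity-weighted mean profit E and the
    similarity-weighted variance U, used as the uncertainty measure."""
    total = sum(s for s, _ in cases)
    e = sum(s * p for s, p in cases) / total
    u = sum(s * (p - e) ** 2 for s, p in cases) / total
    return e, u

def choose_mission(candidates, rng=random.uniform):
    """candidates: mapping mission -> list of (similarity, profit).
    Perturbs each E by rand(0, U) so missions with uncertain estimates
    keep a chance of being retried, then picks the best-scoring mission."""
    best_mission, best_score = None, float("-inf")
    for mission, cases in candidates.items():
        e, u = expected_profit(cases)
        score = e + rng(0, u)  # E' = E + rand(0, U)
        if score > best_score:
            best_mission, best_score = mission, score
    return best_mission
```

Passing a deterministic `rng` (as in the test below) disables the random perturbation, which is convenient for reproducible experiments.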
2.2 Evaluation

C-evo allows playing an AI tournament consisting of several games between several AI modules. We confronted our module with a version of itself without learning (a dummy AI module); all we did was disable case storage, so that module always had an empty case base. The result was a set of ties. Although learning could slightly improve the performance of the AI module, it was not enough to beat the opponent. We believe this is because the problem solved is only a small part of all the decisions that an AI module must take. The rest of the module was hand-coded in a very simple way, so it does not play well enough to get a win. The performance of the algorithm can also be improved in several ways, some of which are discussed in the next section.

Efficiency was good as long as the case base was small enough to fit in RAM; when the case base grows too large, efficiency drops. In the next subsection we also discuss some solutions to this problem.

2.3 Extensions to the basic approach

In the algorithm described above, hand-tuned functions were used to calculate quantities such as the similarity between cases or the profit of a result. Thus, the success of the algorithm depends heavily on the accuracy of the human expert adjusting these functions. It would be better to use some machine learning technique for that task. To learn the profit of a mission, we can associate each mission result with the final result (win, loss or draw) that the AI module obtained in the game where the mission took place. This information enables machine learning to discover which mission results tend to lead to each final result. For those results which occur in victories, the profit function would tend to give higher values. This way we would have two levels of reasoning between the raw data extracted from the game engine and the decisions taken. For this purpose, genetic algorithms, neural networks, Bayesian networks or CBR could be used to learn the profit function. Similarity measures could be learned by considering that truly similar cases with similar solutions should tend to lead to similar results. Based on this hypothesis, we could learn a similarity function that gives high similarity values to cases with similar results under the same mission, finding which case features are most relevant to this similarity and to what extent. For this purpose we are studying previous work in this area [11] and we are considering the use of classical learning algorithms such as genetic algorithms. A serious problem of the algorithm is the growth of the case base, which hurts efficiency. Usually, case-based systems solve this problem by storing only new cases that are sufficiently different from existing ones.
However, case-based systems usually only need to recover the case most similar to the problem, and they reason from that single case. This is possible because identical problems should have identical outputs. In our problem this is false: identical decisions taken in identical situations can lead to different results, due to the opponent's freedom of choice and the incompleteness of the information. Randomness would also play a part in most games, although it does not exist in this one. We need to somehow store the information of all the cases produced, or we will lose valuable information. A system for indexing the cases would greatly alleviate this problem: it would not be necessary to calculate the similarity measure for all cases in the case base, but only for those that are solid candidates to be similar enough to the problem. However, it would not be a complete solution, because the case base would keep growing indefinitely; indexing only defers the problem. The solution to the growth of the case base would be clustering cases. If we have identical cases, there is no need to store them separately; we can store one case and keep a counter of the number of times it has occurred. When reasoning, this case would be weighted by its counter when calculating the expected profit

and uncertainty. However, case descriptions are complex enough to allow a very high number of different possible cases, so it would be necessary to group very similar cases together in a cluster. A general case would represent the cluster, with a counter of the number of cases involved. The description of the case representing a cluster would hold an average of the features that distinguish its members, instead of the exact values of each one. Some information would be lost in this process, but it is necessary to maintain the computational feasibility of the method. Ideas for this task could also be drawn from previous work [2]. The use of CBR could be extended to other low-level problems of the game. Each problem would have its own case base, with a case representation suitable to that problem. However, some problems could share the same case base; for example, the problem of selecting the military unit to be built in a city could use the case base of the military unit behaviour problem. The units produced should be those that produce the best results in the current situation of the game. The reasoning would be completely different, but the data would be the same. High-level problems could lead to more challenging strategic decisions, where we think CBR could be a very useful technique, given that it usually works well in situations where the environment is complex and the rules of its behaviour are unknown or fuzzy. Examples of such problems are coordinating unit actions or selecting the global strategy for the game.

3 Module comparison with TIELT

As described in Section 2.2, we had to test our CBR module against the dummy AI module to check its abilities. This comparison was made by hand, launching the game and selecting the AI modules (DLLs) we wanted to test. This process was also needed when comparing CBR modules with different weights in the similarity function.
As described in Section 2.3, there are several extensions that can be made to our module. We envision that comparing their abilities will require a great number of tests, so these should be run in a more convenient way. In that sense, we have been exploring the possibility of using TIELT (Testbed for Investigating and Evaluating Learning Techniques), a middleware testbed that facilitates and encourages the evaluation of learning systems in the context of interactive computer games [1]. Our first analysis revealed that we could benefit from TIELT in two ways. First, we could compare the functionality of our own AI modules without requiring so much manual work. Second, we could use our AI modules with other game engines. In Section 1 we mentioned FreeCiv as another open-source game with a set of rules very similar to C-evo's. If we manage to unify the rules of both games, we will be able to test our AI modules independently of which game engine is being

used. Moreover, we could test our AI modules against the default intelligence of different games. As TIELT acts as middleware, the programming language used for implementing the module is not restricted to those imposed by the game engine. Though in the specific context of C-evo the only restriction is building a DLL with a concrete interface, using TIELT we could implement our module as an independent process and use Java as the sole language, without the need to implement a DLL that launches a Java Virtual Machine. According to the TIELT documentation, our next step is to create a model containing the game rules. As a starting point, we have analysed the study made by Mad Doc Software in [8] and we envision a similar model for the games we are focusing on. The Game Model has to be complete enough to provide all the available information to the different AI modules. We also have to define the Game Model Interface, which contains all the information related to the communication between a specific game engine and the middleware. As C-evo (the game engine) only allows AI extensions by means of DLL programming, we will need to create a simple DLL that acts as a communication layer between the game engine and TIELT. From the C-evo point of view, this layer will be an AI module, while from the TIELT point of view, it will be the game engine. This DLL is the only part dependent on C-evo. In the future, if we want to use our AI modules in FreeCiv, we will just have to create the specific communication layer that respects the rules imposed by that game's API. The last step is the implementation of the AI module proper. Currently it consists of a DLL in Visual C++, but we will implement it as an autonomous system process that communicates with TIELT using sockets. This will allow us to implement our new modules using machine learning or CBR frameworks. In particular, we want to use jCOLIBRI to implement the proposed extensions in Java.
4 Conclusions

Our ongoing work studies the applicability and advantages of CBR within the field of videogames. In this paper we have described an AI module based on CBR that has been applied to C-evo, a turn-based strategy game. This project is at an early stage, and in this paper we have mainly described the main ideas behind our approach and certain open tasks and extensions that we want to tackle in the near future. The construction of the underlying CBR routines has obviously involved a great deal of hand-crafting. To overcome this critical issue, our approach proposes to learn the corresponding similarity measures. To the authors' knowledge, CBR has not typically been used as the main underlying AI technique in commercial video games. There are some research

papers like [4, 3], but the industry does not seem to use CBR in commercial games, although there are several descriptions of similar techniques that take advantage of previous experiences without explicitly naming them CBR [9]. Even so, we consider it a promising technique for AI in computer games. CBR is probably, among AI techniques, the one closest to human thinking. AI for computer games should not only be intelligent, but also believable [10]. We think this makes CBR especially suitable for computer games. Though we already have some preliminary experimental results, this project is at an early stage. As shown in Section 2.3, there are still several improvements to be developed.

References

1. D. W. Aha and M. Molineaux. Integrating learning in interactive gaming simulators. In Challenges in Game AI: Papers of the AAAI 04 Workshop (Technical Report WS-04-04). San Jose, CA: AAAI Press, 2004.
2. R. Bergmann and W. Wilke. On the role of abstraction in case-based reasoning. In EWCBR, pages 28-43.
3. M. Fagan and P. Cunningham. Case-based plan recognition in computer games. In ICCBR.
4. T. Gabel and M. M. Veloso. Selecting heterogeneous team players by case-based reasoning: A case study in robotic soccer simulation. Technical report CMU-CS, Computer Science Department, Carnegie Mellon University.
5. P. A. Houk. A strategic game playing agent for FreeCiv. Technical report NWU-CS, Evanston, IL: Northwestern University, Department of Computer Science.
6. J. Kolodner. Case-Based Reasoning. Morgan Kaufmann, Calif., US, 1993.
7. D. Leake. Case-Based Reasoning: Experiences, Lessons, & Future Directions. AAAI Press / The MIT Press, 1996.
8. Mad Doc Software. TIELT challenge problem. Available at TIELT Challenge Problem.pdf.
9. F. Mommersteeg. Pattern Recognition with Sequential Prediction. In AI Game Programming Wisdom. Charles River Media, Inc., Rockland, MA, USA.
10. A. Nareyek. AI in computer games. ACM Queue, 1(10):58-65.
11. A. Stahl. Learning of Knowledge-Intensive Similarity Measures in Case-Based Reasoning. PhD thesis, University of Kaiserslautern.
12. P. Ulam, A. Goel, and J. Jones. Reflection in action: Model-based self-adaptation in game playing agents. In D. Fu and J. Orkin, editors, Challenges in Game Artificial Intelligence: Papers from the AAAI Workshop. San Jose, CA: AAAI Press, 2004.


More information

Comp 3211 Final Project - Poker AI

Comp 3211 Final Project - Poker AI Comp 3211 Final Project - Poker AI Introduction Poker is a game played with a standard 52 card deck, usually with 4 to 8 players per game. During each hand of poker, players are dealt two cards and must

More information

Playing Othello Using Monte Carlo

Playing Othello Using Monte Carlo June 22, 2007 Abstract This paper deals with the construction of an AI player to play the game Othello. A lot of techniques are already known to let AI players play the game Othello. Some of these techniques

More information

TEMPORAL DIFFERENCE LEARNING IN CHINESE CHESS

TEMPORAL DIFFERENCE LEARNING IN CHINESE CHESS TEMPORAL DIFFERENCE LEARNING IN CHINESE CHESS Thong B. Trinh, Anwer S. Bashi, Nikhil Deshpande Department of Electrical Engineering University of New Orleans New Orleans, LA 70148 Tel: (504) 280-7383 Fax:

More information

Integrating Learning in a Multi-Scale Agent

Integrating Learning in a Multi-Scale Agent Integrating Learning in a Multi-Scale Agent Ben Weber Dissertation Defense May 18, 2012 Introduction AI has a long history of using games to advance the state of the field [Shannon 1950] Real-Time Strategy

More information

Adjustable Group Behavior of Agents in Action-based Games

Adjustable Group Behavior of Agents in Action-based Games Adjustable Group Behavior of Agents in Action-d Games Westphal, Keith and Mclaughlan, Brian Kwestp2@uafortsmith.edu, brian.mclaughlan@uafs.edu Department of Computer and Information Sciences University

More information

Five-In-Row with Local Evaluation and Beam Search

Five-In-Row with Local Evaluation and Beam Search Five-In-Row with Local Evaluation and Beam Search Jiun-Hung Chen and Adrienne X. Wang jhchen@cs axwang@cs Abstract This report provides a brief overview of the game of five-in-row, also known as Go-Moku,

More information

Programming an Othello AI Michael An (man4), Evan Liang (liange)

Programming an Othello AI Michael An (man4), Evan Liang (liange) Programming an Othello AI Michael An (man4), Evan Liang (liange) 1 Introduction Othello is a two player board game played on an 8 8 grid. Players take turns placing stones with their assigned color (black

More information

CMDragons 2009 Team Description

CMDragons 2009 Team Description CMDragons 2009 Team Description Stefan Zickler, Michael Licitra, Joydeep Biswas, and Manuela Veloso Carnegie Mellon University {szickler,mmv}@cs.cmu.edu {mlicitra,joydeep}@andrew.cmu.edu Abstract. In this

More information

The Behavior Evolving Model and Application of Virtual Robots

The Behavior Evolving Model and Application of Virtual Robots The Behavior Evolving Model and Application of Virtual Robots Suchul Hwang Kyungdal Cho V. Scott Gordon Inha Tech. College Inha Tech College CSUS, Sacramento 253 Yonghyundong Namku 253 Yonghyundong Namku

More information

MONUMENTAL RULES. COMPONENTS Cards AIM OF THE GAME SETUP Funforge. Matthew Dunstan. 1 4 players l min l Ages 14+ Tokens

MONUMENTAL RULES. COMPONENTS Cards AIM OF THE GAME SETUP Funforge. Matthew Dunstan. 1 4 players l min l Ages 14+ Tokens Matthew Dunstan MONUMENTAL 1 4 players l 90-120 min l Ages 14+ RULES In Monumental, each player leads a unique civilization. How will you shape your destiny, and how will history remember you? Dare you

More information

Creating a Poker Playing Program Using Evolutionary Computation

Creating a Poker Playing Program Using Evolutionary Computation Creating a Poker Playing Program Using Evolutionary Computation Simon Olsen and Rob LeGrand, Ph.D. Abstract Artificial intelligence is a rapidly expanding technology. We are surrounded by technology that

More information

Achieving Desirable Gameplay Objectives by Niched Evolution of Game Parameters

Achieving Desirable Gameplay Objectives by Niched Evolution of Game Parameters Achieving Desirable Gameplay Objectives by Niched Evolution of Game Parameters Scott Watson, Andrew Vardy, Wolfgang Banzhaf Department of Computer Science Memorial University of Newfoundland St John s.

More information

Optimal Rhode Island Hold em Poker

Optimal Rhode Island Hold em Poker Optimal Rhode Island Hold em Poker Andrew Gilpin and Tuomas Sandholm Computer Science Department Carnegie Mellon University Pittsburgh, PA 15213 {gilpin,sandholm}@cs.cmu.edu Abstract Rhode Island Hold

More information

FU-Fighters. The Soccer Robots of Freie Universität Berlin. Why RoboCup? What is RoboCup?

FU-Fighters. The Soccer Robots of Freie Universität Berlin. Why RoboCup? What is RoboCup? The Soccer Robots of Freie Universität Berlin We have been building autonomous mobile robots since 1998. Our team, composed of students and researchers from the Mathematics and Computer Science Department,

More information

Case-Based Goal Formulation

Case-Based Goal Formulation Case-Based Goal Formulation Ben G. Weber and Michael Mateas and Arnav Jhala Expressive Intelligence Studio University of California, Santa Cruz {bweber, michaelm, jhala}@soe.ucsc.edu Abstract Robust AI

More information

Case-Based Goal Formulation

Case-Based Goal Formulation Case-Based Goal Formulation Ben G. Weber and Michael Mateas and Arnav Jhala Expressive Intelligence Studio University of California, Santa Cruz {bweber, michaelm, jhala}@soe.ucsc.edu Abstract Robust AI

More information

COMP3211 Project. Artificial Intelligence for Tron game. Group 7. Chiu Ka Wa ( ) Chun Wai Wong ( ) Ku Chun Kit ( )

COMP3211 Project. Artificial Intelligence for Tron game. Group 7. Chiu Ka Wa ( ) Chun Wai Wong ( ) Ku Chun Kit ( ) COMP3211 Project Artificial Intelligence for Tron game Group 7 Chiu Ka Wa (20369737) Chun Wai Wong (20265022) Ku Chun Kit (20123470) Abstract Tron is an old and popular game based on a movie of the same

More information

A Particle Model for State Estimation in Real-Time Strategy Games

A Particle Model for State Estimation in Real-Time Strategy Games Proceedings of the Seventh AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment A Particle Model for State Estimation in Real-Time Strategy Games Ben G. Weber Expressive Intelligence

More information

a b c d e f g h 1 a b c d e f g h C A B B A C C X X C C X X C C A B B A C Diagram 1-2 Square names

a b c d e f g h 1 a b c d e f g h C A B B A C C X X C C X X C C A B B A C Diagram 1-2 Square names Chapter Rules and notation Diagram - shows the standard notation for Othello. The columns are labeled a through h from left to right, and the rows are labeled through from top to bottom. In this book,

More information

CS221 Final Project Report Learn to Play Texas hold em

CS221 Final Project Report Learn to Play Texas hold em CS221 Final Project Report Learn to Play Texas hold em Yixin Tang(yixint), Ruoyu Wang(rwang28), Chang Yue(changyue) 1 Introduction Texas hold em, one of the most popular poker games in casinos, is a variation

More information

Reinforcement Learning in Games Autonomous Learning Systems Seminar

Reinforcement Learning in Games Autonomous Learning Systems Seminar Reinforcement Learning in Games Autonomous Learning Systems Seminar Matthias Zöllner Intelligent Autonomous Systems TU-Darmstadt zoellner@rbg.informatik.tu-darmstadt.de Betreuer: Gerhard Neumann Abstract

More information

Creating a New Angry Birds Competition Track

Creating a New Angry Birds Competition Track Proceedings of the Twenty-Ninth International Florida Artificial Intelligence Research Society Conference Creating a New Angry Birds Competition Track Rohan Verma, Xiaoyu Ge, Jochen Renz Research School

More information

Opleiding Informatica

Opleiding Informatica Opleiding Informatica Agents for the card game of Hearts Joris Teunisse Supervisors: Walter Kosters, Jeanette de Graaf BACHELOR THESIS Leiden Institute of Advanced Computer Science (LIACS) www.liacs.leidenuniv.nl

More information

Training a Back-Propagation Network with Temporal Difference Learning and a database for the board game Pente

Training a Back-Propagation Network with Temporal Difference Learning and a database for the board game Pente Training a Back-Propagation Network with Temporal Difference Learning and a database for the board game Pente Valentijn Muijrers 3275183 Valentijn.Muijrers@phil.uu.nl Supervisor: Gerard Vreeswijk 7,5 ECTS

More information

Learning Unit Values in Wargus Using Temporal Differences

Learning Unit Values in Wargus Using Temporal Differences Learning Unit Values in Wargus Using Temporal Differences P.J.M. Kerbusch 16th June 2005 Abstract In order to use a learning method in a computer game to improve the perfomance of computer controlled entities,

More information

Documentation and Discussion

Documentation and Discussion 1 of 9 11/7/2007 1:21 AM ASSIGNMENT 2 SUBJECT CODE: CS 6300 SUBJECT: ARTIFICIAL INTELLIGENCE LEENA KORA EMAIL:leenak@cs.utah.edu Unid: u0527667 TEEKO GAME IMPLEMENTATION Documentation and Discussion 1.

More information

Elements of Artificial Intelligence and Expert Systems

Elements of Artificial Intelligence and Expert Systems Elements of Artificial Intelligence and Expert Systems Master in Data Science for Economics, Business & Finance Nicola Basilico Dipartimento di Informatica Via Comelico 39/41-20135 Milano (MI) Ufficio

More information

AI Agent for Ants vs. SomeBees: Final Report

AI Agent for Ants vs. SomeBees: Final Report CS 221: ARTIFICIAL INTELLIGENCE: PRINCIPLES AND TECHNIQUES 1 AI Agent for Ants vs. SomeBees: Final Report Wanyi Qian, Yundong Zhang, Xiaotong Duan Abstract This project aims to build a real-time game playing

More information

Tree depth influence in Genetic Programming for generation of competitive agents for RTS games

Tree depth influence in Genetic Programming for generation of competitive agents for RTS games Tree depth influence in Genetic Programming for generation of competitive agents for RTS games P. García-Sánchez, A. Fernández-Ares, A. M. Mora, P. A. Castillo, J. González and J.J. Merelo Dept. of Computer

More information

Unofficial Bolt Action Scenario Book. Leopard, aka Dale Needham

Unofficial Bolt Action Scenario Book. Leopard, aka Dale Needham Unofficial Bolt Action Scenario Book Leopard, aka Dale Needham Issue 0.1, August 2013 2 Chapter 1 Introduction Warlord Game s Bolt Action system includes a number of scenarios on pages 107 120 of the main

More information

MAGNT Research Report (ISSN ) Vol.6(1). PP , Controlling Cost and Time of Construction Projects Using Neural Network

MAGNT Research Report (ISSN ) Vol.6(1). PP , Controlling Cost and Time of Construction Projects Using Neural Network Controlling Cost and Time of Construction Projects Using Neural Network Li Ping Lo Faculty of Computer Science and Engineering Beijing University China Abstract In order to achieve optimized management,

More information

Computer Science. Using neural networks and genetic algorithms in a Pac-man game

Computer Science. Using neural networks and genetic algorithms in a Pac-man game Computer Science Using neural networks and genetic algorithms in a Pac-man game Jaroslav Klíma Candidate D 0771 008 Gymnázium Jura Hronca 2003 Word count: 3959 Jaroslav Klíma D 0771 008 Page 1 Abstract:

More information

Game Playing for a Variant of Mancala Board Game (Pallanguzhi)

Game Playing for a Variant of Mancala Board Game (Pallanguzhi) Game Playing for a Variant of Mancala Board Game (Pallanguzhi) Varsha Sankar (SUNet ID: svarsha) 1. INTRODUCTION Game playing is a very interesting area in the field of Artificial Intelligence presently.

More information

Mobile Tourist Guide Services with Software Agents

Mobile Tourist Guide Services with Software Agents Mobile Tourist Guide Services with Software Agents Juan Pavón 1, Juan M. Corchado 2, Jorge J. Gómez-Sanz 1 and Luis F. Castillo Ossa 2 1 Dep. Sistemas Informáticos y Programación Universidad Complutense

More information

Evolving Behaviour Trees for the Commercial Game DEFCON

Evolving Behaviour Trees for the Commercial Game DEFCON Evolving Behaviour Trees for the Commercial Game DEFCON Chong-U Lim, Robin Baumgarten and Simon Colton Computational Creativity Group Department of Computing, Imperial College, London www.doc.ic.ac.uk/ccg

More information

Dota2 is a very popular video game currently.

Dota2 is a very popular video game currently. Dota2 Outcome Prediction Zhengyao Li 1, Dingyue Cui 2 and Chen Li 3 1 ID: A53210709, Email: zhl380@eng.ucsd.edu 2 ID: A53211051, Email: dicui@eng.ucsd.edu 3 ID: A53218665, Email: lic055@eng.ucsd.edu March

More information

Bachelor thesis. Influence map based Ms. Pac-Man and Ghost Controller. Johan Svensson. Abstract

Bachelor thesis. Influence map based Ms. Pac-Man and Ghost Controller. Johan Svensson. Abstract 2012-07-02 BTH-Blekinge Institute of Technology Uppsats inlämnad som del av examination i DV1446 Kandidatarbete i datavetenskap. Bachelor thesis Influence map based Ms. Pac-Man and Ghost Controller Johan

More information

Estimation of Rates Arriving at the Winning Hands in Multi-Player Games with Imperfect Information

Estimation of Rates Arriving at the Winning Hands in Multi-Player Games with Imperfect Information 2016 4th Intl Conf on Applied Computing and Information Technology/3rd Intl Conf on Computational Science/Intelligence and Applied Informatics/1st Intl Conf on Big Data, Cloud Computing, Data Science &

More information

Multilevel Selection In-Class Activities. Accompanies the article:

Multilevel Selection In-Class Activities. Accompanies the article: Multilevel Selection In-Class Activities Accompanies the article: O Brien, D. T. (2011). A modular approach to teaching multilevel selection. EvoS Journal: The Journal of the Evolutionary Studies Consortium,

More information

Team Playing Behavior in Robot Soccer: A Case-Based Reasoning Approach

Team Playing Behavior in Robot Soccer: A Case-Based Reasoning Approach Team Playing Behavior in Robot Soccer: A Case-Based Reasoning Approach Raquel Ros 1, Ramon López de Màntaras 1, Josep Lluís Arcos 1 and Manuela Veloso 2 1 IIIA - Artificial Intelligence Research Institute

More information

How Students Teach Robots to Think The Example of the Vienna Cubes a Robot Soccer Team

How Students Teach Robots to Think The Example of the Vienna Cubes a Robot Soccer Team How Students Teach Robots to Think The Example of the Vienna Cubes a Robot Soccer Team Robert Pucher Paul Kleinrath Alexander Hofmann Fritz Schmöllebeck Department of Electronic Abstract: Autonomous Robot

More information

CPS331 Lecture: Intelligent Agents last revised July 25, 2018

CPS331 Lecture: Intelligent Agents last revised July 25, 2018 CPS331 Lecture: Intelligent Agents last revised July 25, 2018 Objectives: 1. To introduce the basic notion of an agent 2. To discuss various types of agents Materials: 1. Projectable of Russell and Norvig

More information

Homework Assignment #2

Homework Assignment #2 CS 540-2: Introduction to Artificial Intelligence Homework Assignment #2 Assigned: Thursday, February 15 Due: Sunday, February 25 Hand-in Instructions This homework assignment includes two written problems

More information

Using a genetic algorithm for mining patterns from Endgame Databases

Using a genetic algorithm for mining patterns from Endgame Databases 0 African Conference for Sofware Engineering and Applied Computing Using a genetic algorithm for mining patterns from Endgame Databases Heriniaina Andry RABOANARY Department of Computer Science Institut

More information

Using Reactive Deliberation for Real-Time Control of Soccer-Playing Robots

Using Reactive Deliberation for Real-Time Control of Soccer-Playing Robots Using Reactive Deliberation for Real-Time Control of Soccer-Playing Robots Yu Zhang and Alan K. Mackworth Department of Computer Science, University of British Columbia, Vancouver B.C. V6T 1Z4, Canada,

More information

Multiple AI types in FreeCiv

Multiple AI types in FreeCiv Multiple AI types in FreeCiv Krzysztof Danielowski Przemysław Syktus Project objectives Our project s main goal is to add different and distinguishable AI types to FreeCiv. We wanted to create a possibility

More information

Red Shadow. FPGA Trax Design Competition

Red Shadow. FPGA Trax Design Competition Design Competition placing: Red Shadow (Qing Lu, Bruce Chiu-Wing Sham, Francis C.M. Lau) for coming third equal place in the FPGA Trax Design Competition International Conference on Field Programmable

More information

Bayesian Networks for Micromanagement Decision Imitation in the RTS Game Starcraft

Bayesian Networks for Micromanagement Decision Imitation in the RTS Game Starcraft Bayesian Networks for Micromanagement Decision Imitation in the RTS Game Starcraft Ricardo Parra and Leonardo Garrido Tecnológico de Monterrey, Campus Monterrey Ave. Eugenio Garza Sada 2501. Monterrey,

More information

Artificial Intelligence. Cameron Jett, William Kentris, Arthur Mo, Juan Roman

Artificial Intelligence. Cameron Jett, William Kentris, Arthur Mo, Juan Roman Artificial Intelligence Cameron Jett, William Kentris, Arthur Mo, Juan Roman AI Outline Handicap for AI Machine Learning Monte Carlo Methods Group Intelligence Incorporating stupidity into game AI overview

More information

Applying Goal-Driven Autonomy to StarCraft

Applying Goal-Driven Autonomy to StarCraft Applying Goal-Driven Autonomy to StarCraft Ben G. Weber, Michael Mateas, and Arnav Jhala Expressive Intelligence Studio UC Santa Cruz bweber,michaelm,jhala@soe.ucsc.edu Abstract One of the main challenges

More information

Genetic Programming of Autonomous Agents. Senior Project Proposal. Scott O'Dell. Advisors: Dr. Joel Schipper and Dr. Arnold Patton

Genetic Programming of Autonomous Agents. Senior Project Proposal. Scott O'Dell. Advisors: Dr. Joel Schipper and Dr. Arnold Patton Genetic Programming of Autonomous Agents Senior Project Proposal Scott O'Dell Advisors: Dr. Joel Schipper and Dr. Arnold Patton December 9, 2010 GPAA 1 Introduction to Genetic Programming Genetic programming

More information

Image Extraction using Image Mining Technique

Image Extraction using Image Mining Technique IOSR Journal of Engineering (IOSRJEN) e-issn: 2250-3021, p-issn: 2278-8719 Vol. 3, Issue 9 (September. 2013), V2 PP 36-42 Image Extraction using Image Mining Technique Prof. Samir Kumar Bandyopadhyay,

More information

Potential-Field Based navigation in StarCraft

Potential-Field Based navigation in StarCraft Potential-Field Based navigation in StarCraft Johan Hagelbäck, Member, IEEE Abstract Real-Time Strategy (RTS) games are a sub-genre of strategy games typically taking place in a war setting. RTS games

More information

BayesChess: A computer chess program based on Bayesian networks

BayesChess: A computer chess program based on Bayesian networks BayesChess: A computer chess program based on Bayesian networks Antonio Fernández and Antonio Salmerón Department of Statistics and Applied Mathematics University of Almería Abstract In this paper we introduce

More information

Game Mechanics Minesweeper is a game in which the player must correctly deduce the positions of

Game Mechanics Minesweeper is a game in which the player must correctly deduce the positions of Table of Contents Game Mechanics...2 Game Play...3 Game Strategy...4 Truth...4 Contrapositive... 5 Exhaustion...6 Burnout...8 Game Difficulty... 10 Experiment One... 12 Experiment Two...14 Experiment Three...16

More information

Monte Carlo Tree Search

Monte Carlo Tree Search Monte Carlo Tree Search 1 By the end, you will know Why we use Monte Carlo Search Trees The pros and cons of MCTS How it is applied to Super Mario Brothers and Alpha Go 2 Outline I. Pre-MCTS Algorithms

More information

Mathematical Analysis of 2048, The Game

Mathematical Analysis of 2048, The Game Advances in Applied Mathematical Analysis ISSN 0973-5313 Volume 12, Number 1 (2017), pp. 1-7 Research India Publications http://www.ripublication.com Mathematical Analysis of 2048, The Game Bhargavi Goel

More information

Learning and Using Models of Kicking Motions for Legged Robots

Learning and Using Models of Kicking Motions for Legged Robots Learning and Using Models of Kicking Motions for Legged Robots Sonia Chernova and Manuela Veloso Computer Science Department Carnegie Mellon University Pittsburgh, PA 15213 {soniac, mmv}@cs.cmu.edu Abstract

More information

Fuzzy Logic for Behaviour Co-ordination and Multi-Agent Formation in RoboCup

Fuzzy Logic for Behaviour Co-ordination and Multi-Agent Formation in RoboCup Fuzzy Logic for Behaviour Co-ordination and Multi-Agent Formation in RoboCup Hakan Duman and Huosheng Hu Department of Computer Science University of Essex Wivenhoe Park, Colchester CO4 3SQ United Kingdom

More information

A Multi-Agent Potential Field-Based Bot for a Full RTS Game Scenario

A Multi-Agent Potential Field-Based Bot for a Full RTS Game Scenario Proceedings of the Fifth Artificial Intelligence for Interactive Digital Entertainment Conference A Multi-Agent Potential Field-Based Bot for a Full RTS Game Scenario Johan Hagelbäck and Stefan J. Johansson

More information

Case-Based Strategies in Computer Poker

Case-Based Strategies in Computer Poker 1 Case-Based Strategies in Computer Poker Jonathan Rubin a and Ian Watson a a Department of Computer Science. University of Auckland Game AI Group E-mail: jrubin01@gmail.com, E-mail: ian@cs.auckland.ac.nz

More information

The game of Reversi was invented around 1880 by two. Englishmen, Lewis Waterman and John W. Mollett. It later became

The game of Reversi was invented around 1880 by two. Englishmen, Lewis Waterman and John W. Mollett. It later became Reversi Meng Tran tranm@seas.upenn.edu Faculty Advisor: Dr. Barry Silverman Abstract: The game of Reversi was invented around 1880 by two Englishmen, Lewis Waterman and John W. Mollett. It later became

More information

Playing Atari Games with Deep Reinforcement Learning

Playing Atari Games with Deep Reinforcement Learning Playing Atari Games with Deep Reinforcement Learning 1 Playing Atari Games with Deep Reinforcement Learning Varsha Lalwani (varshajn@iitk.ac.in) Masare Akshay Sunil (amasare@iitk.ac.in) IIT Kanpur CS365A

More information

the gamedesigninitiative at cornell university Lecture 23 Strategic AI

the gamedesigninitiative at cornell university Lecture 23 Strategic AI Lecture 23 Role of AI in Games Autonomous Characters (NPCs) Mimics personality of character May be opponent or support character Strategic Opponents AI at player level Closest to classical AI Character

More information

Discussion of Emergent Strategy

Discussion of Emergent Strategy Discussion of Emergent Strategy When Ants Play Chess Mark Jenne and David Pick Presentation Overview Introduction to strategy Previous work on emergent strategies Pengi N-puzzle Sociogenesis in MANTA colonies

More information

Online Interactive Neuro-evolution

Online Interactive Neuro-evolution Appears in Neural Processing Letters, 1999. Online Interactive Neuro-evolution Adrian Agogino (agogino@ece.utexas.edu) Kenneth Stanley (kstanley@cs.utexas.edu) Risto Miikkulainen (risto@cs.utexas.edu)

More information

For slightly more detailed instructions on how to play, visit:

For slightly more detailed instructions on how to play, visit: Introduction to Artificial Intelligence CS 151 Programming Assignment 2 Mancala!! The purpose of this assignment is to program some of the search algorithms and game playing strategies that we have learned

More information

Application of Soft Computing Techniques in Water Resources Engineering

Application of Soft Computing Techniques in Water Resources Engineering International Journal of Dynamics of Fluids. ISSN 0973-1784 Volume 13, Number 2 (2017), pp. 197-202 Research India Publications http://www.ripublication.com Application of Soft Computing Techniques in

More information

COMPUTATONAL INTELLIGENCE

COMPUTATONAL INTELLIGENCE COMPUTATONAL INTELLIGENCE October 2011 November 2011 Siegfried Nijssen partially based on slides by Uzay Kaymak Leiden Institute of Advanced Computer Science e-mail: snijssen@liacs.nl Katholieke Universiteit

More information

Matthew Fox CS229 Final Project Report Beating Daily Fantasy Football. Introduction

Matthew Fox CS229 Final Project Report Beating Daily Fantasy Football. Introduction Matthew Fox CS229 Final Project Report Beating Daily Fantasy Football Introduction In this project, I ve applied machine learning concepts that we ve covered in lecture to create a profitable strategy

More information

Neuro-Fuzzy and Soft Computing: Fuzzy Sets. Chapter 1 of Neuro-Fuzzy and Soft Computing by Jang, Sun and Mizutani

Neuro-Fuzzy and Soft Computing: Fuzzy Sets. Chapter 1 of Neuro-Fuzzy and Soft Computing by Jang, Sun and Mizutani Chapter 1 of Neuro-Fuzzy and Soft Computing by Jang, Sun and Mizutani Outline Introduction Soft Computing (SC) vs. Conventional Artificial Intelligence (AI) Neuro-Fuzzy (NF) and SC Characteristics 2 Introduction

More information

Automatically Generating Game Tactics via Evolutionary Learning

Automatically Generating Game Tactics via Evolutionary Learning Automatically Generating Game Tactics via Evolutionary Learning Marc Ponsen Héctor Muñoz-Avila Pieter Spronck David W. Aha August 15, 2006 Abstract The decision-making process of computer-controlled opponents

More information

SCRABBLE ARTIFICIAL INTELLIGENCE GAME. CS 297 Report. Presented to. Dr. Chris Pollett. Department of Computer Science. San Jose State University

SCRABBLE ARTIFICIAL INTELLIGENCE GAME. CS 297 Report. Presented to. Dr. Chris Pollett. Department of Computer Science. San Jose State University SCRABBLE AI GAME 1 SCRABBLE ARTIFICIAL INTELLIGENCE GAME CS 297 Report Presented to Dr. Chris Pollett Department of Computer Science San Jose State University In Partial Fulfillment Of the Requirements

More information

Biologically Inspired Embodied Evolution of Survival

Biologically Inspired Embodied Evolution of Survival Biologically Inspired Embodied Evolution of Survival Stefan Elfwing 1,2 Eiji Uchibe 2 Kenji Doya 2 Henrik I. Christensen 1 1 Centre for Autonomous Systems, Numerical Analysis and Computer Science, Royal

More information

Clever Pac-man. Sistemi Intelligenti Reinforcement Learning: Fuzzy Reinforcement Learning

Clever Pac-man. Sistemi Intelligenti Reinforcement Learning: Fuzzy Reinforcement Learning Clever Pac-man Sistemi Intelligenti Reinforcement Learning: Fuzzy Reinforcement Learning Alberto Borghese Università degli Studi di Milano Laboratorio di Sistemi Intelligenti Applicati (AIS-Lab) Dipartimento

More information

Federico Forti, Erdi Izgi, Varalika Rathore, Francesco Forti

Federico Forti, Erdi Izgi, Varalika Rathore, Francesco Forti Basic Information Project Name Supervisor Kung-fu Plants Jakub Gemrot Annotation Kung-fu plants is a game where you can create your characters, train them and fight against the other chemical plants which

More information

COMP9414/ 9814/ 3411: Artificial Intelligence. Week 2. Classifying AI Tasks

COMP9414/ 9814/ 3411: Artificial Intelligence. Week 2. Classifying AI Tasks COMP9414/ 9814/ 3411: Artificial Intelligence Week 2. Classifying AI Tasks Russell & Norvig, Chapter 2. COMP9414/9814/3411 18s1 Tasks & Agent Types 1 Examples of AI Tasks Week 2: Wumpus World, Robocup

More information