CS 680: GAME AI WEEK 4: DECISION MAKING IN RTS GAMES
1 CS 680: GAME AI WEEK 4: DECISION MAKING IN RTS GAMES 2/6/2012 Santiago Ontañón
2 Reminders Projects: Project 1 is simpler than it seems: 1) Implement a basic AI (it doesn't have to play well). 2) Pick a path-finding/decision-making algorithm and experiment with it. Progress self-check indicator: your progress is good if you already have a basic AI that can play a complete game of S3/Starcraft (whether it wins or not).
3 Outline Student Presentation: Near Optimal Hierarchical Pathfinding Student Presentation: Intelligent Moving of Groups in Real-Time Strategy Games Decision Making Basics: Hardcoded Methods Decision Theory Adversarial Search Project Discussion
4 Outline Student Presentation: Near Optimal Hierarchical Pathfinding Student Presentation: Intelligent Moving of Groups in Real-Time Strategy Games Decision Making Basics: Hardcoded Methods Decision Theory Adversarial Search Project Discussion
5 Decision Making A situation is characterized by: Known information about the state of the world Unknown information about the state of the world Set of possible actions to execute Problem: Given a situation, which of the possible actions is the best?
6 Example: RTS Games Known information: Player data, explored terrain Unknown: Unexplored terrain Enemy strategy Actions: Build barracks Build refinery Build supply depot Wait Explore
7 Example: Final Fantasy VI Known information: Party information Two enemies Unknown: Resistances Attack power Remaining health Actions:
8 Basic RTS AI Diagram: Perception (Unit Analysis, Map Analysis) → Strategy → Give Orders (Economy, Logistics, Attack, Arbiter) → Execute Orders (Building Placer, Unit AI, Pathfinder).
9 Basic RTS AI Diagram: decision making is key at the Strategy and Give Orders levels.
10 Basic RTS AI Diagram: and Perception is most important at the high-level Strategy level.
11 Example Basic RTS AI: Strategy Finite-State Machine. State 1: resource spending 80% economy / 20% military; 2 workers on wood, 1 worker on metal; army composition 100% footmen. After training 4 footmen → State 2: resource spending 20% economy / 80% military; 2 workers on wood, 4 workers on metal; army composition 100% knights. If enemy has flying units → State 3: army composition 50% knights, 50% archers. If enemy has no more flying units → back to State 2.
12 Outline Student Presentation: Near Optimal Hierarchical Pathfinding Student Presentation: Intelligent Moving of Groups in Real-Time Strategy Games Decision Making Basics: Hardcoded Methods Decision Theory Adversarial Search Project Discussion
13 Finite State Machines. Example FSM (diagram): states Harvest Minerals, Train SCVs, Build Barracks, Train Marines, Explore, Attack Enemy. Transitions: less than 4 SCVs → Train SCVs; 4 SCVs → back to Harvest Minerals; 4 SCVs harvesting → Build Barracks; barracks built → Train Marines; 4 marines & enemy seen → Attack Enemy; 4 marines & enemy unseen → Explore; enemy seen (while exploring) → Attack Enemy.
14 Finite State Machines. Easy to implement:

switch (state) {
  case START:
    if (numSCVs < 4) state = TRAIN_SCVS;
    if (numHarvestingSCVs >= 4) state = BUILD_BARRACKS;
    Unit *scv = findIdleSCV();
    Unit *mineral = findClosestMineral(scv);
    scv->harvest(mineral);
    break;
  case TRAIN_SCVS:
    if (numSCVs >= 4) state = START;
    Unit *base = findIdleBase();
    base->train(UnitType::SCV);
    break;
  case BUILD_BARRACKS:
    ...
}
15 Basic RTS AI Diagram: the FSM from slide 13 placed in the combined Strategy / Give Orders layer, with Perception (Unit Analysis, Map Analysis) feeding it and Execute Orders (Building Placer, Unit AI, Pathfinder) below it.
16 Basic RTS AI Diagram: for simple games or simple AIs, we could substitute both the Strategy and Give Orders layers by an FSM.
17 Basic RTS AI Diagram: for more complex AIs, FSMs are too restrictive, and it's better to use the architecture as explained in Week 2 of class.
18 Finite State Machines. Good for simple AIs, but they become unmanageable for complex tasks and are hard to maintain. Example: imagine we want to add the behavior "if enemy inside base then attack him with everything we have". We would have to add a new state and transitions from every other state!
19 Finite State Machines (add a new state). The FSM from before plus a new state, Attack Inside Enemy, with an "Enemy Inside Base" transition coming into it from every other state.
20 Hierarchical Finite State Machines. An FSM inside a state of another FSM, with as many levels as needed; this can alleviate the complexity problem to some extent. Top level: Standard Strategy ↔ Attack Inside Enemy, with transitions "Enemy Inside Base" and "No Enemy Inside Base".
21 Hierarchical Finite State Machines. The Standard Strategy state contains the full FSM from slide 13 (Harvest Minerals, Train SCVs, Build Barracks, Train Marines, Explore, Attack Enemy), so a single "Enemy Inside Base" transition out of it replaces one transition per inner state.
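One common way to implement the hierarchy above is to test the high-priority condition once, before the per-state logic, so the "enemy inside base" transition does not have to be added to every inner state. A minimal sketch, assuming hypothetical state names and inputs (not from any real engine API):

```cpp
#include <cassert>

enum State { HARVEST, TRAIN_SCVS, BUILD_BARRACKS, TRAIN_MARINES,
             EXPLORE, ATTACK_ENEMY, ATTACK_INSIDE_ENEMY };

// Outer FSM: one check that overrides the whole inner "standard strategy"
// FSM, instead of one extra transition per inner state.
State update(State s, bool enemyInsideBase, int numSCVs, bool enemySeen) {
    if (enemyInsideBase) return ATTACK_INSIDE_ENEMY;  // outer transition
    if (s == ATTACK_INSIDE_ENEMY) s = HARVEST;        // back to standard strategy
    switch (s) {                                      // inner FSM (simplified)
        case HARVEST:    return numSCVs < 4 ? TRAIN_SCVS : BUILD_BARRACKS;
        case TRAIN_SCVS: return numSCVs >= 4 ? HARVEST : TRAIN_SCVS;
        default:         return enemySeen ? ATTACK_ENEMY : s;
    }
}
```

The inner transitions are deliberately abbreviated; the point is that the "Enemy Inside Base" check appears exactly once.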
22 Decision Trees. In the FSM examples before, decisions were quite simple: "if 4 SCVs then build barracks". But those conditions can easily become complex. Decision trees offer a way to encode complex decisions in an easy and organized way.
23 Example of Complex Decision. Decide when to attack the enemy in an RTS game, and what kind of units to build. We could try to define a set of rules: if we have not seen the enemy, then build ground units; if we have seen the enemy and he has no air units and we have more units than him, then attack; if we have seen the enemy and he has air units and we do not have air units, then build air units; etc. Problems: it is hard to know whether we are missing any scenario, and the conditions of the rules might grow very complex.
24 Example of Complex Decision: Decision Tree. The same decision can be easily captured in a decision tree:
Enemy seen? No → build more units. Yes → does he have air units?
  No → do we have more units than him? Yes → attack! No → build more units.
  Yes → do we have antiair units?
    No → build antiair units.
    Yes → do we have more units than him? Yes → attack! No → build more units.
25 Decision Trees. Intuitive, and they help us determine whether we are forgetting a case. Easy to implement: decision trees can be used as a paper-and-pencil technique to think about the problem, and then implemented as nested if-then-else statements. They can also be implemented in a generic way, with graphical editors given to game designers.
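As a sketch of the nested if-then-else implementation, here is the decision tree of slide 24 in code (a hypothetical free-standing function, not from any engine API):

```cpp
#include <cassert>
#include <string>

// The build/attack decision tree from slide 24, as nested if-then-else.
std::string decide(bool enemySeen, bool enemyHasAir,
                   bool weHaveAntiAir, bool weHaveMoreUnits) {
    if (!enemySeen) return "build more units";
    if (enemyHasAir && !weHaveAntiAir) return "build antiair units";
    // Either he has no air units, or we already have antiair units:
    return weHaveMoreUnits ? "attack" : "build more units";
}
```

Each path through the function corresponds to exactly one leaf of the tree, which makes missing cases easy to spot.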
26 Finite State Machines with Decision Trees. In complex FSMs, the conditions on arcs might get complex. Each state can have a decision tree (e.g. conditions C1, C2 at state S1 leading to states S2, S3) to determine which state to go to next.
27 Example Basic RTS AI: the Strategy Finite-State Machine used as an example in Week 2 of class. State 1: resource spending 80% economy / 20% military; 2 workers on wood, 1 worker on metal; army composition 100% footmen. After training 4 footmen → State 2: resource spending 20% economy / 80% military; 2 workers on wood, 4 workers on metal; army composition 100% knights. If enemy has flying units → State 3: army composition 50% knights, 50% archers. If enemy has no more flying units → back to State 2.
28 Other Approaches. Rule-based systems: not extremely common in games, but very well studied in AI (expert systems); a collection of rules plus an inference engine. Problem: hard to scale up (rules have complex interactions when there are many of them). Behavior trees: a combination of hierarchical FSMs with planning and execution; very popular in modern games (not so popular in RTS games); covered in Intro to Game AI (offered next quarter).
29 Outline Student Presentation: Near Optimal Hierarchical Pathfinding Student Presentation: Intelligent Moving of Groups in Real-Time Strategy Games Decision Making Basics: Hardcoded Methods Decision Theory Adversarial Search Project Discussion
30 Authoring AI vs Autonomous AI. FSMs, decision trees, rule-based systems, behavior trees, etc. are useful to hardcode decisions: they are game-designer tools for making the AI behave the way the designers want, and the AI will never do anything the game designers didn't foresee (except for bugs). We will now turn our attention to techniques that let the AI make decisions autonomously: the AI decides on its own, and can generate strategies not foreseen by the game designers.
31 Decision Theory. Given a situation, decide which action to perform depending on the desirability of its immediate outcome. Desirability of a situation: utility function U(s). Decision theory is based on the idea that there is such a utility function. Example utility function for Chess: score of a player = 10 points for the queen + 5 points per rook + 3 points per knight or bishop + 1 point per pawn. Utility for white: U_w(s) = Score(white) - Score(black). Utility for black: U_b(s) = Score(black) - Score(white).
32 Example Utility Function for RTS games. Similarly to chess, we can do: U(s) = Σ_t w_t (c_t^friendly - c_t^enemy), where c_t counts the units of type t and w_t weights them. Enemy unit counts must be estimated. This is oversimplified, but it is good as a first approach. It can be improved by adding: resources (minerals, gas, gold, wood, etc.), research done, territory under control.
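The weighted-sum utility above can be sketched as follows (unit-type names and weights are illustrative; the weights below are the ones used in the target-selection example later in the deck):

```cpp
#include <cassert>
#include <map>
#include <string>

// U(s) = sum over unit types t of w_t * (friendly count of t - enemy count of t)
double utility(const std::map<std::string, double>& weights,
               const std::map<std::string, int>& friendly,
               const std::map<std::string, int>& enemy) {
    double u = 0.0;
    for (const auto& [type, w] : weights) {
        int f = friendly.count(type) ? friendly.at(type) : 0;
        int e = enemy.count(type) ? enemy.at(type) : 0;
        u += w * (f - e);
    }
    return u;
}
```

For instance, with footman = 60, barracks = 400, lumber mill = 200, two friendly footmen versus one enemy footman, a barracks and a lumber mill give U = 60*(2-1) - 400 - 200 = -540.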
33 Is it Realistic to Have a Utility Function? Yes. In hardcoded approaches (FSMs, decision trees, etc.), the AI designer has to tell the AI HOW to play. A utility function only captures the goal of the game: in the simplest form, utility is simply 1 for a win, -1 for a loss, and 0 otherwise; but the more information conveyed in the utility function, the better the AI can decide what to do. With a utility function, the AI designer only has to tell the AI WHAT the goal is. It is easier to define a utility function than to hardcode a strategy, since the utility function carries less information (only WHAT, not HOW).
34 Decision Theory. Effect of an action a on the state s: Result(s,a). Since we might not know the exact state in which we are, we can only estimate the result of an action: P(Result(s,a) = s' | e), where e is the information we know about s. Example: a = attack enemy supply depot with 4 marines; e = we haven't observed any enemy unit around the supply depot; Result(s,a) = supply depot destroyed in 250 cycles, 4 marines intact. But we don't know if there were cloaked units, so we can only guess the result of a.
35 Maximum Expected Utility Principle (MEU). Select the action with the maximum expected utility: EU(a|e) = Σ_{s'} P(Result(s,a) = s' | e) U(s'). Requires: a utility function (hardcoded or machine-learned) and an estimation of action effects (hardcoded or machine-learned). The AI has to know what the actions do!
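A minimal sketch of the MEU rule: represent each action as a list of (probability, utility-of-outcome) pairs, compute each action's expected utility, and pick the maximum. The representation is an assumption made for illustration.

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// EU(a|e) = sum over outcomes s' of P(s'|a,e) * U(s')
double expectedUtility(const std::vector<std::pair<double, double>>& outcomes) {
    double eu = 0.0;
    for (const auto& [p, u] : outcomes) eu += p * u;
    return eu;
}

// MEU principle: index of the action with maximum expected utility.
std::size_t bestAction(
        const std::vector<std::vector<std::pair<double, double>>>& actions) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < actions.size(); ++i)
        if (expectedUtility(actions[i]) > expectedUtility(actions[best]))
            best = i;
    return best;
}
```

For instance, an action with outcomes {(0.5, 240), (0.5, -500)} has EU = -130, and MEU prefers it over a sure -380.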
36 Example: Target Selection. Utility function: 60 points per footman, 400 points per barracks, 200 points per lumber mill. Which action: attack enemy footman, attack enemy barracks, or attack enemy lumber mill? (Player = blue, enemy = red.)
37 Example: Target Selection. Utility function: 60 points per footman, 400 points per barracks, 200 points per lumber mill. Attack enemy footman: our 2 footmen can kill his 1 footman, leaving the enemy barracks and lumber mill standing: U(s') = 2*60 - (400 + 200) = -480. (Player = blue, enemy = red.)
38 Example: Target Selection. Utility function: 60 points per footman, 400 points per barracks, 200 points per lumber mill. Attack enemy barracks: during the time it takes to destroy the barracks, the enemy footman can kill our 2 footmen, so we lose everything while all enemy assets remain: U(s') = 0 - (60 + 400 + 200) = -660. (Player = blue, enemy = red.)
39 Example: Target Selection. Utility function: 60 points per footman, 400 points per barracks, 200 points per lumber mill. Attack enemy lumber mill: during the time it takes to destroy the lumber mill, the enemy footman can kill our 2 footmen: U(s') = 0 - (60 + 400 + 200) = -660. (Player = blue, enemy = red.)
40 Example: Target Selection. Utility function: 60 points per footman, 400 points per barracks, 200 points per lumber mill. Attack enemy footman: -480; attack enemy barracks: -660; attack enemy lumber mill: -660. The action with maximum utility is attacking the enemy footman. (Player = blue, enemy = red.)
41 Basic RTS AI Diagram. Decision theory can be useful at the Strategy, Give Orders, and Execute Orders levels. Notice that, as presented here, it only considers one action at a time. If used in the Attack module, it is more natural to have one decision-theoretic module per squad, so each can make individual decisions.
42 Value of Information. How do we know when it is worth spending resources on exploring? Value of perfect information: VPI(E) = (Σ_k P(E = e_k) EU(a_k | e, E = e_k)) - EU(a | e), where a_k is the best action once we know E = e_k and a is the best action given only e. I.e.: how much utility can we expect to gain if we knew the value of an unknown variable E (which can take k different values e_1, ..., e_k)?
43 Example. Starcraft: player force = 4 marines (60 points each); 1 enemy command center spotted (500 points); enemy defenses: unknown. Which action to perform: attack, or train more marines?
44 Example. Starcraft (as before): which action to perform? Attack: EU = 0.5 U(winning) + 0.5 U(losing) = 0.5 * (4 marines) + 0.5 * (-1 command center) = 0.5*240 - 0.5*500 = -130. Train more marines: U(s') = 6 marines - (1 command center + 2 SCVs) = -380.
45 Example. Starcraft (as before): attack has EU = -130; train more marines has U(s') = -380. If we knew there are no defenses, then for Attack: U(s') = 4 marines = 240. If we knew there are defenses, then for Attack: U(s') = -1 command center = -500. So EU(Attack | Defenses) = -500 and EU(Attack | no Defenses) = 240.
46 Example. Starcraft (as before): attack has EU = -130; train more marines has U(s') = -380. VPI(defenses) = (0.5 * EU(More Marines | Defenses) + 0.5 * EU(Attack | No Defenses)) - EU(Attack).
47 Example. Starcraft (as before): VPI(defenses) = (0.5 * (-380) + 0.5 * 240) - (-130) = -70 + 130 = 60.
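The VPI computation above can be sketched directly: average the expected utility of the best action under each possible answer to the question, then subtract the expected utility of the best action chosen without the information.

```cpp
#include <cassert>
#include <utility>
#include <vector>

// VPI(E) = sum_k P(E = e_k) * EU(best action | E = e_k)  -  EU(best action | e)
// branches: one (probability, EU of the best action under that answer) pair
// per possible value of E.
double vpi(const std::vector<std::pair<double, double>>& branches,
           double euBestWithoutInfo) {
    double withInfo = 0.0;
    for (const auto& [p, euBest] : branches) withInfo += p * euBest;
    return withInfo - euBestWithoutInfo;
}
```

With the slide's numbers, branches {(0.5, -380), (0.5, 240)} and EU(Attack) = -130 give VPI = 60, so scouting the defenses is worth up to 60 utility points here.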
48 Decision Theory. Basic principles for making rational decisions. The goal of game AI is to be fun: the utility function doesn't have to be tuned for optimal play, but for fun play. For example, the utility function may penalize too many attacks on the player in a given period of time, in order to let the player breathe. Decision theory deals only with immediate utility: no look-ahead or adversarial planning. To deal with that: adversarial search (next section).
49 Outline Student Presentation: Near Optimal Hierarchical Pathfinding Student Presentation: Intelligent Moving of Groups in Real-Time Strategy Games Decision Making Basics: Hardcoded Methods Decision Theory Adversarial Search Project Discussion
50 Adversarial Search. Decision theory is good for low-level reactive decisions (e.g. the Attack module). At a higher level (strategy), better decisions could be made if the AI could plan for future actions. E.g.: "If I attack with 4 marines, I'll be left with 2, then the enemy will overpower me with his 4 Dragoons. In that case I could lure them into the upper defenses, and annihilate them with my 8 tanks." Solution: game tree search.
51 Game Tree. Decision theory deals with immediate decisions: from the current situation, each player-1 action leads to a state evaluated by U(s).
52 Game Tree. Decision theory: pick the action that leads to the state with maximum expected utility.
53 Game Tree. Game trees capture the effects of successive action executions: after each player-1 action, each player-2 action leads to a further state evaluated by U(s).
54 Game Tree. Pick the action that leads to the state with maximum expected utility after taking into account what the other players might do.
55 Game Tree. In this example, we look ahead only one player-1 action and one player-2 action, but we could grow the tree arbitrarily deep.
56 Minimax Principle. Positive utility is good for player 1, and negative for player 2. Player 1 chooses actions that maximize U; player 2 chooses actions that minimize U. (Example tree with leaf utilities -1, 0, -1, 0, 0, 0.)
57 Minimax Principle. Player 2's nodes are minimizing nodes.
58 Minimax Principle. Each player-2 node backs up the minimum utility of its children: -1, -1, 0.
59 Minimax Principle. Player 1 then maximizes over the backed-up values and picks the action leading to value 0.
60 Minimax Algorithm
Minimax(state, player, depth):
  IF depth == 0 RETURN (U(state), null)
  BestScore = null; BestAction = null
  FOR Action in actions(player, state):
    (Score, _) = Minimax(result(Action, state), nextPlayer(player), depth - 1)
    IF BestScore == null OR (player == 1 AND Score > BestScore)
                         OR (player == 2 AND Score < BestScore):
      BestScore = Score; BestAction = Action
  RETURN (BestScore, BestAction)
61 Minimax Algorithm. Needs: a utility function U, and a way to determine which actions a player can execute in a given state. MAX_DEPTH controls how deep the search tree will be: the size of the tree is exponential in MAX_DEPTH, and the branching factor is the number of moves that can be executed per state. The higher MAX_DEPTH, the better the AI will play. There are ways to increase speed: alpha-beta pruning.
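As a concrete, runnable instance of the pseudocode, here is minimax on a toy turn-taking game (a Nim variant, chosen for illustration: a pile of stones, each move removes 1 or 2, whoever takes the last stone wins; utility is +1 if player 1 wins, -1 if player 2 wins). The game is small enough to search to the end, so no depth cutoff is needed and U is applied only at terminal states.

```cpp
#include <algorithm>
#include <cassert>

struct Result { int score; int move; };  // backed-up utility and best move

// Player 1 maximizes, player 2 minimizes.
Result minimax(int stones, int player) {
    if (stones == 0)                          // previous player took the last stone
        return { player == 1 ? -1 : +1, 0 };
    Result best{ player == 1 ? -2 : +2, 0 }; // worse than any real score
    for (int take = 1; take <= std::min(2, stones); ++take) {
        Result r = minimax(stones - take, 3 - player);
        bool better = (player == 1) ? r.score > best.score
                                    : r.score < best.score;
        if (better) best = { r.score, take };
    }
    return best;
}
```

In this game, piles that are a multiple of 3 are losses for the player to move; from 4 stones, minimax finds the winning move "take 1" (leaving the opponent a losing pile of 3).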
62 Successes of Minimax. Deep Blue defeated Kasparov in Chess (1997). Checkers was completely solved by Jonathan Schaeffer's team (2007): if neither player makes mistakes, the game is a draw (like tic-tac-toe). Go: using a variant of minimax based on Monte-Carlo search (UCT), in 2011 the program Zen19S reached 4 dan (professional humans are rated between 1 and 9 dan).
63 Game Tree Search in RTS Games. Classic minimax assumes (as in Chess, Checkers, Go): 2 players; perfect information; a turn-taking game; and that given a state and an action, we can predict the next state. It is easily generalizable to multiplayer turn-taking games (the max^n algorithm). RTS games: real-time, not turn-taking, simultaneous actions; lots of possible actions (branching factor too large!); we cannot exactly predict the next state; imperfect information.
64 Game Tree Search in RTS Games Problem: Lots of possible actions, branching factor too large! Solution:??? Problems: real-time, no turn taking, simultaneous actions Solution:???
65 Game Tree Search in RTS Games Problem: Lots of possible actions, branching factor too large! Solution: Sampling (Monte-Carlo Search) Problems: real-time, no turn taking, simultaneous actions Solution:???
66 Monte-Carlo Tree Search: UCT. Monte-Carlo search: for each possible action, play N games at random until the end, starting with that action. If N is large, the average win ratio converges to the expected utility of the action. Upper Confidence Tree (UCT) is a state-of-the-art, simple variant of Monte-Carlo search, responsible for the recent success of computer Go programs. Idea: instead of opening the whole minimax tree or playing N random games, open only the upper part of the tree, and play random games from there.
67 Minimax vs Monte-Carlo. (Diagram: a full minimax tree with U at the leaves vs. a Monte-Carlo root with complete games played below each move.)
68 Minimax vs Monte-Carlo. Minimax opens the complete tree (all possible moves) up to a fixed depth; then the utility function is applied to the leaves.
69 Minimax vs Monte-Carlo. Monte-Carlo search runs, for each possible move at the root node, a fixed number K of random complete games. No need for a utility function (but one can be used).
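A minimal Monte-Carlo evaluation sketch, using the same kind of toy game as before for illustration (a pile of stones, remove 1 or 2, last taker wins): for each candidate first move, play N games at random to the end and average the result for player 1.

```cpp
#include <algorithm>
#include <cassert>
#include <random>

// Play one game to the end with both players moving uniformly at random.
// Returns +1 if player 1 wins, -1 if player 2 wins.
double randomPlayout(int stones, int playerToMove, std::mt19937& rng) {
    while (stones > 0) {
        std::uniform_int_distribution<int> d(1, std::min(2, stones));
        stones -= d(rng);
        playerToMove = 3 - playerToMove;
    }
    // The player who just took the last stone is 3 - playerToMove.
    return playerToMove == 1 ? -1.0 : +1.0;
}

// Monte-Carlo estimate of a first move for player 1: average of n playouts.
double evaluateMove(int stones, int take, int n, std::mt19937& rng) {
    double total = 0.0;
    for (int i = 0; i < n; ++i)
        total += randomPlayout(stones - take, 2, rng);  // opponent moves next
    return total / n;
}
```

With large N the averages converge to the expected utility of each move under random play; from 4 stones, taking 1 (leaving the opponent a losing pile of 3) scores noticeably better than taking 2.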
70 UCT Tree Search. (Diagram: tree root labeled 0/0 over the current state, with Monte-Carlo search below.) w/t is the count of how many games starting from this state have been found to be won, out of the total games explored in the current search.
71 UCT Tree Search. (Diagram: root updated to 1/1 after a random game ending in a win.)
72 UCT Tree Search. (Diagram: root 1/2 with a child 0/1 after a random game ending in a loss.) At each iteration, one node of the tree (upper part) is selected and expanded (one node added to the tree); from this new node a complete game is played out at random (Monte-Carlo).
73 UCT Tree Search. (Diagram: root 2/3, children 1/1 and 0/1 after another win.)
74 UCT Tree Search. (Diagram: root 3/4, children 2/2 and 0/1, grandchild 1/1 after another win.) The counts w/t are used to determine which nodes to explore next. Exploration/exploitation: 50% of the time expand the best node in the tree, 50% expand a node at random.
75 UCT Tree Search. (Diagram: root 3/5, children 2/3 and 0/1, grandchildren 1/1 and 0/1 after a loss.) The tree ensures all relevant actions are explored, greatly alleviating the randomness that affects Monte-Carlo methods.
76 UCT Tree Search. The random games played from each node of the tree serve to estimate the utility function. They can be purely random, or use an opponent model (if available).
77 UCT. After a fixed number of iterations K (or after the assigned time is over), UCT analyzes the resulting tree, and the selected action is the one with the highest win ratio. UCT can search games with much larger state spaces than minimax. It is the standard algorithm for modern (2008 to present) Go-playing programs.
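The 50/50 exploration/exploitation rule described on slide 74 can be sketched as follows. This is a simplification of full UCT (which typically uses the UCB1 formula rather than a coin flip); the node structure and names are illustrative.

```cpp
#include <cassert>
#include <cstddef>
#include <random>
#include <vector>

struct Node { int wins; int visits; };  // the w/t counts from the slides

double winRatio(const Node& n) {
    return n.visits > 0 ? double(n.wins) / n.visits : 0.0;
}

// 50%: expand the node with the best win ratio; 50%: a node at random.
std::size_t selectNode(const std::vector<Node>& nodes, std::mt19937& rng) {
    if (std::bernoulli_distribution(0.5)(rng)) {
        std::size_t best = 0;
        for (std::size_t i = 1; i < nodes.size(); ++i)
            if (winRatio(nodes[i]) > winRatio(nodes[best])) best = i;
        return best;
    }
    return std::uniform_int_distribution<std::size_t>(0, nodes.size() - 1)(rng);
}
```

The random half of the rule is what keeps low-ratio nodes from being starved; UCB1 achieves the same effect deterministically by adding an exploration bonus that shrinks with visit count.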
78 Game Tree Search in RTS Games Problem: Lots of possible actions, branching factor too large! Solution: Sampling (Monte-Carlo Search) Problems: real-time, no turn taking, simultaneous actions Solution: Strategy simulation, rather than turn-based action taking
79 Strategy Simulation: Example. Assume we want to use UCT for the Strategy module of an RTS game AI (the Strategy layer of the Basic RTS AI Diagram).
80 Strategy Simulation: Example. Assume we want to use UCT for the Strategy module of an RTS game AI. Define a collection of high-level actions (or strategies) that make sense for the game. For example, in S3: S1: attack with the units we have; S2: train 4 footmen; S3: train 4 archers; S4: train 4 catapults; S5: train 4 knights; S6: build 2 defense towers; S7: build 2 defense towers around a gold mine; S8: build 2 defense towers around a group of trees; S9: bring units back to the base; S10: train 2 more peasants to gather resources.
81 Strategy Simulation: Example. Instead of taking turns executing actions, we assign a strategy to each player and simulate it until completion. Standard minimax alternates: Player 1, Action 1 → Player 2, Action 2 → Player 1, Action 3 → ... Strategy simulation: e.g. Player 1: S2 (ETA 240) and Player 2: S3 (ETA 400) run simultaneously; when one finishes, that player picks a new strategy, branching the tree (Player 1: S1 (ETA 400) vs Player 2: S3 (ETA 160); Player 1: S1 (ETA 240) vs Player 2: S1 (ETA 400); ...).
82 Strategy Simulation. Requires: a way to simulate strategies, typically with a very simplified model (e.g. battles decided just by who has more units, or by the added damage of the units, taking air/ground units into account; no pathfinding, etc.), and an abstracted version of the game (e.g. divide the map into regions and just count the number of units of each type in each region). Utility function (optional): if available, there is no need to simulate games to the end when using Monte-Carlo; if not available, simply simulate games to the end.
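A strategy simulator only needs a very coarse world model. As suggested above, battles can be decided simply by total damage output; a hypothetical sketch (the army representation and survivor rule are illustrative assumptions, not from the slides):

```cpp
#include <cassert>

// Abstracted army: just counts and per-unit damage, no pathfinding.
struct Army { int units; double damagePerUnit; };

// Simplified battle model: the side with more total damage wins and keeps
// survivors in proportion to its advantage; a tie destroys both armies.
Army resolveBattle(const Army& a, const Army& b, bool& aWins) {
    double da = a.units * a.damagePerUnit;
    double db = b.units * b.damagePerUnit;
    aWins = da > db;
    const Army& winner = aWins ? a : b;
    double winnerPower = aWins ? da : db;
    double loserPower  = aWins ? db : da;
    int survivors = winnerPower > 0
        ? int(winner.units * (winnerPower - loserPower) / winnerPower) : 0;
    return { survivors, winner.damagePerUnit };
}
```

A model this crude is obviously wrong in detail, but it is cheap enough to run inside thousands of Monte-Carlo playouts, which is what strategy simulation needs.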
83 UCT for RTS Games. Applicable to: Strategy (previous example); Attack, where the high-level actions are things like "attack enemy X", "retreat", etc.; Economy. In turn-based games, minimax is executed each turn. In RTS games: execute every K cycles (e.g. once per second), or once the current action has finished, or when an important event happens (e.g. new enemy sighted). State of the art: no current commercial games use it, but research in experimental games shows its potential.
84 Overview of Decision Making. Hardcoded (FSMs / decision trees): good for simple AIs and for game designers to author the exact behavior they want the AI to have; the AI author needs to decide HOW the AI plays. Autonomous: decision theory is good for reactive behavior (e.g. unit/squad control, the Attack module); adversarial search is good for high-level strategy; the AI author only needs to decide WHAT the AI should accomplish, and the AI will figure out HOW automatically.
85 RTS Game AI Overview. Week 2: how to create a basic RTS AI; a hierarchical system: Strategy (FSM), Give Orders (modules for Economy, Logistics, Attack), Pathfinding, Building Placer. Week 3: pathfinding; A* or TBA*/LRTA* for small games, D* Lite for larger games with very dynamic environments. Week 4: decision making; FSMs / decision trees for hardcoding AI behavior (useful for simple games where the game designers can control what the AI does); decision theory / minimax to let the AI decide what to do in order to maximize a utility function.
86 RTS Game AI Overview. (Diagram: Perception (Unit Analysis, Map Analysis) → Strategy → Give Orders (Economy, Logistics, Attack, Arbiter) → Execute Orders (Building Placer, Unit AI, Pathfinder).)
87 Outline Student Presentation: Near Optimal Hierarchical Pathfinding Student Presentation: Intelligent Moving of Groups in Real-Time Strategy Games Decision Making Basics: Hardcoded Methods Decision Theory Adversarial Search Project Discussion
88 Project 1: RTS Games Issues with Starcraft / BWAPI? Issues with S3? Questions about Pathfinding? Options for Project 1 (pick one): Pathfinding: TBA*, LRTA*, D* Lite Decision Making: Utility Function for High-level strategy or attack. Game Tree Search (minimax, Monte Carlo, or UCT) for strategy or attack Other (with permission from instructor)
89 Project 2: Drama Management IFGameEngine demo
More informationMinimax Trees: Utility Evaluation, Tree Evaluation, Pruning
Minimax Trees: Utility Evaluation, Tree Evaluation, Pruning CSCE 315 Programming Studio Fall 2017 Project 2, Lecture 2 Adapted from slides of Yoonsuck Choe, John Keyser Two-Person Perfect Information Deterministic
More informationGame Playing. Philipp Koehn. 29 September 2015
Game Playing Philipp Koehn 29 September 2015 Outline 1 Games Perfect play minimax decisions α β pruning Resource limits and approximate evaluation Games of chance Games of imperfect information 2 games
More informationCS 188: Artificial Intelligence
CS 188: Artificial Intelligence Adversarial Search Instructor: Stuart Russell University of California, Berkeley Game Playing State-of-the-Art Checkers: 1950: First computer player. 1959: Samuel s self-taught
More informationAr#ficial)Intelligence!!
Introduc*on! Ar#ficial)Intelligence!! Roman Barták Department of Theoretical Computer Science and Mathematical Logic So far we assumed a single-agent environment, but what if there are more agents and
More informationGame Playing: Adversarial Search. Chapter 5
Game Playing: Adversarial Search Chapter 5 Outline Games Perfect play minimax search α β pruning Resource limits and approximate evaluation Games of chance Games of imperfect information Games vs. Search
More informationArtificial Intelligence
Artificial Intelligence Adversarial Search Vibhav Gogate The University of Texas at Dallas Some material courtesy of Rina Dechter, Alex Ihler and Stuart Russell, Luke Zettlemoyer, Dan Weld Adversarial
More informationArtificial Intelligence. Minimax and alpha-beta pruning
Artificial Intelligence Minimax and alpha-beta pruning In which we examine the problems that arise when we try to plan ahead to get the best result in a world that includes a hostile agent (other agent
More informationCS 188: Artificial Intelligence
CS 188: Artificial Intelligence Adversarial Search Prof. Scott Niekum The University of Texas at Austin [These slides are based on those of Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley.
More informationCS 188: Artificial Intelligence Spring Announcements
CS 188: Artificial Intelligence Spring 2011 Lecture 7: Minimax and Alpha-Beta Search 2/9/2011 Pieter Abbeel UC Berkeley Many slides adapted from Dan Klein 1 Announcements W1 out and due Monday 4:59pm P2
More informationARTIFICIAL INTELLIGENCE (CS 370D)
Princess Nora University Faculty of Computer & Information Systems ARTIFICIAL INTELLIGENCE (CS 370D) (CHAPTER-5) ADVERSARIAL SEARCH ADVERSARIAL SEARCH Optimal decisions Min algorithm α-β pruning Imperfect,
More informationAdversarial Search. CS 486/686: Introduction to Artificial Intelligence
Adversarial Search CS 486/686: Introduction to Artificial Intelligence 1 AccessAbility Services Volunteer Notetaker Required Interested? Complete an online application using your WATIAM: https://york.accessiblelearning.com/uwaterloo/
More informationCS 387/680: GAME AI TACTIC AND STRATEGY
CS 387/680: GAME AI TACTIC AND STRATEGY 5/12/2014 Instructor: Santiago Ontañón santi@cs.drexel.edu TA: Alberto Uriarte office hours: Tuesday 4-6pm, Cyber Learning Center Class website: https://www.cs.drexel.edu/~santi/teaching/2014/cs387-680/intro.html
More informationComputer Science and Software Engineering University of Wisconsin - Platteville. 4. Game Play. CS 3030 Lecture Notes Yan Shi UW-Platteville
Computer Science and Software Engineering University of Wisconsin - Platteville 4. Game Play CS 3030 Lecture Notes Yan Shi UW-Platteville Read: Textbook Chapter 6 What kind of games? 2-player games Zero-sum
More informationAdversarial Search. Human-aware Robotics. 2018/01/25 Chapter 5 in R&N 3rd Ø Announcement: Slides for this lecture are here:
Adversarial Search 2018/01/25 Chapter 5 in R&N 3rd Ø Announcement: q Slides for this lecture are here: http://www.public.asu.edu/~yzhan442/teaching/cse471/lectures/adversarial.pdf Slides are largely based
More informationMore on games (Ch )
More on games (Ch. 5.4-5.6) Announcements Midterm next Tuesday: covers weeks 1-4 (Chapters 1-4) Take the full class period Open book/notes (can use ebook) ^^ No programing/code, internet searches or friends
More informationCPS 570: Artificial Intelligence Two-player, zero-sum, perfect-information Games
CPS 57: Artificial Intelligence Two-player, zero-sum, perfect-information Games Instructor: Vincent Conitzer Game playing Rich tradition of creating game-playing programs in AI Many similarities to search
More informationAnnouncements. Homework 1. Project 1. Due tonight at 11:59pm. Due Friday 2/8 at 4:00pm. Electronic HW1 Written HW1
Announcements Homework 1 Due tonight at 11:59pm Project 1 Electronic HW1 Written HW1 Due Friday 2/8 at 4:00pm CS 188: Artificial Intelligence Adversarial Search and Game Trees Instructors: Sergey Levine
More informationAdversarial Search Lecture 7
Lecture 7 How can we use search to plan ahead when other agents are planning against us? 1 Agenda Games: context, history Searching via Minimax Scaling α β pruning Depth-limiting Evaluation functions Handling
More informationLecture 5: Game Playing (Adversarial Search)
Lecture 5: Game Playing (Adversarial Search) CS 580 (001) - Spring 2018 Amarda Shehu Department of Computer Science George Mason University, Fairfax, VA, USA February 21, 2018 Amarda Shehu (580) 1 1 Outline
More informationCS 380: ARTIFICIAL INTELLIGENCE ADVERSARIAL SEARCH. Santiago Ontañón
CS 380: ARTIFICIAL INTELLIGENCE ADVERSARIAL SEARCH Santiago Ontañón so367@drexel.edu Recall: Problem Solving Idea: represent the problem we want to solve as: State space Actions Goal check Cost function
More informationGame playing. Outline
Game playing Chapter 6, Sections 1 8 CS 480 Outline Perfect play Resource limits α β pruning Games of chance Games of imperfect information Games vs. search problems Unpredictable opponent solution is
More informationGame-playing AIs: Games and Adversarial Search FINAL SET (w/ pruning study examples) AIMA
Game-playing AIs: Games and Adversarial Search FINAL SET (w/ pruning study examples) AIMA 5.1-5.2 Games: Outline of Unit Part I: Games as Search Motivation Game-playing AI successes Game Trees Evaluation
More informationCS 331: Artificial Intelligence Adversarial Search II. Outline
CS 331: Artificial Intelligence Adversarial Search II 1 Outline 1. Evaluation Functions 2. State-of-the-art game playing programs 3. 2 player zero-sum finite stochastic games of perfect information 2 1
More informationCS510 \ Lecture Ariel Stolerman
CS510 \ Lecture04 2012-10-15 1 Ariel Stolerman Administration Assignment 2: just a programming assignment. Midterm: posted by next week (5), will cover: o Lectures o Readings A midterm review sheet will
More informationCPS331 Lecture: Search in Games last revised 2/16/10
CPS331 Lecture: Search in Games last revised 2/16/10 Objectives: 1. To introduce mini-max search 2. To introduce the use of static evaluation functions 3. To introduce alpha-beta pruning Materials: 1.
More informationMonte Carlo Tree Search
Monte Carlo Tree Search 1 By the end, you will know Why we use Monte Carlo Search Trees The pros and cons of MCTS How it is applied to Super Mario Brothers and Alpha Go 2 Outline I. Pre-MCTS Algorithms
More informationMore on games (Ch )
More on games (Ch. 5.4-5.6) Alpha-beta pruning Previously on CSci 4511... We talked about how to modify the minimax algorithm to prune only bad searches (i.e. alpha-beta pruning) This rule of checking
More informationCOMP219: COMP219: Artificial Intelligence Artificial Intelligence Dr. Annabel Latham Lecture 12: Game Playing Overview Games and Search
COMP19: Artificial Intelligence COMP19: Artificial Intelligence Dr. Annabel Latham Room.05 Ashton Building Department of Computer Science University of Liverpool Lecture 1: Game Playing 1 Overview Last
More informationarxiv: v1 [cs.ai] 9 Aug 2012
Experiments with Game Tree Search in Real-Time Strategy Games Santiago Ontañón Computer Science Department Drexel University Philadelphia, PA, USA 19104 santi@cs.drexel.edu arxiv:1208.1940v1 [cs.ai] 9
More informationAdversarial Search Aka Games
Adversarial Search Aka Games Chapter 5 Some material adopted from notes by Charles R. Dyer, U of Wisconsin-Madison Overview Game playing State of the art and resources Framework Game trees Minimax Alpha-beta
More informationCSE 573: Artificial Intelligence
CSE 573: Artificial Intelligence Adversarial Search Dan Weld Based on slides from Dan Klein, Stuart Russell, Pieter Abbeel, Andrew Moore and Luke Zettlemoyer (best illustrations from ai.berkeley.edu) 1
More informationGame Playing AI. Dr. Baldassano Yu s Elite Education
Game Playing AI Dr. Baldassano chrisb@princeton.edu Yu s Elite Education Last 2 weeks recap: Graphs Graphs represent pairwise relationships Directed/undirected, weighted/unweights Common algorithms: Shortest
More informationLecture 14. Questions? Friday, February 10 CS 430 Artificial Intelligence - Lecture 14 1
Lecture 14 Questions? Friday, February 10 CS 430 Artificial Intelligence - Lecture 14 1 Outline Chapter 5 - Adversarial Search Alpha-Beta Pruning Imperfect Real-Time Decisions Stochastic Games Friday,
More informationAnnouncements. CS 188: Artificial Intelligence Spring Game Playing State-of-the-Art. Overview. Game Playing. GamesCrafters
CS 188: Artificial Intelligence Spring 2011 Announcements W1 out and due Monday 4:59pm P2 out and due next week Friday 4:59pm Lecture 7: Mini and Alpha-Beta Search 2/9/2011 Pieter Abbeel UC Berkeley Many
More informationMore Adversarial Search
More Adversarial Search CS151 David Kauchak Fall 2010 http://xkcd.com/761/ Some material borrowed from : Sara Owsley Sood and others Admin Written 2 posted Machine requirements for mancala Most of the
More informationGame Playing State of the Art
Game Playing State of the Art Checkers: Chinook ended 40 year reign of human world champion Marion Tinsley in 1994. Used an endgame database defining perfect play for all positions involving 8 or fewer
More informationProgramming Project 1: Pacman (Due )
Programming Project 1: Pacman (Due 8.2.18) Registration to the exams 521495A: Artificial Intelligence Adversarial Search (Min-Max) Lectured by Abdenour Hadid Adjunct Professor, CMVS, University of Oulu
More informationAdversarial Search: Game Playing. Reading: Chapter
Adversarial Search: Game Playing Reading: Chapter 6.5-6.8 1 Games and AI Easy to represent, abstract, precise rules One of the first tasks undertaken by AI (since 1950) Better than humans in Othello and
More informationCSE 473: Artificial Intelligence. Outline
CSE 473: Artificial Intelligence Adversarial Search Dan Weld Based on slides from Dan Klein, Stuart Russell, Pieter Abbeel, Andrew Moore and Luke Zettlemoyer (best illustrations from ai.berkeley.edu) 1
More informationCSE 473: Artificial Intelligence Fall Outline. Types of Games. Deterministic Games. Previously: Single-Agent Trees. Previously: Value of a State
CSE 473: Artificial Intelligence Fall 2014 Adversarial Search Dan Weld Outline Adversarial Search Minimax search α-β search Evaluation functions Expectimax Reminder: Project 1 due Today Based on slides
More informationOutline. Game Playing. Game Problems. Game Problems. Types of games Playing a perfect game. Playing an imperfect game
Outline Game Playing ECE457 Applied Artificial Intelligence Fall 2007 Lecture #5 Types of games Playing a perfect game Minimax search Alpha-beta pruning Playing an imperfect game Real-time Imperfect information
More informationCOMP219: Artificial Intelligence. Lecture 13: Game Playing
CMP219: Artificial Intelligence Lecture 13: Game Playing 1 verview Last time Search with partial/no observations Belief states Incremental belief state search Determinism vs non-determinism Today We will
More informationADVERSARIAL SEARCH. Chapter 5
ADVERSARIAL SEARCH Chapter 5... every game of skill is susceptible of being played by an automaton. from Charles Babbage, The Life of a Philosopher, 1832. Outline Games Perfect play minimax decisions α
More informationCS 188: Artificial Intelligence. Overview
CS 188: Artificial Intelligence Lecture 6 and 7: Search for Games Pieter Abbeel UC Berkeley Many slides adapted from Dan Klein 1 Overview Deterministic zero-sum games Minimax Limited depth and evaluation
More informationCS 480: GAME AI INTRODUCTION TO GAME AI. 4/3/2012 Santiago Ontañón https://www.cs.drexel.edu/~santi/teaching/2012/cs480/intro.
CS 480: GAME AI INTRODUCTION TO GAME AI 4/3/2012 Santiago Ontañón santi@cs.drexel.edu https://www.cs.drexel.edu/~santi/teaching/2012/cs480/intro.html CS 480 Focus: artificial intelligence techniques for
More informationV. Adamchik Data Structures. Game Trees. Lecture 1. Apr. 05, Plan: 1. Introduction. 2. Game of NIM. 3. Minimax
Game Trees Lecture 1 Apr. 05, 2005 Plan: 1. Introduction 2. Game of NIM 3. Minimax V. Adamchik 2 ü Introduction The search problems we have studied so far assume that the situation is not going to change.
More informationAdversarial Search and Game- Playing C H A P T E R 6 C M P T : S P R I N G H A S S A N K H O S R A V I
Adversarial Search and Game- Playing C H A P T E R 6 C M P T 3 1 0 : S P R I N G 2 0 1 1 H A S S A N K H O S R A V I Adversarial Search Examine the problems that arise when we try to plan ahead in a world
More informationGame playing. Chapter 6. Chapter 6 1
Game playing Chapter 6 Chapter 6 1 Outline Games Perfect play minimax decisions α β pruning Resource limits and approximate evaluation Games of chance Games of imperfect information Chapter 6 2 Games vs.
More informationAdversarial Search. Rob Platt Northeastern University. Some images and slides are used from: AIMA CS188 UC Berkeley
Adversarial Search Rob Platt Northeastern University Some images and slides are used from: AIMA CS188 UC Berkeley What is adversarial search? Adversarial search: planning used to play a game such as chess
More informationINTRODUCTION TO GAME AI
CS 387: GAME AI INTRODUCTION TO GAME AI 3/31/2016 Instructor: Santiago Ontañón santi@cs.drexel.edu Class website: https://www.cs.drexel.edu/~santi/teaching/2016/cs387/intro.html Outline Game Engines Perception
More informationCS 771 Artificial Intelligence. Adversarial Search
CS 771 Artificial Intelligence Adversarial Search Typical assumptions Two agents whose actions alternate Utility values for each agent are the opposite of the other This creates the adversarial situation
More informationgame tree complete all possible moves
Game Trees Game Tree A game tree is a tree the nodes of which are positions in a game and edges are moves. The complete game tree for a game is the game tree starting at the initial position and containing
More informationAdversarial Search 1
Adversarial Search 1 Adversarial Search The ghosts trying to make pacman loose Can not come up with a giant program that plans to the end, because of the ghosts and their actions Goal: Eat lots of dots
More informationGoogle DeepMind s AlphaGo vs. world Go champion Lee Sedol
Google DeepMind s AlphaGo vs. world Go champion Lee Sedol Review of Nature paper: Mastering the game of Go with Deep Neural Networks & Tree Search Tapani Raiko Thanks to Antti Tarvainen for some slides
More informationComputing Science (CMPUT) 496
Computing Science (CMPUT) 496 Search, Knowledge, and Simulations Martin Müller Department of Computing Science University of Alberta mmueller@ualberta.ca Winter 2017 Part IV Knowledge 496 Today - Mar 9
More informationArtificial Intelligence. 4. Game Playing. Prof. Bojana Dalbelo Bašić Assoc. Prof. Jan Šnajder
Artificial Intelligence 4. Game Playing Prof. Bojana Dalbelo Bašić Assoc. Prof. Jan Šnajder University of Zagreb Faculty of Electrical Engineering and Computing Academic Year 2017/2018 Creative Commons
More informationGame Playing State-of-the-Art CSE 473: Artificial Intelligence Fall Deterministic Games. Zero-Sum Games 10/13/17. Adversarial Search
CSE 473: Artificial Intelligence Fall 2017 Adversarial Search Mini, pruning, Expecti Dieter Fox Based on slides adapted Luke Zettlemoyer, Dan Klein, Pieter Abbeel, Dan Weld, Stuart Russell or Andrew Moore
More informationSet 4: Game-Playing. ICS 271 Fall 2017 Kalev Kask
Set 4: Game-Playing ICS 271 Fall 2017 Kalev Kask Overview Computer programs that play 2-player games game-playing as search with the complication of an opponent General principles of game-playing and search
More informationGames vs. search problems. Game playing Chapter 6. Outline. Game tree (2-player, deterministic, turns) Types of games. Minimax
Game playing Chapter 6 perfect information imperfect information Types of games deterministic chess, checkers, go, othello battleships, blind tictactoe chance backgammon monopoly bridge, poker, scrabble
More informationEvolutionary Computation for Creativity and Intelligence. By Darwin Johnson, Alice Quintanilla, and Isabel Tweraser
Evolutionary Computation for Creativity and Intelligence By Darwin Johnson, Alice Quintanilla, and Isabel Tweraser Introduction to NEAT Stands for NeuroEvolution of Augmenting Topologies (NEAT) Evolves
More informationCS 4700: Foundations of Artificial Intelligence
CS 4700: Foundations of Artificial Intelligence selman@cs.cornell.edu Module: Adversarial Search R&N: Chapter 5 1 Outline Adversarial Search Optimal decisions Minimax α-β pruning Case study: Deep Blue
More informationAlgorithms for Data Structures: Search for Games. Phillip Smith 27/11/13
Algorithms for Data Structures: Search for Games Phillip Smith 27/11/13 Search for Games Following this lecture you should be able to: Understand the search process in games How an AI decides on the best
More informationModule 3. Problem Solving using Search- (Two agent) Version 2 CSE IIT, Kharagpur
Module 3 Problem Solving using Search- (Two agent) 3.1 Instructional Objective The students should understand the formulation of multi-agent search and in detail two-agent search. Students should b familiar
More informationFoundations of AI. 6. Adversarial Search. Search Strategies for Games, Games with Chance, State of the Art. Wolfram Burgard & Bernhard Nebel
Foundations of AI 6. Adversarial Search Search Strategies for Games, Games with Chance, State of the Art Wolfram Burgard & Bernhard Nebel Contents Game Theory Board Games Minimax Search Alpha-Beta Search
More informationGame playing. Chapter 6. Chapter 6 1
Game playing Chapter 6 Chapter 6 1 Outline Games Perfect play minimax decisions α β pruning Resource limits and approximate evaluation Games of chance Games of imperfect information Chapter 6 2 Games vs.
More informationGame Playing. Dr. Richard J. Povinelli. Page 1. rev 1.1, 9/14/2003
Game Playing Dr. Richard J. Povinelli rev 1.1, 9/14/2003 Page 1 Objectives You should be able to provide a definition of a game. be able to evaluate, compare, and implement the minmax and alpha-beta algorithms,
More informationMonte Carlo tree search techniques in the game of Kriegspiel
Monte Carlo tree search techniques in the game of Kriegspiel Paolo Ciancarini and Gian Piero Favini University of Bologna, Italy 22 IJCAI, Pasadena, July 2009 Agenda Kriegspiel as a partial information
More informationToday. Types of Game. Games and Search 1/18/2010. COMP210: Artificial Intelligence. Lecture 10. Game playing
COMP10: Artificial Intelligence Lecture 10. Game playing Trevor Bench-Capon Room 15, Ashton Building Today We will look at how search can be applied to playing games Types of Games Perfect play minimax
More informationArtificial Intelligence. Topic 5. Game playing
Artificial Intelligence Topic 5 Game playing broadening our world view dealing with incompleteness why play games? perfect decisions the Minimax algorithm dealing with resource limits evaluation functions
More informationCOMP3211 Project. Artificial Intelligence for Tron game. Group 7. Chiu Ka Wa ( ) Chun Wai Wong ( ) Ku Chun Kit ( )
COMP3211 Project Artificial Intelligence for Tron game Group 7 Chiu Ka Wa (20369737) Chun Wai Wong (20265022) Ku Chun Kit (20123470) Abstract Tron is an old and popular game based on a movie of the same
More informationFive-In-Row with Local Evaluation and Beam Search
Five-In-Row with Local Evaluation and Beam Search Jiun-Hung Chen and Adrienne X. Wang jhchen@cs axwang@cs Abstract This report provides a brief overview of the game of five-in-row, also known as Go-Moku,
More informationAdversarial Search. Soleymani. Artificial Intelligence: A Modern Approach, 3 rd Edition, Chapter 5
Adversarial Search CE417: Introduction to Artificial Intelligence Sharif University of Technology Spring 2017 Soleymani Artificial Intelligence: A Modern Approach, 3 rd Edition, Chapter 5 Outline Game
More informationFoundations of AI. 6. Board Games. Search Strategies for Games, Games with Chance, State of the Art
Foundations of AI 6. Board Games Search Strategies for Games, Games with Chance, State of the Art Wolfram Burgard, Andreas Karwath, Bernhard Nebel, and Martin Riedmiller SA-1 Contents Board Games Minimax
More informationAdversarial Search. Robert Platt Northeastern University. Some images and slides are used from: 1. CS188 UC Berkeley 2. RN, AIMA
Adversarial Search Robert Platt Northeastern University Some images and slides are used from: 1. CS188 UC Berkeley 2. RN, AIMA What is adversarial search? Adversarial search: planning used to play a game
More informationAdversarial Search. Hal Daumé III. Computer Science University of Maryland CS 421: Introduction to Artificial Intelligence 9 Feb 2012
1 Hal Daumé III (me@hal3.name) Adversarial Search Hal Daumé III Computer Science University of Maryland me@hal3.name CS 421: Introduction to Artificial Intelligence 9 Feb 2012 Many slides courtesy of Dan
More informationArtificial Intelligence Search III
Artificial Intelligence Search III Lecture 5 Content: Search III Quick Review on Lecture 4 Why Study Games? Game Playing as Search Special Characteristics of Game Playing Search Ingredients of 2-Person
More information