the gamedesigninitiative at cornell university Lecture 23 Strategic AI


Role of AI in Games
- Autonomous Characters (NPCs): mimic the personality of a character; may be opponents or support characters
- Strategic Opponents: AI at the player level; closest to classical AI
- Character Dialog: intelligent commentary; narrative management (e.g. Façade)

Rule-Based AI
If X is true, then do Y. Three-step process:
- Match: for each rule, check if its condition holds in the current state; return all matching rules
- Resolve conflicts: only one rule can fire, so use a metarule to pick one
- Act: do the then-part of the selected rule, producing the updated state
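
The match/resolve/act loop above can be sketched in a few lines. This is an illustrative sketch, not code from the lecture: the rule format, the priority-number metarule, and the state dictionary are all assumptions.

```python
def run_rules(state, rules):
    # Match: collect every rule whose if-part holds in the current state.
    matches = [r for r in rules if r["if"](state)]
    if not matches:
        return state
    # Resolve: a metarule picks exactly one rule; here, lowest priority number wins.
    rule = min(matches, key=lambda r: r["priority"])
    # Act: do the then-part, producing the updated state.
    return rule["then"](state)

# Two toy rules: attack on sight, otherwise retreat when hurt.
rules = [
    {"priority": 1, "if": lambda s: s["enemy_visible"],
     "then": lambda s: {**s, "action": "attack"}},
    {"priority": 2, "if": lambda s: s["health"] < 30,
     "then": lambda s: {**s, "action": "retreat"}},
]

print(run_rules({"enemy_visible": True, "health": 20}, rules)["action"])  # attack
```

Both rules match in that state; the metarule resolves the conflict in favor of priority 1.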

Example: Tic-Tac-Toe
Next move for player O?
- If you have a winning move, make it
- If the opponent can win, block it
- Take the center if available
- Corners are better than edges
Very easy to program: just check the board state. The tricky part is prioritization.
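
Those priorities are easy to express directly in code. A minimal sketch for player O, assuming a nine-cell list board ('X', 'O', or ' '); the helper names are made up.

```python
# Every winning line on a 3x3 board, as index triples.
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winning_move(board, player):
    # Return a cell that completes a line of three for `player`, if any.
    for a, b, c in LINES:
        cells = [board[a], board[b], board[c]]
        if cells.count(player) == 2 and cells.count(' ') == 1:
            return (a, b, c)[cells.index(' ')]
    return None

def next_move(board):
    # The rules, in priority order: win, block, center, corner, edge.
    move = winning_move(board, 'O')           # 1. take a winning move
    if move is None:
        move = winning_move(board, 'X')       # 2. block the opponent
    if move is None and board[4] == ' ':
        move = 4                              # 3. take the center
    if move is None:
        for i in (0, 2, 6, 8):                # 4. corners over edges
            if board[i] == ' ':
                move = i
                break
    if move is None:
        for i in (1, 3, 5, 7):                # 5. any remaining edge
            if board[i] == ' ':
                move = i
                break
    return move

print(next_move([' '] * 9))  # empty board: take the center (4)
```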

Example: Real-Time Strategy
Example from Microsoft's Age of Kings:

; The AI will attack once at 1100 seconds and then again
; every 1400 sec, provided it has enough defense soldiers.
(defrule
  (game-time > 1100)
  =>
  (attack-now)
  (enable-timer 7 1100)
)
(defrule
  (timer-triggered 7)
  (defend-soldier-count >= 12)
  =>
  (attack-now)
  (disable-timer 7)
  (enable-timer 7 1400)
)

The Problems with Rules
Rules only do one step: the move may not be the best one, and could lose long term.
Next move for player O?
- If O can win, then do it
- If X can win, then block it
- Take the center if possible
- Corners > edges
Need to look ahead.

Multiple Steps: Planning
Plan: the actions necessary to reach a goal
- A goal is a (pseudo) specific game state
- Actions change the game state (e.g. verbs)
Planning: the steps to generate a plan
- Initial state: the state the game is currently in
- Goal test: determines if a state meets the goal
- Operators: actions the NPC can perform

What Should We Do? (Slide courtesy of John Laird)
[Screenshot: an agent choosing among Pickup?, Shoot?, and Pickup?]

Simplification: No Opponent
- Identify the desired goal (ex: kill enemy, get gold) and design an appropriate test
- List all relevant actions (ex: build, send troops)
- Look-ahead search (tree search):
  - Start with the initial state
  - Try all actions (look-ahead), producing later states
  - Stop if the goal is reached; continue if not

Planning Issues
- Exponential choices: searching action sequences; how far are we searching? Cannot do this in real life!
- Game state is complex: do we look at the entire state? Faster to do than to plan
- Must limit search: reduce the actions examined; simplify the game state

Internal State Representation
Simplified world model:
- Includes primary resources (example: ammo, health)
- Rough notion of position (example: in/outside a room), for both characters and items
- Game mechanic details (example: respawn rate); allows tactical decisions
Uses of internal state:
- Notice changes: health is dropping, so an enemy must be nearby
- Remember recent events: the enemy has left the room; chase after a fleeing enemy
- Remember older events: picked up health 30 seconds ago

Internal State and Memory
Each NPC has its own state:
- Represents the NPC's memory
- Might not be consistent?
Useful for character AI:
- Models sensory data
- Models communication
Isolates planning:
- Each NPC plans separately
- Coordinate planning with a strategic manager

Strategy versus Tactics (Slide courtesy of Dave Mark)
[Diagram: a strategic manager pursues a goal by assigning sub-goals to tactical managers, each of which directs its own group of agents]

Internal State for Quake II
- Self: current-health, last-health, current-weapon, ammo-left, current-room, last-room, current-armor, last-armor, available-weapons
- Enemy: current-weapon, current-room, last-seen-time, estimated-health
- current-time, random-number
- Powerup: type, room, available, estimated-spawn-time
- Map: rooms, halls, paths
- Parameters: full-health, health-powerup-amount, ammo-powerup-amount, respawn-rate
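
One way to carry this state around is a handful of record types. A sketch with Python dataclasses; the field names follow the slide, but the types and default values are guesses.

```python
from dataclasses import dataclass, field

@dataclass
class SelfState:
    current_health: int = 100
    last_health: int = 100
    current_weapon: str = "blaster"
    ammo_left: int = 0
    current_room: str = ""
    last_room: str = ""
    current_armor: int = 0
    last_armor: int = 0
    available_weapons: list = field(default_factory=list)

@dataclass
class EnemyState:
    current_weapon: str = "blaster"
    current_room: str = ""
    last_seen_time: float = 0.0
    estimated_health: int = 100

@dataclass
class PowerupState:
    type: str = "health"
    room: str = ""
    available: bool = True
    estimated_spawn_time: float = 0.0

@dataclass
class InternalState:
    me: SelfState = field(default_factory=SelfState)
    enemy: EnemyState = field(default_factory=EnemyState)
    powerups: list = field(default_factory=list)
    current_time: float = 0.0

# "Notice changes": health dropping suggests an enemy is nearby.
s = InternalState()
s.me.last_health, s.me.current_health = 100, 80
enemy_nearby = s.me.current_health < s.me.last_health
print(enemy_nearby)  # True
```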

Internal Action Representation
Simplified action model:
- Internal actions = operators, just mathematical functions that alter the internal state
- Pre-conditions: what is required for the action; often a resource requirement
- Effects: how the action changes the state, both globally and for the NPC
Designing actions: extrapolate from gameplay
- Start with an internal state and pick a canonical game state
- Apply the game action to that state, then map back to an internal state
- Remove any uncertainty, for deterministic NPC behavior: average random results, or pick the worst-case scenario

Example: Pick-Up Health Op
Preconditions:
- Self.current-room = Powerup.current-room
- Self.current-health < full-health
- Powerup.type = health
- Powerup.available = yes
Effects:
- Self.last-health = Self.current-health
- Self.current-health = Self.current-health + health-powerup-amount
- Powerup.available = no
- Powerup.estimated-spawn-time = current-time + respawn-rate
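
The same operator can be written as a precondition function plus an effect function over a small state dictionary. A sketch; the constants for full-health, health-powerup-amount, and respawn-rate are placeholders.

```python
FULL_HEALTH = 100
HEALTH_POWERUP_AMOUNT = 50
RESPAWN_RATE = 30

def pickup_health_precond(state, powerup):
    # All four slide preconditions must hold.
    return (state["self_room"] == powerup["room"]
            and state["self_health"] < FULL_HEALTH
            and powerup["type"] == "health"
            and powerup["available"])

def pickup_health_effect(state, powerup):
    # Return new copies rather than mutating, so a planner can
    # explore branches without undo logic.
    new_state = dict(state)
    new_powerup = dict(powerup)
    new_state["self_last_health"] = state["self_health"]
    new_state["self_health"] = state["self_health"] + HEALTH_POWERUP_AMOUNT
    new_powerup["available"] = False
    new_powerup["estimated_spawn_time"] = state["time"] + RESPAWN_RATE
    return new_state, new_powerup

state = {"self_room": "atrium", "self_health": 40, "time": 120.0}
pak = {"room": "atrium", "type": "health", "available": True}
if pickup_health_precond(state, pak):
    state, pak = pickup_health_effect(state, pak)
print(state["self_health"], pak["estimated_spawn_time"])  # 90 150.0
```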

Building Internal Models
- Planning is only as accurate as the model
  - Bad models → bad plans
  - But complex models → slow planning
- Look at your nondigital prototype!
  - Heavily simplified for playability
  - Resources determine the internal state
  - Nondigital verbs are the internal actions
  - One of many reasons for this exercise

What Should We Do? (Slide courtesy of John Laird)
Options: Pickup? Shoot? Pickup?
- Self.current-health = 20
- Self.current-weapon = blaster
- Enemy.estimated-health = 50
- Powerup.type = health-pak, Powerup.available = yes
- Powerup.type = Railgun, Powerup.available = yes

One Step: Pick Up Railgun (Slide courtesy of John Laird)
- Self.current-health = 10
- Self.current-weapon = railgun
- Enemy.estimated-health = 50
- Powerup.type = health-pak, Powerup.available = yes
- Powerup.type = Railgun, Powerup.available = no

One Step: Shoot Enemy (Slide courtesy of John Laird)
- Self.current-health = 10
- Self.current-weapon = blaster
- Enemy.estimated-health = 40
- Powerup.type = health-pak, Powerup.available = yes
- Powerup.type = Railgun, Powerup.available = yes

One Step: Pick Up Health-Pak (Slide courtesy of John Laird)
- Self.current-health = 90
- Self.current-weapon = blaster
- Enemy.estimated-health = 50
- Powerup.type = health-pak, Powerup.available = no
- Powerup.type = Railgun, Powerup.available = yes

State Evaluation Function
Need to compare states:
- Is either state better? How far away is the goal?
- Might be a partial order: some states are incomparable
- If a state is not the goal, just continue
Purpose of planning: find good states, avoid bad states.

State Evaluation: Quake II
Example 1: Prefer higher self.current-health
- Always pick up the health powerup
- Counterexample: Self.current-health = 99%, Enemy.current-health = 1%
Example 2: Prefer lower enemy.current-health
- Always shoot the enemy
- Counterexample: Self.current-health = 1%, Enemy.current-health = 99%

State Evaluation: Quake II
Example 3: Prefer higher self.health - enemy.health
- Shoot the enemy if I have health to spare
- Otherwise pick up a health pack
- Counterexamples?
Examples of more complex evaluations:
- If self.health > 50%, prefer lower enemy.health; otherwise, prefer higher self.health
- If self.health > low-health, prefer lower enemy.health; otherwise, prefer higher self.health
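
These evaluations are one-liners over the internal state. A sketch of Example 3 and the 50% threshold variant; the field names and scaling are assumptions.

```python
def evaluate_simple(state):
    # Example 3: prefer higher self.health - enemy.health.
    return state["self_health"] - state["enemy_health"]

def evaluate_threshold(state):
    # With health to spare, value hurting the enemy;
    # otherwise value restoring our own health.
    if state["self_health"] > 50:
        return -state["enemy_health"]
    return state["self_health"]

hurt = {"self_health": 10, "enemy_health": 40}
fine = {"self_health": 60, "enemy_health": 40}
print(evaluate_threshold(hurt), evaluate_threshold(fine))  # 10 -40
```

This is only one way to encode the slide's "otherwise"; note that the two branches return values on different scales, which matters if states from both branches are compared directly.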

Two-Step Look-Ahead: Shoot, Pickup (Slide courtesy of John Laird)
- Self.current-health = 80
- Self.current-weapon = blaster
- Enemy.estimated-health = 40
- Powerup.type = health-pak, Powerup.available = no
- Powerup.type = Railgun, Powerup.available = yes

Three-Step Look-Ahead: Shoot, Pickup, Pickup (Slide courtesy of John Laird)
- Self.current-health = 100
- Self.current-weapon = railgun
- Enemy.estimated-health = 0
- Powerup.type = health-pak, Powerup.available = no
- Powerup.type = Railgun, Powerup.available = no

Look-Ahead Search
One-step lookahead:

  op pickbest(state) {
    foreach op satisfying precond {
      newstate = op(state)
      evaluate newstate
    }
    return op with best evaluation
  }

Multistep tree search:

  [op] bestpath(&state, depth) {
    if depth == 0 { return [] }
    foreach op satisfying precond {
      newstate = op(state)
      [nop] = bestpath(newstate, depth-1)
      evaluate newstate
    }
    pick op+[nop] with best state
    modify state to reflect op+[nop]
    return op+[nop]
  }
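
A runnable version of the multistep search, assuming operators are (name, precondition, effect) triples and a separate evaluate function; unlike the pseudocode it returns the best plan rather than mutating the state, and it allows an empty plan as a baseline. The toy domain is invented for illustration.

```python
def best_path(state, operators, evaluate, depth):
    # Returns (score, [op names]) for the best action sequence of up
    # to `depth` steps; mirrors bestpath() in the pseudocode above.
    if depth == 0:
        return evaluate(state), []
    best = (evaluate(state), [])  # standing pat is the baseline
    for name, precond, effect in operators:
        if not precond(state):
            continue
        new_state = effect(state)
        score, rest = best_path(new_state, operators, evaluate, depth - 1)
        if score > best[0]:
            best = (score, [name] + rest)
    return best

# Toy domain: shoot (enemy -10) or heal (self +20, pak usable once).
ops = [
    ("shoot", lambda s: s["enemy"] > 0,
              lambda s: {**s, "enemy": s["enemy"] - 10}),
    ("heal",  lambda s: s["pak"],
              lambda s: {**s, "self": s["self"] + 20, "pak": False}),
]
score, plan = best_path({"self": 30, "enemy": 20, "pak": True}, ops,
                        lambda s: s["self"] - s["enemy"], 3)
print(score, plan)  # 50 ['shoot', 'shoot', 'heal']
```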

Look-Ahead Search: Are More Steps Better?
- Longer, more elaborate plans are more time and space consuming
- The opponent or environment can mess up the plan
- The simplicity of the internal model causes problems
- In this class, limit plans to three or four steps
  - Anything more, and the AI is too complicated
  - The purpose is to be challenging, not to win

Recall: LibGDX Behavior Trees
Like look-ahead search, but only checks whether a plan is acceptable.
- Selector rules: test each subtask for success; tasks are tried independently; choose the first one to succeed
- Sequence rules: test each subtask for success; tasks are tried in order; do all if each succeeds, else none
- Parallel rules: test each subtask for success; tasks are tried simultaneously; do all if they succeed, else none

Opponent: New Problems (Slide courtesy of John Laird)
Options: Pickup? Shoot? Pickup? (and the opponent has pickups of its own)
- Self.current-health = 20
- Self.current-weapon = blaster
- Enemy.estimated-health = 50
- Powerup.type = health-pak, Powerup.available = yes
- Powerup.type = Railgun, Powerup.available = yes

Opponent Model
Solution 1: Assume the worst
- The opponent does whatever would be worst for you
- Full game-tree search; exponential
Solution 2: What would I do?
- The opponent does what you would do in the same situation
Solution 3: Internal opponent model
- Remember what they did last time
- Or remember what they like to do
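
Solution 1 is just depth-limited minimax over the internal model. A sketch with a toy two-player damage game; the moves function and evaluation are invented for illustration.

```python
def minimax(state, depth, my_turn, moves, evaluate):
    # Assume-the-worst opponent: we maximize the evaluation,
    # the opponent minimizes it.
    options = moves(state, my_turn)
    if depth == 0 or not options:
        return evaluate(state)
    scores = [minimax(s, depth - 1, not my_turn, moves, evaluate)
              for s in options]
    return max(scores) if my_turn else min(scores)

# Toy game: state is (my_health, enemy_health); each turn one side
# lands either a weak (10) or strong (25) hit on the other.
def moves(state, my_turn):
    me, foe = state
    if my_turn:
        return [(me, foe - 10), (me, foe - 25)]
    return [(me - 10, foe), (me - 25, foe)]

value = minimax((100, 100), 2, True, moves, lambda s: s[0] - s[1])
print(value)  # 0: my best hit (25) is answered by the opponent's best (25)
```

Even in this tiny game the tree doubles at every ply, which is the exponential blowup the slide warns about.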

Opponent Interference
Opponent actions may prevent yours:
- Example: the opponent grabs the railgun first
- Need to take this into account in your plan
Solution: iteration
- Plan once with no interference
- Run again, assuming the best plans of the opponent
- Keep iterating until happy (or you run out of time)
Planning is very expensive!

Asynchronous AI
[Diagram: the game thread's update loop sends a plan request and checks a buffer each frame before drawing; a second thread's AI manager checks for requests, computes the answer, and stores it in the buffer]

Alternative: Iterative AI
[Diagram: the game thread's update loop initializes the AI manager and polls it each update for a result]
Looks like asset management.

Using Asynchronous AI
Give the AI a time budget:
- If planning takes too long, abort it
- Use a counter in the update loop to track time
Beware of stale plans:
- The actual game state has probably changed
- When you find a plan, make sure it is still good
- Evaluate it (quickly) with the new internal state
- Make sure the result is close to what you expected
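
One way to realize the time budget is iterative deepening: search one step deeper on each pass, and when the deadline passes, keep the deepest plan that completed. A sketch; the search step is faked with a sleep, and all names are illustrative.

```python
import time

class OutOfTime(Exception):
    pass

def plan_with_budget(search_step, budget_seconds, max_depth=4):
    # search_step(depth, deadline) returns a plan, raising OutOfTime
    # if it notices the deadline has passed.
    deadline = time.monotonic() + budget_seconds
    best = None
    for depth in range(1, max_depth + 1):
        try:
            best = search_step(depth, deadline)
        except OutOfTime:
            break  # abort, keeping the deepest completed plan
    return best

def fake_search(depth, deadline):
    # Stand-in for a real search step: does some work, then checks
    # the deadline (a real search would check inside its own loop).
    time.sleep(0.01)
    if time.monotonic() > deadline:
        raise OutOfTime()
    return ["op"] * depth

plan = plan_with_budget(fake_search, budget_seconds=1.0)
print(plan)  # with a generous budget, all four depths complete
```

A real planner would still need the stale-plan check from the slide before acting on the result.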

Planning Optimization: Backwards Planning
- Idea: few operators achieve the goal conditions
- Implementation: for each operator, reverse its effects, and check that the reversed effect satisfies the pre-conditions
- Possible to search backwards and forwards together: start at each end, and check where the frontiers meet
- Does not work well with numerical resources

To Plan or Not to Plan
Advantages:
- Less predictable behavior
- Can handle unexpected situations
- More accurate than rule-based AI
Disadvantages:
- Less predictable behavior (harder to debug)
- Planning takes a lot of processor time
- Planning takes memory
- Needs simple but accurate internal representations

Other Possibilities
There are many more options available:
- Neural nets
- Decision trees
- General machine learning
- Take CS 4700 if you want to learn more
Quality is a matter of heated debate:
- Better to spend time on internal state design
- Most AI is focused on perception modeling

Summary
- Rule-based AI is the simplest form of strategic AI
  - Limited to one step at a time
  - Can easily make decisions that lose in the long term
- More complicated behavior requires planning
  - Simplify the game to a turn-based format
  - Use classic AI search techniques
- Planning has advantages and disadvantages
  - Remember, the desire is to challenge, not to win