Discussion of Emergent Strategy


Discussion of Emergent Strategy: When Ants Play Chess
Mark Jenne and David Pick

Presentation Overview
- Introduction to strategy
- Previous work on emergent strategies: Pengi, N-puzzle, sociogenesis in MANTA colonies
- Multi-agent reactive chess program MARCH
- Experiments
- Conclusions

Strategy
- Making a plan of coordinated actions for reaching a goal
- Often conflicting with the goal of another agent
- Different resources are needed to reach it
- Relies on two strong assumptions:
  - Having a global view of the current situation
  - Making sure resources perform as intended

Emergent Strategy
- A global strategy can sometimes end up not being useful
- An emergent strategy, as seen by an observer, arises from the coordination of local behaviors that are not aware of their place in the global strategy
- Some strategies can be viewed as the result of interactions between simple agents with only local information
- This approach can constitute a constructive lower bound for planning or search, obtained by considering only local behavior

Pengi
- Based on the game Pengo
- A combination of simple movement rules can create the appearance of an intelligent strategy devised by an omnipotent entity
- Pengi underlines three features of emergent strategies:
  - They rarely find optimal solutions
  - They are difficult to formalize from observation
  - They are difficult to reuse

N-Puzzle
- Slide square tiles to reach a goal configuration
- Each tile is its own autonomous agent with its own field of perception
- Simple rules for each piece lead to the emergence of genuinely original strategies for solving sub-problems within the puzzle, such as the placement of corner tiles
- The overall algorithm that emerges is sub-optimal, but much simpler than known strategies
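As a purely illustrative sketch of this agent-centric decomposition (not the rules used in the N-puzzle work itself), each tile can be pictured as an agent that perceives only its own square, its goal square, and the blank, and volunteers to slide when that brings it closer to home. The function names and the greedy rule below are assumptions for illustration; a rule this simple will not solve every configuration.

```python
import random

def manhattan(a, b):
    """Distance a tile perceives between two (row, col) squares."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def tile_wants_move(pos, goal, blank):
    """A tile agent's whole perception: its square, its goal, the blank.
    It volunteers only if sliding into the blank shortens its distance."""
    adjacent = manhattan(pos, blank) == 1
    return adjacent and manhattan(blank, goal) < manhattan(pos, goal)

def step(positions, goals, blank):
    """positions/goals: dicts tile -> (row, col). One asynchronous step:
    pick one volunteering tile at random and swap it with the blank."""
    movers = [t for t, p in positions.items() if tile_wants_move(p, goals[t], blank)]
    if not movers:
        return blank                      # no agent volunteers this step
    t = random.choice(movers)
    positions[t], blank = blank, positions[t]
    return blank

# Example: a nearly solved 3x3 board where tile 6 is one slide from home.
goals = {t: divmod(t - 1, 3) for t in range(1, 9)}
positions = dict(goals)
positions[6], blank = (2, 2), (1, 2)      # swap tile 6 with the blank
blank = step(positions, goals, blank)     # tile 6 volunteers and slides home
```

The point of the sketch is only the decomposition: no agent sees the whole board, yet an observer watching many such steps can describe the result as a strategy.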

Sociogenesis in MANTA Colonies
- Sociogenesis is the process by which a colony is founded
- Modeling and simulation of social organization in an ant colony
- Each organism is represented by an agent with specified behaviors
- Used to test hypotheses about the emergence of social structures from the behavior of, and interactions among, individuals
- A general strategy for successfully founding the colony, based on simple rules, was observed

MARCH: Multi-Agent Reactive CHess Program
- Chess offers a good testing ground for strategy
- A global strategy is viewed as essential for success
- The goal was to build a decent chess-playing program while keeping it as simple as possible

Details of MARCH
- Each chess piece is an autonomous agent with its own behavior and field of perception
- Each square on the board knows the piece on it and carries two fields, whitestrength and blackstrength
- A single turn consists of:
  - Asking each piece to determine the pieces it threatens
  - Asking threatened pieces to propagate their material value onto the squares between themselves and the threatening piece
  - Asking each piece to mark each square it could move to
  - Choosing randomly among the pieces with the greatest marks and moving it to the related square
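A minimal sketch of this turn loop, assuming hypothetical Piece, Square, and Board classes: the slide only specifies the four steps, so the stubbed move generation and the way marks are scored here are placeholders, not MARCH's actual rules.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Square:
    piece: object = None                         # the square knows the piece on it
    whitestrength: float = 0.0                   # value propagated by white pieces
    blackstrength: float = 0.0                   # value propagated by black pieces
    marks: dict = field(default_factory=dict)    # piece -> desirability mark

class Board:
    def __init__(self):
        self.grid = [[Square() for _ in range(8)] for _ in range(8)]
    def all_squares(self):
        return [sq for row in self.grid for sq in row]
    def squares_between(self, a, b):   # stub: squares on the ray between two pieces
        return []
    def legal_squares(self, piece):    # stub: squares the piece could move to
        return []
    def move(self, piece, square):     # stub: apply the chosen move
        pass

class Piece:
    def __init__(self, color, value):
        self.color, self.value = color, value
    def threatened_pieces(self, board):
        """Step 1: enemy pieces this agent currently attacks (local perception only)."""
        return []                      # stub: depends on the piece's move pattern
    def propagate_value(self, board, attacker):
        """Step 2: a threatened piece spreads its material value over the
        squares between itself and the attacker."""
        for sq in board.squares_between(self, attacker):
            if self.color == "white":
                sq.whitestrength += self.value
            else:
                sq.blackstrength += self.value
    def mark_moves(self, board):
        """Step 3: mark every reachable square; this scoring is a placeholder,
        the slide does not say how marks are actually computed."""
        for sq in board.legal_squares(self):
            sq.marks[self] = sq.whitestrength + sq.blackstrength

def play_turn(board, pieces):
    """One turn: the four steps from the slide, with no global search."""
    for p in pieces:
        for victim in p.threatened_pieces(board):
            victim.propagate_value(board, p)
    for p in pieces:
        p.mark_moves(board)
    marks = [(p, sq, m) for sq in board.all_squares() for p, m in sq.marks.items()]
    if marks:                          # Step 4: random choice among the maxima
        best = max(m for _, _, m in marks)
        p, sq, _ = random.choice([t for t in marks if t[2] == best])
        board.move(p, sq)
```

Note that nothing in this loop looks ahead: each decision is driven only by locally propagated strength fields and marks, which is why any strategy that appears is emergent rather than planned.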

Experiments with MARCH
- Played 200 games against an average human player: won 57 times, lost 83 times, and reached stalemate 60 times
- Most of the losses occurred early in the game and most of the wins late in the game: MARCH is bad in the opening but plays well once its pieces are deployed
- Played 50 games against the GNU Chess program, a much stronger player: lost 47 times and reached stalemate 3 times

Conclusions on MARCH
- A multi-agent reactive system can play chess with skill roughly equivalent to an average human player
- Some emergent strategies can be observed, but they remain partial and short-lived
- MARCH cannot react in a coordinated way to a strong opponent and gets trapped quickly
- Obtaining good opening move sequences is a primary challenge: in the opening the environment is open, and the multi-agent reactive system is not threatened enough to react intelligently

Conclusions on Reactive Systems and Emergent Strategy
- The limits of emergent strategies can be observed in the interactions between reactive agents
- It is nevertheless possible to obtain long-term emergent strategies with reactive systems, as in ant colony sociogenesis
- In many domains, a global strategy could be advantageously replaced by a set of local tactical behaviors leading to an emergent strategy