Cooperative Learning by Replay Files in Real-Time Strategy Game

Jaekwang Kim, Kwang Ho Yoon, Taebok Yoon, and Jee-Hyong Lee

Department of Electrical and Computer Engineering, Sungkyunkwan University,
300 Cheoncheon-dong, Jangan-gu, Suwon, Gyeonggi-do 440-746, Republic of Korea
linux@ece.skku.ac.kr, yoonkh2000@skku.edu, tbyoon@skku.edu, jhlee@ece.skku.ac.kr

Abstract. In real-time strategy games, the built-in artificial intelligence is not smart enough, which makes the game boring for players. In this paper, we suggest a novel method for the cooperative learning of build-orders that improves the artificial intelligence in real-time strategy games and makes them more enjoyable. The method learns from a large collection of game replay files.

Keywords: Game A.I., Real-time strategy game, Build-order, Cooperative Learning, Replay file.

1 Introduction

The game industry is growing rapidly. With the rise of game competitions, people now play games for money as well as for fun. Blizzard, the developer of the real-time strategy game StarCraft, has patched the game for the last ten years to keep its races balanced, but the built-in artificial intelligence is still not smart enough [1].

In a real-time strategy game the number of possible situations is enormous, so traditional artificial intelligence techniques cannot handle them the way a human does. It is very hard for the artificial intelligence to beat a human who is good at the game [2]. This makes players lose interest in playing against the computer.

Most real-time strategy games, including StarCraft, can record replay files. Players analyze replays to find the causes of a defeat, watch them out of interest, or share them with other players. There are already programs that let the computer play back a replay recorded by a human. However, simple imitation of a replay has nothing to do with artificial intelligence: each replay uses its own build-order against a particular opponent.

In this paper, we suggest an automatic learning method for build-orders that improves the artificial intelligence in real-time strategy games. By analyzing a large number of replays, the computer can compose build-orders dynamically in order to win the game, and we can derive rules for responding to a human player's build-order.

The rest of the paper is organized as follows. Section 2 presents related work. Section 3 explains the automatic learning of build-orders and shows some brief rules derived by the method. Finally, we conclude in Section 4.

2 Related Work

There has been some research on improving build-orders in real-time strategy games. Kovarsky formulated the build-order optimization problem for real-time strategy games, aiming to minimize the time needed to produce specific units or buildings; however, human effort is still needed to define the build-orders [2]. Lee proposed a method for improving the A.I. of StarCraft that makes build-orders and unit production more efficient, but a human must code the improvements statically [3]. Buro presented real-time strategy games as a new AI research challenge [4]. Weber suggested case-based reasoning for build-order selection in real-time strategy games [5].

In the existing studies, unfortunately, a human must construct the build-orders manually, so whenever a new strategy appears, more human work is needed. In this paper, we suggest an automatic learning method for build-orders that improves the artificial intelligence in real-time strategy games.

3 Automatic Learning of Build-Order

In strategy computer games, a build-order is a linear pattern of production, research, and resource management aimed at a specific, specialized goal, for example attacking the enemy early or accumulating a large amount of resources. A real-time strategy game in particular is played with incomplete information: in general, a player can only see the parts of the map occupied by his own units or buildings, so it is hard to estimate the other player's build-order. Because of this characteristic, the choice of build-order is very important in real-time strategy games.

There are relationships among build-orders. Suppose one player produces workers to gather as many resources as possible, while the other produces attack units to strike as early as possible. In this case, the player who produces attack units will win the game. If the game A.I. is aware of the build-orders and the relations among them, it can play more effectively against a human player.

3.1 Game Replay Files

As the sharing of replay files in online communities increased, programs that analyze players' actions from replay files appeared; BWChart for StarCraft and W3Chart for Warcraft 3 are representative replay analyzers [3]. A replay file contains game information such as player IDs, date, time, game result, unit production, building construction, unit/building selection, upgrade information, and so on. Fig. 1 shows an example of a player's action information extracted from a replay file with BWChart.

Fig. 1. Examples of a player's action information in replay files, shown with BWChart
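For illustration, the following Python sketch shows how such action information could be loaded into time-ordered sequences per player. It assumes the replay actions have already been exported to a CSV file by an analyzer such as BWChart; the column names (player, time, action, target) are our own illustrative assumptions, not a format defined by any of these tools.

import csv
from collections import defaultdict

def load_actions(path):
    # Group actions by player and sort each player's actions by game time.
    # Assumed CSV columns: player, time (seconds), action, target.
    actions = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            actions[row["player"]].append(
                (float(row["time"]), row["action"], row["target"]))
    for sequence in actions.values():
        sequence.sort(key=lambda entry: entry[0])
    return actions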

3.2 Extracting and Clustering the Player Behavior

A replay file records each of the player's actions with its time, so we arrange the actions of each player in time order. The action information contains the player's build-order and his responses to each situation. Each build-order runs on a different time scale: a build-order for attacking the enemy early has to produce units within about 3 minutes, whereas a build-order for gathering resources may produce units only after 5 minutes. In the former case the player attacks the enemy's base first and then reacts to the enemy's build-order and unit production; in the latter case the player gathers resources for up to 5 minutes and then deals with the following situations. To obtain knowledge of the relationship between build-order A and build-order B, we have to count how many times build-order A won and lost against build-order B.

There are two ways to group the build-orders from the players' action information. One is that experts group the build-orders themselves; the other is that a machine does it with intelligent methods such as rule-based matching or similarity comparison between the players' action information. Grouping by experts gives the most exact results, but it takes a very long time.

Fig. 2(a) shows rules defined for each build-order using the LordMartin Replay Browser. The example is a Terran build-order named "Fast siege tank drop", defined as the player researching the siege tank upgrade and constructing a Control Tower within five minutes. Clustering replays by such rule-based build-order definitions is fast, but it has the weakness that a human must specify each build-order. Moreover, such a rule is defined by the start times of building construction or upgrades, yet these times are influenced by the enemy's status due to the nature of real-time strategy games. So writing the rules is very hard, and they may be of no use for improving the game A.I.

In this paper, we instead use a similarity measure to categorize the build-orders. We extract information about unit production, building construction, and upgrade order from the replay files, and measure the similarity of the players' actions on this information. Fig. 2(b) shows the result of clustering the player action information of Fig. 2(a).

Fig. 2. (a) Extracting the player behavior; (b) Clustering the player behavior
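The paper's similarity measure is not spelled out in detail; as one possible reading, the sketch below compares the opening sequences of produced units, constructed buildings, and upgrades with a normalized sequence-matching ratio and groups replays greedily. The action labels, the 5-minute horizon, and the 0.75 threshold are illustrative assumptions.

import difflib

def build_sequence(actions, horizon=300.0):
    # Keep only production/construction/upgrade actions from the opening
    # minutes; `actions` is one player's time-ordered list (see load_actions).
    kinds = {"train", "build", "upgrade"}   # assumed action labels
    return [target for (time, action, target) in actions
            if action in kinds and time <= horizon]

def similarity(seq_a, seq_b):
    # Normalized matching ratio in [0, 1]; 1.0 means identical openings.
    return difflib.SequenceMatcher(None, seq_a, seq_b).ratio()

def cluster(sequences, threshold=0.75):
    # Greedy clustering: attach a sequence to the first cluster whose
    # representative is similar enough, otherwise open a new cluster.
    clusters = []   # list of (representative, members) pairs
    for seq in sequences:
        for representative, members in clusters:
            if similarity(representative, seq) >= threshold:
                members.append(seq)
                break
        else:
            clusters.append((seq, [seq]))
    return clusters

Each resulting cluster then stands for one build-order, such as the "Fast siege tank drop" group of Fig. 2(b).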

3.3 Generating the Relation Table and the If-Then Rules

To understand the relations among build-orders, we generate build-order relation tables from each player's build-order and game outcome. Table 1 shows an example of such a table.

Table 1. An example of the build-order relation table

               Build-order1    Build-order2    Build-order3    Build-order4
Build-order1        -          5-win/3-loss    6-win/2-loss    0-win/5-loss
Build-order2   3-win/5-loss         -          2-win/3-loss    6-win/2-loss
Build-order3   2-win/6-loss    3-win/2-loss         -          3-win/5-loss
Build-order4   5-win/0-loss    2-win/6-loss    3-win/4-loss         -

To increase the ability of the computer A.I., we generate rules from this table. To select a build-order that can win the game, we have to know which build-orders the enemy may be choosing at that moment. Fig. 3 shows the result of preprocessing the build-orders of Fig. 2(b): we merge the build-orders into a tree and annotate it with time. If the player's state is E, he can choose build-order 3 or 4, and at that moment the enemy's state must be one of C, D, or E. If the enemy's state is D, he can choose build-order 2. Against build-order 2, build-order 3 has a 60% winning ratio, while build-order 4 has only a 25% winning ratio. Therefore, if the player wants to win the game, he should choose build-order 3.

Fig. 3. The result of preprocessing of the build-orders shown in Fig. 2(b)

Through this mechanism, we can derive If-Then rules such as the following:

If status = b and opponent_status = c then select e
If status = e and opponent_status = c then select g
If status = e and opponent_status = d then select f
If status = e and opponent_status = e then select f

We can apply these If-Then rules to the computer A.I. in order to improve its ability.
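A minimal sketch of this step is given below, assuming each replay has already been reduced by the clustering step to a pair (winner's build-order, loser's build-order); the function names are our own.

from collections import defaultdict

def relation_table(games):
    # games: iterable of (winning_build, losing_build) pairs, one per replay.
    # table[a][b] == [wins, losses] of build-order a against build-order b.
    table = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for winning_build, losing_build in games:
        table[winning_build][losing_build][0] += 1
        table[losing_build][winning_build][1] += 1
    return table

def best_counter(table, enemy_build, candidates):
    # Choose the candidate build-order with the highest win ratio
    # against the enemy's (estimated) build-order.
    def win_ratio(build):
        wins, losses = table[build][enemy_build]
        total = wins + losses
        return wins / total if total else 0.0
    return max(candidates, key=win_ratio)

With the counts of Table 1, best_counter(table, "Build-order2", ["Build-order3", "Build-order4"]) returns "Build-order3" (win ratios 0.60 versus 0.25), matching the choice in the example above.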

4 Conclusion

In real-time strategy games, the abilities of human players keep improving while the ability of the game's artificial intelligence does not, because learning is difficult for the artificial intelligence. As the level of players comes to overwhelm that of the game A.I., interest in the game decreases. In this paper, we suggested a novel method for the cooperative learning of build-orders from a large collection of replay files in order to improve the A.I. of real-time strategy games.

Acknowledgment. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (No. 2009-0075109).

References

1. Ontanon, S., Mishra, K., Sugandh, N., Ram, A.: Case-based Planning and Execution for Real-time Strategy Games. In: Weber, R.O., Richter, M.M. (eds.) ICCBR 2007. LNCS (LNAI), vol. 4626, pp. 164-178. Springer, Heidelberg (2007)
2. Kovarsky, A., Buro, M.: A First Look at Build-order Optimization in Real-time Strategy Games. In: Proc. of the GAME-ON Conference, pp. 18-22 (2006)
3. Lee, S.H., Huh, J.Y., Joh, Y.K., Hong, J.M.: Programming Method for Improving Performance of Artificial Intelligence on StarCraft. In: Proc. of the Korea Game Society Winter Conference, pp. 141-146 (2006)
4. Buro, M.: Real-time Strategy Games: A New AI Research Challenge. In: Proc. of the 18th International Joint Conference on Artificial Intelligence, pp. 1534-1535 (2003)
5. Weber, B.G., Mateas, M.: Case-based Reasoning for Build-order in Real-time Strategy Games. In: Proc. of the 24th AAAI Conference on Artificial Intelligence, pp. 1313-1318 (2009)