IEEE TRANSACTIONS ON COMPUTATIONAL INTELLIGENCE AND AI IN GAMES, VOL. 7, NO. 3, SEPTEMBER 2015


An Analytic and Psychometric Evaluation of Dynamic Game Adaption for Increasing Session-Level Retention in Casual Games Brent Harrison and David L. Roberts Abstract This paper shows how game analytics can be used to dynamically adapt casual game environments in order to increase session-level retention. Our technique uses game analytics to create an abstracted game analytic space that makes the problem tractable. We then model player retention in this space and use these models to make guided changes to game analytics in order to bring about a targeted distribution of game states that will, in turn, influence player behavior. Experiments showed that the adaptive versions of two different casual games, Scrabblesque and Sidequest: The Game, were able to better fit a target distribution of game states while also significantly reducing the quitting rate compared to the nonadaptive versions of the games. We showed that these gains did not come at the cost of player experience by performing a psychometric evaluation in which we measured player intrinsic motivation and engagement with the game environments. In both cases, players playing the adaptive version of the games reported higher intrinsic motivation and engagement scores than players playing the nonadaptive version. Index Terms Casual games, data mining, dynamic game adaption, game analytics, player modeling, retention. I. INTRODUCTION As casual games continue to grow in popularity, the importance of understanding player retention grows with them. The term retention in games often refers to the percentage of players that continue to play a game after a certain period of time, be it days, months, or even years.
One of the main reasons that retention is important is the belief that it is more financially efficient to retain an existing player than to acquire a new one. Given the monetization schemes present in most casual games (such as microtransactions and ad traffic), casual games place an increased importance on session-level retention and what can be done to influence it. Session-level retention in games refers to the percentage of players that complete a session in a game. A session could constitute anything from completing a level to completing a predefined set of tasks. Session-level retention is important in casual game environments as many rely on players completing all available tasks or levels for a given time period and offering the player the ability to purchase additional ones. Much work has been done on predicting player retention and identifying factors that contribute to retention; however, there has been relatively little work on using these findings to influence retention. In this paper, we examine how dynamic game adaption can be paired with game analytics to increase session-level retention in casual game environments of our own creation: Scrabblesque and Sidequest: The Game. In order to influence player retention, our work leverages the idea that game states exist in both of these environments that can be associated with session-level retention.

Manuscript received November 15, 2013; revised June 16, 2014; accepted February 23, 2015. Date of publication March 05, 2015; date of current version September 11, 2015. B. Harrison is with the Department of Interactive Computing, Georgia Institute of Technology, Atlanta, GA USA (brent.harrison@cc.gatech.edu). D. L. Roberts is with the Computer Science Department, North Carolina State University, Raleigh, NC USA (robertsd@csc.ncsu.edu). Color versions of one or more of the figures in this paper are available online. Digital Object Identifier /TCIAIG
Using this, we have created a technique that targets a distribution of desirable game states (states that are associated with session-level retention) while avoiding undesirable game states (states that are associated with the player quitting). Games, however, tend to be complex environments that can consist of a large number of unique states, making the problem of explicitly modeling retention in these environments intractable. To alleviate this problem, we represent the set of possible game states in terms of a set of vanity game analytics, analytics that hold a great deal of predictive power but are not directly affectable, creating a smaller game analytic space. Using this strategy, it becomes possible to adapt game environments in order to indirectly alter these vanity analytic values through the manipulation of a set of actionable analytics, analytics that are directly affectable. II. BACKGROUND AND RELATED WORK To date, most of the research done on player retention in games has focused on long-term retention over several months or years. Here, we will review the relevant literature on player retention. A great deal of the research on player retention has targeted retention in massively multiplayer online role-playing games (MMORPGs). This is not unexpected because most of these games have monthly subscription fees, which means that player retention has a direct influence on the total revenue for the game. Also, people can play these games for several years, making them an ideal environment to study long-term retention. There have been several studies that explore the possible factors that contribute to retention in these environments. These factors range from in-game actions [1], [2], to demographic information [3], [4], to player motivation [5]. Recently, work has been done that uses a player's social network as a basis for retention prediction [6]. The underlying theory behind these methods is
© 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.

TABLE I INTRINSIC MOTIVATION INVENTORY. RESPONSES ARE MEASURED ON A SEVEN-POINT LIKERT SCALE WITH 1 CORRESPONDING TO NOT TRUE AND 7 CORRESPONDING TO VERY TRUE

that players will be more likely to quit playing if people in their social networks also quit playing. Modeling player retention in other types of game environments has been considered by Weber, Mateas, and Jhala. They used regression to determine which features most contribute to retention in both Madden '11 [7] and Infinite Mario [8]. In social and casual games, Lin et al. [9] studied motivations for play in social games and found that progression, not social interaction, was often the most important factor in players continuing to play. Contrary to this finding, the existence of a well-defined game community can have a noticeable influence on player retention [10]. Work on documenting how secondary objectives in casual games can lead to a drop in player retention has also been carried out [11]. A. Psychometric Instruments In both of our case studies, we perform a psychometric evaluation in order to measure intrinsic motivation and engagement in various game environments. In these, we use two validated instruments for measuring these psychometric phenomena: the intrinsic motivation inventory (IMI) [12] and the game engagement questionnaire (GEQ) [13]. Each of these instruments is introduced and discussed in greater detail below. 1) Intrinsic Motivation Inventory: In order to measure intrinsic motivation, we use the Intrinsic Motivation Inventory (IMI). The IMI is a multidimensional measurement device used to determine a participant's intrinsic motivation. Use of the full measurement device yields seven subscores that refer to the following: interest/enjoyment; perceived competence; effort; value/usefulness; felt pressure and tension; perceived choice; relatedness.
Currently, only the interest/enjoyment subscale is considered to be a self-report measure of intrinsic motivation [14]. As a result, we only use statements drawn from the interest/enjoyment subscale in our case studies. The interest/enjoyment subscale consists of seven statements, including two statements that are reversed. A reversed statement is a statement that measures the opposite of what the subscale is attempting to measure. These reversed statements are meant to add validity to the results gathered using this measure by controlling for people who may not be paying attention while responding to these items. Participants are given seven statements and then asked to rate their agreement with these statements on a scale of 1 to 7, with 1 corresponding to not true at all, 4 corresponding to somewhat true, and 7 corresponding to very true. The statements used in this subscale can be seen in Table I. In this table, statements 2 and 5 are the reversed statements. 2) Game Engagement Questionnaire: The game engagement questionnaire (GEQ) [13] is a measurement tool used to measure the levels of engagement experienced while playing video games. The GEQ consists of 19 statements that are meant to measure several aspects of engagement: flow, immersion, presence, and absorption. Participants report their agreement with these statements using a 3-point scale, with 1 indicating disagreement, 2 indicating that the participant is unsure of agreement or disagreement, and 3 indicating agreement. In this questionnaire, 1 statement measures immersion, 4 statements measure presence, 9 statements measure flow, and 5 statements measure absorption. For our experiments, we have chosen to use all statements present in the GEQ. Table II lists the statements on the GEQ. In this table, statement 1 measures immersion, statements 2-5 measure presence, statements 6-14 measure flow, and statements 15-19 measure absorption.
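The scoring conventions above can be sketched in code. This is an illustrative sketch, not the authors' implementation; the function name and example responses are ours. The key detail is that reversed items are flipped on the seven-point scale (a rating r becomes 8 - r) before averaging.

```python
# Sketch of scoring the IMI interest/enjoyment subscale described above.
# Statements 2 and 5 are reverse-scored, so their ratings are flipped on
# the 1-7 scale before the subscale mean is taken.

REVERSED_IMI_ITEMS = {2, 5}  # statement numbers that are reverse-scored

def score_imi(responses):
    """responses: dict mapping statement number (1..7) to a rating in 1..7.
    Returns the mean subscale score after flipping reversed items."""
    total = 0.0
    for item, rating in responses.items():
        assert 1 <= rating <= 7
        if item in REVERSED_IMI_ITEMS:
            rating = 8 - rating  # flip on a seven-point scale
        total += rating
    return total / len(responses)

# A participant who strongly agrees with every positively worded statement
# and strongly disagrees with the reversed ones scores the maximum of 7.
example = {1: 7, 2: 1, 3: 7, 4: 7, 5: 1, 6: 7, 7: 7}
print(score_imi(example))  # -> 7.0
```

The same pattern applies to any reverse-scored Likert instrument; only the scale width and the set of reversed item numbers change.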
It is important to note that the order in which these questions are given to participants is randomized to remove any bias that grouping the questions based on what they measure may introduce. III. METHODOLOGY A description of our technique to increase session-level retention in casual games is shown in Fig. 1. At a high level, it consists of three steps: abstract the set of possible game states using game analytics, create models of session-level retention in this abstracted space, and then target a distribution of game states that are associated with players finishing a game session. Each of these steps is discussed in greater detail in the following sections. A. Game State Abstraction In order to model player behavior in a game world, we must first find a representation of the game world that is descriptive of the true state of the world while being simple enough that it can be modeled in a reasonable amount of time. To illustrate the need for this, consider a first-person shooter game with a single room. Even in this simple example, choosing to explicitly represent every aspect of the game world leads to a large number of unique game states. This complexity makes it difficult to model

TABLE II GAME ENGAGEMENT QUESTIONNAIRE. RESPONSES ARE MEASURED USING A THREE-POINT LIKERT SCALE WITH 1 CORRESPONDING TO DISAGREEMENT, 2 CORRESPONDING TO NEITHER AGREEMENT NOR DISAGREEMENT, AND 3 CORRESPONDING TO AGREEMENT

by making a Markov assumption that the current game state depends on only a subset of previous game states (the previous n-1 game states, to be exact). To calculate the probability that players will end their game session prematurely, we use the following:

P(c | t, s_(t-n+1), ..., s_(t-1))   (1)

Fig. 1. Overview of the steps taken by our algorithmic adaption strategy. The first step is to abstract the board state space into a game analytic space. The second step is to build models in this space. The third step is to determine which actions the AI should take to bring about certain game states in this game analytic space.

behavior in this space, since the number of observations required to explore this space grows exponentially with its complexity. To deal with this, we choose to create a game analytic space, a representation of the game world in terms of a set of vanity analytics. The term vanity analytics has often been used in business intelligence to describe analytics that are descriptive, yet difficult to influence [15]. In this paper, we use it to refer to analytics that describe the state of the world or the player in the world but are not directly under our control. B. n-Gram Models of Session-Level Retention Once we have created the game analytic space, we model session-level retention using this reduced set of game analytics. In particular, we examine the history of game states and determine the probability that the player will quit the game early based on this history. In this paper, we have chosen to use n-gram models to predict session-level retention.
n-gram models work by estimating P(c | t, s_(t-n+1), ..., s_(t-1)), where c is the player class label (whether or not the player quit the game early), t is the turn number, and s_(t-n+1), ..., s_(t-1) is the sequence of n-1 previous game states. Using this, we can identify sequences of game states that are associated with the player ending their game session early. To do this, we define a probability threshold to determine which sequences we consider predictive. In both case studies, we used the a priori probability of observing an unfinished game session based on our training sets. This means that if P(c | t, s_(t-n+1), ..., s_(t-1)) is greater than this threshold, then that sequence of game states predicts games in which the player will quit early better than a random guess. C. Goal Targeting Using the sequences identified in the previous step, we construct a target distribution of game states that we will use to determine how to adapt the game. While there are many different ways to construct a target distribution, we use Markov chain Monte Carlo sampling methods to construct them. For a more detailed discussion on constructing target distributions, we direct the reader to [16]. Once this has been done, our goal is to select target game states from this distribution and adapt the game world accordingly. Recall, however, that these game states were expressed in terms of vanity analytics, meaning that we do not have direct control over them. In order to adapt the game world, we must now express the game world in terms of a set of actionable analytics. Contrary to vanity analytics, actionable analytics are those that we can directly manipulate [15]. Once the game state has been represented in terms of a set of actionable analytics, adaptions can be made by altering these values.
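As a concrete sketch of the retention model just described, the following trains n-gram quit-probability estimates from logged sessions and flags contexts whose quit probability exceeds the a priori quit rate. The data layout and names are our own assumptions, not the paper's code.

```python
from collections import defaultdict

def train_ngram_quit_model(sessions, n=3):
    """sessions: list of (states, quit) pairs, where states is a sequence of
    abstracted game-analytic states and quit is True if the session was
    abandoned. Estimates P(quit | previous n-1 states) for each observed
    context of n-1 states."""
    counts = defaultdict(lambda: [0, 0])  # context -> [quit count, total count]
    for states, quit in sessions:
        for i in range(n - 1, len(states) + 1):
            context = tuple(states[i - (n - 1):i])
            counts[context][0] += int(quit)
            counts[context][1] += 1
    return {ctx: q / t for ctx, (q, t) in counts.items()}

def predictive_sequences(sessions, n=3):
    """Return contexts whose quit probability beats the a priori quit rate,
    i.e., sequences that predict quitting better than a random guess."""
    prior = sum(q for _, q in sessions) / len(sessions)
    model = train_ngram_quit_model(sessions, n)
    return {ctx for ctx, p in model.items() if p > prior}

# Toy usage: 'L', 'M', 'H' are discretized analytic states per turn.
sessions = [(['L', 'L', 'H'], True), (['L', 'L', 'M'], True),
            (['M', 'M', 'M'], False), (['M', 'H', 'M'], False)]
print(predictive_sequences(sessions, n=3))
```

With these toy logs the context ('L', 'L') always precedes a quit, so its estimated quit probability (1.0) exceeds the prior (0.5) and it is flagged as predictive.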

Fig. 2. Flowchart showing the execution of our algorithmic adaption strategy. First, a goal is chosen. Then, instances of this goal are found in the knowledge corpus and used to evaluate a set of candidate actions.

The process for making adaptions to the game world is summarized in Fig. 2. As shown in the figure, this process involves first selecting a target goal state. Then, a set of game states that match the current goal state is retrieved and expressed in terms of a set of actionable analytics. Once this has been done, a set of candidate adaptions is made and their effect on the game world is observed. These candidate game states are then expressed in terms of the same set of actionable analytics. The distance between these candidate game states and the goal states is calculated, and the adaption that minimizes the distance between these sets of states is ultimately performed. IV. CASE STUDY: SCRABBLESQUE The first game environment that we examine is Scrabblesque (see Fig. 3). Scrabblesque is a Flash game that is a modified version of the popular board game Scrabble. Scrabblesque simulates a casual game environment by providing players with a limited number of possible actions and a limited number of ways to interact with the game. It is also relatively simple to both start and stop playing Scrabblesque. In Scrabblesque, players compete against a computer-controlled AI until either player obtains 150 points. Scrabblesque is designed to log several low-level analytics describing gameplay. These analytics describe things such as the words that the computer plays and their score, the words that the player plays and their score, and the state of the player's rack of letter tiles. A. Game State Abstraction The first step in our algorithm is to abstract the game space in order to reduce its complexity.
This makes it feasible to both model session-level retention and implement the adaptions necessary to increase it. In Scrabblesque, we choose to represent the game using the following set of vanity analytics. Score difference: the difference between the player's and the computer's score. Word length: the length of the last word played. Word score: the point value of the last word played. We choose these analytics to model the game space because they describe various aspects of player behavior in the game. Also, each of these analytics can be, at least intuitively speaking, tied to session-level player retention. For example, if the difference in score becomes too large, it could be a sign that the game's outcome is no longer in question, which could lead to the player quitting out of frustration (if the player is losing) or boredom (if the player is winning). For more information on these analytics, we direct the reader to [17].

Fig. 3. Screenshot of Scrabblesque.

B. n-Gram Models of Session-Level Retention Before we calculate these probabilities, we transform the raw values of these analytics using symbolic aggregate approximation (SAX) [18]. This transformation expresses each analytic in terms of how much it deviates from the expected value of that analytic using the following:

z_(f,t,p) = (x_(f,t,p) - mu_(f,t)) / sigma_(f,t)   (2)

where x_(f,t,p) is the value of feature f on turn t for player p, and mu_(f,t) and sigma_(f,t) are the mean and standard deviation of feature f on turn t computed over the N players in the training set. We showed in previous work [19] that this transformation produces models
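Concretely, the transformation in (2) is a per-turn z-score computed over the training population. A minimal sketch (variable names are ours, not the paper's):

```python
import math

def standardize(turn_values, value):
    """turn_values: raw values of one analytic on a given turn, one per
    training-set player. Returns how many standard deviations `value`
    deviates from the expected value of the analytic on that turn."""
    n = len(turn_values)
    mean = sum(turn_values) / n
    var = sum((v - mean) ** 2 for v in turn_values) / n
    sd = math.sqrt(var)
    return 0.0 if sd == 0 else (value - mean) / sd

# A value equal to the population mean for that turn deviates by zero.
print(standardize([10, 20, 30], 20))  # -> 0.0
```

Each standardized value is then discretized into one of three bins (low, medium, or high deviation), as the text describes next, which is what makes the per-turn state space small enough to model with n-grams.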

that are often more predictive of session-level retention than using the raw analytic values alone. Once this is done, each analytic value is discretized into one of three equal-sized bins corresponding to low, medium, and high deviations. Next, we build models of session-level retention in this space using n-grams of game states. In Scrabblesque, we chose n to balance the predictive power gained by including more prior observations against the problem of data sparsity due to the curse of dimensionality. To illustrate this issue, let us consider a 10-turn game of Scrabblesque. Even if each analytic value (the value of analytic f for player p on turn t) can only take on three values, it would still take on the order of 59,049 games to explore the space of possible configurations for a 10-turn game. Using a bigram model (n = 2) means that fewer observations are required to fully observe the space of possible state configurations (it only requires on the order of 81 games to explore a 10-turn game); however, we lose predictive power since we have fewer previous states that serve as evidence. If we use a quadgram model (n = 4), we gain predictive power since we have more states that serve as evidence; however, doing this increases the data sparsity issues that are present (it requires on the order of 567 games to explore a 10-turn game). During this phase, we consider each of these analytics independently when calculating these probabilities, meaning that the end result of this step is three sets of sequences that are associated with session-level retention in Scrabblesque. C. Goal Targeting Before we can make adaptions to the game, we must first express game states in terms of a set of actionable analytics, analytics that we can directly manipulate or control.
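The sparsity arithmetic above can be checked directly: a full 10-turn history over three bins has 3^10 configurations, while an n-gram model only needs to observe each of the (10 - n + 1) windows of length n, each with 3^n possible configurations.

```python
def full_space(turns, bins=3):
    # every complete history of discretized analytic values
    return bins ** turns

def ngram_space(turns, n, bins=3):
    # window positions in a game of this length, times configurations per window
    return (turns - n + 1) * bins ** n

print(full_space(10))      # -> 59049
print(ngram_space(10, 2))  # -> 81  (9 windows of 9 bigrams)
print(ngram_space(10, 4))  # -> 567 (7 windows of 81 quadgrams)
```

These match the orders of magnitude quoted in the text, which is the whole tradeoff: longer contexts carry more evidence but the number of configurations to observe grows exponentially in n.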
In Scrabblesque, the only thing we can control is the way the AI behaves, which includes playing words on the board and giving the player letter tiles after they have played a word on the board. As such, we use a set of actionable analytics to describe the board state as well as the state of the player's rack [20]. We use the following actionable analytics to describe the board state. Number of candidate tiles: the number of eligible tiles on the game board that enable the player to connect the words they play to existing words on the board. Number of consonant/vowel candidate tiles: the number of candidate tiles on the game board that are consonants/vowels. Average tile value: the average value (in terms of game score) of candidate tiles on the game board. Proximity to bonus squares: the number of bonus squares on the game board that the player can reach in a single turn using the tiles in their rack. We also use the following analytics to describe the tiles in the player's rack. Number of consonant/vowel tiles: the number of consonant/vowel tiles present in the player's rack. Average tile value: the average value of the tiles in the player's rack. Number of repeated tiles: the number of tiles that are repeated in the player's rack. To show how game adaptions are implemented in Scrabblesque, we use an illustrative example. Let us assume that the goal state that we are targeting involves achieving a score difference of less than 20 points. This means that the difference between the player's score and the computer's score must not deviate from the expected value of this analytic by more than 20 points. Since the score difference depends on both the computer's and the player's turns, we must have the computer move in such a way as to produce a board state that, before the player makes their move, is conducive to bringing about the desired score difference after the player has made their move.
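The computer's move selection can be sketched as a nearest-goal search: each legal action is simulated, the resulting intermediate board state is expressed as a vector of actionable analytics, and the action whose vector is closest (here, in Euclidean distance) to the corpus-derived goal states is chosen. Representing analytic vectors as plain numeric tuples, and all the names below, are simplifying assumptions of ours.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def choose_action(candidate_actions, simulate, goal_states):
    """candidate_actions: legal computer moves; simulate: maps a move to the
    actionable-analytic vector of the intermediate board state it produces;
    goal_states: vectors extracted from corpus board states that exhibited
    the targeted analytic values. Returns the action whose simulated outcome
    is closest to any goal state."""
    def cost(action):
        vec = simulate(action)
        return min(euclidean(vec, g) for g in goal_states)
    return min(candidate_actions, key=cost)

# Toy usage: two goal vectors and three candidate moves with known outcomes.
goals = [(4.0, 2.0), (5.0, 1.0)]
outcomes = {"move_a": (0.0, 0.0), "move_b": (4.0, 2.5), "move_c": (9.0, 9.0)}
print(choose_action(outcomes, outcomes.get, goals))  # -> move_b
```

In the real game the `simulate` step would play out a legal word placement or tile assignment and recompute the board-state and rack analytics listed above; only moves using unused tiles would appear in `candidate_actions`.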
To achieve this, we query a corpus of board states for examples that exhibit the desired score differential. This corpus is composed of board states gathered from players that have played games of Scrabblesque in the past. For each board state that has a score difference of less than 20 points, we retrieve the preceding intermediate board state after the computer's turn. An intermediate board state is the state the board is in after the computer takes their turn, but before the player takes theirs. So, we would retrieve the board state that resulted from the actions that the computer had taken before the current turn. Once we have these intermediate board states, we represent them in terms of the directly affectable game analytics that we discussed previously. To determine the move that the computer should make, we simply simulate possible actions that the computer can make (placing words and giving the player tiles) and then measure the distance between this set of resultant intermediate board states and the set of intermediate board states extracted from the corpus. It is important to note here that the computer is limited in the types of moves it can make. It can only play words on the board or give tiles to the player if the tiles involved in performing those actions have not already been used. In other words, the computer can only simulate legal moves. The computer then takes the move that minimizes the distance between the intermediate board state that would result from the action being taken and the set of intermediate board states retrieved from the corpus. Here, we calculate Euclidean distance using the set of directly affectable game analytics; however, any distance or divergence measure, such as KL-divergence or Bhattacharyya divergence [21], can be used. D. Evaluation In order to evaluate how well our game adaption algorithm is able to influence session-level retention, we performed a user study and calculated two metrics to determine how successful it was. The first of these is how well the observed behavior distribution produced by the game fits the target distribution. The second is the quitting rate, the percentage of players that quit the game before it naturally ended. In addition, we performed an analysis of the psychometric side effects that this technique had on play experience. In particular, we measure player engagement (as described by the GEQ) and intrinsic motivation (as described by the IMI). 1) Data Collection and Methodology: Data collection proceeded in two phases. During both phases of data collection, we deployed Scrabblesque online and recruited participants via distribution lists and social networking sites (Facebook,

TABLE III KL-DIVERGENCE VALUES COMPARING THE NONADAPTIVE DISTRIBUTION AND THE ADAPTIVE DISTRIBUTION IN SCRABBLESQUE. SINCE KL-DIVERGENCE IS ASYMMETRIC, COMPARISONS WERE DONE IN BOTH DIRECTIONS AND THEN AVERAGED

Fig. 4. Comparison between the distribution of adaptive games, versus baseline games, versus the target distribution in Scrabblesque.

TABLE IV COMPARISON BETWEEN THE BASELINE (NONADAPTIVE) SCRABBLESQUE AND THE ADAPTIVE VERSION OF SCRABBLESQUE IN TERMS OF THE NUMBER OF FINISHED AND UNFINISHED GAMES

Twitter, etc.). We used snowball sampling, encouraging participants who had taken the study to share the experiment with their friends and family. During the first round of data collection, players could only play the nonadaptive version of the game, in which the computer-controlled AI played words and distributed tiles to the player randomly. At the end of this round of data collection we had gathered 195 game logs. This data was used to create the models of session-level retention. During the second phase of data collection, we asked players to play a modified version of Scrabblesque in which our adaption strategy had been implemented. All users that played Scrabblesque during this phase of data collection played the adaptive version. After three weeks, we had obtained 62 games of Scrabblesque. We will refer to this data as the adaptive data from now on. To serve as a baseline for comparison, we chose to use the data gathered from the first round of data collection that was used to create the models of session-level retention (referred to as the nonadaptive data from now on). It is important to note that the nonadaptive data and the adaptive data were collected separately. Roughly five months passed between the first data collection and the second, so we feel that any possible biases caused by using two data collections were mitigated.
2) Session-Level Retention: In order to validate how well our algorithm can influence player behavior, we analyze how much the behavior distribution for both the nonadaptive and adaptive versions of Scrabblesque diverged from the targeted distribution of game analytics. Fig. 4 shows a visual comparison of the game analytic distributions produced by both versions of Scrabblesque compared against the target distribution. In this figure, the Game Analytic State ID (x axis) refers to an arbitrary ID given to each game analytic state. As you can see in the figure, there are 4 peaks in the target distribution: game analytic state IDs 7, 10, 16, and 19. The figure shows that the adaptive distribution more closely fits some peaks in this figure (game analytic state ID 7 best of all); however, visual comparison is not enough to determine if our technique succeeded. In order to statistically evaluate how well the game analytic distribution produced by our algorithmic adaption strategy fit the target distribution, we calculated the KL-divergence between the target distribution and the distributions produced by both the adaptive and nonadaptive versions of Scrabblesque. KL-divergence is not a true distance, as it is asymmetric; however, it is a well-understood measure with several important properties. In particular, it is consistent, always nonnegative, and zero only when the two distributions are exactly equal. Since the KL-divergence is an asymmetric divergence value, we chose to turn it into a distance value by using the following formula:

d(P, T) = [D_KL(P || T) + D_KL(T || P)] / 2   (3)

where D_KL(P || T) indicates the KL-divergence calculated between a test distribution P and the target distribution T. As you can see, we calculate the KL-divergence in both directions and then take the average to turn it into a distance measure. The results of this analysis can be seen in Table III.
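The symmetrization in (3) is straightforward to implement. A minimal sketch for discrete distributions, assuming every probability is strictly positive:

```python
import math

def kl(p, q):
    # KL-divergence D(p || q) for two discrete distributions over the same
    # support; assumes all probabilities are strictly positive.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def symmetrized_kl(p, q):
    # equation (3): average the divergence computed in both directions
    return 0.5 * (kl(p, q) + kl(q, p))
```

As a check on the arithmetic in the text, averaging the two directional values gives (0.59 + 0.36) / 2 = 0.475, which rounds to the reported average of 0.48.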
As you can see in the table, we found that in the adaptive condition the KL-divergence values were 0.59 and 0.36 for the adaptive distribution versus the target distribution and the target distribution versus the adaptive distribution, respectively. This leads to an average KL-divergence value of 0.48, which is lower than the KL-divergence values achieved by the nonadaptive version of Scrabblesque. From this, we can conclude that our adaption strategy does induce a shift in the game analytic distribution towards the target distribution. Our second analysis involves examining the quitting rate in both the adaptive and nonadaptive versions of Scrabblesque to determine if these changes in behavior resulted in an increase in session-level retention. According to the data we had gathered previously on the nonadaptive version of Scrabblesque, 24.1% of all games were ended prematurely. Using our algorithmic adaption strategy, we were able to reduce this percentage to 11.3%. A summary of this result can be seen in Table IV. This difference was significant according to Fisher's exact test (p = 0.03). We also chose to analyze the length of games in each version of the game. In this analysis, we created bins corresponding to games that ended in a low number of turns, a medium number of turns, and a high number of turns. Bins were determined by first finding the average length of a completed game regardless of

TABLE V PERCENTAGE OF GAMES THAT ENDED IN A LOW, MEDIUM, OR HIGH NUMBER OF TURNS IN SCRABBLESQUE

TABLE VI SUMMARY OF IMI RESULTS IN SCRABBLESQUE. DIFFERENCES OBSERVED WERE STATISTICALLY SIGNIFICANT ACCORDING TO A ONE-TAILED INDEPENDENT SAMPLES T-TEST

treatment. This came out to be 10 turns with a standard deviation of 2 turns. Using these, we defined a low number of turns as less than 8, a medium number of turns as anything between 8 and 12 turns, and a high number of turns as greater than 12 turns. Table V shows the results of this analysis. As shown in the table, game lengths in the adaptive version of Scrabblesque are skewed towards a medium number of turns, whereas the nonadaptive games are a bit more uniform. 3) Long-Term Retention: We also chose to do a surface analysis of how our algorithm affected long-term retention. Although the main purpose of this research is to influence session-level retention, increasing session-level retention at the cost of long-term retention would make it of questionable use to game designers and developers. In this analysis, we examined the percentage of players that played more than one game of the nonadaptive version of Scrabblesque as well as the adaptive version. The results of this analysis showed that 30.8% of players played the adaptive version of Scrabblesque multiple times, whereas 30.2% of players played the nonadaptive version multiple times. The nonadaptive and adaptive versions of Scrabblesque performed comparably in terms of our measure of long-term retention (using Fisher's exact test). 4) Psychometric Side Effects: This study involved performing a third data collection in which we asked participants to play either the adaptive version of Scrabblesque or the nonadaptive version at random. When the player finished their first game, they were given the option to take the GEQ and the IMI.
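The quitting-rate comparison above (24.1% of 195 nonadaptive games versus 11.3% of 62 adaptive games unfinished) can be reproduced with a one-sided Fisher's exact test built on the hypergeometric distribution. The integer counts below, 47 of 195 and 7 of 62 unfinished games, are inferred from the reported percentages and totals, so treat them as an assumption.

```python
from math import comb

def fisher_one_sided(quit_a, n_a, quit_b, n_b):
    """One-sided Fisher's exact test: the probability of seeing at least
    quit_a quits in group A under the hypergeometric null, given the
    observed margins."""
    total = n_a + n_b
    quits = quit_a + quit_b
    denom = comb(total, n_a)
    p = 0.0
    for k in range(quit_a, min(quits, n_a) + 1):
        p += comb(quits, k) * comb(total - quits, n_a - k) / denom
    return p

# Inferred counts: 47/195 nonadaptive and 7/62 adaptive unfinished games.
p = fisher_one_sided(47, 195, 7, 62)
print(p)
```

The resulting one-sided p-value is small (below 0.05), consistent with the significance the paper reports for this comparison.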
As with previous user studies, participants were recruited from a combination of mailing lists and social media web sites. Snowball sampling was used, and we encouraged participants to direct their friends and family members to this study in order to gather more participants. At the conclusion of our data collection, 47 participants had chosen to take the survey. Since participants could skip portions of the survey, we chose to only look at those participants that completed the survey in full. After this data cleaning, we had 39 people who had completed the IMI and 37 people who had completed the GEQ. A summary of the data gathered for these people can be found in Table VI and in Table VII. As you can see in Table VI and in Table VII, the adaptive version of Scrabblesque outperforms the nonadaptive version in terms of average score for both intrinsic motivation and engagement. To verify that these differences were statistically significant, we ran a one-tailed independent samples T-test with the alternative hypothesis that the scores associated with the adaptive game were higher than those associated with the nonadaptive game. It is important to note that we are using parametric statistical tests and reporting the means and standard deviations of this data because we consider the data produced by each of these surveys to be ratio data. The reason that we make this assumption is that each of these surveys is a validated measure of intrinsic motivation or engagement, which gives their scores the ability to be treated as numeric for the purposes of analysis. The differences in response scores between the adaptive version of Scrabblesque and the nonadaptive version are statistically significant for both the IMI and the GEQ.

TABLE VII SUMMARY OF GEQ RESULTS IN SCRABBLESQUE. DIFFERENCES OBSERVED WERE STATISTICALLY SIGNIFICANT ACCORDING TO A ONE-TAILED INDEPENDENT SAMPLES T-TEST
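The independent-samples comparison just described can be sketched with Welch's t statistic, which does not assume equal variances across conditions. The sample data below are synthetic examples, not the study's scores.

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples (unequal variances).
    A positive value means sample_a has the higher mean."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):  # sample variance (n - 1 in the denominator)
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    se = math.sqrt(var(sample_a) / len(sample_a) + var(sample_b) / len(sample_b))
    return (mean(sample_a) - mean(sample_b)) / se

# Synthetic example: adaptive-condition scores slightly higher on average,
# so the statistic comes out positive, matching the one-tailed hypothesis.
adaptive = [5.0, 6.0, 6.5, 5.5]
nonadaptive = [4.0, 5.0, 4.5, 5.5]
print(welch_t(adaptive, nonadaptive))
```

In a one-tailed test like the paper's, only sufficiently large positive values of the statistic (relative to the t distribution with Welch-adjusted degrees of freedom) count as evidence for the adaptive condition.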
This difference implies that our algorithmic adaption strategy has a statistically relevant effect on both the intrinsic motivation experienced by people who played the game and the amount of engagement that those players felt. Since the GEQ contains four subscales that measure different intensities of engagement, we also examined how our algorithmic adaption strategy affected each of these subscales. To perform this analysis, we ran a one-tailed independent samples T-test on each subscale of the GEQ, comparing responses from the adaptive and nonadaptive versions of the game to see if player responses on any particular subscale were significantly different between conditions. The results of this analysis can be seen in Table VIII. As the table shows, we did not observe a statistically significant difference in response values for shallow forms of engagement ( and for immersion and presence, respectively); however, we observed a marginally significant difference ( ) in flow response values and a statistically significant difference ( ) in absorption response values.
V. CASE STUDY: SIDEQUEST: THE GAME
Sidequest: The Game (see Fig. 5) is a 2-D adventure game coded in Flash in which the player takes control of a nameless hero with the goal of becoming an adventurer. The hero is free to explore the world and is able to talk to friendly nonplayer characters (NPCs) to receive quests. The goal of the game is to complete three game stages by completing three quests in each stage. During each stage of the game, different quests are made available to the player. Each stage contains ten unique quests which are assigned to NPCs throughout the world. In total, there are 30 possible quests for the player to complete. Once the player has finished the three quests that are required to advance to the next stage, it is not possible to accept any other

quests. This means that a player that completes the game will finish a total of nine quests.
214 IEEE TRANSACTIONS ON COMPUTATIONAL INTELLIGENCE AND AI IN GAMES, VOL. 7, NO. 3, SEPTEMBER 2015
TABLE VIII SUMMARY OF DATA AND T-TEST RESULTS FOR GEQ SUBSCALES IN SCRABBLESQUE
TABLE IX GOAL TRANSITION MATRIX FOR THE SECOND STAGE OF SIDEQUEST: THE GAME
Fig. 5. Screenshot of Sidequest: The Game.
Although there are 30 unique quests in the game, there are only a limited number of quest types to complete. These quest types include quests that involve killing some number of enemy NPCs, quests that involve talking to certain NPCs in other areas of the world, and quests that involve solving puzzles or riddles in the game. Throughout the course of the game, the player can accept any number of possible quests, but they can only have one active quest at a time. If a player wants to change quests, they need only abandon their current one by accepting a different one from a different quest-giver. Players are also free to reject any quests that do not sound appealing based on the description. This design gives the player the freedom to perform the types of tasks that they enjoy while still giving us an idea of what specific goal they are working toward. The game logs several low-level and high-level features about gameplay. These features include information on the quests that a player accepts/rejects/completes/abandons, the number of enemies defeated, the NPCs that the player interacts with, and how close each quest-giving NPC is to the player at any given time. This game represents a significant step up in complexity from Scrabblesque in many ways. The first of these is that SQ:TG presents the user with a game environment that is much larger than Scrabblesque in terms of virtual space as well as complexity.
The user is free to explore the game world as they please (although certain parts of the game world are unavailable to the player until they have reached a certain stage of the game), and they are free to perform quests at their leisure. In addition, there are many different ways that the player can interact with this virtual environment. The player is able to talk to NPCs, fight enemies, and (to a limited extent) control the state of the environment through the destruction of certain types of terrain.
A. Game State Abstraction
In SQ:TG, accepting and completing quests is the core game mechanic. While each quest has different tasks that the player must complete, the game, in general, is about selecting the quests that you want to complete and then finishing them in order to progress to the next stage. As such, we chose to abstract the space of possible games using vanity analytics that describe how players interacted with quests [22]. In SQ:TG, the player can interact with a quest by accepting it, rejecting it, abandoning it, or completing it. Thus, the game analytic space is defined in terms of the interactions that players had with the quests available in the game world.
B. N-Gram Models of Session-Level Retention
As with Scrabblesque, the next step is to create models of session-level retention in this game analytic space. For the same reasons as described for Scrabblesque, we choose to use n-gram models to generate a sequence of game states that are associated with session-level retention in SQ:TG. Due to the complexity of SQ:TG, calculating the probability that a player will quit can be difficult, mainly because there is no clear notion of a turn in SQ:TG. In this paper, we define a turn in SQ:TG as ending when a player completes a quest. This means that all the actions that occur after a player has completed a quest, up until the player completes a new quest, occur on the same turn. This also means that a completed game of SQ:TG will contain nine turns.
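The turn definition just described can be sketched as a simple segmentation of a player's logged quest interactions. The function and action names here are illustrative, not the game's actual logging schema.

```python
def split_into_turns(actions):
    """Segment a player's quest-interaction log into turns.

    A turn ends when the player completes a quest, so every action after
    one completion up to and including the next completion falls on the
    same turn. A trailing partial turn captures players who quit mid-turn.
    """
    turns, current = [], []
    for action in actions:
        current.append(action)
        if action == "complete":
            turns.append(current)
            current = []
    if current:
        turns.append(current)
    return turns

log = ["accept", "complete", "accept", "reject", "accept", "complete"]
turns = split_into_turns(log)
# → [["accept", "complete"], ["accept", "reject", "accept", "complete"]]
```

Under this definition, a completed game (nine quest completions) segments into exactly nine turns.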
With this turn defined, it is possible to calculate this probability on each turn of the game. As with Scrabblesque, if it is greater than the a priori probability of randomly predicting whether a player ended the game early, then the sequence is considered to be associated with players quitting the game early. Once this is done, these sequences can be used to create a target distribution of game states. In SQ:TG, we chose to create a separate target distribution for each of the three stages of the game. Since this target distribution consists of actions that the player can take, we felt that it would be easier to generate a goal state if these target distributions were converted to transition matrices. These transition matrices give the probability that a player will perform an action given their previous action. An example of one of these transition matrices is shown in Table IX. This transition matrix shows, for example, that a player that accepts a

quest is most likely (with a 64% probability) to complete that quest.
C. Goal Targeting
Before any adaptions to the game environment can be made, we must define a set of actionable analytics that describe each game state. For SQ:TG, we choose to describe the game state in terms of how close certain quests are to the player at a given time. In SQ:TG, we cannot directly control where certain NPCs are; however, we can control which quests certain NPCs present to the player. As such, we can control how close certain quests are to the player by controlling which quests certain NPCs hand out [22]. With this in mind, goal targeting proceeds in a similar fashion to how it does in Scrabblesque. As with Scrabblesque, we will describe how goal targeting and the subsequent adaptions are performed using an informative example.
Before any adaptions can be made, a goal must be generated from the target distribution of game states. This involves using the transition matrices that were generated from our target distributions to generate a sequence of events that we will target. Our system uses the transition matrix to generate a sequence of actions that begins with the player accepting a quest and ends with them completing a quest. The resulting sequence is the target game sequence. In our example, let's consider the target game sequence of Accept, Reject, Complete generated by the transition matrix described in Table IX.
Next, we must retrieve a set of game states that are likely to result in the target game sequence. To do this, we return the set of k-nearest neighbors of the current game state in terms of the quests that the current player has accepted, rejected, abandoned, and completed. For this implementation, we have arbitrarily chosen k = 5, meaning we retrieve the set of quests that were accepted, rejected, completed, and abandoned by the five players most similar to the current player.
Here, similarity is defined by the number of quests that each game state has in common. Thus, similar game states are those where the players accepted, rejected, abandoned, and completed the same quests. This set of states will be referred to as the candidate game states for SQ:TG.
Once these states have been retrieved, candidate actions must be generated. Recall that actions in this environment consist of assigning quests to quest-givers in order to encourage the sequence Accept, Reject, Complete to occur. This means that we must place quests that the player is likely to accept and eventually complete (since the goal sequence does not involve the player abandoning their current quest). We also must place quests that the player is likely to reject. The set of candidate quests to place is generated by examining the candidate game states gathered earlier. In this case, the set of candidate quests to place would be each valid quest that was either completed or rejected in the candidate game states. A valid quest is a quest that the player has not interacted with in the current game. This restriction exists because quests that the player has interacted with are locked to the current quest-giving NPC and cannot be assigned to a different NPC. By limiting quest placement to valid quests, our system hides the fact that it is altering the game world in response to the player.
Fig. 6. Screenshot of characters in Sidequest: The Game. Characters circled in yellow are quest-giving NPCs. Quest-giving NPCs are also numbered 1 through 5. The character circled in blue is the player's character.
Once our system has generated the set of candidate quests, it only needs to determine the best way to assign quests to quest-givers. In our example, we need the player to accept a quest that they will eventually complete after first rejecting a second quest.
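The goal-generation step described earlier, drawing actions from the stage's transition matrix starting at Accept until a Complete is produced, can be sketched as a short random walk. The probabilities below are illustrative stand-ins for Table IX, and the length cap is a safety guard we add for the sketch, not part of the published method.

```python
import random

def sample_target_sequence(transitions, start="Accept", terminal="Complete",
                           max_len=20, rng=None):
    """Random walk over a quest-action transition matrix.

    Begins with the player accepting a quest and stops once a Complete
    action is drawn (or after max_len steps as a safety guard).
    """
    rng = rng or random.Random()
    seq = [start]
    while seq[-1] != terminal and len(seq) < max_len:
        actions, probs = zip(*transitions[seq[-1]].items())
        seq.append(rng.choices(actions, weights=probs, k=1)[0])
    return seq

# Illustrative transition probabilities (stand-ins for Table IX).
transitions = {
    "Accept":  {"Complete": 0.64, "Reject": 0.20, "Abandon": 0.16},
    "Reject":  {"Accept": 0.70, "Reject": 0.30},
    "Abandon": {"Accept": 1.00},
}
target_sequence = sample_target_sequence(transitions, rng=random.Random(0))
```

A drawn sequence such as Accept, Reject, Complete then becomes the target game sequence for the adaptation step.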
As such, the system would place a quest that the player is likely to complete such that the Euclidean distance between its quest-giver and the player at the time of placement is minimized. The exact quest given is the valid quest that was completed most often in the candidate game states retrieved earlier. For an example of this placement, refer to Fig. 6. This figure shows the player character and five quest-giving NPCs. In this example, the most proximal quest-giver is the one labeled 1 in the figure, meaning that it would be assigned the quest that the player is most likely to complete. The next step is to give the next most proximal quest-giver (the NPC labeled 2 in Fig. 6) the quest that the player is most likely to reject (since the next action in the goal sequence is to reject a quest). This process is performed just as it was for placing a quest that the player is likely to complete, and it repeats until all quest-giving NPCs on the screen have a quest associated with them.
D. Evaluation
In order to evaluate how well our adaption technique performed in SQ:TG, we evaluate it as we did with Scrabblesque. This means that we measure how well the distributions produced by the game fit the target distribution as well as the quitting rate observed. In addition, we also performed an evaluation of the psychometric side effects that our technique had on player experience in this environment, as defined by the GEQ and the IMI.
1) Data Collection and Methodology: For these analyses, we performed two separate data collections. During the first one, players could only play the nonadaptive version of the game, in which quests were assigned to quest-givers randomly. This round of data collection yielded 266 game traces, of which 141 were complete games and 125 were incomplete. This data was

used to generate the n-gram models of session-level retention that are used to generate the target distributions. Once this collection was complete, we used the data to create an adaptive version of SQ:TG and performed a second data collection. During this round of data collection, all players played this adaptive version of SQ:TG. By the end of this round of data collection, we had gathered 138 game traces. This data will be referred to as the adaptive data for the remainder of the paper.
TABLE X JENSEN-SHANNON DIVERGENCE VALUES COMPARING THE DISTRIBUTIONS CREATED BY THE ADAPTIVE/NONADAPTIVE VERSION OF SIDEQUEST: THE GAME AND THE TARGET DISTRIBUTION
TABLE XI COMPARISON BETWEEN THE NONADAPTIVE AND ADAPTIVE VERSIONS OF SIDEQUEST: THE GAME IN TERMS OF FINISHED AND UNFINISHED GAMES. ALSO GIVEN IS THE PERCENTAGE OF TOTAL GAMES THAT WERE UNFINISHED
Fig. 7. Comparing the target distribution to the distributions created by the adaptive and nonadaptive versions of Sidequest: The Game in each stage.
2) Session-Level Retention: As with Scrabblesque, we first analyzed how well our algorithm is able to induce the targeted distribution of behavior. In these experiments, we also used the data gathered from the model-building phase of the algorithm as the baseline for comparison. This data is referred to as the nonadaptive data for the remainder of this discussion. Fig. 7 shows a visual comparison of the distribution generated by the nonadaptive version of SQ:TG, the distribution generated by the adaptive version of SQ:TG, and the target distribution. In both Fig. 7(a) and (c), it appears as though the adaptive version of SQ:TG is able to better fit the two desired peaks in Stage 1 and Stage 3 of the game. It is more difficult to determine which version of the game better fits the target distribution for Stage 2 of the game [seen in Fig. 7(b)].
To statistically evaluate these distributions, we calculate the Jensen-Shannon divergence [23] between each distribution and the target distribution. Jensen-Shannon divergence can be considered a more general form of KL divergence. It is used here because KL divergence requires that the distributions be absolutely continuous with respect to each other; that is, no event may have zero probability in one of the distributions being tested while having nonzero probability in the other. In Scrabblesque, the distributions created met this requirement; however, the distributions produced by SQ:TG do not. Jensen-Shannon divergence relaxes this requirement, making it applicable to the distributions in SQ:TG. Another added benefit of Jensen-Shannon divergence is that it is a symmetric measure, meaning that calculations do not need to be made in both directions and then averaged. Results of this analysis are shown in Table X. As the table shows, the adaptive version of SQ:TG is able to produce a distribution that better fits the target distribution in two out of the three stages of the game. Stage 2 was the only stage where we did not see a decrease in Jensen-Shannon divergence.
The second analysis performed is a comparison of session-level retention as measured by the quitting rate in the adaptive and nonadaptive versions of SQ:TG. A summary of this data is shown in Table XI. As seen in the table, the quitting rate in the adaptive version of SQ:TG is 34.10% while the quitting rate in the nonadaptive version is 47.00%, a difference of 12.9%. We used Fisher's exact test to determine if this difference was statistically significant and found that, with a p-value of 0.015, it is. In addition to this general quitting-rate analysis, we performed an analysis of when players quit. In this analysis, we examined how many players completed each stage of the game to determine at which stage players are quitting. These results are shown in Table XII.
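The Jensen-Shannon computation described above can be sketched as follows (base-2 logs, so values fall in [0, 1]). The distributions given are illustrative, not the study's measured stage distributions.

```python
from math import log2

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions.

    Unlike KL divergence, it is symmetric and remains defined when one
    distribution assigns zero probability to an outcome the other does not.
    """
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]

    def kl(a, b):
        # 0 * log(0) is taken to be 0 by skipping zero-probability terms.
        return sum(ai * log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Illustrative distributions over abstracted game states.
target = [0.50, 0.30, 0.15, 0.05]
observed = [0.40, 0.35, 0.15, 0.10]
divergence = js_divergence(target, observed)
```

Identical distributions give a divergence of 0, and completely disjoint distributions give 1; a lower value for the adaptive version indicates a better fit to the target.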
These results show that the adaptive version was better able to retain players at each stage of SQ:TG. For both Stage 1 and Stage 2, the difference in completion percentages is roughly 8%. The difference in completion percentages for Stage 3 grows to

roughly 12%, since everyone who completed Stage 2 in the adaptive version of SQ:TG also completed Stage 3.
TABLE XII THE PERCENTAGE OF PLAYERS THAT COMPLETED EACH STAGE OF SIDEQUEST: THE GAME IN THE ADAPTIVE AND NONADAPTIVE VERSIONS
TABLE XIII THE PERCENTAGE OF PLAYERS THAT QUIT AT EACH STAGE OF SIDEQUEST: THE GAME
TABLE XIV SUMMARY OF IMI RESULTS IN SIDEQUEST: THE GAME
TABLE XV SUMMARY OF GEQ RESULTS IN SIDEQUEST: THE GAME
It is also interesting to view these results with respect to how many people were lost at each stage. These data are shown in Table XIII. While it is true that the adaptive version of SQ:TG loses fewer people at each stage, it is interesting to note that during Stage 2 the percentage of players lost is similar to the percentage lost in the nonadaptive version.
3) Long-Term Retention: As with Scrabblesque, we also performed a long-term retention analysis to get an idea of whether our technique for dynamic game adaption would have any effects on play experience, either positive or negative. This analysis shows that SQ:TG was not able to get many players to play multiple games in either condition: 0.0% of players that played the adaptive version of SQ:TG played multiple games, while 1.2% of players that played the nonadaptive version played multiple games. This is likely due to the length of the game. A single game of Scrabblesque can be completed in about 5 minutes, whereas it can take upwards of 30 minutes to finish SQ:TG. This might discourage players from wanting to invest the time in multiple playthroughs. Regardless, this result also shows that our algorithm did not seem to definitively affect long-term retention in either a positive or negative way.
4) Psychometric Side Effects: During the data collections for SQ:TG that were detailed earlier, players were offered the chance to take the IMI and the GEQ once they had completed the game.
These surveys were optional and players were not required to answer every question on either survey. As with the experiment performed on Scrabblesque, players always took the IMI before the GEQ, but the order of the individual questions on each survey was randomized. After all data collections were completed, 134 players had completed the IMI and 122 players had completed the GEQ in the nonadaptive version of SQ:TG. In the adaptive version of SQ:TG, 84 people finished the IMI and 88 people completed the GEQ. Table XIV contains a summary of the IMI data, and Table XV contains a summary of the GEQ data. Recall that, as with Scrabblesque, survey responses for the GEQ and IMI were only considered if the player had responded to every statement. So, if a player responded to each statement on the GEQ but not on the IMI, then their responses would be used in the analysis of the GEQ, but not of the IMI.
Examining Tables XIV and XV shows that the adaptive version of SQ:TG outperforms the nonadaptive version for both player engagement and intrinsic motivation. On the IMI, players in the adaptive condition reported a higher average score than players in the nonadaptive condition, and this difference was found to be statistically significant (, ) using a two-tailed independent samples T-test. For the GEQ, players in the adaptive condition likewise reported a higher average score than players in the nonadaptive condition, and this difference was also found to be statistically significant (, ) using a two-tailed independent samples T-test.
E. Analysis of the Aspects of Engagement
In addition, we repeated the analysis performed in Scrabblesque describing how each version of the game performed on the individual aspects of player engagement. The results of this analysis are shown in Table XVI.
The adaptive version of SQ:TG produces higher scores for each subscale of the GEQ, although these differences are not always statistically significant. For the shallower forms of engagement (immersion and presence), the differences reported were not statistically significant ( and for immersion and presence, respectively). Moving to the deeper forms of engagement, however, we found that the differences between the nonadaptive version of SQ:TG and the adaptive version become statistically significant ( and for flow and absorption, respectively).
VI. DISCUSSION
The most important finding from these case studies is that our technique for dynamic game adaption is able to successfully increase session-level retention in both game environments without negatively impacting long-term retention or, worse, engagement. In most cases, it was also able to produce behavior distributions that better fit the target distribution than a nonadaptive game. This shows that our technique is powerful enough to produce the desired results in terms of distribution fitting and session-level retention and is also generalizable to more than one type of game.
The analysis of psychometric side effects also yields interesting results. According to those studies, our technique positively impacts player experience. While it was not our goal to specifically improve upon these metrics, this result shows that the gains in session-level retention do not come at the cost of player experience. A possible direction for future work is to explore whether it is our technique, rather than some other confounding factor, that

contributes to this increase in intrinsic motivation and engagement.
TABLE XVI SUMMARY OF DATA AND T-TEST RESULTS FOR GEQ SUBSCALES IN SIDEQUEST: THE GAME
The results of the analysis of the aspects of engagement merit further discussion as well. The results show that the adaptive version of both games performed comparably to their nonadaptive counterparts with respect to immersion and presence. We feel that this is expected behavior since, as Brockmyer et al. [13] explain, feelings of immersion and presence are not uncommon in games. Notice that as we move to measures of deeper engagement, such as flow and absorption, the adaptive version of each game begins to outperform the nonadaptive version. In other words, the adaptive version of each game environment enables players to feel a deeper sense of engagement, whereas both versions enable players to feel a shallow sense of engagement.
The results of the distribution analysis in SQ:TG point to a possible limitation of our technique. In Stage 2, both versions of the game performed similarly (with the nonadaptive version performing slightly better) with respect to fitting the target distribution. One explanation for this phenomenon has to do with the complexity of the distribution that needed to be fit and the relative complexity of the game environment. It is possible that our technique, implemented in a complex game environment (such as SQ:TG), can only induce relatively simple target distributions, whereas the same technique implemented in a simpler game environment (such as Scrabblesque) has the ability to induce more complex target distributions.
It is interesting to notice that engagement and intrinsic motivation scores associated with SQ:TG were much lower than the scores associated with Scrabblesque.
So while we find that there are similar gains in engagement/intrinsic motivation across game environments, SQ:TG does not seem to be as engaging/intrinsically motivating as Scrabblesque. We attribute this to the underlying design of each game environment. Scrabblesque is simply a Flash implementation of Scrabble, a game that has been around since 1938 [24]. This game has managed to survive the test of time, so it is not surprising to find that it has high engagement and intrinsic motivation values associated with it. SQ:TG, however, is a game of our own creation. It is likely that SQ:TG simply does not provide as engaging or intrinsically motivating an experience as Scrabblesque, since we are not experts in game design.
VII. CONCLUSION
In this paper, we have presented a technique that leverages the prescriptive and descriptive power of game analytics to dynamically adapt games in order to improve session-level retention in a variety of game environments. We have also performed a psychometric evaluation which shows that the gains made in session-level retention do not come at the cost of player experience or engagement. This system uses n-gram models of session-level retention to identify game states in the game analytic space that are predictive of players quitting the game, and then uses these models to make intelligent changes to the game environment that steer play away from these states, thereby increasing session-level retention.
This paper serves as evidence of the power that game analytics have beyond being purely descriptive or predictive tools in games. We have shown that they can be used to dynamically make changes to game environments in order to influence player behavior. We hope that this work will encourage others to discover new ways that game analytics can be used not just to describe or predict behaviors, but to dynamically create and shape new player experiences.
REFERENCES
[1] P. Tarng, K. Chen, and P.
Huang, "An analysis of WoW players' game hours," in Proc. 7th ACM SIGCOMM Workshop Netw. Syst. Support Games, 2008.
[2] P.-Y. Tarng, K.-T. Chen, and P. Huang, "On prophesying online gamer departure," in Proc. 8th Annu. Workshop Netw. Syst. Support Games, Nov. 2009.
[3] T. Debeauvais, B. Nardi, D. J. Schiano, N. Ducheneaut, and N. Yee, "If you build it they might stay: Retention mechanisms in World of Warcraft," in Proc. 6th Int. Conf. Found. Digit. Games, 2011.
[4] T. Debeauvais, C. V. Lopes, N. Yee, and N. Ducheneaut, "Retention and progression: Seven months in World of Warcraft," in Proc. 9th Int. Conf. Found. Digit. Games.
[5] Z. Borbora, J. Srivastava, K.-W. Hsu, and D. Williams, "Churn prediction in MMORPGs using player motivation theories and an ensemble approach," in Proc. 3rd Int. Conf. Social Comput., 2011.
[6] J. Kawale, A. Pal, and J. Srivastava, "Churn prediction in MMORPGs: A social influence based approach," in Proc. Int. Conf. Comput. Sci. Eng., 2009.
[7] B. Weber, M. John, M. Mateas, and A. Jhala, "Modeling player retention in Madden NFL 11," in Proc. 23rd IAAI Conf.
[8] B. Weber, M. Mateas, and A. Jhala, "Using data mining to model player experience," in Proc. FDG Workshop Eval. Player Exper. Games.
[9] Z. Lin, C. Lewis, S. Kurniawan, and J. Whitehead, "Why players start and stop playing a Chinese social network game," J. Gaming Virtual Worlds, vol. 5, no. 3.
[10] Y.-L. Kuo et al., "Community-based game design: Experiments on social games for commonsense data collection," in Proc. Workshop Human Comput., 2009.
[11] E. Andersen et al., "On the harmfulness of secondary game objectives," in Proc. 6th Int. Conf. Found. Digit. Games, 2011.
[12] The Intrinsic Motivation Inventory [Online]. Available: selfdeterminationtheory.org/questionnaires/10-questionnaires/50
[13] J. Brockmyer et al., "The development of the game engagement questionnaire: A measure of engagement in video game-playing," J. Exp. Social Psychol., vol. 45, no. 4.
[14] E. McAuley, T. Duncan, and V. V. Tammen, "Psychometric properties of the Intrinsic Motivation Inventory in a competitive sport setting: A confirmatory factor analysis," Res. Quart. Exercise Sport, vol. 60, no. 1, p. 48.
[15] E. Ries, The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses. New York, NY, USA: Random House, 2011.

[16] D. L. Roberts, S. Bhat, K. S. Clair, and C. L. Isbell, "Authorial idioms for target distributions in TTD-MDPs," in Proc. 22nd Conf. Artif. Intell.
[17] B. Harrison and D. Roberts, "When players quit (playing Scrabble)," in Proc. 8th Annu. AAAI Conf. Artif. Intell. Interact. Digit. Entertain.
[18] J. Lin, E. Keogh, S. Lonardi, and B. Chiu, "A symbolic representation of time series, with implications for streaming algorithms," in Proc. 8th ACM SIGMOD Workshop Res. Issues Data Mining Knowl. Disc., 2003.
[19] B. Harrison and D. Roberts, "When players quit (playing Scrabble)," in Proc. Conf. Artif. Intell. Interact. Digit. Entertain.
[20] B. Harrison and D. L. Roberts, "Analytics-driven dynamic game adaption for player retention in Scrabble," in Proc. IEEE Conf. Comput. Intell. Games (CIG), 2013.
[21] A. Bhattacharyya, "On a measure of divergence between two statistical populations defined by their probability distributions," Bull. Calcutta Math. Soc., vol. 35.
[22] B. Harrison and D. Roberts, "Analytics-driven dynamic game adaption for player retention in a 2-dimensional adventure game," in Proc. 10th Artif. Intell. Interact. Digit. Entertain. Conf., 2014.
[23] J. Lin, "Divergence measures based on the Shannon entropy," IEEE Trans. Inf. Theory, vol. 37, no. 1.
[24] J. Brunot, Scrabble.
Brent Harrison received the B.S. degree in computer science and the B.A. degree in English from Auburn University, Auburn, AL, USA. He received the M.S. and Ph.D. degrees in computer science from North Carolina State University, Raleigh, USA, in 2012 and 2014, respectively. He is currently a research scientist with the College of Computing, Georgia Institute of Technology, Atlanta, USA, working in the Entertainment Intelligence Lab. His research interests are in using machine learning, player modeling, and artificial intelligence to enhance game design. Dr.
Harrison is a member of the Association for the Advancement of Artificial Intelligence.
David L. Roberts received the B.A. degree in computer science and mathematics from Colgate University, Hamilton, NY, USA, in 2003, and the Ph.D. degree in computer science from the College of Computing, Georgia Institute of Technology, Atlanta, USA. He is currently an Assistant Professor of Computer Science at North Carolina State University, Raleigh, USA. His research interests lie at the intersection of machine learning, social and behavioral psychology, and human-computer interaction. He has a particular focus on computation as a tool to provide insight into human behavior in narrative, virtual world, and game environments. Dr. Roberts is a member of the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery.

Texas Hold em Inference Bot Proposal. By: Brian Mihok & Michael Terry Date Due: Monday, April 11, 2005 Texas Hold em Inference Bot Proposal By: Brian Mihok & Michael Terry Date Due: Monday, April 11, 2005 1 Introduction One of the key goals in Artificial Intelligence is to create cognitive systems that

More information

Colour Profiling Using Multiple Colour Spaces

Colour Profiling Using Multiple Colour Spaces Colour Profiling Using Multiple Colour Spaces Nicola Duffy and Gerard Lacey Computer Vision and Robotics Group, Trinity College, Dublin.Ireland duffynn@cs.tcd.ie Abstract This paper presents an original

More information