
CS420+500: Advanced Algorithm Design and Analysis
Lectures: Feb 27 + Mar 1 + Mar 3, 2017
Prof. Will Evans                                Scribe: Adrian She

In this lecture we:

- Summarized how linear programs can be used to model zero-sum games;
- Motivated/reviewed dynamic programming algorithms using the longest common subsequence problem;
- Developed a fast algorithm for longest increasing subsequence using a variant of dynamic programming.

1 Zero-Sum Games: Summary

Consider a game with two players, Row and Col, with payoff matrix

    [  3  -1 ]
    [ -2   1 ]

where entry (i, j) is the amount Col pays Row when Row plays strategy i and Col plays strategy j. Our goal is to find a mixed strategy (a probability distribution over possible plays) for Row that maximizes Row's expected winnings no matter what strategy Col chooses to execute. Let z denote the guaranteed winnings of Row using strategy (x_1, x_2). Then the solution to the following linear program describes the optimal strategy for Row, since whatever mixed strategy Row commits to, Col may respond with the pure strategy that minimizes Row's winnings:

    max z
    subject to  z <= 3x_1 - 2x_2    (Col chooses pure strategy 1)
                z <= -x_1 + x_2     (Col chooses pure strategy 2)
                x_1 + x_2 = 1
                x_1, x_2 >= 0                                        (1)

On the other hand, Col wants to find a mixed strategy (y_1, y_2) that minimizes their loss, no matter which pure strategy Row chooses. The solution to the following linear program describes the optimal strategy to minimize Col's losses:

    min w
    subject to  w >= 3y_1 - y_2     (Row chooses pure strategy 1)
                w >= -2y_1 + y_2    (Row chooses pure strategy 2)
                y_1 + y_2 = 1
                y_1, y_2 >= 0                                        (2)
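As a sanity check on these numbers, a 2x2 zero-sum game with a fully mixed equilibrium can be solved in closed form by equalizing each player's expected payoff against the opponent's two pure strategies. A minimal Python sketch (the function name is mine; it assumes the equilibrium is fully mixed, i.e., the denominator a - b - c + d is nonzero):

```python
from fractions import Fraction

def solve_2x2_zero_sum(A):
    """Solve a 2x2 zero-sum game that has a fully mixed equilibrium.

    A[i][j] is Row's payoff when Row plays strategy i and Col plays j.
    Returns (Row's strategy, Col's strategy, value of the game).
    """
    a, b = A[0]
    c, d = A[1]
    den = Fraction(a - b - c + d)  # nonzero iff the equilibrium is fully mixed
    # Equalize Row's payoff against Col's two pure strategies,
    # and Col's loss against Row's two pure strategies.
    x1 = Fraction(d - c) / den          # P(Row plays strategy 1)
    y1 = Fraction(d - b) / den          # P(Col plays strategy 1)
    v = Fraction(a * d - b * c) / den   # value of the game
    return (x1, 1 - x1), (y1, 1 - y1), v
```

On the lecture's matrix [[3, -1], [-2, 1]] this returns the strategies (3/7, 4/7) and (2/7, 5/7) with value 1/7, matching the linear programming solutions below.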

Note now that linear programs (1) and (2) are dual to each other! Writing x_3 = z and y_3 = w:

    Row:  max 0x_1 + 0x_2 + 1x_3         Col:  min 0y_1 + 0y_2 + 1y_3
          -3x_1 + 2x_2 + x_3 <= 0              -3y_1 +  y_2 + y_3 >= 0
            x_1 -  x_2 + x_3 <= 0               2y_1 -  y_2 + y_3 >= 0
            x_1 + x_2 = 1                        y_1 + y_2 = 1
            x_1, x_2 >= 0                        y_1, y_2 >= 0

Hence they have a common optimal value v by the linear programming duality theorem. The solutions are (x_1, x_2) = (3/7, 4/7) and (y_1, y_2) = (2/7, 5/7) respectively, which achieve the optimum (expected winnings) v = 1/7. This pair of strategies is a Nash equilibrium for the game: neither player can improve their expected winnings (or reduce their losses) by unilaterally changing their strategy.

2 Dynamic Programming and Friends

We now switch gears to dynamic programming.

2.1 Longest Common Subsequence

Problem. Given a string X of m characters and a string Y of n characters, what is a longest common subsequence of X and Y? Denote a longest common subsequence of X and Y by LCS(X, Y).

Definition 1. A subsequence of a character string (array) X is X[i_1]X[i_2]...X[i_k] where i_1 < i_2 < ... < i_k.

Equivalently, LCS(X, Y) is a longest non-crossing matching in the bipartite graph B whose nodes are the characters of X and Y respectively, with an edge between a character of X and a character of Y whenever they are the same. For instance, if X = ABANDON and Y = BADNODNO, then LCS(X, Y) is one of BADON, BANDO, or BANON.

We may be tempted to use a greedy strategy: directly align X and Y and collect all common characters in matching positions as the longest common subsequence. However, this method does not work. Consider the strings X = axxbxx and Y = bxxaxxbxx:

    axxbxx
    bxxaxxbxx

The common subsequence found through direct alignment is xxxx, but the longest common subsequence is actually axxbxx (all of X). By considering strings X = (axxbxx)^n and Y = bxx(axxbxx)^n, the subsequence found using direct alignment may be much shorter than the actual longest common subsequence as n increases.

Hence, we will use dynamic programming to solve this problem. The first step is to formulate recursive subproblems. To do this, we consider the relationship between X[m] and Y[n]:

- If X[m] = Y[n], we match them and append this match to LCS(X[1...m-1], Y[1...n-1]).
- If not, we take the longer of LCS(X[1...m-1], Y[1...n]) and LCS(X[1...m], Y[1...n-1]) as our LCS up to that point.

Hence we have mn subproblems of the form LCS(X[1...i], Y[1...j]) for 1 <= i <= m and 1 <= j <= n. To keep track of the solutions of these subproblems, we use a table F, where the entry F[i, j] denotes the length of LCS(X[1...i], Y[1...j]). We compute F[i, j] by:

    F[i, j] = 0                               if i = 0 or j = 0
    F[i, j] = F[i-1, j-1] + 1                 if X[i] = Y[j]
    F[i, j] = max(F[i-1, j], F[i, j-1])       if X[i] != Y[j]         (3)

The entries F[i, j] with i = 0 or j = 0 are the base cases. Using recurrence (3), the dynamic programming algorithm for longest common subsequence initializes an m x n table F, computes each entry of F using equation (3), and then backtracks through the table to reconstruct the longest common subsequence.

           A  B  A  N  D  O  N
        0  0  0  0  0  0  0  0
    B   0  0  1  1  1  1  1  1
    A   0  1  1  2  2  2  2  2
    D   0  1  1  2  2  3  3  3
    N   0  1  1  2  3  3  3  4
    O   0  1  1  2  3  3  4  4
    D   0  1  1  2  3  4  4  4
    N   0  1  1  2  3  4  4  5
    O   0  1  1  2  3  4  5  5

    Table 1: Dynamic programming table for the example strings

It was suggested that only two rows of the table need to be kept (instead of all rows) in order to save space, since computing row i requires only row i-1. The algorithm takes O(mn) time to compute the longest common subsequence of two strings.

Finally, we note that longest common subsequence may be used to solve edit distance.

Edit Distance. Given a string A of n characters and a string B of m characters, what is the fewest number of insertions and deletions needed to transform A into B? The answer is n + m - 2l, where l is the length of LCS(A, B): keep the l characters of a longest common subsequence, delete the other n - l characters of A, and insert the other m - l characters of B.
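The recurrence and the backtracking step translate directly into code. A short Python sketch (the function name is mine):

```python
def lcs(X, Y):
    """Longest common subsequence via the O(mn) dynamic program.

    F[i][j] holds the length of LCS(X[1..i], Y[1..j]); backtracking
    through F recovers one longest common subsequence itself.
    """
    m, n = len(X), len(Y)
    F = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:      # X[i] = Y[j] in 1-based notation
                F[i][j] = F[i - 1][j - 1] + 1
            else:
                F[i][j] = max(F[i - 1][j], F[i][j - 1])
    # Backtrack from F[m][n] to reconstruct one longest common subsequence.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif F[i - 1][j] >= F[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```

For X = ABANDON and Y = BADNODNO this returns a common subsequence of length 5 (one of BADON, BANDO, or BANON), matching Table 1, and the edit distance follows as 7 + 8 - 2*5 = 5.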

2.2 Longest Increasing Subsequence

Now we consider the longest increasing subsequence problem.

Longest Increasing Subsequence. Given a sequence of numbers R of length n, find a longest subsequence of R that is increasing. For instance, if R is the sequence 5, 3, 4, 9, 6, 2, 1, 8, then a longest increasing subsequence is 3, 4, 6, 8.

There is a reduction from longest increasing subsequence to longest common subsequence: sort R, call the sorted list R_sorted, and compute LCS(R, R_sorted), which is a longest increasing subsequence of R. This takes O(n^2) time.

We can provide a faster solution, motivated by the natural first attempt of scanning the array once while maintaining a best-so-far subsequence. When finding the longest increasing subsequence of R[1...k], what do we want to keep about the array R[1...k-1]? We might keep the best or "most extensible" increasing subsequence of R[1...k-1] as we scan the k-th element of the array, namely a longest increasing subsequence with smallest final element. However, consider the array R = [1 2 10 9 7 8 3 4 5]. The most extensible subsequence up to the 6th element is S = [1 2 7 8], yet we cannot extend S as we scan the rest of the array, and S is not a longest increasing subsequence since S' = [1 2 3 4 5] is longer.

We will instead keep a family of best increasing subsequences, one of each length 1, 2, ..., j; call them BIS[1], ..., BIS[j] respectively, where BIS[i] is an increasing subsequence of length i with the smallest possible final element. For instance, if R = [8 3 4 9 6 2 1 5 7 6], the family of best increasing subsequences evolves as follows:

               After seeing R[7] = 1   R[8] = 5         R[9] = 7
    BIS[1]     1                       1                1
    BIS[2]     3, 4                    3, 4             3, 4
    BIS[3]     3, 4, 6                 3, 4, 5          3, 4, 5
    BIS[4]     does not exist          does not exist   3, 4, 5, 7

Updating the family of best increasing subsequences may take O(n) time per element if each whole subsequence is stored; however, we can do each update in logarithmic time. To do this, we view BIS[i] as the list of end numbers of the best increasing subsequences of length i, as those numbers are encountered in a left-to-right sweep of the input. There is a pointer from an element x in BIS[i] to an element y in BIS[i-1] if x is part of a sequence that extends y; Figure 1 shows the tree generated on the sample input R. Storing the last elements instead of the full sequences enables processing the next number of R in O(log n) time: the current last numbers of the lists BIS[i] are in sorted order, so we may binary search over i to determine which list the next number should end, and we point it to the current end value of the previous list. This yields an O(n log n) time algorithm for longest increasing subsequence.
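The sweep with binary search can be sketched as follows; here tails[i] plays the role of the last element of BIS[i+1], and the parent pointers mirror the tree of Figure 1 (function and variable names are mine):

```python
import bisect

def longest_increasing_subsequence(R):
    """Strictly increasing LIS in O(n log n) via the BIS family."""
    tails = []              # tails[i] = last element of BIS[i+1]
    tails_idx = []          # index in R of that element
    parent = [-1] * len(R)  # predecessor pointers for reconstruction
    for k, x in enumerate(R):
        # Binary search: the BIS list that x should end.
        i = bisect.bisect_left(tails, x)
        if i > 0:
            parent[k] = tails_idx[i - 1]   # x extends the end of BIS[i]
        if i == len(tails):                # x starts a new, longer BIS
            tails.append(x)
            tails_idx.append(k)
        else:                              # x is a smaller end for BIS[i+1]
            tails[i] = x
            tails_idx[i] = k
    if not tails:
        return []
    # Follow parent pointers back from the end of the longest BIS.
    out, k = [], tails_idx[-1]
    while k != -1:
        out.append(R[k])
        k = parent[k]
    return out[::-1]
```

On the earlier example R = [5, 3, 4, 9, 6, 2, 1, 8] this returns [3, 4, 6, 8].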

[Figure 1: Collection of BIS[i] when the algorithm finishes processing R]

2.2.1 Reducing LCS to LIS

Now, we can use longest increasing subsequence to create an algorithm for longest common subsequence. We start with the two strings X and Y and build a table, noting the positions in the strings whose characters match. For example, with X = ABCBACCB along the columns and Y = BCDABCC along the rows:

        A  B  C  B  A  C  C  B
    B      x     x           x
    C         x        x  x
    D
    A   x           x
    B      x     x           x
    C         x        x  x
    C         x        x  x

    Table 2: Table of matching positions for two sample strings

Finding a longest common subsequence of the two strings then means finding a sequence of index pairs of matching characters that increases in both coordinates. We may list the matching index pairs (row r, column c) of the example as follows:

    r:  1  1  1  2  2  2  ...
    c:  2  4  8  3  6  7  ...

We are now faced with a problem, since running longest increasing subsequence on c may return multiple matches within the same row (and a single character of one string may not be matched with multiple characters of the other). Reversing the order of the column numbers within each row resolves this problem, since a strictly increasing subsequence of c can then use at most one entry per row:

    r:  1  1  1  2  2  2  ...
    c:  8  4  2  7  6  3  ...

Running longest increasing subsequence on this reversed sequence of column numbers then provides a longest common subsequence of X and Y. This takes O(n^2 log n) time in the worst case, for there may be O(n^2) matching pairs; however, if the number of matching pairs is small, this may be faster than the dynamic program.
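This row-by-row construction (list each row's matching columns in decreasing order, then take a strictly increasing subsequence of the column indices) can be sketched as follows; the function name is mine, and for brevity it returns only the LCS length:

```python
import bisect
from collections import defaultdict

def lcs_length_via_lis(X, Y):
    """LCS length of X and Y by reduction to longest strictly
    increasing subsequence on column indices of matching characters."""
    # For each character, the columns of X where it occurs, in
    # DECREASING order, so a strictly increasing subsequence can
    # use at most one match per row of Y.
    cols = defaultdict(list)
    for j, ch in enumerate(X):
        cols[ch].append(j)
    seq = []
    for ch in Y:                       # sweep the rows (characters of Y)
        seq.extend(reversed(cols[ch]))
    # Standard O(k log k) LIS-length computation on the k match pairs.
    tails = []
    for x in seq:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)
```

On the earlier strings X = ABANDON, Y = BADNODNO this gives 5, agreeing with Table 1.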

Recall that longest common subsequence can be used to solve the edit distance problem. The above approach of building a table of matching characters can be used to solve the edit distance problem as well (Myers 1986). We construct a graph on the grid points (i, j) for 0 <= i <= n and 0 <= j <= m, where n = |X| and m = |Y|. We connect (i, j) to (i+1, j) and to (i, j+1) with weight 1, since these steps correspond to a deletion or an insertion. Next, we connect (i, j) to (i+1, j+1) with weight 0 iff the (i+1)-st character of X and the (j+1)-st character of Y match. These edge weights are based on our discussion of the recurrence relation for longest common subsequence. The edit distance of X and Y then corresponds to the length of a shortest path from (0, 0) to (n, m) in this LCS graph. We may run Dijkstra's algorithm to compute this path; suppose it has length D. Dijkstra's algorithm will have explored only those vertices (i, j) that are at distance at most D from the origin. Since every horizontal or vertical edge has length one, such vertices satisfy |i - j| <= D, and in an n x m grid there are O(D min{n, m}) of them. Furthermore, because the edge weights are only 0 and 1 and each vertex has constant degree, Dijkstra's algorithm can be implemented to run in O(|V| + |E|) time on this graph. Hence, Myers' algorithm runs in O(min{n, m} D) time.
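A deque-based variant of Dijkstra's algorithm (0-1 BFS) suffices for these 0/1 edge weights: weight-0 edges go to the front of the deque, weight-1 edges to the back. A small sketch under the insertion/deletion cost model described above (names are mine):

```python
from collections import deque

def edit_distance(X, Y):
    """Insert/delete edit distance as a shortest path in the LCS graph.

    Vertices are grid points (i, j); horizontal/vertical edges cost 1
    (insertion/deletion), diagonal edges cost 0 when characters match.
    """
    n, m = len(X), len(Y)
    INF = float("inf")
    dist = [[INF] * (m + 1) for _ in range(n + 1)]
    dist[0][0] = 0
    dq = deque([(0, 0)])
    while dq:
        i, j = dq.popleft()
        d = dist[i][j]
        if i < n and j < m and X[i] == Y[j] and d < dist[i + 1][j + 1]:
            dist[i + 1][j + 1] = d          # free diagonal step on a match
            dq.appendleft((i + 1, j + 1))
        if i < n and d + 1 < dist[i + 1][j]:
            dist[i + 1][j] = d + 1          # delete X[i]
            dq.append((i + 1, j))
        if j < m and d + 1 < dist[i][j + 1]:
            dist[i][j + 1] = d + 1          # insert Y[j]
            dq.append((i, j + 1))
    return dist[n][m]
```

For X = ABANDON and Y = BADNODNO this gives 5, agreeing with n + m - 2l = 7 + 8 - 2*5 from the LCS table.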