CS 171, Intro to A.I. Midterm Exam Fall Quarter, 2016


CS 171, Intro to A.I. Midterm Exam, Fall Quarter, 2016

YOUR NAME:          YOUR ID:          ROW:          SEAT:

The exam will begin on the next page. Please do not turn the page until told. When you are told to begin the exam, please check first to make sure that you have all eight pages, as numbered 1-8 in the bottom-right corner of each page. We wish to avoid copy problems. We will supply a new exam for any copy problems.

The exam is closed-notes, closed-book. No calculators, cell phones, electronics. Please turn off all cell phones now. Please clear your desk entirely, except for pen, pencil, eraser, a blank piece of paper (for scratch-pad use), and an optional water bottle. Please write your name and ID# on the blank piece of paper and turn it in with your exam.

This page summarizes the points for each question, so you can plan your time.

1. (12 pts total) TRUE/FALSE.
2. (8 pts total) SEARCH PROPERTIES.
3. (4 pts total, 1 pt each) TASK ENVIRONMENT.
4. (20 pts total, 5 pts each) STATE-SPACE SEARCH STRATEGIES.
5. (9 pts total) DOMINATING HEURISTICS.
6. (10 pts total) MINIMAX.
7. (8 pts total) ALPHA-BETA PRUNING.
8. (21 pts total) CONSTRAINT SATISFACTION PROBLEMS.
9. (8 pts total) CONSTRAINT SATISFACTION (CSP) CONCEPTS.

The exam is printed on both sides to save trees! Work both sides of each page!

1. (12 pts total, 1 pt each) TRUE/FALSE. Mark the following statements True (T) or False (F).

F  Uniform-cost search will never expand more nodes than A*-search.
F  Depth-first search will always expand more nodes than breadth-first search.
T  Let h1(n) and h2(n) both be admissible heuristics. Then, min(h1, h2) is necessarily admissible.
F  Let h1(n) be an admissible heuristic, and let h2(n) be an inadmissible heuristic. Then (h1 + h2)/2 is necessarily admissible.
T  Let h1(n) be an admissible heuristic, and h2(n) = 2*h1(n). The solution found by A* tree search with h2(n) is guaranteed to have a cost at most twice as much as the optimal path.
F  RBFS will possibly re-expand some node that it has visited before, but SMA* will not.
T  The most-constrained variable heuristic provides a way to select the next variable to assign in a backtracking search for solving a CSP.
F  The purpose of the least-constraining-value heuristic is to reduce the branching factor of the backtracking search.
F  By using the most-constrained variable heuristic and the least-constraining value heuristic we can solve every CSP in time linear in the number of variables.
T  When enforcing arc consistency in a CSP, the set of values that remain when the algorithm terminates does not depend on the order in which arcs are processed from the queue.
F  When using alpha-beta pruning, the computational savings are independent of the order in which children are expanded.
F  When using expectimax to compute a policy, re-scaling the values of all the leaf nodes by multiplying them all by 10 can result in a different policy being optimal.
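The last statement above can be checked mechanically: multiplying all leaf values by a positive constant scales every expectimax value by the same constant, so the argmax at the root cannot change. A minimal sketch, on a hypothetical two-move tree (not a tree from this exam):

```python
# Expectimax over trees encoded as numbers (leaves), ('max', children),
# or ('exp', children); chance nodes average their children uniformly.

def expectimax(node):
    if isinstance(node, (int, float)):
        return node
    kind, children = node
    values = [expectimax(c) for c in children]
    return max(values) if kind == 'max' else sum(values) / len(values)

def best_move(children):
    """Index of the child a Max node would pick."""
    values = [expectimax(c) for c in children]
    return values.index(max(values))

def scale(node, k):
    """Multiply every leaf value by k, keeping the tree shape."""
    if isinstance(node, (int, float)):
        return k * node
    kind, children = node
    return (kind, [scale(c, k) for c in children])

moves = [('exp', [3, 9]), ('exp', [5, 5])]   # expected values 6 and 5
assert best_move(moves) == 0
assert best_move([scale(m, 10) for m in moves]) == 0   # same policy after x10
```

Note that this invariance holds because expectimax values transform by the same positive linear map at every node; a nonlinear rescaling of the leaves could change the policy.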

2. (8 pts total, -1 pt each wrong answer, but not negative) SEARCH PROPERTIES. Fill in the values of the four evaluation criteria for each search strategy shown. Assume a tree search where b is the finite branching factor; d is the depth of the shallowest goal node; m is the maximum depth of the search tree; C* is the cost of the optimal solution; step costs are greater than some positive ε; and in Bidirectional search both directions use breadth-first search. Note that these conditions satisfy all of the footnotes of Fig. 3.21 in your book.

Criterion           Breadth-First   Uniform-Cost            Depth-First   Iterative Deepening   Bidirectional
Complete?           T               T                       F             T                     T
Time complexity     O(b^d)          O(b^(1+floor(C*/ε)))    O(b^m)        O(b^d)                O(b^(d/2))
Space complexity    O(b^d)          O(b^(1+floor(C*/ε)))    O(bm)         O(bd)                 O(b^(d/2))
Optimal?            T/F             T                       F             T                     T/F

For Uniform-Cost, O(b^(1+floor(C*/ε))), O(b^(1+d)), and O(b^d) were all accepted for time and space; where "T/F" appears, either answer was accepted.

(Partial Credit) Deduct -1 for each wrong answer.

3. (4 pts total, 1 pt each) TASK ENVIRONMENT. Your book defines a task environment as a set of four things, with the acronym PEAS. Fill in the blanks with the names of the PEAS components.

Performance measure
Environment
Actuators
Sensors
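The table's O(bd) space bound for Iterative Deepening comes from re-running a depth-limited DFS with increasing limits, so only the current path is ever stored. A minimal sketch on a hypothetical graph (not one from this exam):

```python
# Iterative deepening: repeated depth-limited DFS, returning the
# shallowest goal found (which is why IDS is complete like BFS
# while using only O(bd) space like DFS).

def depth_limited(tree, node, goal, limit, path):
    path.append(node)                    # only the current path is stored
    if node == goal:
        return list(path)
    if limit > 0:
        for child in tree.get(node, []):
            found = depth_limited(tree, child, goal, limit - 1, path)
            if found:
                return found
    path.pop()
    return None

def iterative_deepening(tree, start, goal, max_depth=20):
    for limit in range(max_depth + 1):   # limits 0, 1, 2, ...
        found = depth_limited(tree, start, goal, limit, [])
        if found:
            return found
    return None

tree = {'S': ['A', 'B'], 'A': ['C'], 'B': ['G'], 'C': ['G']}
assert iterative_deepening(tree, 'S', 'G') == ['S', 'B', 'G']   # depth-2 goal
```

The goal at depth 2 via B is found before the depth-3 goal via A-C, illustrating that IDS finds a shallowest goal first.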

4. (20 pts total, 5 pts each) STATE-SPACE SEARCH STRATEGIES. Execute Tree Search through this graph (i.e., do not remember visited nodes). Step costs are given next to each arc. Heuristic values are given in the table on the right. The successors of each node are indicated by the arrows out of that node. Successors are returned in left-to-right order, i.e., successors of S are (A, G), successors of A are (B, C), and successors of C are (D, G), in that order. For each search strategy below, show the order in which nodes are expanded (i.e., to expand a node means that its children are generated), ending with the goal node that is found. Show the path from start to goal, and give the cost of the path that is found. The first one is done for you as an example.

4.a. DEPTH-FIRST SEARCH.
Order of node expansion: S (G)
Path found: S G
Cost of path found: 12

4.b. (5 pts) UNIFORM-COST SEARCH. [-1 if expansion was S A C D (G)]
(2 pts) Order of node expansion: S A C D B (G)
(2 pts) Path found: S A C G
(1 pt) Cost of path found: 4

4.c. (5 pts) GREEDY (BEST-FIRST) SEARCH.
(2 pts) Order of node expansion: S (G)
(2 pts) Path found: S G
(1 pt) Cost of path found: 12

4.d. (5 pts) ITERATIVE DEEPENING SEARCH.
(2 pts) Order of node expansion: S (G) (this is an example of IDS not being optimal, since path cost can decrease at greater depth)
(2 pts) Path found: S G
(1 pt) Cost of path found: 12

4.e. (5 pts) A* SEARCH WITH h(n).
(2 pts) Order of node expansion: S A C (G)
(2 pts) Path found: S A C G
(1 pt) Cost of path found: 4

(Partial credit): compare the wrong answer with the correct one; -1 point per disagreement.
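Uniform-cost search, as in 4.b, pops the cheapest frontier path and only applies the goal test at expansion, which is what makes it optimal. A minimal sketch; the graph below is hypothetical (the exam's edge costs are in its figure, not reproduced here), chosen so that a cheap three-step path beats an expensive direct edge, as in the answers above:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Return (cost, path). graph[u] is a list of (child, step_cost)."""
    frontier = [(0, [start])]                    # priority queue on path cost
    while frontier:
        cost, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:                         # goal test on expansion
            return cost, path                    # => first goal popped is optimal
        for child, step in graph.get(node, []):  # tree search: no visited set
            heapq.heappush(frontier, (cost + step, path + [child]))
    return None

# Hypothetical costs: direct S->G is 12, but S->A->C->G costs only 4.
graph = {'S': [('A', 1), ('G', 12)],
         'A': [('B', 4), ('C', 1)],
         'C': [('D', 3), ('G', 2)]}
assert uniform_cost_search(graph, 'S', 'G') == (4, ['S', 'A', 'C', 'G'])
```

Greedy best-first search differs only in the priority key (h(n) instead of g(n)), which is why it can commit to the expensive direct edge in 4.c.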

(e.g., for 4.e, the correct answer is S A C or S A C (G); the wrong answer S A B C loses one point.)

5. (9 pts total) DOMINATING HEURISTICS. In this question, you are asked to compare different heuristics and to determine which, if any, dominate each other. You are executing Tree Search through this graph (i.e., you do not remember previously visited nodes). The start node (= initial state) is S, and the goal node is G. Actual step costs are shown next to each link. Heuristics are given in the following table. As is usual in your book, h* is the true (= optimal) heuristic; here, the h_i are various other heuristics.

[Graph figure not reproduced: nodes S, B, C, D, E, F, G with step costs labeled on each link.]

Node        h1   h2   h3   h* (optimal)
S (start)    5    5    5    8
B            4    4    5    7
C            5    3    2    4
D            3    1    4    6
E            4    3    4    8
F            1    0    1    1
G (goal)     0    0    0    0

5.a. (2 pts) Which heuristic functions are admissible among h1, h2 and h3?  h2, h3
(Partial credit: -1 point per disagreement; e.g., answering "h1, h2, h3" loses 1 point.)

5.b. (2 pts) Which heuristic functions are consistent among h1, h2 and h3?  h3
(h1 is inadmissible, hence not consistent; h2 fails the consistency check at D.)
(Partial credit: -1 point per disagreement; e.g., answering "h2, h3" loses 1 point.)

5.c. (5 pts, -1 pt for each error but not negative) Which of the following statements are true? (write T=True, F=False)

(a) h1 dominates h2. (T or F)  T/F (either answer accepted)
(b) h1 dominates h3. (T or F)  F

(c) h2 dominates h1. (T or F)  F
(d) h2 dominates h3. (T or F)  F
(e) h3 dominates h1. (T or F)  F
(f) h3 dominates h2. (T or F)  F
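Both checks in Question 5 are pointwise comparisons over the table: admissibility means h(n) ≤ h*(n) at every node, and h_a dominates h_b when h_a(n) ≥ h_b(n) at every node. A small sketch using the table's values:

```python
# Heuristic values transcribed from the Question 5 table; h* is the true cost.
h_star = {'S': 8, 'B': 7, 'C': 4, 'D': 6, 'E': 8, 'F': 1, 'G': 0}
h1 = {'S': 5, 'B': 4, 'C': 5, 'D': 3, 'E': 4, 'F': 1, 'G': 0}
h2 = {'S': 5, 'B': 4, 'C': 3, 'D': 1, 'E': 3, 'F': 0, 'G': 0}
h3 = {'S': 5, 'B': 5, 'C': 2, 'D': 4, 'E': 4, 'F': 1, 'G': 0}

def admissible(h):
    return all(h[n] <= h_star[n] for n in h_star)   # never overestimates

def dominates(ha, hb):
    return all(ha[n] >= hb[n] for n in h_star)      # >= at every node

assert not admissible(h1)                # h1(C) = 5 > h*(C) = 4
assert admissible(h2) and admissible(h3)
assert dominates(h1, h2)                 # pointwise h1 >= h2, but h1 is
                                         # inadmissible, hence the T/F in 5.c(a)
assert not dominates(h2, h3) and not dominates(h3, h2)
```

Note h2 and h3 are incomparable: h3 is larger at B but smaller at C, so neither dominates the other.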

6. (8 pts total, -1 pt for each error, but not negative) MINIMAX SEARCH IN GAME TREES. The game tree below illustrates a position reached in the game. Process the tree left-to-right. It is Max's turn to move. At each leaf node is the estimated score returned by the heuristic static evaluator.

6.a. Fill in each blank square with the proper minimax search value.

[Game-tree figure not reproduced. Filled-in values: root (Max) = 4; Min nodes A = 4, B = 3, C = 1; Max-level values (as transcribed): 5 4 8 3 8 4 6 6 1 7. Leaf scores, left to right: 5 5 2 3 4 7 1 1 8 3 2 8 6 3 2 1 4 3 1 6 4 6 5 1 7 3 2.]

6.b. What is the best move for Max? (write A, B, or C)  A

6.c. What score does Max expect to achieve?  4

7. (10 pts total, -1 for each error, but not negative) ALPHA-BETA PRUNING. Process the tree left-to-right. This is the same tree as above (6.a). You do not need to indicate the branch node values again. Cross out each leaf node that will be pruned by Alpha-Beta Pruning. Do not just draw pruning lines.

[Figure not reproduced; same tree and leaf scores as in 6.a.]
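The left-to-right processing order matters for Question 7: once Max has a guaranteed value (alpha) from move A, any Min node whose value drops to alpha or below can stop evaluating its remaining leaves. A minimal sketch on a hypothetical three-move tree (not the exam's figure):

```python
# Minimax with alpha-beta pruning; leaves are numbers, interior
# nodes are lists of children, processed left to right.

def alphabeta(node, maximizing, alpha=float('-inf'), beta=float('inf')):
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                    # remaining children are pruned
        return value
    value = float('inf')
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break                        # remaining children are pruned
    return value

# Max to move; each move leads to a Min node over leaf scores.
tree = [[4, 8], [3, 9], [1, 7]]          # Min values 4, 3, 1 -> Max picks 4
assert alphabeta(tree, True) == 4
```

Here the leaves 9 and 7 are never evaluated: after move A guarantees 4, the second Min node stops as soon as it sees 3, and the third as soon as it sees 1, which is exactly the crossing-out the question asks for.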

8. (21 pts total) CONSTRAINT SATISFACTION. Consider the following graph with 6 square-shaped vertices and 7 undirected edges. In this problem, you can color each edge using one color from the following set of 3 colors, { Red, Green, Blue }, and you are asked to solve this edge-coloring problem as a constraint satisfaction problem.

[Figure not reproduced: a graph with vertices V1-V6 and edges E1-E7.]

The edge-coloring of a graph is an assignment of colors to the edges of the graph so that no two adjacent edges have the same color. Let's call this constraint the "edge-coloring" constraint. For example, E1 and E2 cannot have the same color because both are adjacent at the vertex V4. On the other hand, the graph doesn't restrict you from using the same color on E2 and E4, because they are not adjacent at any vertex.

8.a (3 pts) Constraint Graph. Draw the constraint graph associated with your CSP. The nodes are provided for you. Draw the arcs.
(Partial credit: -1 point per disagreement; missing arcs or extra arcs.)

8.b (3 pts) Degree Heuristic. Assume that you have not assigned any variables yet. List all variables that might be selected by the Degree Heuristic:  E3
(Partial credit: -1 point per disagreement; e.g., "E2, E3" loses 1 point, "E1" loses 2 points.)

8.c (3 pts) Forward Checking. Consider the assignment below: E2 is assigned R. Cross out all the values that would be eliminated by forward checking.

E1 = {R, G, B}   E2 = {R}   E3 = {R, G, B}   E4 = {R, G, B}   E5 = {R, G, B}   E6 = {R, G, B}   E7 = {R, G, B}

(Partial credit: -1 point per disagreement.)

8.d (3 pts) Minimum Remaining Values Heuristic. Consider the assignment below: E7 is assigned R and constraint propagation has been done. (Correction announced: E3 = {G, B}, E4 = {R, G, B}.)

E1 = {R, G, B}   E2 = {R, G, B}   E3 = {G, B} (corrected)   E4 = {R, G, B}   E5 = {G, B}   E6 = {G, B}   E7 = {R}

List all variables that might be selected by the MRV Heuristic:  E3, E5, E6
(Partial credit: -1 point per disagreement.)

8.e (3 pts) Least Constraining Value Heuristic. Consider the assignment below: E1 is assigned R, E6 is assigned G, and constraint propagation has been done. Assume you have selected E5. (Correction announced: E5 = {R, B}.)

E1 = {R}   E2 = {B}   E3 = {G}   E4 = {G, B}   E5 = {R, B} (corrected)   E6 = {G}   E7 = {R, B}

List all values that might be selected by the LCV Heuristic:  R
(Partial credit: -1 point per disagreement.)

8.f (3 pts) Arc Consistency. Consider the assignment below: E2 is assigned R, and E7 is assigned B, but no constraint propagation has been done. Cross out all values that would be eliminated by Arc Consistency (AC-1 or AC-3). Values remaining, as given:

E1 = {B}   E2 = {R}   E3 = {G}   E4 = {G}   E5 = {R}   E6 = {G}   E7 = {B}

(Partial credit: -1 point per disagreement.)

8.g (3 pts) Min-Conflicts Local Search. Consider the complete but inconsistent assignment below. E1 is selected to be assigned a new value.

E1 = R   E2 = B   E3 = G   E4 = G   E5 = R   E6 = G   E7 = G

List all values that could be chosen by the Min-Conflicts Algorithm:
(Grading note: E1 should have been B; as given, E1 wouldn't be chosen at all, so all answers get the full 3 pts for this problem.)
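Arc consistency, as used in 8.f, repeatedly removes any value of a variable that has no consistent partner left in a neighboring domain, re-queueing affected arcs until nothing changes. A minimal AC-3 sketch; the three-edge adjacency below is hypothetical (the exam's figure is not reproduced), but the constraint is the same: adjacent edges must get different colors.

```python
from collections import deque

def revise(domains, x, y):
    """Remove values of x that have no consistent value left in y's domain."""
    removed = False
    for vx in list(domains[x]):
        if not any(vx != vy for vy in domains[y]):   # != is the coloring constraint
            domains[x].discard(vx)
            removed = True
    return removed

def ac3(domains, neighbors):
    queue = deque((x, y) for x in domains for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        if revise(domains, x, y):
            if not domains[x]:
                return False             # a domain was wiped out: inconsistent
            for z in neighbors[x] - {y}:
                queue.append((z, x))     # x changed, so re-check arcs into x
    return True

# Hypothetical CSP: E1, E2, E3 pairwise adjacent; E2 already assigned Red.
domains = {'E1': {'R', 'G', 'B'}, 'E2': {'R'}, 'E3': {'R', 'G', 'B'}}
neighbors = {'E1': {'E2', 'E3'}, 'E2': {'E1', 'E3'}, 'E3': {'E1', 'E2'}}
assert ac3(domains, neighbors)
assert domains['E1'] == {'G', 'B'} and domains['E3'] == {'G', 'B'}
```

On this tiny example AC-3 removes exactly what forward checking would (R from E1 and E3); in general AC-3 can prune more, since it also propagates the consequences of those removals.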

9. (8 pts total, 1 pt each) CONSTRAINT SATISFACTION PROBLEM (CSP) CONCEPTS. For each of the following terms on the left, write in the letter corresponding to the best answer or the correct definition on the right.

F  Minimum Remaining Values Heuristic
G  Solution to a CSP
H  Least Constraining Value Heuristic
C  Domain
A  Constraint
B  Consistent Assignment
D  Complete Assignment
E  Constraint Graph

A  Specifies the allowable combinations of variable values
B  The values assigned to variables do not violate any constraints
C  Set of allowed values for some variable
D  Every variable is associated with a value
E  Nodes correspond to variables, links connect variables that participate in a constraint
F  Chooses the next variable to expand to be the one with the fewest legal values in its domain
G  A complete and consistent assignment
H  Prefers to search next the value that rules out the fewest choices for the neighboring variables in the constraint graph
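The two ordering heuristics matched above are easy to state as functions: MRV picks the unassigned variable with the fewest legal values, and LCV orders a variable's values by how many choices they rule out for its neighbors. A minimal sketch with hypothetical domains and neighbor sets:

```python
def mrv(domains, assigned):
    """Minimum Remaining Values: the unassigned variable with the
    fewest legal values left in its domain."""
    unassigned = [v for v in domains if v not in assigned]
    return min(unassigned, key=lambda v: len(domains[v]))

def lcv(var, domains, neighbors):
    """Least Constraining Value: order var's values so the one that
    rules out the fewest neighboring choices comes first."""
    def ruled_out(value):
        return sum(value in domains[n] for n in neighbors[var])
    return sorted(domains[var], key=ruled_out)

# Hypothetical CSP state for illustration.
domains = {'X': {'R'}, 'Y': {'R', 'G'}, 'Z': {'R', 'G', 'B'}}
neighbors = {'X': {'Y'}, 'Y': {'X', 'Z'}, 'Z': {'Y'}}
assert mrv(domains, assigned=set()) == 'X'      # only one value remains
assert lcv('Z', domains, neighbors)[0] == 'B'   # B conflicts with nothing
```

In a full backtracking solver these two functions would supply the variable- and value-ordering at each step, typically alongside forward checking.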