Intelligent Agents & Search Problem Formulation AIMA, Chapters 2, 3.1-3.2

Outline for today's lecture: Intelligent Agents (AIMA 2.1-2.2), Task Environments, Formulating Search Problems.

Review: What is AI? Views of AI fall into four categories: thinking humanly, acting humanly, thinking rationally, and acting rationally. We will focus on "acting rationally".

Review: Acting rationally: rational agents. Rational behavior means doing the right thing, where the right thing is that which is expected to maximize goal achievement, given the available information. A rational agent is an entity that perceives and acts rationally. This course is about effective programming techniques for designing rational agents.

Agents and environments. An agent is specified by an agent function f : P → a that maps a sequence of percept vectors P = [p_0, p_1, ..., p_t] to an action a from a set A = {a_0, a_1, ..., a_k}.
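A minimal sketch of this idea in Python (the type aliases and the toy reflex_vacuum_agent are illustrative, not from the slides): an agent function takes the percept sequence seen so far and returns one action.

    from typing import Callable, Sequence, TypeVar

    Percept = TypeVar("Percept")
    Action = TypeVar("Action")

    # An agent function maps the full percept sequence P = [p_0, ..., p_t]
    # to a single action drawn from the action set A.
    AgentFunction = Callable[[Sequence[Percept]], Action]

    def reflex_vacuum_agent(percepts: Sequence[tuple]) -> str:
        """Toy agent for the two-square vacuum world; each percept is (location, status)."""
        location, status = percepts[-1]   # a reflex agent looks only at the latest percept
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"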

Agents. An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. Human agent: sensors are eyes, ears, ...; actuators are hands, legs, mouth, ... Robotic agent: sensors are cameras and infrared range finders; actuators are various motors. Agents include humans, robots, softbots, thermostats, ...

Agent function & program. The agent program runs on the physical architecture to produce f: agent = architecture + program. Easy solution: a table that maps every possible percept sequence P to an action a. One small problem: the table is exponential in the length of P.
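A sketch of that table-driven idea (function names are my own) makes the blow-up concrete: with |P| possible percepts per step, covering all sequences of length t needs on the order of |P|^t table entries.

    def table_driven_agent_factory(table):
        """Return an agent program that looks up the whole percept history in a table."""
        percepts = []

        def program(percept):
            percepts.append(percept)
            return table[tuple(percepts)]   # one table entry per possible percept sequence

        return program

    # Even in the tiny two-square vacuum world (2 locations x 2 statuses = 4 percepts),
    # covering every sequence of length t already requires 4**t entries.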

Rational agents II. Rational agent: for each possible percept sequence P, a rational agent selects an action a expected to maximize its performance measure. Performance measure: an objective criterion for success of an agent's behavior, given the evidence provided by the percept sequence. Revised: for each possible percept sequence P, a rational agent selects an action a that maximizes the expected value of its performance measure.

Performance measure, an example. A performance measure for a vacuum-cleaner agent might include some subset of:
- +1 point for each clean square in time T
- +1 point per clean square, -1 for each move
- -1000 for more than k dirty squares
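A hedged sketch of one such measure as a Python function over a simulated run (the per-step record format is my assumption; it combines the first two bullets):

    def vacuum_performance(history):
        """Score a run: +1 per clean square at each time step, -1 per move made.

        `history` is assumed to be a list of per-step records such as
        {"clean_squares": 2, "moved": True}, covering time steps 1..T.
        """
        score = 0
        for step in history:
            score += step["clean_squares"]
            if step["moved"]:
                score -= 1
        return score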

Rationality is not omniscience. An ideal agent maximizes actual performance, but would need to be omniscient, which is usually impossible (though consider a tic-tac-toe agent). Rationality does not guarantee success. Caveat: computational limitations make complete rationality unachievable, so we design the best program for the given machine resources. In economics this appears as bounded rationality and behavioral economics.

Outline for today's lecture: Intelligent Agents, Task Environments (AIMA 2.3), Formulating Search Problems.

Task environments. To design a rational agent we need to specify a task environment: a problem specification for which the agent is a solution. PEAS is a way to specify a task environment: Performance measure, Environment, Actuators, Sensors.

PEAS: Specifying an automated taxi driver. Performance measure: ? Environment: ? Actuators: ? Sensors: ?

PEAS: Specifying an automated taxi driver
- Performance measure: safe, fast, legal, comfortable, maximize profits
- Environment: roads, other traffic, pedestrians, customers
- Actuators: steering, accelerator, brake, signal, horn
- Sensors: cameras, sonar, speedometer, GPS

PEAS: Medical diagnosis system
- Performance measure: healthy patient, minimize costs and lawsuits
- Environment: patient, hospital, staff
- Actuators: screen display (forms including questions, tests, diagnoses, treatments, referrals)
- Sensors: keyboard (entry of symptoms, findings, patient's answers)
(Cartoon: The New Yorker, April 2017)

The rational agent designer's goal. The goal of an AI practitioner who designs rational agents: given a PEAS task environment, 1. construct an agent function f that maximizes the expected value of the performance measure, and 2. design an agent program that implements f on a particular architecture.

Environment types: Definitions I
- Fully observable (vs. partially observable): the agent's sensors give it access to the complete state of the environment at each point in time.
- Deterministic (vs. stochastic): the next state of the environment is completely determined by the current state and the action executed by the agent. If the environment is deterministic except for the actions of other agents, then the environment is strategic.
- Episodic (vs. sequential): the agent's experience is divided into atomic "episodes" during which the agent perceives and then performs a single action, and the choice of action in each episode does not depend on any previous action (example: a classification task).

Environment types: Definitions II
- Static (vs. dynamic): the environment is unchanged while an agent is deliberating. The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does.
- Discrete (vs. continuous): a limited number of distinct, clearly defined percepts and actions.
- Single agent (vs. multiagent): an agent operating by itself in an environment. (See the examples in AIMA, though I don't agree with some of the judgments.)

Environment restrictions for now. We will assume the environment is static, fully observable, deterministic, and discrete.

Problem Solving Agents & Problem Formulation (AIMA 3.1-3.2)

Outline for today's lecture: Intelligent Agents, Task Environments, Formulating Search Problems (AIMA 3.1-3.2).

Example search problem: 8-puzzle. Formulate the goal: pieces end up in order, as shown. Formulate the search problem. States: configurations of the puzzle (9! configurations). Actions: move one of the movable pieces (up to 4 possible). Performance measure: minimize total moves. Find a solution: a sequence of pieces moved, e.g. 3, 1, 6, 3, 1, ...

Example search problem: holiday in Romania. [Map of Romania with the labels "You are here" and "You need to be here".]

Holiday in Romania II. On holiday in Romania; currently in Arad. The flight leaves tomorrow from Bucharest. Formulate the goal: be in Bucharest. Formulate the search problem. States: various cities. Actions: drive between cities. Performance measure: minimize distance. Find a solution: a sequence of cities, e.g. Arad, Sibiu, Fagaras, Bucharest.

More formally, a problem is defined by:
1. States: a set S
2. An initial state s_i ∈ S
3. Actions: a set A. For each state s, Actions(s) = the set of actions that can be executed in s, i.e. that are applicable in s.
4. Transition model: for each s and each a ∈ Actions(s), Result(s, a) = s_r, where s_r is called a successor of s. {s_i} ∪ Successors*(s_i) = the state space.
5. Path cost (performance measure): must be additive, e.g. the sum of distances or the number of actions executed. c(x, a, y) is the step cost, assumed ≥ 0 (where action a goes from state x to state y).
6. Goal test: Goal(s). Can be implicit, e.g. checkmate(s). s is a goal state if Goal(s) is true.
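A minimal Python sketch of this interface (the class and method names are mine, chosen to mirror the six components above; later sketches in these notes assume it):

    class SearchProblem:
        """Abstract problem: states, initial state, actions, transition model,
        step cost, and goal test."""

        def __init__(self, initial_state):
            self.initial_state = initial_state          # s_i

        def actions(self, state):
            """Actions(s): the actions applicable in state s."""
            raise NotImplementedError

        def result(self, state, action):
            """Result(s, a): the successor state reached by doing a in s."""
            raise NotImplementedError

        def step_cost(self, state, action, result):
            """c(x, a, y): additive, non-negative step cost (default: 1 per action)."""
            return 1.0

        def goal_test(self, state):
            """Goal(s): True iff s is a goal state."""
            raise NotImplementedError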

Solutions & optimal solutions. A solution is a sequence of actions from the initial state to a goal state. A solution is optimal if no solution has a lower path cost.

Art: Formulating a Search Problem. Decide:
- which properties matter and how to represent them: the initial state, goal state, and possible intermediate states;
- which actions are possible and how to represent them: the operator set (actions and transition model);
- which action is next: the path cost function.
The formulation greatly affects the combinatorics of the search space and therefore the speed of search.

Example: 8-puzzle. States?? Initial state?? Actions?? Transition model?? Goal test?? Path cost??

Example: 8-puzzle
- States?? A list of 9 locations, e.g. [7,2,4,5,-,6,8,3,1]
- Initial state?? [7,2,4,5,-,6,8,3,1]
- Actions?? {Left, Right, Up, Down}
- Transition model?? ...
- Goal test?? Check whether the goal configuration is reached
- Path cost?? Number of actions taken to reach the goal
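One possible concretization of this formulation, as a subclass of the SearchProblem sketch above (assumptions of mine: the state is a 9-tuple read row by row with None standing in for the blank "-", the goal layout is the standard one, and Left/Right/Up/Down move the blank; the slide leaves these details implicit):

    GOAL = (1, 2, 3, 4, 5, 6, 7, 8, None)      # assumed goal configuration

    class EightPuzzle(SearchProblem):
        """8-puzzle as a search problem; the default step cost of 1 per action
        makes path cost = number of moves, matching the slide."""

        MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}

        def actions(self, state):
            row, col = divmod(state.index(None), 3)
            acts = []
            if col > 0: acts.append("Left")
            if col < 2: acts.append("Right")
            if row > 0: acts.append("Up")
            if row < 2: acts.append("Down")
            return acts

        def result(self, state, action):
            blank = state.index(None)
            target = blank + self.MOVES[action]
            board = list(state)
            board[blank], board[target] = board[target], board[blank]
            return tuple(board)

        def goal_test(self, state):
            return state == GOAL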

Hard subtask: selecting a state space. The real world is absurdly complex, so the state space must be abstracted for problem solving. An (abstract) state is a set (equivalence class) of real-world states. An (abstract) action is an equivalence class of combinations of real-world actions; e.g. Arad → Zerind represents a complex set of possible routes, detours, rest stops, etc. The abstraction is valid if each path between two abstract states is reflected in the real world. Each abstract action should be easier than the real problem.

IF TIME ALLOWS.

Outline for today's lecture: Intelligent Agents, Task Environments, Formulating Search Problems, Search Fundamentals (AIMA 3.3).

Useful concepts
- State space: the set of all states reachable from the initial state by any sequence of actions. When several operators can apply to each state, this gets large very quickly; it might be a proper subset of the set of configurations.
- Path: a sequence of actions leading from one state s_j to another state s_k.
- Frontier: those states that are available for expanding (for applying legal actions to).
- Solution: a path from the initial state s_i to a state s_f that satisfies the goal test.

Basic search algorithms: tree search. A generalized algorithm to solve search problems (review): enumerate, in some order, all possible paths from the initial state. Here we search through explicit tree generation: the ROOT is the initial state, and nodes in the search tree are generated through the transition model. Tree search treats different paths to the same state as distinct.

Review: Generalized tree search

function TREE-SEARCH(problem, strategy) returns a solution or failure
    initialize the frontier to the initial state of the problem
    loop do
        if the frontier is empty then return failure
        choose a leaf node for expansion according to strategy and remove it from the frontier
            (the strategy determines the search process!)
        if the node contains a goal state then return the corresponding solution
        else expand the node and add the resulting nodes to the frontier
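A runnable Python sketch of this loop (the names are mine; it assumes the SearchProblem interface above, and the strategy is passed as a function that picks which frontier index to expand, e.g. lambda f: 0 for FIFO/breadth-first and lambda f: -1 for LIFO/depth-first):

    from collections import namedtuple

    Node = namedtuple("Node", ["state", "parent", "action", "path_cost"])

    def tree_search(problem, choose_index):
        """Generic tree search; choose_index(frontier) encodes the strategy."""
        frontier = [Node(problem.initial_state, None, None, 0.0)]
        while frontier:
            node = frontier.pop(choose_index(frontier))
            if problem.goal_test(node.state):
                return solution(node)
            for action in problem.actions(node.state):
                child = problem.result(node.state, action)
                cost = node.path_cost + problem.step_cost(node.state, action, child)
                frontier.append(Node(child, node, action, cost))
        return None   # failure: the frontier became empty

    def solution(node):
        """Walk parent pointers back to the root to recover the action sequence."""
        actions = []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))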

8-Puzzle: States and Nodes. A state is a (representation of a) physical configuration. A node is a data structure constituting part of a search tree; it also includes parent, children, depth, and path cost g(x). Here node = <state, parent-node, children, action, path-cost, depth>. States do not have parents, children, depth, or path cost! [Figure: an example node with Action = Up, Cost = 6, Depth = 6, linked to its state, parent, and children.] The EXPAND function uses the Actions and Transition Model to create the corresponding states: it creates new nodes and fills in the various fields.
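A hedged sketch of such a node structure and the EXPAND step (field names follow the slide; the problem object is assumed to expose actions/result/step_cost as in the earlier SearchProblem sketch, and the children field is omitted since expand returns the child links):

    from dataclasses import dataclass
    from typing import Any, Optional

    @dataclass
    class TreeNode:
        state: Any                          # the configuration this node represents
        parent: Optional["TreeNode"] = None
        action: Any = None                  # action that produced this node from its parent
        path_cost: float = 0.0              # g(x)
        depth: int = 0

    def expand(problem, node):
        """Use Actions and the Transition Model to create the child nodes,
        filling in parent, action, path cost, and depth."""
        children = []
        for action in problem.actions(node.state):
            next_state = problem.result(node.state, action)
            children.append(TreeNode(
                state=next_state,
                parent=node,
                action=action,
                path_cost=node.path_cost + problem.step_cost(node.state, action, next_state),
                depth=node.depth + 1,
            ))
        return children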

8-Puzzle search tree. [Figure: a search tree over 8-puzzle states, where nodes show state, parent, and children, leaving action, cost, and depth implicit; useless backwards moves are suppressed.]

Problem: repeated states. Failure to detect repeated states can turn a linear problem into an exponential one!

Solution: graph search! [Figure: a small state-space graph over states S, B, C, and the corresponding search tree.] Graph search is optimal but memory-inefficient. Simple modification of tree search: check whether a node's state has been visited before adding it to the search queue. This must keep track of all visited states, which can use a lot of memory; e.g. for the 8-puzzle problem we have 9!/2 ≈ 182K states.
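A sketch of that one-line modification on top of the tree_search above (reusing its Node and solution helpers; hashable states are assumed):

    def graph_search(problem, choose_index):
        """Tree search plus an explored set: never add a state we have already seen."""
        frontier = [Node(problem.initial_state, None, None, 0.0)]
        explored = {problem.initial_state}       # every state ever added to the frontier
        while frontier:
            node = frontier.pop(choose_index(frontier))
            if problem.goal_test(node.state):
                return solution(node)
            for action in problem.actions(node.state):
                child = problem.result(node.state, action)
                if child not in explored:        # the check that blocks repeated states
                    explored.add(child)
                    cost = node.path_cost + problem.step_cost(node.state, action, child)
                    frontier.append(Node(child, node, action, cost))
        return None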

Graph Search vs Tree Search