Inf2D 01: Intelligent Agents and their Environments


Inf2D 01: Intelligent Agents and their Environments School of Informatics, University of Edinburgh 16/01/18 Slide Credits: Jacques Fleuriot, Michael Rovatsos, Michael Herrmann

Structure of Intelligent Agents An agent: Perceives its environment, Through its sensors, Then achieves its goals By acting on its environment via actuators.

Structure of Intelligent Agents
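The sense-act cycle above can be sketched in a few lines of Python. This is a toy illustration only: the `Environment` class, the counter state, and the action names are invented for this sketch.

```python
class Environment:
    """Toy environment: a counter the agent drives toward a target value."""

    def __init__(self, target):
        self.state = 0
        self.target = target

    def sense(self):
        # Sensors: expose the current state as a percept.
        return self.state

    def apply(self, action):
        # Actuators: the agent's chosen action changes the environment.
        if action == "increment":
            self.state += 1


def agent_program(percept):
    """The agent function: map a percept to an action."""
    return "increment" if percept < 5 else "wait"


def run(agent_program, env, steps):
    """The perceive-act loop connecting agent and environment."""
    for _ in range(steps):
        env.apply(agent_program(env.sense()))


env = Environment(target=5)
run(agent_program, env, steps=10)
print(env.state)  # 5: the agent stops incrementing once its goal is sensed
```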

Examples of Agents 1 Agent: mail sorting robot Environment: conveyor belt of letters Goals: route letter into correct bin Percepts: array of pixel intensities Actions: route letter into bin Side info: https://en.wikipedia.org/wiki/mail_sorter

Examples of Agents 2 Agent: intelligent house Environment: occupants enter and leave house, occupants enter and leave rooms; daily variation in outside light and temperature Goals: occupants warm, room lights are on when room is occupied, house energy efficient Percepts: signals from temperature sensor, movement sensor, clock, sound sensor Actions: room heaters on/off, lights on/off Side info: https://en.wikipedia.org/wiki/home_automation

Examples of Agents 3 Agent: autonomous car. Environment: streets, other vehicles, pedestrians, traffic signals/lights/signs. Goals: safe, fast, legal trip. Percepts: camera, GPS signals, speedometer, sonar. Actions: steer, accelerate, brake. Side info: https://en.wikipedia.org/wiki/autonomous_car
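All three examples follow the same four-slot description (environment, goals, percepts, actions). As a sketch, the autonomous-car example from the slide can be encoded as a simple record; the class and field names are this sketch's choice, not part of the slides.

```python
from dataclasses import dataclass


@dataclass
class AgentSpec:
    """An agent described by its environment, goals, percepts and actions."""
    agent: str
    environment: list
    goals: list
    percepts: list
    actions: list


autonomous_car = AgentSpec(
    agent="autonomous car",
    environment=["streets", "other vehicles", "pedestrians",
                 "traffic signals/lights/signs"],
    goals=["safe", "fast", "legal trip"],
    percepts=["camera", "GPS signals", "speedometer", "sonar"],
    actions=["steer", "accelerate", "brake"],
)
print(len(autonomous_car.actions))  # 3
```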

Simple Reflex Agents Action depends only on the immediate percept. Implemented by condition-action rules. Example: Agent: mail sorting robot Environment: conveyor belt of letters Rule: e.g. if city = Edinburgh then put letter in Scotland bag https://en.wikipedia.org/wiki/intelligent_agent

Simple Reflex Agents
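A condition-action rule table can be sketched directly as a list of (condition, action) pairs. Only the Edinburgh rule comes from the slide; the other rules and the bag names are invented for the sketch.

```python
# Condition-action rules: the first rule whose condition matches fires.
RULES = [
    (lambda percept: percept.get("city") == "Edinburgh", "put in Scotland bag"),
    (lambda percept: percept.get("city") == "Cardiff", "put in Wales bag"),
]


def simple_reflex_agent(percept):
    """Act on the immediate percept only; no memory of past percepts."""
    for condition, action in RULES:
        if condition(percept):
            return action
    return "put in miscellaneous bag"


print(simple_reflex_agent({"city": "Edinburgh"}))  # put in Scotland bag
```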

Model-Based Reflex Agents Action may depend on history or on unperceived aspects of the world. Need to maintain an internal world model. Example: Agent: robot vacuum cleaner Environment: dirty room, furniture. Model: map of room, which areas have already been cleaned. There is a trade-off between richer sensors and a richer internal model.

Model-Based Agents
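The vacuum-cleaner example can be sketched as follows: the agent keeps an internal model (which squares remain to be cleaned), so the same percept can lead to different actions depending on history. Square names and action strings are invented for this sketch.

```python
class ModelBasedVacuum:
    """Reflex vacuum with an internal world model of uncleaned squares."""

    def __init__(self, squares):
        self.to_clean = set(squares)  # internal model: map of the room

    def act(self, percept):
        location, dirty = percept
        if dirty:
            return "suck"                        # reflex on the immediate percept
        self.to_clean.discard(location)          # update model: this square is done
        if self.to_clean:
            return "go-to " + min(self.to_clean)  # head for an uncleaned square
        return "stop"                            # model says the room is clean


vacuum = ModelBasedVacuum(["A", "B"])
print(vacuum.act(("A", True)))   # suck
print(vacuum.act(("A", False)))  # go-to B  (depends on the model, not the percept)
print(vacuum.act(("B", False)))  # stop
```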

Goal-Based Agents Agents so far have had fixed, implicit goals. We want agents with variable goals. Forming plans to achieve goals is a later topic. Example: Agent: household service robot Environment: house & people. Goals: clean clothes, tidy room, table laid, etc.

Goal-Based Agents
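A minimal sketch of the idea: the agent selects an action by predicting its effect and checking whether that achieves the current goal, which can vary from call to call. The one-step effect table is invented toy data for the household-robot example.

```python
# One-step "world model": the predicted effect of each action (toy data).
EFFECTS = {"wash": "clothes clean", "tidy": "room tidy", "lay-table": "table laid"}


def goal_based_agent(goal, actions):
    """Choose an action whose predicted outcome achieves the current goal."""
    for action in actions:
        if EFFECTS.get(action) == goal:
            return action
    return "no-op"


# The goal is variable: the same agent serves different goals.
print(goal_based_agent("room tidy", ["wash", "tidy", "lay-table"]))   # tidy
print(goal_based_agent("table laid", ["wash", "tidy", "lay-table"]))  # lay-table
```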

Utility-Based Agents Agents so far have had a single goal. Agents may have to juggle conflicting goals. Need to optimise utility over a range of goals. Utility: measure of goodness (a real number). Combine with probability of success to get expected utility. Example: Agent: automatic car. Environment: roads, vehicles, signs, etc. Goals: stay safe, reach destination, be quick, obey law, save fuel, etc.

Utility-Based Agents We will not be covering utility-based agents, but this topic is discussed in Russell & Norvig, Chapters 16 and 17.
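"Combine with probability of success" is the standard expected-utility calculation: EU(a) is the probability-weighted sum of outcome utilities. The probabilities and utilities below are made-up numbers for the car example, purely to show the computation.

```python
def expected_utility(outcomes):
    """EU(action) = sum of P(outcome) * U(outcome) over its possible outcomes."""
    return sum(p * u for p, u in outcomes)


# Made-up (probability, utility) pairs for two driving policies.
actions = {
    "drive fast": [(0.90, 10.0), (0.10, -100.0)],   # quicker, but riskier
    "drive legally": [(0.99, 8.0), (0.01, -20.0)],  # slower, but safer
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # drive legally
```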

Learning Agents How do agents improve their performance in the light of experience? Generate problems which will test performance. Perform activities according to rules, goals, model, utilities, etc. Monitor performance and identify non-optimal activity. Identify and implement improvements. We will not be covering learning agents, but this topic is dealt with in several honours-level courses (see also R&N, Ch. 18-21).
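The act/monitor/improve loop on the slide can be caricatured in a few lines: the "agent" maintains one numerical estimate, measures its error on each experience, and adjusts. Everything here (the estimate, the learning rate, the data) is invented to illustrate the loop, not a real learning algorithm from the course.

```python
def learning_agent(estimate, experiences, rate=0.5):
    """Toy performance-improvement loop: act, monitor the error, adjust."""
    for observed in experiences:        # perform activities
        error = observed - estimate     # monitor performance, spot non-optimality
        estimate += rate * error        # implement an improvement
    return estimate


print(round(learning_agent(0.0, [4.0, 4.0, 4.0, 4.0]), 2))  # 3.75
```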

Mid-Lecture Problem Consider a chess playing program. What sort of agent would it need to be?

Solution(s) Simple reflex agent: no, since some actions require memory (e.g. castling in chess: http://en.wikipedia.org/wiki/castling). Model-based reflex agent: no, since it also needs to reason about the future. Goal-based agent: plausible, but it only has one goal. Utility-based agent: might consider multiple goals with limited lookahead. Learning agent: learns from experience or self-play.

Types of Environment 1 Fully Observable vs. Partially Observable: Full: agent's sensors describe the environment state fully. Partial: some parts of the environment are not visible, or sensors are noisy. Deterministic vs. Stochastic: Deterministic: next state fully determined by current state and the agent's actions. Stochastic: random changes (can't be predicted exactly). An environment may appear stochastic if it is only partially observable.

Types of Environment 2 Episodic vs. Sequential: Episodic: the next action does not depend on previous actions. Mail-sorting robot vs. crossword puzzle. Static vs. Dynamic: Static: environment unchanged while the agent deliberates. Crossword puzzle vs. chess; industrial robot vs. robot car.

Types of Environment 3 Discrete vs. Continuous: Discrete: percepts, actions and episodes are discrete. Chess vs. robot car. Single Agent vs. Multi-Agent: how many other objects must be modelled as agents? Crossword vs. poker. There is an element of choice over which objects are considered agents.

Types of Environment 4 An environment may have any combination of these properties: from benign (i.e., fully observable, deterministic, episodic, static, discrete and single-agent) to chaotic (i.e., partially observable, stochastic, sequential, dynamic, continuous and multi-agent). What are the properties of the environment that would be experienced by a mail-sorting robot? an intelligent house? a car-driving robot?
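The six dimensions can be written down as a record, with the benign and chaotic extremes from the slide as the two corner cases; classifying the three example agents is left open, as the question above asks. The class and field names are this sketch's choice.

```python
from dataclasses import dataclass, astuple


@dataclass
class EnvType:
    """The six environment dimensions from the slides."""
    fully_observable: bool
    deterministic: bool
    episodic: bool
    static: bool
    discrete: bool
    single_agent: bool


benign = EnvType(True, True, True, True, True, True)
chaotic = EnvType(False, False, False, False, False, False)
print(sum(astuple(benign)))   # 6: all six properties hold
print(sum(astuple(chaotic)))  # 0: none do
```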

Summary Simple reflex agents Model-based reflex agents Goal-based agents Utility-based agents Learning agents Properties of environments