Agent. Pengju Ren. Institute of Artificial Intelligence and Robotics


Agent. Pengju Ren. Institute of Artificial Intelligence and Robotics. pengjuren@xjtu.edu.cn

Review: What is AI? Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of success at some goal. --- from Wikipedia

Agents. An agent (software or hardware) is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. Human agents: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators. Robotic agents: cameras and infrared range finders for sensors; various motors for actuators. Examples: Xian'er the robot monk; Rethink Robotics.

Beyond the Human Senses

Sensors for Robotics and Drones

Why the self-driving car has the potential to outperform the human driver. It still has a long way to go, e.g. traffic infrastructure, legal issues, computing capability.

Agents and Environment. Agents include humans, robots, softbots, thermostats, etc. The agent function maps percept sequences to actions [f: P* → A]. The agent program is an implementation of the function f. Agent = architecture + program.

Agent functions and programs. An agent is completely specified by the agent function mapping percept sequences to actions (a formulation). One agent function (or a small equivalence class) is rational. Aim: find a way to implement the rational agent function concisely. For example, the table-lookup agent. Drawbacks: huge table (memory); it takes a long time to build the table; no autonomy; even with learning, it would take a long time to learn the table entries.
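The table-lookup agent above can be sketched in a few lines of Python. This is a minimal illustration, not the slides' own code; the percept names and the tiny example table are mine, chosen to match the vacuum world used later.

```python
def make_table_driven_agent(table):
    """Table-driven agent: append each percept to the history and
    look the entire percept sequence up in a table of actions."""
    percepts = []  # the growing percept sequence

    def agent(percept):
        percepts.append(percept)
        # One table entry is needed for EVERY possible percept
        # sequence -- this is why the table becomes huge.
        return table.get(tuple(percepts))

    return agent

# Illustrative table for the two-square vacuum world:
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))  # Suck
```

Note that the table is indexed by the whole sequence, not the last percept, so its size grows exponentially with the lifetime of the agent.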

Vacuum Cleaner. Percepts: location and contents, e.g. [A, Dirty]. Actions: Turn left, Turn right, Suck, NoOp.

A Vacuum-cleaner Agent. What is the right way (a lookup table or a small agent program)? What makes an agent good or bad, intelligent or stupid?

Rationality. A fixed performance measure evaluates the environment sequence: one point per square cleaned up in time T? one point per clean square per time step, minus one per move? a penalty for > k dirty squares? A rational agent chooses whichever action maximizes the expected value of the performance measure, given the percept sequence to date and whatever built-in knowledge the agent has. Rational ≠ omniscient: percepts may not supply all relevant information (omniscience means all-knowing, with infinite knowledge). Rational ≠ clairvoyant: action outcomes may not be as expected. Hence, rational ≠ successful. Rational ⇒ information gathering, exploration, learning, autonomy. Example: Tesla's accident in 2016.

A Smart Vacuum-cleaner Agent

PEAS. To design a rational agent, we must specify the task environment.

Agent type: Automated vehicle
- Performance measure: safety, destination, profits, legality, comfort
- Environment: roads, traffic lights, pedestrians, customers, rain, snow
- Actuators: steering, accelerator, brake, horn
- Sensors: accelerometers, cameras, engine sensors, GPS, laser

Agent type: Medical diagnosis system
- Performance measure: healthy patient, minimized costs, no lawsuits
- Environment: patient, hospital, staff
- Actuators: screen display (questions, tests, diagnoses, treatments, referrals)
- Sensors: keyboard (entry of symptoms, findings, patient's answers)

Environment Types.
Fully observable (vs. partially observable): the agent's sensors give it access to the complete state of the environment at each point in time.
Deterministic (vs. stochastic): the next state of the environment is completely determined by the current state and the action executed by the agent. (If the environment is deterministic except for the actions of other agents, the environment is strategic.)
Episodic (vs. sequential): the agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.

Environment Types.
Static (vs. dynamic): the environment is unchanged while the agent is deliberating. (The environment is semi-dynamic if it does not change with the passage of time but the agent's performance score does.)
Discrete (vs. continuous): a limited number of distinct, clearly defined percepts and actions.
Single agent (vs. multiagent): an agent operating by itself in an environment.

Environment Types.

                          Chess          Go             Vehicle        Image analysis
Observability             Fully          Partially      Partially      Fully
Agents                    Multi (comp.)  Multi (comp.)  Multi (comp.)  Single
Deterministic/Stochastic  Deterministic  Stochastic     Stochastic     Deterministic
Episodic/Sequential       Sequential     Sequential     Sequential     Episodic
Static/Dynamic            Static/Semi    Static/Semi    Dynamic        Static
Discrete/Continuous       Discrete       Discrete       Continuous     Discrete

The environment type largely determines the agent design. The real world is partially observable, stochastic, sequential, dynamic, continuous, and multi-agent.
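The classification above is just structured data, so it can be written down directly; a small sketch (the field names and encoding are my own, the values are transcribed from the table):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskEnvironment:
    """One row per property from the environment-types table."""
    observable: str      # "fully" or "partially"
    agents: str          # "single" or "multi"
    deterministic: bool  # False means stochastic
    episodic: bool       # False means sequential
    static: str          # "static", "semi", or "dynamic"
    discrete: bool       # False means continuous

ENVIRONMENTS = {
    "chess":          TaskEnvironment("fully",     "multi",  True,  False, "semi",    True),
    "go":             TaskEnvironment("partially", "multi",  False, False, "semi",    True),
    "vehicle":        TaskEnvironment("partially", "multi",  False, False, "dynamic", False),
    "image analysis": TaskEnvironment("fully",     "single", True,  True,  "static",  True),
}
```

Encoding the task environment like this makes the slide's point concrete: the agent design can be chosen by inspecting these properties.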

Agent Types. Four basic types, in order of increasing generality: simple reflex agents; model-based agents; goal-based agents; utility-based agents. All of these can be turned into learning agents.

Simple Reflex Agent

Simple Reflex Agent. Reflex-Vacuum-Agent is a simple reflex agent. Actions rely purely on condition-action rules: if condition then action. Also called memory-less or state-less. Works only if the correct decision can be made on the basis of the current percept alone, i.e. only if the environment is fully observable. Often gets trapped in infinite loops if the environment is partially observable.
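The Reflex-Vacuum-Agent mentioned above fits in a handful of condition-action rules; a minimal Python sketch, using the percept format [location, status] from the vacuum-world slide:

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent: the action depends only on the
    CURRENT percept -- no memory, no internal state."""
    location, status = percept
    if status == "Dirty":      # rule 1: dirty square -> clean it
        return "Suck"
    elif location == "A":      # rule 2: clean and at A -> move right
        return "Right"
    elif location == "B":      # rule 3: clean and at B -> move left
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
```

Because the agent never remembers where it has been, in a partially observable variant (e.g. without the location sensor) it can oscillate between squares forever, which is the infinite-loop failure mode noted above.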

Model-Based Agent

Model-Based Agent. Handles partial observability by keeping track of the part of the world it can't see now. Maintains internal state to model the world. The model of the world represents the agent's best guess (or prediction) and can't be exact. Internal state can also be used to maintain the status of the agent itself, rather than of the world.
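The internal-state idea can be sketched as a skeleton: the caller supplies a state-update function (the "model") and a list of condition-action rules. The helper names and the toy update rule below are illustrative, not from the slides.

```python
def make_model_based_agent(update_state, rules):
    """Model-based reflex agent: internal state is updated from the
    model, the last action, and the new percept; rules then fire
    against the state rather than the raw percept."""
    state = {}
    last_action = None

    def agent(percept):
        nonlocal state, last_action
        state = update_state(state, last_action, percept)  # best guess at the world
        for condition, action in rules:
            if condition(state):
                last_action = action
                return action

    return agent

# Illustrative model: remember the last observed status of each square,
# even when the agent is no longer looking at it.
def remember(state, action, percept):
    location, status = percept
    return {**state, location: status}

rules = [(lambda s: "Dirty" in s.values(), "Suck"),
         (lambda s: True, "NoOp")]
agent = make_model_based_agent(remember, rules)
```

Unlike the simple reflex agent, this one can act on squares it saw earlier but cannot see now, which is exactly how it copes with partial observability.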

Goal-Based Agent. Instead of using condition-action rules, the agent uses goals to decide which actions to take. See Search (Chapters 3, 4, and 5) and Planning (Chapters 10 and 11).

Utility-Based Agent. The utility function measures the "happiness" of the agent; the agent acts so as to maximize expected utility.
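Maximizing expected utility is a one-line computation once outcomes and their probabilities are given. A minimal sketch (the action names and numbers are invented for illustration):

```python
def best_action(actions, outcomes, utility):
    """Pick the action maximizing expected utility.
    outcomes(action) yields (probability, resulting_state) pairs."""
    def expected_utility(action):
        return sum(p * utility(s) for p, s in outcomes(action))
    return max(actions, key=expected_utility)

# Illustrative outcome model: states are just numeric payoffs here.
outcomes = {
    "safe":  [(1.0, 10)],              # EU = 10
    "risky": [(0.5, 30), (0.5, -20)],  # EU = 0.5*30 + 0.5*(-20) = 5
}
choice = best_action(["safe", "risky"], lambda a: outcomes[a], lambda s: s)
print(choice)  # safe
```

This is the sense in which a utility-based agent generalizes a goal-based one: a goal is a utility function that is 1 in goal states and 0 elsewhere, while a real-valued utility can trade off partially satisfied goals.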

Learning Agents. Learn from rewards (or penalties). Learning techniques form another field, called machine learning.

Summary.
- Agents interact with environments through actuators and sensors.
- The agent function describes what the agent does in all circumstances.
- The performance measure evaluates the environment sequence.
- A perfectly rational agent maximizes expected performance.
- Agent programs implement (some) agent functions.
- PEAS descriptions define task environments.
- Environments are classified by: observable? deterministic? episodic? static? discrete? single-agent?
- Agent types: reflex, reflex with state (model-based), goal-based, utility-based, learning agents.
- All agents can improve their performance through learning.