COMP310 Multi-Agent Systems
Chapter 3 - Deductive Reasoning Agents
Dr Terry R. Payne, Department of Computer Science


Agent Architectures

Pattie Maes (1991):
"[A] particular methodology for building [agents]. It specifies how... the agent can be decomposed into the construction of a set of component modules and how these modules should be made to interact. The total set of modules and their interactions has to provide an answer to the question of how the sensor data and the current internal state of the agent determine the actions... and future internal state of the agent. An architecture encompasses techniques and algorithms that support this methodology..."

Leslie Kaelbling (1991):
"[A] specific collection of software (or hardware) modules, typically designated by boxes with arrows indicating the data and control flow among the modules. A more abstract view of an architecture is as a general methodology for designing particular modular decompositions for particular tasks..."

Classes of Architecture

1956-present: Symbolic Reasoning Agents
Agents make decisions about what to do via symbol manipulation. In its purest expression, this approach proposes that agents use explicit logical reasoning in order to decide what to do.

1985-present: Reactive Agents
Problems with symbolic reasoning led to a reaction against it, and to the reactive agents movement.

1990-present: Hybrid Agents
Hybrid architectures attempt to combine the best of symbolic reasoning and reactive architectures.

Symbolic Reasoning Agents
The classical approach to building agents is to view them as a particular type of knowledge-based system, and bring all the associated methodologies of such systems to bear. This paradigm is known as symbolic AI.
We define a deliberative agent or agent architecture to be one that:
- contains an explicitly represented, symbolic model of the world; and
- makes decisions (for example, about what actions to perform) via symbolic reasoning.

Two Issues

The Transduction Problem: identifying objects is hard!!!
The transduction problem is that of translating the real world into an accurate, adequate symbolic description, in time for that description to be useful. This has led to research into vision, speech understanding, learning, ...

The Representation/Reasoning Problem: representing objects is harder!
How to symbolically represent information about complex real-world entities and processes, and how to get agents to reason with this information in time for the results to be useful. This has led to research into knowledge representation, automated reasoning, planning, ...

Most researchers accept that neither problem is anywhere near solved.

The Representation/Reasoning Problem
The underlying problem with knowledge representation and reasoning lies with the complexity of symbol manipulation algorithms: in general, many (most) search-based symbol manipulation algorithms of interest are highly intractable, and it is hard to find compact representations.
Because of these problems, some researchers have looked to alternative techniques for building agents; we look at these later.

Deductive Reasoning Agents
How can an agent decide what to do using theorem proving? The basic idea is to use logic to encode a theory stating the best action to perform in any given situation. Let:
- ρ be this theory (typically a set of rules);
- Δ be a logical database that describes the current state of the world;
- Ac be the set of actions the agent can perform;
- Δ ⊢ρ φ mean that φ can be proved from Δ using the rules ρ.

Deductive Reasoning Agents
How does this fit into the abstract description we talked about last time?
The perception function is as before:
    see : E → Per
(of course, this is (much) easier said than done).
The next state function revises the database Δ:
    next : D × Per → D
where D is the set of all such databases.
And the action function? A possible action function is on the next slide.

Action Function

function action(Δ : D) : Ac
    /* try to find an action explicitly prescribed */
    for each α ∈ Ac do
        if Δ ⊢ρ Do(α) then
            return α
        end-if
    end-for
    /* try to find an action not excluded */
    for each α ∈ Ac do
        if Δ ⊬ρ ¬Do(α) then
            return α
        end-if
    end-for
    return null    /* no action found */
end function
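To make this concrete, here is a minimal Python sketch of the same two-pass selection (an illustration, not the lecture's own code): the database Δ is a set of ground atoms, the rules ρ are triples of (positive conditions, negative conditions, conclusion), and "proof" is approximated by simple rule matching rather than full theorem proving. All identifiers are assumptions made for this sketch.

```python
# Minimal sketch of the deductive action function (illustrative only).
# delta is a set of ground atoms; each rule in rho is
# (positive_conditions, negative_conditions, conclusion), where conclusion
# is ("Do", action) or ("NotDo", action).  "Proof" here is plain rule
# matching, a stand-in for full theorem proving.

def proves(delta, rho, goal):
    """Return True if some rule whose conditions hold in delta concludes goal."""
    for positive, negative, conclusion in rho:
        if conclusion == goal and positive <= delta and not (negative & delta):
            return True
    return False

def action(delta, rho, actions):
    # First pass: look for an action explicitly prescribed (delta |-rho Do(a)).
    for a in actions:
        if proves(delta, rho, ("Do", a)):
            return a
    # Second pass: look for an action that is at least not excluded
    # (delta does not prove NotDo(a)).
    for a in actions:
        if not proves(delta, rho, ("NotDo", a)):
            return a
    return None  # no action found

if __name__ == "__main__":
    Ac = ["turn", "forward", "suck"]
    delta = {"In(0,0)", "Facing(north)", "Dirt(0,0)"}
    rho = [
        ({"In(0,0)", "Dirt(0,0)"}, set(), ("Do", "suck")),
        ({"In(0,0)", "Facing(north)"}, {"Dirt(0,0)"}, ("Do", "forward")),
    ]
    print(action(delta, rho, Ac))   # -> "suck"
```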

An Example: The Vacuum World
The goal is for the robot to clear up all the dirt. (The accompanying figure shows a 3 x 3 grid with coordinates 0-2 on each axis.)
Three domain predicates are used in this exercise:
- In(x, y): the agent is at (x, y)
- Dirt(x, y): there is dirt at (x, y)
- Facing(d): the agent is facing direction d
Possible actions: Ac = {turn, forward, suck}. Note: turn means turn right.

The Vacuum World
[State-transition diagram: with the system as depicted, here are some possible ways that it might run. Starting from the database {In(0,0), Facing(north), Dirt(0,2), Dirt(1,2)}, the actions forward, turn and suck lead through successive databases, e.g. forward yields {In(0,1), Facing(north), Dirt(0,2), Dirt(1,2)}, and suck at (0,2) removes Dirt(0,2).]

The Vacuum World
Rules ρ for determining what to do (using the same three predicates and the actions Ac = {turn, forward, suck}, where turn means turn right):

In(0, 0) ∧ Facing(north) ∧ ¬Dirt(0, 0) → Do(forward)
In(0, 1) ∧ Facing(north) ∧ ¬Dirt(0, 1) → Do(forward)
In(0, 2) ∧ Facing(north) ∧ ¬Dirt(0, 2) → Do(turn)
In(0, 2) ∧ Facing(east) → Do(forward)
... and so on!

Using these rules (plus other obvious ones), starting at (0, 0) the robot will clear up the dirt.
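As a further illustration (again, not the lecture's code), the sketch below encodes rules in the spirit of ρ above as plain data, including the "obvious" suck rule, and runs a toy simulation of the first column of the grid: the robot moves from (0, 0) up to the dirt at (0, 2), sucks it up, and then turns. The rule format and the apply_action environment model are assumptions for this sketch.

```python
# Illustrative vacuum-world sketch (not the lecture's code).  Rules are
# (conditions, forbidden, action); the first matching rule's action is
# chosen -- a special case of the generic action() function sketched earlier.

rules = [
    # In(x,y) & Dirt(x,y) -> Do(suck)   (one of the "other obvious" rules)
    *[({f"In({x},{y})", f"Dirt({x},{y})"}, set(), "suck")
      for x in range(3) for y in range(3)],
    # In(0,0) & Facing(north) & not Dirt(0,0) -> Do(forward), etc.
    ({"In(0,0)", "Facing(north)"}, {"Dirt(0,0)"}, "forward"),
    ({"In(0,1)", "Facing(north)"}, {"Dirt(0,1)"}, "forward"),
    ({"In(0,2)", "Facing(north)"}, {"Dirt(0,2)"}, "turn"),
    ({"In(0,2)", "Facing(east)"}, set(), "forward"),
]

def decide(db):
    for conds, forbidden, action in rules:
        if conds <= db and not (forbidden & db):
            return action
    return None

def apply_action(db, action):
    """A toy environment / next-state function covering the first column only."""
    x, y = next((x, y) for x in range(3) for y in range(3)
                if f"In({x},{y})" in db)
    if action == "suck":
        db.discard(f"Dirt({x},{y})")
    elif action == "forward" and "Facing(north)" in db:
        db.discard(f"In({x},{y})")
        db.add(f"In({x},{y + 1})")
    elif action == "turn":                      # turn right: north -> east
        db.discard("Facing(north)")
        db.add("Facing(east)")
    return db

db = {"In(0,0)", "Facing(north)", "Dirt(0,2)"}
for _ in range(5):
    act = decide(db)
    print(act, sorted(db))
    if act is None:
        break
    db = apply_action(db, act)
# Action sequence printed: forward, forward, suck, turn, forward
# (the dirt at (0, 2) is cleared on the third step).
```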

The Vacuum World
Problems:
- How to convert video camera input to Dirt(0, 1)?
- Decision making assumes a static environment: calculative rationality.
- Decision making using first-order logic is undecidable!
Typical solutions:
- weaken the logic;
- use symbolic, non-logical representations;
- shift the emphasis of reasoning from run time to design time.

Agent-Oriented Programming
Yoav Shoham introduced agent-oriented programming in 1990 as a "... new programming paradigm, based on a societal view of computation ...".
The key idea is to program agents directly in terms of intentional notions like belief, desire, and intention, adopting the same abstractions that humans use. This resulted in the Agent0 programming language.

Agent0
AGENT0 is implemented as an extension to LISP. Each agent in AGENT0 has four components:
- a set of capabilities (things the agent can do);
- a set of initial beliefs;
- a set of initial commitments (things the agent will do); and
- a set of commitment rules.
The key component, which determines how the agent acts, is the commitment rule set. Each commitment rule contains:
- a message condition;
- a mental condition; and
- an action.

Agent0 Decision Cycle
On each decision cycle:
- the message condition is matched against the messages the agent has received;
- the mental condition is matched against the beliefs of the agent;
- if the rule fires, then the agent becomes committed to the action (the action gets added to the agent's commitment set).
Actions may be:
- private: an internally executed computation; or
- communicative: sending messages.
Messages are constrained to be one of three types:
- requests, to commit to action;
- unrequests, to refrain from action; and
- informs, which pass on information.
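A hedged sketch of this decision cycle in Python (an approximation, not AGENT0 itself): messages, beliefs and commitment rules are plain tuples and sets, and condition matching is literal membership rather than AGENT0's pattern language. All names are assumptions for this sketch.

```python
# Illustrative sketch of an AGENT0-style decision cycle (not AGENT0 itself).
# A commitment rule is (message_condition, mental_condition, action); the
# message condition is matched against received messages and the mental
# condition against current beliefs, both by simple membership tests.

def decision_cycle(messages, beliefs, commitment_rules, commitments):
    for msg_cond, mental_cond, action in commitment_rules:
        fired = msg_cond in messages and mental_cond <= beliefs
        if fired:
            commitments.add(action)   # the agent becomes committed to the action
    return commitments

if __name__ == "__main__":
    messages = {("alice", "REQUEST", "print(report)")}
    beliefs = {"Friend(alice)", "CAN(self, print(report))"}
    rules = [
        # "If a friend requests an action I can do, commit to it."
        (("alice", "REQUEST", "print(report)"),
         {"Friend(alice)", "CAN(self, print(report))"},
         "print(report)"),
    ]
    print(decision_cycle(messages, beliefs, rules, set()))
    # -> {'print(report)'}
```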

Commitment Rules
A commitment rule:

COMMIT(
  ( agent, REQUEST, DO(time, action) ),        ;;; msg condition
  ( B,
    [now, Friend agent] AND
    CAN(self, action) AND
    NOT [time, CMT(self, anyaction)]
  ),                                           ;;; mental condition
  self,
  DO(time, action)
)

This rule may be paraphrased as follows: if I receive a message from agent which requests me to do action at time, and I believe that:
- agent is currently a friend;
- I can do the action; and
- at time, I am not committed to doing any other action,
then commit to doing action at time.


PLACA
A more refined implementation was developed by Becky Thomas for her 1993 doctoral thesis. Her Planning Communicating Agents (PLACA) language was intended to address one severe drawback of AGENT0: the inability of agents to plan, and to communicate requests for action via high-level goals. Agents in PLACA are programmed in much the same way as in AGENT0, in terms of mental change rules.

PLACA: Mental Change Rule
A PLACA mental change rule:

(((self ?agent REQUEST (?t (xeroxed ?x)))
  (AND (CAN-ACHIEVE (?t xeroxed ?x)))
  (NOT (BEL (*now* shelving)))
  (NOT (BEL (*now* (vip ?agent))))
  ((ADOPT (INTEND (5pm (xeroxed ?x)))))
  ((?agent self INFORM (*now* (INTEND (5pm (xeroxed ?x)))))))

This may be paraphrased as: if someone asks you to xerox something x at time t, and you can, and you don't believe that they're a VIP, or that you're supposed to be shelving books, then adopt the intention to xerox it by 5pm, and inform them of your newly adopted intention.
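To make the structure of the rule easier to see, here is the same example rendered as plain Python data (purely illustrative; the field names are assumptions for this sketch, not PLACA syntax), labelling which parts are the triggering message, the mental preconditions, the adopted intention, and the outgoing message.

```python
# Purely illustrative: the PLACA mental-change rule above rendered as plain
# Python data so its parts are easy to label.  Field names are assumptions.
xerox_rule = {
    # the triggering message: ?agent asks self to have ?x xeroxed by time ?t
    "trigger": ("self", "?agent", "REQUEST", ("?t", ("xeroxed", "?x"))),
    # mental preconditions that must hold before the rule fires
    "conditions": [
        ("CAN-ACHIEVE", ("?t", ("xeroxed", "?x"))),
        ("NOT", ("BEL", ("*now*", "shelving"))),
        ("NOT", ("BEL", ("*now*", ("vip", "?agent")))),
    ],
    # mental changes: adopt the intention to have ?x xeroxed by 5pm
    "mental_changes": [("ADOPT", ("INTEND", ("5pm", ("xeroxed", "?x"))))],
    # outgoing message: inform ?agent of the newly adopted intention
    "messages_out": [("?agent", "self", "INFORM",
                      ("*now*", ("INTEND", ("5pm", ("xeroxed", "?x")))))],
}

print(xerox_rule["mental_changes"])
```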

Concurrent MetateM
Concurrent MetateM is a multi-agent language developed by Michael Fisher. Each agent is programmed by giving it a temporal logic specification of the behaviour it should exhibit; these specifications are executed directly in order to generate the behaviour of the agent.
Temporal logic is classical logic augmented by modal operators for describing how the truth of propositions changes over time. Think of the world as being a number of discrete states: there is a single past history, but a number of possible futures, i.e. all the possible ways that the world might develop.

MetateM Agents
A Concurrent MetateM system contains a number of agents (objects). Each object has three attributes:
- a name;
- an interface; and
- a MetateM program.
An agent's interface contains two sets: the messages the agent will accept, and the messages the agent may send. For example, a stack object's interface:
    stack(pop, push)[popped, stackfull]
where {pop, push} are the messages received and {popped, stackfull} are the messages sent.

MetateM
The root of the MetateM concept is Gabbay's separation theorem: any arbitrary temporal logic formula can be rewritten in a logically equivalent "past ⇒ future" form.
Execution proceeds by a process of continually matching rules against a history, and firing those rules whose antecedents are satisfied. The instantiated future-time consequents become commitments which must subsequently be satisfied.

Examples

□important(agents)
  means it is now, and will always be true, that agents are important
◇important(concurrentmetatem)
  means sometime in the future, ConcurrentMetateM will be important
⧫important(prolog)
  means sometime in the past it was true that Prolog was important
(¬friends(us)) U apologise(you)
  means we are not friends until you apologise
○apologise(you)
  means tomorrow (in the next state), you apologise
●apologise(you) ⇒ ○friends(us)
  means if you apologised yesterday, then tomorrow we will be friends
friends(us) S apologise(you)
  means we have been friends since you apologised
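To illustrate the execution model described above (match past-time antecedents against the history; fire future-time consequents as commitments), here is a toy Python sketch. It is not Concurrent MetateM: it handles only rules of the form "if P held in the previous state, then make Q true in the next state", with propositions represented as strings and all names assumed for illustration.

```python
# Toy sketch of MetateM-style execution (illustrative only; not Concurrent
# MetateM).  Each rule is (past_proposition, future_proposition), read as:
# "if past_proposition held in the previous state, then commit to making
# future_proposition true in the next state."

def execute(rules, initial_state, steps):
    history = [set(initial_state)]
    for _ in range(steps):
        previous = history[-1]
        # Fire every rule whose past-time antecedent is satisfied by the
        # previous state; the consequents become commitments for this state.
        commitments = {future for past, future in rules if past in previous}
        history.append(commitments)
    return history

if __name__ == "__main__":
    rules = [
        # "if you apologised yesterday, then tomorrow we will be friends"
        ("apologise(you)", "friends(us)"),
        # once friends, stay friends (a crude stand-in for an always-rule)
        ("friends(us)", "friends(us)"),
    ]
    for i, state in enumerate(execute(rules, {"apologise(you)"}, 3)):
        print(i, sorted(state))
    # 0 ['apologise(you)']
    # 1 ['friends(us)']
    # 2 ['friends(us)']
    # 3 ['friends(us)']
```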

Summary
This chapter has focussed on agent architectures and general approaches to programming an agent. We defined the notion of symbolic reasoning agents, and discussed:
- how deductive reasoning can be achieved through the use of logic; and
- the transduction and representation problems.
We introduced the concept of agent-oriented programming, and looked at examples of AOP languages, including:
- Agent0 and PLACA; and
- Concurrent MetateM and temporal logic.

Class Reading (Chapter 3): "Agent-Oriented Programming", Yoav Shoham. Artificial Intelligence Journal 60(1), March 1993, pp. 51-92.
This paper introduced agent-oriented programming and, throughout the late 1990s, was one of the most cited articles in the agent community. One of its main points was the notion of using mental states, and it introduced the programming language Agent0.

In the next chapter, we will consider the merits of practical reasoning agents.