Elements of Artificial Intelligence and Expert Systems Master in Data Science for Economics, Business & Finance Nicola Basilico Dipartimento di Informatica Via Comelico 39/41-20135 Milano (MI) Ufficio S242 nicola.basilico@unimi.it +39 02.503.16294
Course organization
Total of 20 hours organized in two independent parts:
- 10 hours, Nicola Basilico
- 10 hours, Silvia Salini
Calendar for this part:
- May 17th, 2 hours
- June 7th, 4 hours
- June 9th, 2 hours
- June 30th, 2 hours
Exam for this part of the course:
- Choose one of the topics addressed in class
- Write a short report on it; additional references will be provided
Course goals
- Giving a general overview of the field of Artificial Intelligence (AI)
- Describing some important types of problems addressed by AI
- Describing some important techniques to solve those problems
Reference book: Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (3rd edition)
References provided during lectures are taken from conferences such as AAAI, AAMAS, and IJCAI
Roadmap (tentative) Overview of the field Single-agent settings (Search, Markov Decision Processes) Multi-agent settings, game theoretical foundations
Introduction
We can start from the very basic question: what is AI?
Let's take a bottom-up approach: what comes to mind when we think of AI?
AI is in the (tech) news We feel that AI is related to some recent technological advancements that are making new cool things possible Some popular examples: Task planning Targeted advertising Autonomous driving Virtual assistants Games Scene recognition
AI is changing things
We feel that societal and ethical implications might be at stake:
- Ethical issues: in May 2018, Google presented Duplex, an AI system for accomplishing real-world tasks over the phone
- Privacy and fairness concerns
- Impact on our economy and society: will machines take our jobs? Will machines take our boss's job?
AI is attracting a lot of money
We feel that governments and institutions are investing large funds in AI technologies:
- On May 8, 2018, the White House hosted a summit on Artificial Intelligence for American industry with ~100 stakeholders (source: whitehouse.gov)
- On 20 July 2018, China released ambitious plans to become the world leader in artificial intelligence (AI) by 2030 (source: nature.com)
- The European Union aims to reach 20 billion euros of investment in artificial intelligence by 2020 (about 1/3 of what the other two are investing)
AI index (https://aiindex.org/)
Very few people today think about dystopic views of AI.
Some believe we are in a new golden age of AI, mostly due to the recent successes in predictive technologies and data science.
(Are we in a bubble? The field has already gone through two winters.)
AI is a field in Computer Science
What actually lies behind our perception of AI is a young but rich scientific field, born in 1956.
A definition by a computer scientist: "A field within computer science that is attempting to build enhanced intelligence into computer systems" (Nils J. Nilsson)
A more systematic view
Definitions of AI can be arranged along two axes: what the system does (thinking vs. acting) and how we judge it (humanly vs. rationally):
- Thinking humanly: from a theory of mind to a computer program
- Thinking rationally: from a theory of right thinking to a computer program
- Acting humanly: a computer program that behaves like a human
- Acting rationally: a computer program that behaves rationally
Simulation: we want intelligence to emerge from a system that is internally modeled just like the target system.
Emulation: we want intelligence to emerge from a system that has no constraints on how it is internally modeled.
Acting Humanly: the Turing test
Test setting:
- Users A, B, C are in different rooms; A is a female, B is a male
- They can exchange text messages
Mechanism:
- C has to guess the sex of A and B
- B wants to cooperate with C, A wants to mislead C
We play the test a large number of times and estimate C's success rate.
Now replace one of the players with a machine: does the success rate stay the same?
Acting Humanly: the Turing test
The Turing test is important for three reasons:
- It was one of the first definitions of AI; it gave the researchers of the field a concrete objective to seek
- It focuses on the behavior of the AI and not on its internals
- It was formulated by Alan Turing
Much criticized, but still a reference in the philosophy of AI.
Today's research focuses on more specific domains, even if there are competitions.
Acting rationally: our favorite definition
Acting rationally is our reference definition:
- emulation is less restrictive
- rationality can be modeled mathematically
We will focus on this definition of AI, although the four categories are not mutually exclusive.
This introduces two central concepts: the agent and her rationality.
We need to build a system that can act rationally: where do we start?
AI approaches: the main ingredients
Problems in AI have different dimensions of complexity:
- computation: some problems are difficult in the sense that designing an efficient algorithm for their resolution might be difficult or even impossible
- information: the resolution of some problems might require the availability, and the capability to process, large amounts of data
To have a more concrete view, let's consider two (classical) running problems:
P1) Is there any person in this picture?
P2) What is the fastest route from A to B?
First approach: learning (P1)
Pipeline: Problem → Building the model → Using the model → Solution
- Building: the model is built from data, selecting it from a family; the data describe how some features of the world map to the solution. Typically difficult.
- Using: extracting the relevant features and querying the model. Typically easy.
Agent: the entity that solves the problem. Here the agent just queries the model: she is a reactive, or reflex, agent.
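As a minimal sketch of the learning approach for a problem like P1, the toy classifier below (a 1-nearest-neighbor rule in pure Python; the feature vectors and labels are made up) maps features of the world directly to a solution, so the agent only has to query it:

```python
import math

# Toy training data: feature vectors -> labels (purely illustrative).
training = [
    ((0.9, 0.8), "person"),
    ((0.1, 0.2), "no person"),
    ((0.8, 0.7), "person"),
    ((0.2, 0.1), "no person"),
]

def predict(features):
    """A reflex agent: query the learned model, no search involved."""
    _, label = min(
        ((math.dist(features, x), y) for x, y in training),
        key=lambda t: t[0],
    )
    return label

print(predict((0.85, 0.75)))  # -> person
```

The hard part (collecting data and choosing the model family) happens before this code runs; using the model is a single lookup.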
Second approach: inference (P2)
Pipeline: Problem → Building the model → Using the model → Solution
- Building: writing a model of the problem: descriptions of some features of the world and how they change. Typically easy.
- Using: computing the solution by inference on the model; the agent must search, reason, and explore different directions. Typically difficult.
Agent: the entity that solves the problem. We will focus on this approach.
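As a minimal sketch of the inference approach for a problem like P2, the code below runs breadth-first search on a hypothetical road map (the graph and place names are made up) to find a shortest route from A to B:

```python
from collections import deque

# Hypothetical road map as an adjacency list (states = places, arcs = roads).
roads = {
    "A": ["C", "D"],
    "C": ["B"],
    "D": ["C", "B"],
    "B": [],
}

def shortest_route(graph, start, goal):
    """Breadth-first search: explore states layer by layer until the goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

print(shortest_route(roads, "A", "B"))  # -> ['A', 'C', 'B']
```

Here building the model (the adjacency list) is easy; the work is done at solution time, by searching over the states.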
Agents
[Diagram: the agent receives perceptions from the environment and performs actions on it.]
The system we want to build is called an agent.
The agent works on a problem model, maintaining an internal representation of it and integrating the perceptions coming from the environment.
It can perform actions that change the environment and, as a consequence, the internal representation.
The agent wants to accomplish something: it has goals/preferences and acts rationally with respect to them.
Agents
State-based representation of the environment: it can be modeled with a graph where vertices represent states and arcs represent transitions between them (e.g., State 1 → State 2).
This representation can be characterized along different dimensions of complexity: uncertainty, knowledge, interactions.
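The state-based representation can be sketched as a plain lookup table over (state, action) pairs; the state and action names below are made up for illustration:

```python
# A minimal state-transition model: vertices are states, arcs are actions
# (the labels are hypothetical).
transitions = {
    ("State 1", "move"): "State 2",
    ("State 2", "back"): "State 1",
}

state = "State 1"
state = transitions[(state, "move")]  # apply an action, follow the arc
print(state)  # -> State 2
```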
The uncertainty dimension
Consider an example where our agent is a mobile robot that moves within an environment represented as a graph.
- Are the effects of my actions perfectly predictable? Deterministic vs. stochastic transitions.
- Am I always sure about what's going on? Fully observable vs. partially observable states.
The uncertainty dimension
Deterministic transitions, fully observable states: only actuation is needed, no sensing!
[Diagram: the robot is in A; the action "move" leads to B with certainty.]
The uncertainty dimension
Stochastic transitions, fully observable states.
[Diagram: a surveillance robot in A executes "move to B"; it may reach B or, if broken, end up in C.]
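A stochastic transition can be sketched as sampling the next state from a probability distribution over outcomes; the states and the 0.99/0.01 split below are made-up numbers:

```python
import random

# Stochastic transition model: each (state, action) pair maps to a
# probability distribution over successor states (hypothetical values).
stochastic = {
    ("A", "move_to_B"): [("B", 0.99), ("A", 0.01)],  # the move may fail
}

def step(state, action, rng=random):
    """Sample the successor state according to the transition model."""
    outcomes = stochastic[(state, action)]
    states, probs = zip(*outcomes)
    return rng.choices(states, weights=probs)[0]

random.seed(0)
print(step("A", "move_to_B"))
```

Full observability means that after sampling, the agent sees exactly which state it landed in.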
The uncertainty dimension? A move B? D move C Deterministic transitions, partially observable states
The uncertainty dimension? A move 0.99 B? D move 0.99 C Broken Stochastic transitions, partially observable states
The knowledge dimension Is all the information we need to solve the problem available in advance? Offline resolution vs online resolution In online resolution we acquire information about the problem while trying to solve it Example: explore an unknown environment in the shortest time
The interaction dimension
Am I the only one acting rationally in the environment?
Is the environment changing only because of my actions and environmental dynamics, or can it change also because of the actions of another rational agent?
If another agent is present, how do her preferences go along with mine?
- Single-agent settings: problem solving, optimization, decision theory
- Multi-agent settings: game theory
Self-interested agents
Agents have preferences or goals with respect to the possible states they might encounter.
A self-interested agent:
- knows which states of the world she likes more than others
- her decisions try to bring the world into the states she likes more
Utility theory is grounded in the concept of preferences over the set of possible states: weak preference, indifference, strict preference.
We can extend preferences to lotteries, i.e., situations where the state is uncertain.
Self-interested agents
Is any preference relation specified in such a way meaningful? In general, no: we need to require some properties.
- Completeness (C)
- Transitivity (T)
- Substitutability (S)
Self-interested agents
- Decomposability (D): stated in terms of the probability with which a lottery selects each state
- Monotonicity (M)
- Continuity (CC)
Self-interested agents
Von Neumann and Morgenstern: if all the properties hold, then there exists a function u over states such that u(s1) >= u(s2) if and only if s1 is weakly preferred to s2, and the utility of a lottery [p1: s1, ..., pk: sk] is the expectation sum_i p_i * u(s_i).
If our preferences are well-formed, then we can quantify the agent's preference degrees with a utility function that:
- assigns to each state a value (utility)
- takes the form of an expectancy when uncertainty is involved
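The von Neumann-Morgenstern result can be sketched numerically: given a utility per state, a lottery's value is its expected utility, and a rational agent picks the option that maximizes it (the states, utilities, and probabilities below are made up):

```python
# Expected utility of a lottery (all values are hypothetical).
utility = {"win": 10.0, "draw": 0.0, "lose": -5.0}

def expected_utility(lottery):
    """u(lottery) = sum over states of P(state) * u(state)."""
    return sum(p * utility[s] for s, p in lottery.items())

risky = {"win": 0.5, "lose": 0.5}  # EU = 0.5*10 + 0.5*(-5) = 2.5
safe = {"draw": 1.0}               # EU = 0.0

# A rational agent prefers the lottery with higher expected utility.
best = max([risky, safe], key=expected_utility)
print(expected_utility(risky), expected_utility(safe))  # -> 2.5 0.0
```

Note that any positive affine transformation of u (a*u + b with a > 0) represents the same preferences.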
Example 1
Compute the expected utility of each available action (in the figure, the three actions have expected utilities 0, 2, and -1).
What action is chosen by the agent? The one maximizing expected utility, i.e., the action with expected utility 2.