AI Principles, Semester 2, Week 1, Lecture 2: Cognitive Science and AI Applications
- How simulations can act as scientific theories
- The Computational and Representational Understanding of Mind
- Boundaries of Cognitive Science
- AI and Mathematical knowledge
- AI and Dynamic Systems
- AI and Emotion
- AI and Consciousness
- Embodied AI
- AI Applications for student presentations
From Paradigms to Simulations
Paradigms, Frameworks, Theories, Models, Simulations
- Paradigms: Newtonian physics versus Relativity and QM; Behaviourism versus 'thought as computation'; GOFAI versus nouvelle AI
- Frameworks: explanations for a set of empirically observable phenomena
- Theories: well-specified theories that may give rise to precise predictions, and may be presented in formal mathematical terms
- Simulations: models that can be run on a computer
Benefits and problems of using mathematical models and simulations
Benefits:
- Rigorous specification of theory: precision of terms, new tools for studying concepts, revelation of hidden assumptions
- Exploration of complex domains: economical explanations; simulations can go beyond mathematical boundaries
- Serendipity: emergence and surprise (Dawson, Minds and Machines)
Problems:
- Specification problems: choosing the wrong detail; problems when single successes are over-generalised; communication problems
- Bonini's Paradox: simulations more complex than reality
- The validation problem
The special case of simulation in AI and Cognitive Science
Kenneth Craik, The Nature of Explanation (1943):
'By a model we thus mean any physical or chemical system which has a similar relation-structure to that of the processes it imitates. By "relation-structure" I do not mean some obscure non-physical entity which attends the model, but the fact that it is a physical working model which works in the same way as the process it parallels, in the aspects under consideration at any moment. Thus, the model need not resemble the real object pictorially; Kelvin's tide-predictor, which consists of a number of pulleys on levers, does not resemble a tide in appearance, but it works in the same way in certain essential respects' (p. 51, Craik 1943)
The special case of simulation in AI and Cognitive Science
Dawson (2004): 'Intuitively, a model is an artifact that can be mapped on to a phenomenon that we are having difficulty understanding. By examining the model we can increase our understanding of what we are modeling. For it to be useful, the artifact must be easier to work with or easier to understand than is the phenomenon being modeled. This usually results because the model reflects some of the phenomenon's properties, and does not reflect them all.'
- A model is useful because it simplifies the situation by omitting some characteristics.
- Models should be easier to work with than reality, but there is a trade-off: some of the complexity of reality must be omitted by a process of abstraction.
The special case of simulation in AI and Cognitive Science
Kenneth Craik, The Nature of Explanation (1943):
'My hypothesis then is that thought models, or parallels, reality: that its essential feature is not "the mind", "the self", "sense-data", nor propositions but symbolism, and that this symbolism is largely of the same kind as that which is familiar to us in mechanical devices which aid thought and calculation.'
- Craik is saying that symbols in the mind are a similar kind of thing, used in a similar kind of way, to symbols used within computers.
- Humans possess models of reality inside their heads in the same way that scientists model phenomena.
- Models in AI serve as both engineering models and psychological models.
The Computational and Representational Understanding of the Mind (CRUM)
Do brains work just like digital computers?
Program: data structures + algorithms = running program
Mind: mental representations + computational procedures = thinking
- Metaphors of mind follow the technology of the day: the Victorians compared mental processes to mechanical processes.
- Levels of description will be discussed in Lecture 8 in Week 4.
(Reading: Thagard, Mind: Introduction to Cognitive Science, chapter 1)
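The program side of the CRUM analogy can be made concrete with a toy sketch: a data structure (the representation) plus an algorithm (the procedure) yields behaviour when run, just as CRUM takes mental representations plus computational procedures to yield thinking. The semantic-network contents and function names below are illustrative assumptions, not drawn from Thagard.

```python
# Representation: a tiny 'is-a' semantic network stored as a dictionary.
kb = {
    "canary": "bird",
    "bird": "animal",
}

def is_a(kb, item, category):
    """Procedure: follow 'is-a' links upward until the category is found."""
    while item in kb:
        item = kb[item]
        if item == category:
            return True
    return False

# Running the procedure over the representation produces behaviour:
print(is_a(kb, "canary", "animal"))  # True: a canary is an animal
print(is_a(kb, "canary", "fish"))    # False: no chain of links reaches 'fish'
```

On the CRUM view, the claim is that thinking stands to mental representations as this traversal stands to the dictionary: neither the data nor the algorithm alone does anything; behaviour emerges only when the procedure runs over the representation.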
Forward engineering and reverse engineering in AI
Daniel Dennett (1994):
- The forward engineer builds an artifact (a robot or software program) that accomplishes a capability, by whatever means he chooses.
- The reverse engineer would show, through building, that he had figured out how the human mechanism works.
- The reverse engineer makes the assumption that although the historical design process of evolution does not proceed by an exact analogue of the top-down engineering process, reverse engineering is just as applicable a methodology to systems designed by Nature as to systems designed by engineers. Even though the forward processes have been different, the products are of the same sort, so the reverse process of functional analysis should work as well on both sorts of product.
Variety of perspectives within the Computational and Representational Understanding of the Mind (CRUM)
CRUM = symbolic (GOFAI) and connectionist (nouvelle) computation: logic, rules, concepts, analogies, images, connections
- Dennett: Darwinian, Skinnerian, Popperian, Gregorian minds
- Minsky: the layered framework of The Emotion Machine
- Nilsson: iconic versus feature-based representation; reactive versus deliberative; reasoning versus projecting
- Sloman: analogical versus Fregean representation
(Reading: Thagard, Mind: Introduction to Cognitive Science, chapters 1-8; Dennett, Kinds of Minds; Nilsson, Artificial Intelligence; Sloman (1971), 'Interactions between Philosophy and Artificial Intelligence: The role of intuition and non-logical reasoning in intelligence')
Boundaries of Cognitive Science
- AI and Emotion
- AI and Consciousness
- Embodied AI
- Situated AI
- AI and Dynamic Systems
- AI and Mathematical knowledge
AI and Emotion
- How can artifacts possess emotions? A functional explanation for emotion.
- Herbert Simon (1967): emotion as a global interrupt to processing; emotions linked to goals
- John McCarthy, 'The Robot and the Baby'
- Drew McDermott, 'Artificial Intelligence Meets Natural Stupidity'
- Aaron Sloman (1996): a functional explanation for grief
- The Cognition and Affect Directory: many papers with an AI approach to emotion
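Simon's idea of emotion as a global interrupt can be sketched as a deliberative loop that is pre-empted whenever an urgent condition arises. This is a toy illustration only: the task names, the hazard schedule, and the pre-emption policy are all assumptions for the sketch, not details from Simon's 1967 paper.

```python
def run_agent(tasks, hazards):
    """Process tasks in order, but let any hazard interrupt the current task.

    hazards maps a step index to an urgent condition; when one fires,
    the planned task at that step is pre-empted (the 'emotional' interrupt
    overrides ongoing goal-directed processing).
    """
    log = []
    for step, task in enumerate(tasks):
        if step in hazards:
            log.append(f"INTERRUPT: handle {hazards[step]}")
            continue  # the planned task is dropped this step
        log.append(f"work on {task}")
    return log

trace = run_agent(["plan", "write", "revise"], {1: "fire alarm"})
for line in trace:
    print(line)
```

The point of the sketch is architectural: the interrupt is checked globally on every cycle, regardless of which goal is currently being pursued, which is the functional role Simon assigns to emotion.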
AI and Consciousness
- Different meanings for the term 'consciousness': the easy and hard problems (Chalmers)
- Can artifacts possess consciousness? Is this a more difficult question than the one for emotion? Explain your answer.
- What has the Turing Test got to do with consciousness?
- Sensory qualia: the difference between Cheddar and Wensleydale cheese
- Bernard Baars's global workspace theory of consciousness, invoked by Stan Franklin and Murray Shanahan
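The core mechanism of Baars's global workspace theory, as computationally adopted by Franklin and Shanahan, can be caricatured in a few lines: specialist processes compete for access to a shared workspace, and the winning content is broadcast back to all of them. The specialist names and salience scores below are illustrative assumptions, not part of any of the cited models.

```python
# Specialist processes, each offering content with a salience score.
specialists = {
    "vision":  {"content": "red light ahead", "salience": 0.9},
    "hearing": {"content": "radio chatter",   "salience": 0.4},
    "touch":   {"content": "seat vibration",  "salience": 0.2},
}

# Competition: the most salient content wins access to the workspace.
winner = max(specialists, key=lambda name: specialists[name]["salience"])
broadcast = specialists[winner]["content"]

# Broadcast: every specialist now receives the winning content,
# making it globally available for further processing.
received = {name: broadcast for name in specialists}
print(winner, "->", broadcast)
```

Even this caricature shows the theory's two distinctive moves: most processing stays local and parallel, while a single serial bottleneck (the workspace) makes one content globally available at a time.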
Embodied AI
- Being in the World (Heidegger, Dreyfus, Brooks)
- Intelligence is essentially non-representational (by this, researchers in embodied cognition mean without central representations such as symbols).
- Direct perception (Gibson) rejects the inferential view of perception: we perceive affordances.
(Reading: Thagard, Mind, chapter 10; Haugeland, Mind Design II, chapter 6; Dreyfus, 'From Micro-Worlds to Knowledge Representation: AI at an Impasse', chapter 15; Brooks, 'Intelligence Without Representation'; Clark, Being There)
Dynamic Systems
Non-(computational-representational) approaches to human thinking.
Thagard (page 170): 'Instead of proposing a set of representations and processes, we should follow the successful example of physics and biology and try to develop equations that describe how the mind changes over time.'
- Theoretical tools of a dynamic systems analysis: state space, attractors, chaotic systems, phase transitions, saddle points
- Will a dynamic systems analysis facilitate the engineering aims of AI?
(Reading: Thagard, Mind, chapter 11; Haugeland, Mind Design II, chapter 16; van Gelder, 'Dynamics and Cognition')
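The flavour of a dynamic systems analysis, equations describing how a state changes over time rather than representations and processes, can be seen in the logistic map, a standard one-dimensional example (not one used by Thagard or van Gelder specifically; the parameter values here are illustrative). For r = 2.5 every trajectory in (0, 1) converges to the fixed-point attractor x* = 1 - 1/r = 0.6.

```python
def logistic_trajectory(r, x0, steps):
    """Iterate the logistic map x' = r * x * (1 - x) from x0.

    Returns the full sequence of states visited, i.e. a trajectory
    through the one-dimensional state space [0, 1].
    """
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

traj = logistic_trajectory(r=2.5, x0=0.2, steps=50)
print(round(traj[-1], 6))  # settles very close to the attractor at 0.6
```

Raising r past about 3.57 makes the same one-line update rule chaotic, which is why this family of maps is a stock illustration of attractors and phase transitions, the theoretical tools listed above.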
Mathematical knowledge
Deriving all mathematical knowledge from a few basic assumptions is not possible: Gödel's incompleteness theorem.
Thagard on the argument that a computational account of mind is impossible:
1. Any computer that claims to model the human mind is an instantiation of a formal system.
2. If this formal system is consistent and adequate for arithmetic, then by Gödel's theorem it is incomplete, in having a formula that is neither provable nor disprovable.
3. But the human mind can see that this formula is true, so there is something that the mind can do that the computer cannot do.
4. Hence, the mind is not a computer.
(Reading: Thagard, Mind, chapter 11; Hofstadter, Gödel, Escher, Bach)
History of AI
Review of subjects for student presentations
- Boundaries of Cognitive Science
- AI Applications: list of weblinks on the module page (copied from last year)
- Subjects that should not form presentations
- Deadline for informing me of your intended subject: Monday 29th January
Sources of information for student presentations
- Your tutor, other lecturers, other students, demonstrators, etc. Look at the research pages of the school website to see the kind of research different lecturers do. I recommend emailing to arrange an appointment; this will give the lecturer time to think about your request and perhaps give you more information.
- Google, Wikipedia, and the websites of famous AI universities: Stanford, MIT, Carnegie Mellon, and in the UK Edinburgh and Sussex
- Textbooks and magazines like New Scientist (which you can search online)
- Recently published AI books for the general reader: Stan Franklin, Artificial Minds; Andy Clark, Being There
- Other types of review book: Margaret Boden, Mind as Machine
Deadline reminder: email d.d.petters@cs.bham.ac.uk by 6th February with your intended subject for presentation.