http://www.youtube.com/watch?v=dnbsnde1ika&feature=related
http://www.youtube.com/watch?v=jlnc9yvku0k&feature=playlist&p=ad3bb14f42437555&index=1
http://www.youtube.com/watch?v=axwaqtluzmi&feature=playlist&p=ad3bb14f42437555&index=2

M. Wilkes with mercury delay lines (www.cl.cam.ac.uk/relics/)

Turing: On Computable Numbers, with an Application to the Entscheidungsproblem (1936)
The computable numbers may be described briefly as the real numbers whose expressions as a decimal are calculable by finite means. Although the subject of this paper is ostensibly the computable numbers, it is almost equally easy to define and investigate computable functions of an integral variable or a real or computable variable, computable predicates, and so forth. The fundamental problems involved are, however, the same in each case, and I have chosen the computable numbers for explicit treatment as involving the least cumbrous technique. I hope shortly to give an account of the relations of the computable numbers, functions, and so forth to one another. This will include a development of the theory of functions of a real variable expressed in terms of computable numbers. According to my definition, a number is computable if its decimal can be written down by a machine.

[Figure: a scanner positioned over a tape with symbols, e.g. 1 1 1 x x x x x 1 1 1 1]
Four 'configurations':
-scan next right square
-scan next left square
-erase symbol
-print symbol
-> Read + Erase + Write
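The tape-and-scanner picture above can be sketched directly in code. This is a minimal sketch in Python; the class and method names are my own, not Turing's. A sparse dictionary stands in for the unbounded tape, and the scanner can move one square right or left, erase the scanned symbol, or print one.

```python
# A minimal sketch of the scanner-and-tape picture: a tape of symbols
# and a scanner supporting the four primitive configurations listed
# above. Names (Tape, head, print_) are my own illustration.

class Tape:
    def __init__(self, symbols=""):
        self.squares = dict(enumerate(symbols))  # sparse, unbounded tape
        self.head = 0                            # the scanned square

    def scan(self):
        return self.squares.get(self.head, " ")  # blank square reads as " "

    def right(self):                 # scan next right square
        self.head += 1

    def left(self):                  # scan next left square
        self.head -= 1

    def erase(self):                 # erase symbol
        self.squares.pop(self.head, None)

    def print_(self, symbol):        # print symbol
        self.squares[self.head] = symbol

tape = Tape("111")
tape.right(); tape.right(); tape.right()
tape.print_("x")                     # tape now reads 1 1 1 x
```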

The behaviour of the computer at any moment is determined by the symbols which he is observing, and his state of mind at that moment. We may suppose that there is a bound B to the number of symbols or squares which the computer can observe at one moment. If he wishes to observe more, he must use successive observations. We will also suppose that the number of states of mind which need be taken into account is finite. The reasons for this are of the same character as those which restrict the number of symbols. If we admitted an infinity of states of mind, some of them will be arbitrarily close and will be confused. Again, the restriction is not one which seriously affects computation, since the use of more complicated states of mind can be avoided by writing more symbols on the tape. Let us imagine the operations performed by the computer to be split up into simple operations which are so elementary that it is not easy to imagine them further divided. Every such operation consists of some change of the physical system consisting of the computer and his tape. We know the state of the system if we know the sequence of symbols on the tape, which of these are observed by the computer (possibly with a special order), and the state of mind of the computer. We may suppose that in a simple operation not more than one symbol is altered. Any other changes can be split up into simple changes of this kind. The situation in regard to the squares whose symbols may be altered in this way is the same as in regard to the observed squares. We may, therefore, without loss of generality, assume that the squares whose symbols are changed are always observed squares...

The simple operations must therefore include: (a) Changes of the symbol on one of the observed squares. (b) Changes of one of the squares observed to another square within L squares of one of the previously observed squares.... The operation actually performed is determined...by the state of mind of the computer and the observed symbols. In particular, they determine the state of mind of the computer after the operation is carried out. We may now construct a machine to do the work of this computer. To each state of mind of the computer corresponds an m-configuration of the machine. The machine scans B squares corresponding to the B squares observed by the computer. In any move the machine can change a symbol on a scanned square or can change any one of the scanned squares to another square distant not more than L squares from one of the other scanned squares. The move which is done, and the succeeding configuration, are determined by the scanned symbol and the m-configuration. The machines just described do not differ very essentially from computing machines as defined in §2, and corresponding to any machine of this type a computing machine can be constructed to compute the same sequence, that is to say the sequence computed by the computer.
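The m-configuration table Turing describes can be tried out on his own first example machine from the 1936 paper (section 3), which prints the figures 0 1 0 1 ... leaving a blank square between them. Below is a minimal sketch; the encoding of operations as strings like "P0,R" is my own shorthand, not Turing's notation.

```python
# Sketch of Turing's first example machine, driven by a table of
# m-configurations. In configuration b it prints 0, in e it prints 1,
# always moving right and leaving a blank square between figures.
# Table entry: (m-configuration, scanned symbol) -> (operations, next).

table = {
    ("b", " "): ("P0,R", "c"),
    ("c", " "): ("R",    "e"),
    ("e", " "): ("P1,R", "f"),
    ("f", " "): ("R",    "b"),
}

def run(steps=12):
    tape, head, mconfig = {}, 0, "b"
    for _ in range(steps):
        ops, mconfig = table[(mconfig, tape.get(head, " "))]
        for op in ops.split(","):
            if op.startswith("P"):
                tape[head] = op[1]      # print a symbol
            elif op == "R":
                head += 1               # scan next right square
            elif op == "L":
                head -= 1               # scan next left square
            elif op == "E":
                tape.pop(head, None)    # erase
    return "".join(tape.get(i, " ") for i in range(max(tape) + 1))

print(run())  # the machine computes the sequence 0 1 0 1 ...
```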

Turing: Computing Machinery and Intelligence (Mind, 59, 433-460, 1950) 1 The Imitation Game I PROPOSE to consider the question, 'Can machines think?' This should begin with definitions of the meaning of the terms 'machine' and 'think'. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words 'machine' and 'think' are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, 'Can machines think?' is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

The new form of the problem can be described in terms of a game which we call the 'imitation game'. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either 'X is A and Y is B' or 'X is B and Y is A'.... We now ask the question, 'What will happen when a machine takes the part of A in this game?' Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, 'Can machines think?'

The new problem has the advantage of drawing a fairly sharp line between the physical and the intellectual capacities of a man. No engineer or chemist claims to be able to produce a material which is indistinguishable from the human skin. It is possible that at some time this might be done, but even supposing this invention available we should feel there was little point in trying to make a 'thinking machine' more human by dressing it up in such artificial flesh. The form in which we have set the problem reflects this fact in the condition which prevents the interrogator from seeing or touching the other competitors, or hearing their voices. Some other advantages of the proposed criterion may be shown up by specimen questions and answers. Thus: Q: Please write me a sonnet on the subject of the Forth Bridge. A: Count me out on this one. I never could write poetry. The question and answer method seems to be suitable for introducing almost any one of the fields of human endeavour that we wish to include. We do not wish to penalise the machine for its inability to shine in beauty competitions, nor to penalise a man for losing in a race against an aeroplane.

It might be urged that when playing the 'imitation game' the best strategy for the machine may possibly be something other than imitation of the behaviour of a man. This may be, but I think it is unlikely that there is any great effect of this kind. In any case there is no intention to investigate here the theory of the game, and it will be assumed that the best strategy is to try to provide answers that would naturally be given by a man.... We only permit digital computers to take part in our game.... We are not asking whether all digital computers would do well in the game nor whether the computers at present available would do well, but whether there are imaginable computers which would do well.

A digital computer can usually be regarded as consisting of three parts: (i) Store. (ii) Executive unit. (iii) Control. The store is a store of information, and corresponds to the human computer's paper, whether this is the paper on which he does his calculations or that on which his book of rules is printed. In so far as the human computer does calculations in his head a part of the store will correspond to his memory. The executive unit is the part which carries out the various individual operations involved in a calculation. What these individual operations are will vary from machine to machine. Usually fairly lengthy operations can be done such as 'Multiply 3540675445 by 7076345687' but in some machines only very simple ones such as 'Write down 0' are possible.

'If position 4505 contains 0 obey next the instruction stored in 6707, otherwise continue straight on.' Instructions of these latter types are very important because they make it possible for a sequence of operations to be repeated over and over again until some condition is fulfilled, but in doing so to obey, not fresh instructions on each repetition, but the same ones over and over again.... The reader must accept it as a fact that digital computers can be constructed, and indeed have been constructed, according to the principles we have described, and that they can in fact mimic the actions of a human computer very closely.
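The store / executive-unit / control split, together with the conditional-jump instruction just quoted, can be sketched as a toy stored-program machine. Everything here, the instruction names and the cell addresses, is my own illustration, not Turing's notation; the point is that one conditional jump lets the same instructions be obeyed over and over again until a condition is fulfilled.

```python
# A toy stored-program machine sketching Turing's three parts: a store
# (one memory holding both numbers and instructions), an executive unit
# (the individual operations), and control (keeping them in order).
# JUMP_IF_ZERO mirrors "if position 4505 contains 0, obey next the
# instruction stored in 6707, otherwise continue straight on".

def run(store, pc=0):
    while True:
        op, *args = store[pc]          # control: fetch next instruction
        pc += 1
        if op == "SET":                # executive unit: the operations
            addr, value = args
            store[addr] = value
        elif op == "ADD":
            dst, src = args
            store[dst] += store[src]
        elif op == "JUMP_IF_ZERO":     # the conditional jump quoted above
            addr, target = args
            if store[addr] == 0:
                pc = target
        elif op == "HALT":
            return store

# Repeat the same instructions until a condition is fulfilled:
# add 3 to an accumulator five times.
program = {
    0: ("SET", 100, 5),                # counter
    1: ("SET", 101, 0),                # accumulator
    2: ("SET", 102, -1),               # constant -1
    3: ("SET", 103, 3),                # constant 3, the value we sum
    4: ("JUMP_IF_ZERO", 100, 8),       # loop exit test on the counter
    5: ("ADD", 101, 103),              # accumulator += 3
    6: ("ADD", 100, 102),              # counter -= 1
    7: ("JUMP_IF_ZERO", 200, 4),       # cell 200 holds 0: always jump back
    8: ("HALT",),
    200: 0,                            # scratch cell used as "always jump"
}
result = run(program)                  # result[101] holds the sum
```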

Importance is often attached to the fact that modern digital computers are electrical, and that the nervous system also is electrical. Since Babbage's machine was not electrical, and since all digital computers are in a sense equivalent, we see that this use of electricity cannot be of theoretical importance. Of course electricity usually comes in where fast signalling is concerned, so that it is not surprising that we find it in both these connections. In the nervous system chemical phenomena are at least as important as electrical. In certain computers the storage system is mainly acoustic. The feature of using electricity is thus seen to be only a very superficial similarity. If we wish to find such similarities we should look rather for mathematical analogies of function. The book of rules which we have described our human computer as using is of course a convenient fiction. Actual human computers really remember what they have got to do. If one wants to make a machine mimic the behaviour of the human computer in some complex operation one has to ask him how it is done, and then translate the answer into the form of an instruction table. Constructing instruction tables is usually described as 'programming'. To 'programme a machine to carry out the operation A' means to put the appropriate instruction table into the machine so that it will do A....

5 Universality of Digital Computers The digital computers considered in the last section may be classified amongst the 'discrete state machines'. These are the machines which move by sudden jumps or clicks from one quite definite state to another. These states are sufficiently different for the possibility of confusion between them to be ignored. Strictly speaking there are no such machines. Everything really moves continuously. But there are many kinds of machine which can profitably be thought of as being discrete state machines. For instance in considering the switches for a lighting system it is a convenient fiction that each switch must be definitely on or definitely off. There must be intermediate positions, but for most purposes we can forget about them.... This special property of digital computers, that they can mimic any discrete state machine, is described by saying that they are universal machines. The existence of machines with this property has the important consequence that, considerations of speed apart, it is unnecessary to design various new machines to do various computing processes. They can all be done with one digital computer, suitably programmed for each case. It will be seen that as a consequence of this all digital computers are in a sense equivalent.
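Universality in miniature: one general routine that, given the transition table of any discrete state machine, mimics it, so no new machine need be built for each table. The toggle-lamp table below is my own stand-in for Turing's lighting-switch illustration.

```python
# One suitably programmed routine handles any discrete state machine:
# definite jumps between definite states, intermediate positions
# ignored. The transition table is the only thing that changes per case.

def simulate(table, state, inputs):
    """Run a discrete state machine from a start state over inputs,
    returning the sequence of states visited."""
    history = [state]
    for symbol in inputs:
        state = table[(state, symbol)]
        history.append(state)
    return history

# Example table (my own): a lamp that toggles on each press and is
# unaffected by waiting, a stand-in for the lighting-system switches.
toggle_lamp = {
    ("off", "press"): "on",
    ("on",  "press"): "off",
    ("off", "wait"):  "off",
    ("on",  "wait"):  "on",
}

states = simulate(toggle_lamp, "off", ["press", "wait", "press", "press"])
```

Swapping in a different table, with no change to `simulate`, mimics a different machine; that is the sense in which the routine is universal over such tables.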

We may now consider again the point raised at the end of §3. It was suggested tentatively that the question, 'Can machines think?' should be replaced by 'Are there imaginable digital computers which would do well in the imitation game?' If we wish we can make this superficially more general and ask 'Are there discrete state machines which would do well?'... The original question, 'Can machines think?' I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. I believe further that no useful purpose is served by concealing these beliefs.

Objections
-The Theological Objection (playing God)
-The 'Heads in the Sand' Objection (too awful to contemplate)
-The Mathematical Objection (the machine is limited)
-The Argument from Consciousness (machines are not conscious)
-Arguments from Various Disabilities (you can do X but not Z)
-Lady Lovelace's Objection (machines lack originality)
-The Argument from Continuity in the Nervous System (the nervous system is continuous: true, but not important)
-The Argument from Informality of Behaviour (humans do not have formal rules, no?)
-The Argument from Extra-Sensory Perception (machines do not have clairvoyance)

Learning Machines
Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain. Presumably the child-brain is something like a note-book as one buys it from the stationers. Rather little mechanism, and lots of blank sheets.... We have thus divided our problem into two parts. The child-programme and the education process. These two remain very closely connected. We cannot expect to find a good child-machine at the first attempt. One must experiment with teaching one such machine and see how well it learns.

We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision. Many people think that a very abstract activity, like the playing of chess would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child.

50s-60s: golden age of AI: strong AI (domain-universal)
-search (forward and backward chaining)
-reasoning
70s: applied AI: expert systems
-natural language understanding
-knowledge representation
-object-oriented programming languages
-computer vision
80s: applied AI crisis (weak AI, application-specific)
-reinforcement learning
-supervised machine learning
90s/00s: hardware-related performance jump
-military (cruise missiles)
-domain-specific expert systems (world-class chess)
-data mining, unsupervised learning
-embodied robotics

1950 - Alan Turing published "Computing Machinery and Intelligence"; Claude Shannon published a detailed analysis of chess playing as search.
1956 - John McCarthy coined the term "artificial intelligence" as the topic of the Dartmouth Conference, the first conference devoted to the subject. Demonstration of the first running AI program, the Logic Theorist, written by Allen Newell, J. C. Shaw and Herbert Simon.
1958 - John McCarthy (MIT) invented the Lisp language.
1963 - Ivan Sutherland's Sketchpad introduced the idea of interactive graphics into computing.
1965 - Joseph Weizenbaum (MIT) built ELIZA, an interactive program that carries on a dialogue in English on any topic.
1967 - The Dendral program (Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, Georgia Sutherland at Stanford) interpreted mass spectra of organic chemical compounds: the first successful knowledge-based program for scientific reasoning.
1971 - Terry Winograd (MIT) demonstrated the ability of computers to understand English sentences.

Mid-90s - AI-based information extraction programs in widespread use on WWW resources.
1997 - The Deep Blue chess program beats the reigning world chess champion, Garry Kasparov.
2004 - NASA develops AI control for planetary rovers.
2005 - AI in games (neural networks and natural language processing).
2006 onwards - Humanoid robots, fault-tolerant robot systems, massive searchable data repositories; semantics still resists automation.

The History of Hacking 3/5: http://www.youtube.com/watch?v=t6ic6as9pcm
The History of Hacking 4/5: http://www.youtube.com/watch?v=mj6_dvs0ygo
The History of Hacking 5/5: http://www.youtube.com/watch?v=wsg8whhgapy&mode=related&search=

Circuit design advances over the past 60 years:
1947 - first (point-contact) transistor (Brattain, Bardeen), Bell Labs
1959 - first planar ("flat") transistor: semiconducting and insulating channels imprinted onto a silicon wafer
1961 - resistor-transistor logic chip: first commercial integrated circuit
1979 - Motorola 68000, 68,000 transistors
2000 - Pentium IV, 42 x 10^6 transistors
2007 - Intel Core 2 Duo, 410 x 10^6 transistors