Philosophical Foundations of Artificial Intelligence
Santa Clara University, 2016
Weak AI: Can machines act intelligently?
- 1956 Dartmouth Summer Workshop proposal: "Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
- Sayre (1993): "Artificial intelligence pursued within the cult of computationalism stands not even a ghost of a chance of producing durable results."
Weak AI: Can machines act intelligently?
- Turing: instead of asking whether machines can think, ask whether we can build machines that pass a behavioral intelligence test.
- This test is known as the Turing Test.
Weak AI: Can machines act intelligently?
- The Turing Test in practice: the Loebner competition. Neither the gold prize (video/audio interface) nor the silver prize (text only) has ever been awarded.
- A 2014 event at the University of Reading was won by the Russian chatbot Eugene Goostman. During a series of five-minute text conversations, the bot convinced 33% of the contest's judges that it was human.
Weak AI: Can machines act intelligently?
- Even when people were not explicitly judging, they have mistaken computer programs for human beings:
- ELIZA, the pattern-matching "psychotherapist" program that reflects the user's statements back as questions (sketched below)
- The MGonz and Natachata chatbots
- CyberLover, which attracted the attention of law enforcement because correspondents revealed enough personal details to enable identity theft
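To make ELIZA's reflection trick concrete, here is a minimal Python sketch of keyword-and-reflection pattern matching in the spirit of Weizenbaum's 1966 program. The patterns, word lists, and responses are illustrative stand-ins, not Weizenbaum's actual script.

```python
import random
import re

# Pronoun swaps used to turn the user's words back on them,
# e.g. "my own space" -> "your own space". Minimal, illustrative list.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "mine": "yours"}

# Keyword rules in the spirit of ELIZA's script (hypothetical patterns,
# not Weizenbaum's actual ones). Each maps a regex to canned responses.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?",
                      "Would it really help you to get {0}?"]),
    (r"i feel (.*)", ["Why do you feel {0}?",
                      "How long have you felt {0}?"]),
    (r".* mother.*", ["Tell me more about your mother."]),
]
DEFAULTS = ["Please go on.", "How does that make you feel?"]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(sentence: str) -> str:
    """Return a canned response, reflecting any captured fragment."""
    text = sentence.lower().strip(".!?")
    for pattern, responses in RULES:
        match = re.match(pattern, text)
        if match:
            fragments = [reflect(g) for g in match.groups()]
            return random.choice(responses).format(*fragments)
    return random.choice(DEFAULTS)

print(respond("I need my own space"))  # e.g. "Why do you need your own space?"
```

Despite their simplicity, pattern-matchers of roughly this kind were enough to fool unwary correspondents, which is the point of the examples above.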
Weak AI: Can machines act intelligently?
- The Turing Test can be criticized for intrinsic weaknesses.
- For example, judges have rated a mindless program as humanlike merely because it made typing errors.
Weak AI: Can machines act intelligently?
- Arguments against the Turing Test
- Argument from disability: the claim that "a machine can never do X". Turing's list of candidate X's: "Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humor, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behavior as a man, do something really new."
Weak AI: Can machines act intelligently?
- The mathematical objection: Gödel and Turing established impossibility results showing that any sufficiently powerful formal system has true statements it cannot prove and questions it cannot decide.
- Lucas (1961): computers are Turing machines, so they are subject to these limits, whereas humans (he claims) are not.
- Rebuttal: even if we grant that computers are Turing machines subject to these limits, what shows that humans can transcend them?
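Turing's contribution to the mathematical objection can be stated concretely as the undecidability of the halting problem. The following Python sketch walks through the diagonal argument; `halts` and `paradox` are illustrative names, and `halts` is exactly the function the theorem says cannot exist.

```python
# Diagonalization sketch: suppose, for contradiction, that a total
# function halts(program, argument) always correctly reports whether
# program(argument) eventually halts.

def halts(program, argument):
    """Hypothetical halting oracle; Turing proved that no such total,
    always-correct function can exist."""
    raise NotImplementedError("impossible to implement in general")

def paradox(program):
    """Do the opposite of whatever the oracle predicts about
    running `program` on itself."""
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    else:
        return        # predicted to loop, so halt immediately

# Now ask what paradox(paradox) does:
#   if halts(paradox, paradox) is True,  paradox(paradox) loops forever;
#   if halts(paradox, paradox) is False, paradox(paradox) halts.
# Either way the oracle is wrong, so halts cannot exist.
```

Note that the contradiction arises only for a total, always-correct `halts`; partial or heuristic halt-checkers are perfectly possible, which is why the result constrains formal systems rather than everyday engineering.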
Weak AI: Can machines act intelligently?
- Argument from informality of behavior: human behavior is too complex to be captured by any simple set of rules.
- This is really a criticism of Good Old-Fashioned AI (GOFAI).
- Logical agents suffer from the qualification problem: it is impossible to enumerate in advance all the preconditions and exceptions a rule needs.
- Probabilistic agents do better, and learning agents do better still, as the sketch below illustrates.
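As a toy illustration of the qualification problem (the rule and the probabilities below are made up for the example), compare a brittle logical rule with a probabilistic judgment:

```python
# Toy contrast for the qualification problem; numbers are illustrative.

def logical_agent_flies(animal: dict) -> bool:
    # GOFAI-style hard rule: "all birds fly". Wrong for penguins,
    # ostriches, injured birds, ... and patching it means enumerating
    # every exception in advance (the qualification problem).
    return animal["is_bird"]

def probabilistic_agent_flies(animal: dict,
                              p_fly_given_bird: float = 0.95) -> float:
    # A probabilistic agent commits only to a degree of belief, so
    # exceptions reduce its accuracy instead of falsifying its rule.
    return p_fly_given_bird if animal["is_bird"] else 0.01

penguin = {"name": "penguin", "is_bird": True}
print(logical_agent_flies(penguin))        # True: categorically wrong
print(probabilistic_agent_flies(penguin))  # 0.95: uncertain and revisable
```

A learning agent would go further and estimate the conditional probability from data rather than hand-coding it, which is the sense in which learning agents "also do better."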
Jefferson (1949), quoted by Turing
- "Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain, that is, not only write it but know that it had written it."
Turing calls this the argument from consciousness
- Jefferson's objection concerns phenomenology, the study of direct experience.
- It also raises intentionality: are the machine's purported beliefs and desires really "about" something in the real world?
- Turing's answer: the objection reflects our lack of experience with intelligent machines.
Searle (1980): Computers are doing something different
- "No one supposes that a computer simulation of a storm will leave us all wet. Why on earth would anyone in his right mind suppose a computer simulation of mental processes actually has mental processes?"
- A Hollywood simulation of a storm will, however, make the actors wet.
Turing's answer
- The issue will go away once machines reach a certain level of sophistication.
- We would have no problem interacting with intelligent robots as if they had mental processes, as many movies illustrate.
Philosophical problem: the mind-body problem
- Dualist approach: mind and body exist in different realms.
- Physicalist approach: mental states are physical states.
- Brain-in-a-vat scenarios: if your brain were placed in a vat and a computer fed it sensory inputs simulating a real world, your mental states would be unchanged even though nothing they refer to exists.
- Narrow content: characterizing a mental state by what is internal to the brain, without reference to external reality.
Functionalism and the brain replacement (thought) experiment
- Functionalism: a mental state is any intermediate causal condition between input and output.
- Hence, a computer program running the right causal processes can have the same mental states as a person.
- Brain replacement: gradually replace parts of the brain with electronic chips that reproduce their input-output behavior.
- Searle's (1992) prediction of what you would experience: "You find, to your total amazement, that you are indeed losing control of your external behavior. You find, for example, that when doctors test your vision, you hear them say, 'We are holding a red object in front of you; please tell us what you see.' You want to cry out, 'I can't see anything. I'm going totally blind.' But you hear your voice saying in a way that is completely out of your control, 'I see a red object in front of me.' ... your conscious experience slowly shrinks to nothing, while your externally observable behavior remains the same."
Functionalism and brain replacement experiments
Possible conclusions:
- The causal mechanisms of consciousness that generate these kinds of outputs in normal brains are still operating in the electronic version, which is therefore conscious.
- The conscious mental events in the normal brain have no causal connection to behavior and are missing from the electronic brain, which is therefore not conscious.
- The experiment is impossible, and therefore speculation about it is meaningless.
Churchland (1986): Functionalist arguments that work at the level of a single neuron also work at larger scales:
- a set of neurons
- a mental module
- a lobe
- a hemisphere
- a whole brain
So if the brain replacement experiment shows that the replacement parts are conscious, you should agree that a circuit replacing the whole brain is conscious.
Searle: Biological naturalism
- Mental states are high-level emergent features caused by low-level physical processes in the neurons, and the specific properties of the neurons matter.
- Chinese room experiment: a person who understands no Chinese follows a rule book for manipulating Chinese symbols, producing sensible Chinese answers to Chinese questions.
- The room passes the Turing test, but (according to Searle) it has no consciousness or understanding.
Searle's (1990) axioms:
1. Computer programs are formal (syntactic).
2. Human minds have mental contents (semantics).
3. Syntax by itself is neither constitutive of nor sufficient for semantics.
4. Brains cause minds.
Conclusion: programs by themselves are not sufficient for minds.
Explanatory gap
- Humans are simply incapable of forming a proper understanding of their own consciousness.
- Qualia: the intrinsic nature of experiences.
- Inverted spectrum experiment: X sees green where we see red, yet X still functions in the world as if nothing were different; only his subjective experience differs.
- This seems to challenge functionalism, since two functionally identical agents would then have different qualia.
- "But qualia are really just offspring of philosophical confusion." (Dennett 1994)
Risks of AI
- People might lose their jobs to AI.
- People might have too much leisure time.
- People might lose their sense of being unique.
- AI systems might be used toward undesirable ends.
- The use of AI might result in a loss of accountability.
- The success of AI might mean the end of the human race.