The Elusive Machine Intelligence Prof. Suash Deb


The Elusive Machine Intelligence Prof. Suash Deb Dept. of Computer Science & Engineering C. V. Raman College of Engineering Bidyanagar, Mahura, Bhubaneswar ORISSA, INDIA

MACHINE INTELLIGENCE Any aspect of a machine's actions that we would call intelligent if done by a person. The main areas of interest include: 1. Machine Learning 2. Knowledge-Based Systems 3. Machine Vision 4. Voice Recognition

What is Meant by Learning? A Computer Program is said to Learn from Experience E w.r.t. Some Class of Tasks T & Performance Measure P, if its Performance at Tasks in T, as Measured by P, Improves with Experience E
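
As a minimal illustration of this definition (an assumed example, not taken from the slides, with arbitrarily chosen numbers), the Python sketch below treats predicting a biased coin as the Task T, the growing stream of observed flips as the Experience E, and squared error against the true bias as the Performance Measure P; the error tends to shrink as more flips are seen.

import random

random.seed(0)
TRUE_BIAS = 0.7          # probability of heads (unknown to the learner)

heads, flips = 0, 0
for n in range(1, 1001):
    heads += random.random() < TRUE_BIAS   # one more unit of experience E
    flips += 1
    estimate = heads / flips               # the learner's current hypothesis for task T
    if n in (10, 100, 1000):
        error = (estimate - TRUE_BIAS) ** 2    # performance measure P
        print(f"after {n:4d} flips: estimate={estimate:.3f}, squared error={error:.5f}")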

WHY LEARNING? Working Environments of Machines May Not be Known at Design Time; Explicit Knowledge Encoding May be Difficult or Unavailable; Environments Change Over Time; Some Tasks Can't be Defined Well Except by Example; Biological Systems Learn

KNOWLEDGE-BASED SYSTEMS A knowledge-based system (KBS) is a computer system that is programmed to imitate human problem-solving by means of Artificial Intelligence, i.e. KBS are those based on the methods and techniques of Artificial Intelligence. Their core components are the knowledge base and the inference mechanisms. While for some authors Expert Systems, Case-Based Reasoning and Neural Networks are all particular types of knowledge-based systems, others consider that Neural Networks are different and exclude them from this category.

Why Machine Intelligence? Early Robots: MECHANICAL DEVICES, Performing Pick-&-Place Operations. With Progress in Computer Technology Robots Became More Sophisticated, Performing More Precise Industrial Operations Welding, Spray Painting and Simple Parts Assembly Repetitive Tasks. Such Robots are Simply Preprogrammed: If Anything Interferes with the Programmed Tasks, Work Stops. i.e. Robots are NOT Capable of SENSING their EXTERNAL ENVIRONMENT & THINKING their Way OUT of a PROBLEM. Hence the Need for Sensory Perception & Artificial Intelligence

ARTIFICIAL INTELLIGENCE The term was First Used at a Conference in 1956. Scientists Predicted: By the 1970s there would be Intelligent Machines that would Replace Human Beings in Many Areas. Predictions to Date have Proved Premature, Although Significant Progress has been Made. Definition of AI MARVIN MINSKY - M.I.T. (USA): The Science of Making Machines do things that would Require Intelligence if done by Men. Definition of Intelligence: No Consensus Learning, Managing Ambiguity, Reasoning, Inferencing, Prioritizing

ARTIFICIAL FLIGHT 1st School of Thought: To Imitate Birds (i.e. Bird Wings). Flight was Successfully Developed (WRIGHT BROTHERS) Because of a Perception of Flight in terms of LIFT, CONTROL & POWER i.e. AIRFLOW & PRESSURE. Superiority of Birds over Aircraft Limitations of Aircraft: Lack the precision of birds Cannot change direction instantaneously Cannot scoop fish from the ocean Superiority of Aircraft: Can fly at a height of 45,000 ft Can fly faster than sound (Supersonic)

Argument of AI Community: It is Impossible to Discover the Computational Principles of Intelligence by Examining the Intricacies of Human Thinking, Just as it was Impossible to Discover the Principles of Aerodynamics by Examining Bird Wings. Roadmap for the AI Community: How Computational Systems Must be Organized in Order to Behave Intelligently. The Scientific Aim of AI Research: To Understand Intelligence as Computation. The Engineering Aim of AI Research: To Build Machines that Surpass or Extend Human Abilities in Some Way. Hence Imitation of Human Intelligence Contributes Little to Either Ambition.

The Flaws of Artificial Intelligence: Attempting to Program Computers to Exhibit Intelligence Without First Addressing What Intelligence Is & What It Means to Understand. They Left Out the Most Important Part of Machine Intelligence: the Intelligence. Brains & Computers Do Fundamentally Different Things.

HUMAN BRAIN Basic Characteristics Composed of Billions of Nerve Cells - Neurons Neuron = Cell Body + Dendrites + Axon Neurons Don't Act Like Computer Memory: No Cell ALONE Holds Information About One Object/Situation One can Eliminate a Number of Cells WITHOUT Losing Any Information. Then How do We Memorize, Recall & Learn? Ans: The Electrical Connections Between Cells the Synapses. Superiority of Human Beings & Computers Human Beings: Creativity, Common Sense & Reasoning. Computers: Numerical Computations.

A Neuron

BIOLOGICAL NEURONS Characteristics: The Weak Computing Power of Each Neuron is Combined in a Massively Parallel Complex Network, Thus Making the Brain a Powerful Information Processor. Inputs & Outputs: A Neuron Receives all Inputs Through the Dendrites, Sums Them & Produces an Output if the Sum is Greater than a Threshold Value.

SYNAPSE The Fibres of Neurons Come Together To Form Junction Points called SYNAPSES. Nerve Impulses from Adjacent Neurons Excite & Bridge a Synapse when the Sum of Several Input Pulses Exceeds a Threshold Voltage Level for a Given Synapse. [Threshold Level for a Given Synapse = Minimum Stimulation Level Required to Excite the Synapse.]

TO SUMMARIZE A neuron does nothing unless the collective influence of all of its inputs reaches a threshold level. Whenever that threshold level is reached, the neuron produces a full-strength output that proceeds from the cell body through the axon's branches. When this happens, the neuron is said to fire.
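
The threshold behaviour just described can be sketched in a few lines of Python; this is an assumed illustration (the weights, inputs and threshold value are invented for the example), not material from the original slides.

def threshold_neuron(inputs, weights, threshold):
    # Collective influence of all inputs = weighted sum over the "dendrites".
    total = sum(w * x for w, x in zip(weights, inputs))
    # Fire at full strength (1) only if the sum reaches the threshold, else stay silent (0).
    return 1 if total >= threshold else 0

# Example: three input fibres, the last one inhibitory (negative weight).
print(threshold_neuron([1, 1, 0], [0.6, 0.5, -0.4], threshold=1.0))  # 1.1 >= 1.0 -> fires (1)
print(threshold_neuron([1, 0, 1], [0.6, 0.5, -0.4], threshold=1.0))  # 0.2 <  1.0 -> silent (0)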

NEURAL NETWORK ARCHITECTURES 1 a) Single Layer Feedforward Network: Composed of 2 Layers: Input & Output. Called Single Layer since it is the Output Layer ALONE which Carries Out the Computation. The Input Layer Merely Transmits the Signals to the Output Layer.
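
A small sketch of such a single layer network, again with invented numbers (this is an assumed illustration, not from the slides): the input layer merely passes the signal vector on, and each output neuron computes a weighted sum followed by a threshold.

def forward(x, W, thresholds):
    # One output neuron per row of W; the input layer only transmits x unchanged.
    return [1 if sum(w * xi for w, xi in zip(row, x)) >= t else 0
            for row, t in zip(W, thresholds)]

x = [1.0, 0.0, 1.0]                      # signals presented to the input layer
W = [[0.5, 0.2, 0.3],                    # weights into output neuron 1
     [-0.4, 0.9, 0.1]]                   # weights into output neuron 2
print(forward(x, W, thresholds=[0.7, 0.5]))   # -> [1, 0]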

NEURAL NETWORK ARCHITECTURES 2 b) Multilayer Feedforward Network:

NEURAL NETWORK ARCHITECTURES 3 c) Recurrent Network

CHARACTERISTICS OF ARTIFICIAL NEURAL NETWORKS Mapping Capabilities: They can Map Input Patterns to their Associated Output Patterns. Learning: They Learn by Example, i.e. by Experience They Need to be Trained with Known Examples of a Problem Before Testing their Inference Capability on Unknown Ones. Capability to Generalize: They can Predict New Outcomes from Past Trends (e.g. Share Price). Fault Tolerance: They can Recall Full Patterns from Incomplete, Partial or Noisy Patterns. Massive Parallelism: They can Process Information in Parallel, at High Speed & in a Distributed Manner.
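
To make "trained with known examples before testing on unknown ones" concrete, the sketch below trains a single threshold unit on the logical AND problem using the classical perceptron learning rule (assumed here as one possible training procedure; the slides do not prescribe one) and then presents a noisy pattern that was never part of the training set.

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

train = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]   # known examples of the problem
w, b, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                       # repeated exposure = experience
    for x, target in train:
        error = target - predict(w, b, x)
        w = [wi + rate * error * xi for wi, xi in zip(w, x)]   # fine-tune the connections
        b += rate * error

print([predict(w, b, x) for x, _ in train])   # reproduces the trained patterns: [0, 0, 0, 1]
print(predict(w, b, [0.1, 0.2]))              # a noisy, previously unseen pattern near (0,0) is still mapped to 0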

CONVENTIONAL COMPUTERS vs. ANN COMPUTERS (A) Conventional Computers: Obey Software Instructions According to a Set of Rules (B) Neural Network Computers: Learn by Example/Experience, By Fine-Tuning their Connections in Response to Every Situation they Encounter.

Neural Networks Are Modelled After the Human Brain Of Course, Artificial Neural Networks Are Modelled After Human Nervous Systems. The Brain is Indeed Composed of a Vast Number of Neurons. But Without First Understanding What the Brain Does, Simple Neural Networks Can't Carry Out Tasks Beyond Limited Ones.

Challenge of Machine Intelligence How Can A Brain Perform Difficult Tasks in One Hundred Steps that The Most Powerful Supercomputers Can't Do In a Million or Billion Steps?

The Answer is: 1) The Brain Does Not Compute Answers to Problems 2) It Retrieves Answers from Memory

Brain & Memory The Answers were Stored in Memory a Long Time Ago. It Only Takes a Few Steps to Retrieve Something from Memory. Slow Neurons Are Not Only Fast Enough to Do this But They Constitute the Memory Themselves. The Entire Cortex is a Memory System.

Catching a Ball Our Brain Has a Stored Memory of the Muscle Commands Required to Catch a Ball (Along with many other Learned Behaviors)

What Happens When a Ball is Thrown? The Appropriate Memory is Automatically Recalled by the Sight of the Ball The Memory Actually Recalls a Temporal Sequence of Muscle Commands The Retrieved Memory is Adjusted (As it is Recalled) to Accommodate the Particulars of the Moment e.g. the Ball's Actual Path & the Position of the Person Catching it.

To Summarize : The Memory of How to Catch a Ball is Not Programmed into Our Brain. It is Learned Over Years of Repetitive Practice & It Is Stored & Not Calculated in Our Neurons.

Shrinking Computer Civilization Has Advanced With the Ability of Human Beings to Exploit Various Physical Resources (material, energy etc.) in a New Way. The History of Computer Technology Has Involved a Sequence of Changes from Gears to Relays to Valves to Transistors to Integrated Circuits

Recap of the Past The Computer Revolution Truly Started with the Invention of the Transistor (Shockley, Bell Labs, 1947) As A Semiconductor Switch, it Replaced the Vacuum Tube The Integrated Circuit (IC) was Developed Independently in 1959 Then came the VLSI Revolution By the Late 1970s Very Complicated Chips Were Being Assembled.

An Alternative to Transistor Technology Why? The Periodic Doubling of Computer Speed Was Due To Continual Miniaturization of its Most Basic Element, the Transistor. As Transistors Became Smaller, More Could Be Integrated into a Single Microchip, Resulting in an Increase in Computational Power. Miniaturization is Reaching its Limit

Thought for Devising a New Kind of Computer If technology continued to abide by Moore's Law, then the ever-shrinking size of the circuitry packed onto silicon chips would eventually lead us to the point where individual elements would consist of no more than a few atoms. At that point the classical physical laws that govern the behavior and properties of the circuit would no longer remain valid.

Why Microelectronics Can't Continue Following Moore's Law Putting More Transistors Into A Chip Is Associated With: The Need To Dissipate Heat From So Many Closely Packed Devices Difficulty of Creating the Devices Possibility of Stray Signals on the Chip etc.

Gordon Moore's Observation Moore's law describes a long-term trend in the history of computer hardware. Since the invention of the integrated circuit in 1958, the number of transistors that can be placed inexpensively on an integrated circuit has increased exponentially, doubling approximately every two years. The trend was first observed by Intel co-founder Gordon E. Moore in a 1965 paper. It has continued for almost half a century and is not expected to stop for another decade at least, and perhaps much longer.

Exponential Growth Exponential growth (including exponential decay) occurs when the growth rate of a mathematical function is proportional to the function's current value. In the case of a discrete domain of definition with equal intervals it is also called geometric growth or geometric decay. With exponential growth of a positive value its rate of increase steadily increases, or in the case of exponential decay, its rate of decrease steadily decreases.

Moore's Law Moore's original statement that transistor counts had doubled every year can be found in his publication "Cramming more components onto Integrated Circuits", Electronics Magazine, 19 April 1965: The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.

MOORE'S LAW - Modified Moore Observed (in the mid-1960s) that the Density of Transistors on Integrated Circuits was getting DOUBLED EVERY 12 MONTHS. It Implies that Computers were Periodically Doubling BOTH in CAPACITY & in SPEED. In the Mid-1970s, Moore Revised His Observation to a MORE ACCURATE ESTIMATE of 24 MONTHS & the Trend Persisted through the 1990s. Between 1910 & 1950: Computer Speed Per Unit Cost Doubled Every 3 Years. Between 1950 & 1966: Doubled Every 2 Years. Right Now: Doubling Every Year.
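
The doubling figures on the preceding slides can be checked with a little arithmetic. The sketch below assumes a starting point of roughly 64 components per chip in 1965 (an assumed figure for illustration) and compares 12-month and 24-month doubling periods; the 12-month case reproduces Moore's 1975 estimate of about 65,000 components.

start_count, start_year = 64, 1965        # ~64 components per chip in 1965 (assumed starting point)

for year in (1975, 1985):
    yearly   = start_count * 2 ** (year - start_year)        # 12-month doubling
    biyearly = start_count * 2 ** ((year - start_year) / 2)  # 24-month doubling
    print(f"{year}: 12-month doubling -> {yearly:,.0f}   24-month doubling -> {biyearly:,.0f}")

# 1975, 12-month doubling: 64 * 2**10 = 65,536, i.e. Moore's "65,000" prediction.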

Nanotechnology Nano: One Billionth Part of.. Nanometer: Equal to a Billionth of a Meter Nanotechnology: The Art of Manipulating Materials on an Atomic or Molecular Scale. It is About Building Things One Atom At a Time

Nanotechnology : Its Utility To Build Microscopic Devices Such As Robots. These will, in turn, Assemble Individual Atoms & Molecules Into Products Much As if They Were Lego Blocks.

Atomic Properties The Properties of Manufactured Products Depend Upon the Arrangement of the Constituent Atoms Within Them. Rearranging the Atoms in Coal gives Diamond Rearranging the Atoms in Sand (& Adding a Few Trace Elements) gives Computer Chips

Transition From Microtechnology To Nanotechnology "There is Plenty of Room at the Bottom" Richard P. Feynman (Caltech), December 1959 Writing the whole Encyclopaedia Britannica on the Head of a Pin "The problems of chemistry and biology can be greatly helped if our ability to see what we are doing, and to do things on an atomic level, is ultimately developed, a development which I think cannot be avoided."

Nanotechnology Award The Feynman Nanotechnology Grand Prize of US$250,000, Provided Annually to the researchers whose recent work has most advanced the development of molecular nanotechnology.

First Ph.D. K. Drexler, the First Recipient of a Ph.D. in Nanotechnology, from M.I.T. in the year 1991 He was also the first teacher on the subject at Stanford University. His book Engines of Creation, which dealt with the consequences of new technologies, asked the following: What can we build with those atom-stacking mechanisms?

Universal Assembler Robotic Devices, Molecular in Size & in Precision It Takes Raw Atoms in on One Side & Delivers Consumer Goods at the Other Very Small Versions of their Everyday Macroscopic Counterparts

US National Nanotechnology Initiative (NNI) President Clinton at Caltech (2000): I Will Support A Major New Initiative, worth US$500 Million. Caltech is No Stranger to the Idea of Nanotechnology the Ability to Manipulate Matter at the Atomic & Molecular Level. More than 40 Years Back, Caltech's Own Richard Feynman Asked What Would Happen if We Could Arrange the Atoms One-by-one the Way We Want Them

Universal Assembler

Universal Assembler If One Such Universal Assembler is Built, the Manufacturing Costs for More Such Systems & the Products they make Will be Very Low

What Does MI Aim For? MI is Not Interested Only in Faster & Smaller Computers It Wants To Deliver on its (Largely Unspecified) Promise of Machines that Think, Machines that Display Human Intelligence

Some Unanswered Questions: Why is it that, although computers have gotten so much faster and cheaper, they have not become any better at understanding what we want them to do? Some of the tasks we take for granted in vision and language are still too difficult for the fastest supercomputers to handle. To us, a picture may be worth a thousand words, but to a machine both are just seemingly random jumbles of numbers. How can we get machines to intelligently process this kind of information?

My Views We Can Learn Much From the Way Biological Systems Compute and Learn. Researchers Should focus on Applying Knowledge About Biological Information Processing Systems to Building Better Artificial Sensorimotor Systems that Can Adapt and Learn from Experience. Thus, Research Should Focus on Neuroscience Models, Theoretical Foundations of Machine Learning Algorithms, As well As Constructing Real-Time Intelligent Robotic Systems.

Ever-Increasing Computing Power & MI Can Lead to Smart Machines for Specific Problems Credit Card Fraud Detection etc. The Complexity of the Problems for which Smart Machines are Deployed is Increasing So Building Machines That can Truly Reason is, no doubt, Still Facing Problems.

SOFTWARE FOR INTELLIGENCE Processing Power is a NECESSARY BUT NOT SUFFICIENT CONDITION for Imparting Human Level Intelligence in Machines. Of Greater Importance is the Software of Intelligence

Real Intelligent Machines: Require Both Speed PLUS the Software of Intelligence. The Present Semiconductor-Based Solid-State Microelectronics can't lead to the Dense & Complex Circuitry Necessary to Provide True Cognitive Capabilities Until Recently the Successor of Solid-State Microelectronics Was Not Very Clear

EVOLUTION OF TECHNOLOGY (A Continuation of the Evolutionary Process that Gave Rise to Us, the Technology-Creating Species) It Took Tens of Thousands of Years (for our Ancestors) to Figure Out that Sharpening Both Sides of a Stone Created a Useful Tool. In the Last Millennium, the Time Required for a Major Paradigm Shift in Technology had Shrunk to Hundreds of Years. Technology is Advancing Exponentially. An Exponential Process Starts Slowly But Eventually its Pace Increases Very Rapidly.

Scientific World vs. Business World The World of Science Does Not Encourage Risk-Taking as Much as the World of Business Does. In the Academic World, A Couple of Years Spent Pursuing a New Idea that Does Not Work Can Permanently Ruin A Young Career

The Law of Accelerating Returns A Fundamental Attribute of Any Evolutionary Process: As Order (Which Reflects the Essence of Evolution) Increases Exponentially, The Time Between Salient Events Grows Shorter. The Acceleration Takes Place at a Nonlinear Rate.

It Took 90 Years to Achieve the First $1000 Computer Capable of Executing One Million Instructions Per Second (MIPS). Right Now People are Adding Additional MIPS to a $1000 Computer Every Day.

Will MI Remain Elusive Forever? NO - The Only Impediment is Time. After All, Humans Exist, so Evolution Has Already Solved the Problem