
Science 26 January 2001: Vol. 291, no. 5504, pp. 599-600
DOI: 10.1126/science.291.5504.599

Policy Forum

ARTIFICIAL INTELLIGENCE: Autonomous Mental Development by Robots and Animals

Juyang Weng,* James McClelland, Alex Pentland, Olaf Sporns, Ida Stockman, Mriganka Sur, Esther Thelen

How does one create an intelligent machine? This problem has proven difficult. Over the past several decades, scientists have taken one of three approaches. In the first, which is knowledge-based, an intelligent machine in a laboratory is directly programmed to perform a given task. In the second, learning-based approach, a computer is "spoon-fed" human-edited sensory data while the machine is controlled by a task-specific learning program. Finally, by "genetic search," robots have evolved through generations by the principle of survival of the fittest, mostly in a computer-simulated virtual world. Although notable, none of these approaches is powerful enough to produce machines with the complex, diverse, and highly integrated capabilities of an adult brain, such as vision, speech, and language. Nevertheless, these traditional approaches have served as the incubator for the birth and growth of a new direction for machine intelligence: autonomous mental development. As Kuhn wrote (1), "Failure of existing rules is the prelude to a search for new ones."

A Definition

What is autonomous mental development? With time, a brainlike natural or artificial embodied system, under the control of its intrinsic developmental program (coded in the genes or artificially designed), develops mental capabilities through autonomous real-time interactions with its environments (including its own internal environment and components), using its own sensors and effectors. Traditionally, a machine is not autonomous while it develops its skills, whereas a human is autonomous throughout lifelong mental development.

Recent advances in neuroscience illustrate this principle. For example, if the optic nerves originating from the eyes of an animal (a ferret) are connected into the auditory pathway early in life, the auditory cortex gradually takes on a representation that is normally found in the visual cortex (2). Further, the "rewired" animals successfully learn to perform vision tasks with the auditory cortex. This discovery suggests that the cortex is governed by developmental principles that work for both visual and auditory signals. In another example, the developmental program of the monkey brain dynamically selects sensory input (e.g., input from three fingers instead of one, as is normal) according to the actual sensory signal that is received, and this selection process remains active throughout adulthood (3).

Computational modeling of human neural and cognitive development has only recently become a subject of study (4, 5). To be successful, mainstream cognitive psychology needs to advance from explaining psychological phenomena in specific controlled settings toward deriving underlying computational principles of mental development that apply in general settings. Such computational studies are necessary for an understanding of the mind.

The idea of mental development is also applicable to machines, but it has not received serious attention in the artificial intelligence community. In the past, many believed that hand programming alone or task-specific machine learning would be sufficient for constructing an intelligent machine. Recently, however, it was pointed out that to be truly intelligent, machines need autonomous mental development (6) (see the figure below).

[Figure: Growing up. Mental development is realized through autonomous interactions with the real physical world.]

Manual Versus Autonomous Development

The traditional manual development paradigm can be described as follows:

1. Start with a task, understood by the human engineer (not the machine).
2. Design a task-specific representation.
3. Program for the specific task using the representation.
4. Run the program on the machine.

If, during program execution, sensory data are used to modify the parameters of the predesigned task-specific representation, we say that this is machine learning. In this traditional paradigm, a machine cannot do anything beyond its predesigned representation. In fact, it does not even "know" what it is doing; all it does is run the program.
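To make the contrast concrete, here is a minimal sketch (ours, not the article's) of the manual paradigm: a hypothetical line-following robot whose representation, a fixed linear map from photo-sensors to steering, is chosen by the engineer, so that "learning" merely tunes the weights inside it.

# A minimal sketch (ours, not the article's) of the traditional manual
# paradigm: the task and its representation are fixed by the engineer at
# design time; "machine learning" only tunes parameters inside them.

import numpy as np

class LineFollower:
    """A task-specific robot: follow a line. The representation -- a
    linear map from three photo-sensor readings to one steering angle --
    is predesigned; learning changes only its weights."""

    def __init__(self):
        self.weights = np.zeros(3)  # the hand-chosen, fixed representation

    def act(self, sensors):
        # The robot can never behave outside this predesigned mapping.
        return float(self.weights @ sensors)

    def learn(self, sensors, correct_steering, lr=0.05):
        # Supervised update from human-edited data (the LMS rule).
        error = correct_steering - self.act(sensors)
        self.weights += lr * error * sensors

# Training is offline, on data the engineer curated for this one task.
robot = LineFollower()
for sensors, target in [(np.array([0.9, 0.1, 0.0]), -0.5),
                        (np.array([0.0, 0.1, 0.9]), +0.5)]:
    for _ in range(200):
        robot.learn(sensors, target)

However well the weights are tuned, the machine remains a line follower; the task and the representation were decided by the engineer, never by the machine.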

The autonomous development paradigm for constructing developmental robots is as follows:

1. Design a body according to the robot's ecological working conditions (e.g., on land or under water).
2. Design a developmental program.
3. At "birth," the robot starts to run the developmental program.
4. To develop its mind, humans mentally "raise" the developmental robot by interacting with it in real time.

According to this paradigm, robots should be designed to go through a long period of autonomous mental development, from "infancy" to "adulthood." The essence of mental development is to enable robots to "live" in the world autonomously and to become smart on their own, with some supervision by humans (a minimal sketch of such a program's skeleton follows the table below).

Our human genetic program has evolved to use our body well. Analogously, the developmental programs for robots should also be body-specific, or specific to robot "species," as traditional programs are. However, a developmental program for developing a robot mind must have other properties (see the table) that set it apart from all traditional programs. It cannot be task-specific, because the tasks are unknown at the time of programming, and the robots should be able to do any job that we can teach them. A human can potentially learn to take any job--as a computer scientist, an artist, or a gymnast. The programmer who writes a developmental program for a robot does not know what tasks the future robot owners will teach it. Furthermore, a developmental program for robots must be able to generate representations for unknown knowledge and skills automatically. Like humans and animals, the robots must learn in real time while performing "on the fly." A mental developmental process is also an open-ended cumulative process: a robot cannot learn complex skills successfully without first learning the necessary simpler skills; e.g., without learning how to hold a pen, the robot will not be able to learn how to write.

DIFFERENCES BETWEEN ROBOT PROGRAMS

Properties                                      Traditional   Developmental
Not task specific                               No            Yes
Tasks are unknown                               No            Yes
Generates a representation of an unknown task   No            Yes
Animal-like online learning                     No            Yes
Open-ended learning                             No            Yes
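As a contrast with the manual paradigm above, the following is a hedged sketch (our illustration; the actual developmental programs cited in this article are far more sophisticated) of the skeleton such a program might have: no task is named anywhere, a single perceive-act-learn loop runs from "birth" onward, memory grows without bound, and a human trainer shapes behavior in real time through reward signals.

# A sketch of a developmental program's skeleton (our illustration, not
# the SAIL or Darwin V code). Note what is absent: no task, no
# task-specific representation. Experience accumulates open-endedly, and
# behavior is shaped online by a human trainer's reward signal.

import numpy as np

class DevelopmentalProgram:
    def __init__(self, n_actions):
        self.memory = []          # grows for life: open-ended learning
        self.n_actions = n_actions

    def perceive_act(self, sensation):
        # Recall the most similar past sensation; repeat its action if it
        # was rewarded, otherwise explore. No task is assumed anywhere.
        if not self.memory:
            return np.random.randint(self.n_actions)
        dists = [np.linalg.norm(sensation - s) for s, _, _ in self.memory]
        _, action, reward = self.memory[int(np.argmin(dists))]
        if reward < 0:
            return np.random.randint(self.n_actions)  # trainer said "bad"
        return action

    def learn(self, sensation, action, reward):
        # Online and "on the fly": one experience at a time, in real time.
        self.memory.append((sensation, action, reward))

# The same loop runs for the robot's entire "life"; its tasks become
# whatever a human trainer rewards (+1 for "good", -1 for "bad").
robot = DevelopmentalProgram(n_actions=3)
lifetime = 100                                # stand-in for a long "life"
for t in range(lifetime):
    sensation = np.random.rand(4)             # stand-in for sensor input
    action = robot.perceive_act(sensation)
    reward = +1 if action == 0 else -1        # stand-in for the trainer
    robot.learn(sensation, action, reward)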

Early Prototypes

Early prototypes of developmental robots include Darwin V (7) and SAIL (6, 8; shown below), developed independently around the same time but with very different goals. Darwin V was designed to provide a concrete example of how the computational weights of neural circuits are determined by the behavioral and environmental interactions of an autonomous device. Through real-world interactions with physical objects, Darwin V developed a capability for position-invariant object recognition, allowing a transition from simple behaviors to more complex ones (a toy illustration of such invariance appears below).

[Photo: the SAIL developmental robot. CREDIT: J. WENG]

The goal of the SAIL developmental robot was to automatically generate representations and architectures for scaling up to more complex capabilities in unconstrained, unknown human environments. For example, after a human pushes the SAIL robot "for a walk" along the corridors of a large building, SAIL can navigate on its own in similar environments while "seeing" with its two video cameras. After humans show toys to SAIL and help SAIL's hand reach them, SAIL can pay attention to these toys, recognize them, and reach for them too.
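The following toy illustration (ours; it does not reproduce Darwin V's neural circuitry) shows one way position invariance can arise: a signature that pools local features over every image location is unchanged when the same object appears somewhere else.

# A toy illustration (ours, not Darwin V's circuitry) of position-
# invariant recognition: a histogram of local features, pooled over the
# whole image, is unchanged when the object moves to a new location.

import numpy as np

def local_features(image, patch=2):
    """Slide a window over the image and describe each patch by its mean
    intensity, quantized into a small alphabet of feature types."""
    feats = []
    h, w = image.shape
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            feats.append(int(image[i:i+patch, j:j+patch].mean() * 4))
    return feats

def pooled_signature(image):
    # Pooling over all positions discards "where", keeping only "what".
    hist = np.zeros(5)
    for f in local_features(image):
        hist[f] += 1
    return hist

canvas = np.zeros((8, 8))
canvas[1:3, 1:3] = 1.0              # object in the upper-left corner
sig_a = pooled_signature(canvas)

canvas = np.zeros((8, 8))
canvas[5:7, 4:6] = 1.0              # same object, different position
sig_b = pooled_signature(canvas)

assert np.allclose(sig_a, sig_b)    # identical signatures: invariance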

To allow SAIL to learn autonomously, the human robot-sitter lets it explore the world on its own but encourages and discourages behaviors by pressing its "good" button or "bad" button. Responses invariant to task-unrelated factors are achieved by automatically deriving discriminating features. Real-time speed is reached by self-organizing a large memory in a coarse-to-fine way (9). These and other examples that aim at the automation of learning [e.g., (10)] have demonstrated robotic capabilities that had not been achieved before or that are very difficult to achieve with traditional methods.

The Future

Computational studies of autonomous mental development should be significantly more tractable than traditional task-specific approaches to constructing intelligent machines and to understanding natural intelligence, because the developmental principles are more general in nature and are simpler than the world around us. For example, the visual world seen by our eyes is very complex. The light that falls on a particular pixel in a camera depends on many factors--lighting, object shape, object surface reflectance, viewing geometry, camera type, and so on. The developmental principles capture major statistical characteristics of the visual signals (e.g., the mean and major directions of the signal distribution; a sketch of such an incremental estimate follows this section), rather than every aspect of the world that gives rise to those signals. A task-specific programmer, in contrast, must study the aspects of the world around the specific task to be learned; this becomes intractable if a task, such as vision, speech, or language, requires too many diverse capabilities.

This new field will provide a unified framework for many cognitive capabilities--vision, audition, taction, language, planning, decision-making, and task execution. The sharing of common developmental principles by the visual and auditory sensing modalities, as recent neuroscience studies have demonstrated, will encourage scientists to discover further developmental principles that are shared not only by different sensing and effector modalities but also by different aspects of higher brain functions. Developmental robots can "live" with us and become smarter autonomously, under human supervision.

It is important for neuroscientists and psychologists to discover computational principles of mental development. Developmental mechanisms are, in fact, quantitative in nature at the level of neural cells. The precision of knowledge required to verify these principles on robots will improve our chances of answering some major open questions in cognitive science, such as how the human brain develops a sense of the world around it.

Advances in creating robots capable of autonomous mental development are likely to improve the quality of human life. When robots can autonomously develop capabilities such as vision, speech, and language, humans will be able to train them using their own communication modes. Developmental robots will learn to perform dull and repetitive tasks that humans do not like to do, e.g., carrying out missions in demanding environments such as undersea and space exploration and cleaning up nuclear waste.
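As a concrete sketch of the statistical point above (our illustration; we are not claiming this is any of the cited systems' algorithms), the mean and the first major direction of a signal stream can be estimated incrementally, one frame at a time with no stored history, using a running mean together with Oja's classic learning rule, which converges to the first principal component.

# A sketch (our illustration, not the cited systems' algorithm) of how
# the mean and the major direction of a signal stream can be captured
# incrementally, one frame at a time with no stored history: a running
# mean plus Oja's rule, which converges to the first principal component.

import numpy as np

rng = np.random.default_rng(0)
d = 16                              # dimensionality of one sensory "frame"
mean = np.zeros(d)
w = rng.standard_normal(d)
w /= np.linalg.norm(w)              # current estimate of the major direction

for t in range(1, 10001):
    # Synthetic stream: variance concentrated along one hidden direction.
    hidden = np.zeros(d)
    hidden[0] = 1.0
    x = 2.0 * rng.standard_normal() * hidden + 0.1 * rng.standard_normal(d)

    mean += (x - mean) / t          # incremental mean: no samples stored
    y = w @ (x - mean)
    w += 0.01 * y * ((x - mean) - y * w)   # Oja's rule

w /= np.linalg.norm(w)
print("alignment with the true major direction:", abs(w[0]))  # near 1.0

Updates of this kind cost O(d) per frame, which is what makes real-time, lifelong estimation plausible.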

We believe that there is a need for a special program to fund this new field of autonomous mental development. This program should encourage collaboration among fields that study human and machine mental development. Biologically motivated methods for robot mental development and computational modeling of animal mental development should be especially encouraged. There is also a need for a multidisciplinary forum for exchanging the latest research findings in this new field, similar to the Workshop on Development and Learning funded by the NSF and the Defense Advanced Research Projects Agency and held at Michigan State University (11). We anticipate a potentially large impact on science, society, and the economy from advances in this new direction.

References and Notes

1. T. S. Kuhn, The Structure of Scientific Revolutions (Univ. of Chicago Press, Chicago, 3rd ed., 1996), p. 68.
2. L. von Melchner, S. L. Pallas, M. Sur, Nature 404, 871 (2000).
3. X. Wang, M. M. Merzenich, K. Sameshima, W. M. Jenkins, Nature 378, 13 (1995).
4. J. L. Elman et al., Rethinking Innateness: A Connectionist Perspective on Development (MIT Press, Cambridge, MA, 1997).
5. E. Thelen, G. Schöner, C. Scheier, L. B. Smith, Behav. Brain Sci., in press.
6. J. Weng, in Learning in Computer Vision and Beyond: Development in Visual Communication and Image Processing, C. W. Chen, Y. Q. Zhang, Eds. (Marcel Dekker, New York, 1998); Michigan State Univ. tech. rep. CPS 96-60 (East Lansing, MI, 1996).
7. N. Almassy, G. M. Edelman, O. Sporns, Cereb. Cortex 8, 346 (1998).
8. J. Weng, W. S. Hwang, Y. Zhang, C. Evans, in Proceedings of the 2nd International Symposium on Humanoid Robots, Tokyo, 8 to 9 October 1999, pp. 57-64.
9. W. S. Hwang, J. Weng, IEEE Trans. Pattern Anal. Machine Intell. 22, 11 (2000).
10. D. Roy, B. Schiele, A. Pentland, in Workshop on Integrating Speech and Image Understanding, Proceedings of an International Conference on Computer Vision, Corfu, Greece, 21 September 1999 (IEEE Press, New York, 1999).
11. Proceedings of the Workshop on Development and Learning, Michigan State University, East Lansing, MI, 5 to 7 April 2000; www.cse.msu.edu/dl/.

J. Weng is at the Department of Computer Science and Engineering, Michigan State University, East Lansing, MI 48824, USA. J. McClelland is at the Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213, USA. A. Pentland is at The Media Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. O. Sporns and E. Thelen are at the Department of Psychology, Indiana University, Bloomington, IN 47405, USA. I. Stockman is at the Department of Audiology and Speech Sciences, Michigan State University, East Lansing, MI 48824, USA. M. Sur is at the Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. *To whom correspondence should be addressed. E-mail: weng@cse.msu.edu