Introduction to Neuromorphic Computing: Insights and Challenges. Todd Hylton, Brain Corporation


Introduction to Neuromorphic Computing: Insights and Challenges
Todd Hylton, Brain Corporation
hylton@braincorporation.com

Outline
- What is a neuromorphic computer?
- Why is neuromorphic computing confusing?
- What about building a brain?
- Todd's Top 10 list of challenges
- Alternative ways of thinking about building a brain
- Closing thoughts

What is a Neuromorphic Computer?

What is a Neuromorphic Computer?
A neuromorphic computer is a machine comprising many simple processor / memory structures (e.g. neurons and synapses) communicating using simple messages (e.g. spikes).
- Neuromorphic algorithms emphasize the temporal interaction among the processing and the memory: every message has a time stamp (explicit or implicit), and computation is often largely event-driven.
- Neuromorphic computing systems excel at computing complex dynamics using a small set of computational primitives (neurons, synapses, spikes).
- I think of neuromorphic computers as a kind of dynamical computer in which the algorithms create complex spatio-temporal dynamics on the computing hardware.
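The slide's picture of "simple processors exchanging time-stamped spikes, computed event by event" can be sketched in a few lines. This is an illustrative toy only, not a model of any particular platform; all names, weights, and parameters below are invented:

```python
import heapq

def run(weights, initial_spikes, t_max=10.0, delay=1.0, threshold=1.0):
    """Toy event-driven spiking network.

    weights[src] = list of (dst, w) fan-out connections.
    initial_spikes = list of (time, neuron_id) seed events.
    Returns the list of (time, neuron_id) spikes that occurred.
    """
    n = len(weights)
    potential = [0.0] * n              # per-neuron membrane potential
    events = list(initial_spikes)      # priority queue ordered by time stamp
    heapq.heapify(events)
    fired = []
    while events:
        t, src = heapq.heappop(events) # computation is purely event-driven
        if t > t_max:
            break
        fired.append((t, src))
        for dst, w in weights[src]:    # deliver the spike message downstream
            potential[dst] += w
            if potential[dst] >= threshold:
                potential[dst] = 0.0   # reset after crossing threshold
                heapq.heappush(events, (t + delay, dst))
    return fired

# Two-neuron chain: neuron 0 spikes at t=0 and drives neuron 1 over threshold.
spikes = run(weights=[[(1, 1.0)], []], initial_spikes=[(0.0, 0)])
print(spikes)  # [(0.0, 0), (1.0, 1)]
```

Note how the only primitives are neurons, synapses (weights), and time-stamped spikes, and how nothing happens between events, which is the efficiency argument for event-driven hardware.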

Neuromorphic Computing Hardware Architecture: SpiNNaker (Spiking Neural Network Architecture). [Figure: chip diagram showing router, memory, and processor blocks.] Source: Steve Furber, "To Build a Brain," IEEE Spectrum, August 2012. (Slides dated 5/15/2014.)

HRL Labs Neuromorphic Architecture. [Figure: network diagram of a neuron (N) connected to synapses S1-S4.] Source: Narayan Srinivasa and Jose M. Cruz-Albrecht, "Neuromorphic Adaptive Plastic Scalable Electronics," IEEE PULSE, January/February 2012.

Messaging
- Spike: the simplest possible temporal message. Facilitates algorithms inspired by biological neural systems; supports both time-based and rate-based algorithms.
- Information packet: a generalization of the spike-time message, i.e. a spike that carries additional information. Facilitates other dynamical computing architectures using different primitives.
- Routing of spikes / packets: messages can be packaged with an address and routed over a network (e.g. IBM, SpiNNaker), or delivered over a switching fabric (e.g. HRL). Networks can be multiscale: on core, on chip, off chip.
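The "package a spike with an address and route it" idea above can be made concrete with an address-event style routing table, in the spirit of (but not copied from) systems like SpiNNaker. The table contents and core names here are invented for illustration:

```python
# Address-event routing sketch: a spike on the wire is just a source
# address plus a time stamp; a routing table fans each source address
# out to (core, local_neuron_id) destinations.

routing_table = {
    0x0A: [("core1", 3), ("core2", 7)],   # source neuron 0x0A fans out to two cores
    0x0B: [("core1", 4)],
}

def route(spike_source, timestamp):
    """Expand one spike event into its per-core deliveries."""
    return [(core, local_id, timestamp)
            for core, local_id in routing_table.get(spike_source, [])]

print(route(0x0A, 42))  # [('core1', 3, 42), ('core2', 7, 42)]
```

The same structure generalizes to information packets: attach a payload alongside the timestamp and the routing step is unchanged, which is why packet routing subsumes spike routing.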

Key Technology Issues / Choices
- Distributing large amounts of memory (synapses) among many processors (neurons) on a single chip. Off-chip memory burns power and taxes memory bandwidth; DRAM needs large array sizes to be space-efficient and does not integrate into most logic processes; back-end memory technologies (e.g. memristors, PCM) are immature and not available in state-of-the-art CMOS.
- Developing a scalable messaging (spiking) architecture.
- Selecting computational primitives (e.g. neuron and synapse models).
- Engineering for scale, space and power efficiency.
- Creating a large-scale simulation capability that accurately models the neuromorphic hardware.
- Creating tools to develop and debug neural algorithms on the simulator and the neuromorphic hardware.
- Writing the algorithms (including those that learn).

SyNAPSE Program Plan (Phases 0-4). [Table reconstructed from the slide, by track:]
- Hardware: synapse component development; process and component circuit development; CMOS process integration; ~10^6-neuron single-chip implementation; ~10^8-neuron multi-chip system with robot environment system I/O, high-speed global communication, and a direct programming interface.
- Architecture & Tools: microcircuit architecture development; system-level architecture development; 10^6-neuron design for simulation and hardware layout; 10^8-neuron design for simulation and hardware layout; comprehensive design capability, up to a human-level design (~10^10 neurons).
- Emulation & Simulation: simulate large neural subsystem dynamics; ~10^6-neuron-level benchmark; ~10^8-neuron-level benchmark.
- Environment: build; expand & refine; expand & sustain; sustain.

SyNAPSE Miscellaneous Lessons Learned
- There are many, many ways to build a neuromorphic computer.
- Although much can be leveraged from conventional computing technologies, building a neuromorphic computer requires a large investment in development tools.
- Neuromorphic computers can be applied as control systems for agents (e.g. robots) embedded in a dynamic environment.
- Neuromorphic algorithms can be replicated on a conventional computer, but with much lower efficiency.
- Biological-scale networks are not only possible, but inevitable. The technology issues are challenging but surmountable.
- The time scale for developing a new memory technology and integrating it into a state-of-the-art CMOS process is much longer than that needed to build a neuromorphic computer.
- The biggest current challenge in neuromorphic computing is defining the algorithms, i.e. the structure and dynamics of the network.

Why is Neuromorphic Computing Confusing?

Basic neuromorphic / cognitive computing proposition
Build computers that learn and generalize across a broad variety of tasks, much as human brains do, in order to employ them in applications that require (too much) human effort.
- This idea is at least 40 years old, yet we still don't have these kinds of computers.
- We have become disillusioned with these ideas in the past because the proposition was not fulfilled (the AI and neural-net "winters").
- The proposition is (very) popular again because of the maturation of the computing industry, the successful application of some machine learning techniques, and interest and research on the brain.

Neuromorphic / cognitive computing philosophy
Cognitive computing views the brain as a computer and thinking as the execution of algorithms:
- Cognition = computing
- Memory = storage of data and algorithms
- Thinking = application of algorithms to data
Biological memory corresponds to a container holding data and algorithms. Learning fills the container with input-output rules defined on discrete (AI) or continuous (ANN) variables. Algorithms create input-output mappings using rules or weights stored in memory.
- Artificial Intelligence (Cognitive Computing) focuses on search algorithms to select production rules.
- Artificial Neural Networks (Connectionist Computing) focus on iterative error-reduction algorithms to determine the weights yielding the desired input-output relationships.
In both cases, the algorithms are created by humans.

The Source of Confusion The basic neuromorphic / cognitive computing proposition inappropriately mixes ideas and expectations from biological brains and computing.

SyNAPSE Architectural Concept. [Figure: a neuromorphic electronic system mapped onto the human brain: high-speed bus with multi-Gbit/sec digital comms vs. ~5x10^8 long-range axons @ 1 Hz; CMOS substrate with 5x10^8 transistors/cm^2 @ 500 transistors/neuron, i.e. ~10^6 neurons/cm^2; crossbar junction with ~10^10 intersections/cm^2 @ 100 nm pitch, i.e. ~10^10 synapses/cm^2; laminar circuit with ~10^4 neurons per cortical column.] Approved for Public Release, Distribution Unlimited.

Getting it Straight
- A neuromorphic computer is another kind of repurposable computing platform, like a CPU, GPU, FPGA, etc.
- A neuromorphic computer will be more or less efficient than another computing architecture depending on the algorithm.
- A key question in designing a neuromorphic computer is understanding the structure of the algorithms it will likely run.
- Neuromorphic computers may be good choices for implementing some machine learning algorithms, but these should not be confused with brains.
- A neuromorphic computer is not a brain, although if we were ever to figure out how to simulate a brain on a computer, a neuromorphic computer would likely be an efficient option.

What about building a brain?

Reductionist approach
- Proposition: by understanding the component parts and functions of the brain, we can build brain-like systems from an arrangement of similar components.
- Approach: study the brain as a system of components and subsystems and infer their relevance to overall brain function. Create a brain-like system by mimicking the components and structure of the brain.
- Example: create dynamical models of biological neurons and synapses and configure them in a network inspired by brain anatomy. Implement these ideas in software or hardware.

Reductionist conundrum
What is the appropriate level of abstraction needed to build a brain?
- What components / functions of the brain correspond to its computational primitives?
- How do I distinguish relevant from irrelevant features in a real brain?
- How do I deal with the interactions among the components?
- How does neuroanatomy correspond to the brain's architecture?
- How do I deal with the interactions with a larger environment?
- Is there an algorithm of the brain that the components execute?
Reductionism as a strategy for building a brain is equivalent to the basic neuromorphic / cognitive computing proposition.

Limits of reductionism
Science shows repeatedly that understanding lower levels of organization is insufficient to understand higher levels. In general, a new description is required at each new level. For example:
- Chemistry cannot be derived from physics.
- Microbiology cannot be derived from chemistry.
- Organisms cannot be derived from microbiology.
- Ecosystems cannot be derived from organisms.
"More is different." - Phil Anderson

Why more is different
- The (typically massive) interaction / feedback that is characteristic of real-world systems eliminates the concept of an independent part or piece. When everything is connected to everything, it becomes difficult to assign an independent function (input-output relationship) to the components.
- Higher levels of organization evolve from their lower-level components in response to interaction with their environment. Higher-level organization depends strongly on influences external to the system of its components.

Todd's Top 10 List of Challenges in Building a Brain

10. Neuroscience is too little help (tlh)
- We cannot possibly simulate all the detail of a biological brain.
- We don't understand the function of even very simple nervous systems.
- There are far, far too few observables to guide the development of any model or technology.

9. Computational Neural Models are tlh
- Too many assumptions
- Too many parameters
- No general organizing principle
- Models are (usually) incomprehensible
- Unclear connection to applications

8. Other things that are tlh
- Cortical column hypothesis
- Sparse distributed representations
- Spiking neural networks, STDP
- Hierarchies of simple and complex cells
- (Insert your favorite ideas here)
- Spatio-temporal, scale invariance
- Criticality, critical branching
- Causal entropic forcing

7. Whole System Requirement
- Brains are embodied and bodies are embedded in an environment (Edelman).
- Testing often requires embedding the neuromorphic computer in a complex body / environment.

6. Whole System Interdependence
- Brains / bodies / environments are complex systems whose large-scale function (almost certainly) cannot be analytically expressed in terms of their lower-level structure / dynamics.
- System design methodologies are inadequate because the system cannot be decomposed into independent parts.

5. No Easy Path for Technology Evolution
The benchmark for performance comparison is either a human or a well-engineered, domain-specific solution.

4. Massive Computing Resources
- Any model that does anything that anyone will care about requires a massive computational resource for development and implementation.
- Development is slow and expensive; custom hardware in a state-of-the-art process is needed for any large-scale application.
- Software and hardware must co-evolve: we cannot develop the algorithms first, and we cannot specify the hardware first.

3. Competition for Resources
It is easy for anyone who doesn't like your project to claim that:
- It is making no progress.
- It is not competitive with the state of the art.
- You are doing it wrong.
- You are an idiot.
This happened to me regularly at DARPA.

2. Computers Can Compute Anything
- The computer is a blank slate: we must generate all the constraints to build a neuromorphic computer.
- Changing the computing architecture only changes the classes of algorithms that it computes efficiently.

1. Brains are not Computers
- Brains are thermodynamical, bio/chemo/physical systems that evolved from, and are embedded in, the natural world.
- Computers are symbolic processors executing algorithms designed by humans.
- Brains designed computers. Can computers design brains?

Alternative Ways of Thinking About Building a Brain

Time Perspective: what we need in order to build a brain. [Figure: a timeline of computing milestones paired against the still-missing milestones for intelligence.]
- Theory of Computation (~1937): Turing, Markov, Von Neumann; Boolean logic / functions. Missing counterpart: a Theory of Intelligence built on evolution, complexity, and probability.
- Electronics Technology (~1946): ENIAC, the transistor. Missing counterpart: a (new) electronics technology.
- Computational Complexity (~1956): Kolmogorov, Trakhtenbrot. Missing counterpart: implementation complexity.
- Practical Computation (~1964): IBM 360. Missing counterpart: practical intelligence.
- Physics is missing from the theory of computation: thermodynamics, locality, causality.
- Intelligence Computation & Computation Intelligence

Life is Autotrophic
- At hydrothermal vents, life is sustained by chemoautotrophic bacteria, which derive energy and materials from purely inorganic sources. These bacteria provide an efficient means to consume energy through a chemical cascade that would otherwise not be possible.
- At the ecosystem level, all life is autotrophic in that it is derived from inorganic sources (and sunlight).
- In general, life provides a means to relieve chemical potential gradients that could not otherwise be accessed (because of energetic activation barriers).

Thermodynamically Evolved Structures

Conceptual Issues: Foundations of Computing
- Observation: the Turing machine (and its many equivalents) is the foundational idea in computation and has enabled decades of success in computing. But the machine is an abstraction for symbol manipulation that is disconnected from the physical world. Humans provide contact with the physical world via the creation and evaluation of algorithms, supplying value, creativity, semantics and context. [Figure: complete system = Turing machine, i.e. a head with a fixed rule table (finite state machine) plus a tape holding program and data, together with the humans who write its algorithms.]
- Question: with such a foundation, is it reasonable to suppose that the machine can understand, adapt and function in a complex, non-deterministic, evolving problem or environment?
- Hypothesis #1: intelligence concerns the ability to create (useful) algorithms.
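To make the slide's point concrete, here is a complete Turing machine in miniature: a fixed rule table manipulating symbols on a tape, with no contact with the physical world. The machine below (a bit-flipper) and its rule table are invented for this sketch:

```python
# Minimal Turing machine: state + tape symbol -> (new state, write, move).
# Everything it "knows" is in the human-authored rule table.

def run_tm(tape, rules, state="start", pos=0, max_steps=100):
    tape = dict(enumerate(tape))       # sparse tape; "_" is the blank symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")
        state, write, move = rules[(state, symbol)]
        tape[pos] = write
        pos += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape))

# Rule table: flip each bit, halt at the first blank.
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm("0110", rules))  # 1001_
```

Note where the intelligence lives: the twelve-entry rule table was created and evaluated by a human; the machine itself only shuffles symbols, which is exactly the disconnect the slide identifies.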

Evolution of Intelligence
[Figure: a progression from the physical (energy) to the biological (evolution) to the psychological (stimulus-response) to the gnoseological (cognition & mind) to the sociological (cooperation & competition), with today's approaches targeting the highest levels of intelligence.]
Current paradigm, cognitive computing:
- Brains are universal computers; algorithms determine behavior.
- Memory = storage of data and algorithms; thinking = application of algorithms to data.
- Intelligence is algorithmic: input -> computation -> output.
- But where do algorithms come from? The paradigm is unphysical, static, unscalable, and black-box.
Hypothesis #2: intelligence is part of a pervasive evolutionary paradigm that applies to the physical and biological world. The computational metaphor for intelligence is inadequate. Intelligence is physical.

Thermodynamics of Open Systems
- Isolated system (adiabatic boundary): entropy S(t) increases toward S_max; dS/dt > 0.
- Open system: S = S_ext + S_int, with dS/dt > 0 overall even while dS_int/dt < 0; does the system evolve toward dS/dt -> (dS/dt)_max?
Open thermodynamic systems spontaneously evolve structure via entropy production in the external environment.

Thermodynamic Evolution Paradigm
[Figure: an entity with input (sensory) and output (motor) interfaces exchanging energy, work, and information with a complex, probabilistic, free-energy-rich environment of energy / materials; variation (entropy), selection (energy consumption), and structure / memory drive evolutionary events and self-organization via entropy production.]
- Entities extract resources from their environment through evolutionary variation & selection.
- The entropy production rate selects for (algorithmic) structure / memory among entropic variations.
- (Algorithmic) structures / memories constrain the variation repertoire in future selections.
- Entities are distinguished from their environment by their level of integration.

Example Evolving System: neural systems qualitatively fit the thermodynamic evolution paradigm.
- Entity: neuron
- Information type: spike code
- Input interface: dendritic neurons
- Output interface: axonic neurons
- Environment: neural system
- Variation: stochastic firing
- Structure: synapse array
- Evolutionary event: neuron firing

Structure Growth, Integration & Scaling
[Figures: a network of entities, each embedded in its environment; lower-level entities integrating into a higher-level entity.]
- Networks of entities can grow by attachment to existing entities. The neighborhood of each entity is its unique environment.
- Lower-level entities integrate to form higher-level entities, yielding higher-level structure.
- Networks of entities evolve and integrate algorithmic structure into higher-level entities / systems.

Computing in the Physical Intelligence Framework
[Figure: the thermodynamic-evolution entity diagram, now with physics (F = ma) and digital computation (01101) in the loop between entity and environment.]
Computers enhance our ability to extract energy and resources from the environment by allowing us to evolve and use larger, faster and more complex algorithms.

Closing Thoughts

What has changed in 7 years
- The end of semiconductor scaling is clearly in sight.
- Numerous large-scale efforts in neuromorphic computing now exist.
- The community has substantially grown.
- Several example systems now exist.
- Deep learning algorithms have matured and are being deployed.
- The BRAIN Initiative and the Human Brain Project have been announced / started.

Think "Algorithms", not "Brains", when building an NC
Dynamical algorithms:
- Represent systems of coupled dynamical equations, not just feedforward networks.
- Interact in real time in the real world (e.g. robotics).
- Are tough to conceive and tough to debug.
Typical questions:
- What are the plasticity / adaptation rules? What are the dynamical equations?
- What network should I build?
- What is the effect / interaction of the components with the system?
- What / how should I test it?
- How can I figure out what is wrong?
- How do I make it do something (that I want it to do)?
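A minimal sketch of what "systems of coupled dynamical equations, not just feedforward networks" can mean in practice: a two-unit recurrent rate model, dx_i/dt = -x_i + tanh(sum_j W_ij x_j) + I_i, integrated with Euler steps. The model form, weights, and inputs are illustrative assumptions, not anything specified in the slides:

```python
import math

# Coupled dynamical equations: every unit's rate of change depends on the
# other units' current state, so behavior emerges from the feedback loop
# rather than from a single feedforward pass.

def step(x, W, I, dt=0.01):
    """One Euler step of dx_i/dt = -x_i + tanh(sum_j W_ij * x_j) + I_i."""
    return [xi + dt * (-xi + math.tanh(sum(w * xj for w, xj in zip(row, x))) + Ii)
            for xi, row, Ii in zip(x, W, I)]

W = [[0.0, -1.2],   # unit 0 is inhibited by unit 1
     [1.2,  0.0]]   # unit 1 is excited by unit 0
I = [0.5, 0.0]      # constant external drive on unit 0
x = [0.0, 0.0]
for _ in range(2000):  # integrate to t = 20
    x = step(x, W, I)
print([round(v, 3) for v in x])  # the coupled pair settles to a fixed point
```

Even this tiny system illustrates the slide's debugging complaint: the steady state cannot be read off any single weight; it must be found by running the dynamics.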

What We Can Do
- Build new kinds of computers that are capable of efficiently executing new classes of algorithms.
- Build better machine learning algorithms.

Recommendation
Separate / classify effort into two domains:
- Aspirational efforts focused on building a brain (the basic NC proposition).
- Practical efforts focused on building new, useful computers.
Avoid the temptation to straddle both domains.

Backup

Digital or Analog?
- Communications: digital; no controversy.
- Neurons: digital gives computed dynamics that scale, are reproducible, multiplex, and parameterize; analog gives intrinsic dynamics at low power.
- Synapses: the same trade-off; digital gives computed dynamics that scale, are reproducible, multiplex, and parameterize; analog gives intrinsic dynamics at low power.
- State-of-the-art CMOS technology and design practice generally favor digital implementations. Groups of highly multiplexed digital neurons and synapses resemble small processor cores with dedicated memory (like SpiNNaker). Mixed analog-digital solutions are also possible.
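The "highly multiplexed digital neurons" option can be sketched as one core stepping many leaky integrate-and-fire neurons from its dedicated memory each tick, loosely in the SpiNNaker spirit. The update rule, leak factor, and threshold below are invented toy parameters:

```python
# One digital "core" time-multiplexing many leaky integrate-and-fire
# neurons: the dynamics are computed (not intrinsic as in analog),
# so they are reproducible and easy to parameterize.

def tick(v, inputs, leak=0.9, threshold=1.0):
    """Update every neuron's state for one time step; return spikes emitted."""
    spikes = []
    for i in range(len(v)):
        v[i] = leak * v[i] + inputs[i]   # computed membrane dynamics
        if v[i] >= threshold:
            spikes.append(i)
            v[i] = 0.0                   # digital reset after firing
    return v, spikes

v = [0.0, 0.0, 0.0]                      # per-neuron state in core-local memory
v, spikes = tick(v, [0.5, 1.2, 0.0])
print(spikes)  # [1]  (only neuron 1 crossed threshold this tick)
```

Swapping the leak or threshold per neuron is just a parameter change here, which is the reproducibility and parameterization advantage the slide credits to the digital option; the analog alternative trades that flexibility for intrinsic low-power dynamics.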

User-Focused NC Proposition
We will build a computer to enable (for example):
- computational neuroscientists to efficiently model large neural systems;
- analysts to more easily understand video.
Comment: this kind of proposition is a long way from an engineering specification.

Algorithm-Focused NC Proposition
We will build a computer that efficiently computes certain (classes of) machine learning algorithms.
Comment: this kind of proposition can lead to narrowly focused systems (ASICs).

Architecture-Focused NC Proposition
We will build a computer featuring the following architectural concepts (for example): sparse distributed representations (SDR); event-based execution; asynchronous communication; highly distributed simple cores within a dense memory; neural / synaptic / columnar computational primitives; criticality / homeostasis.
Comments: before any specification can be created, a description like this is required. But it isn't obvious from such propositions what the computer will be good at or used for.

The Evolution of NC Has Begun
USER STORIES -> ALGORITHMS -> ARCHITECTURES -> IMPLEMENTATIONS