Real Robots Controlled by Brain Signals - A BMI Approach


International Journal of Advanced Intelligence, Volume 2, Number 1, pp. 25-35, July 2010.
© AIA International Advanced Information Institute

Genci Capi
Department of Electric and Electronic Engineering, University of Toyama, Toyama, Japan
capi@eng.u-toyama.ac.jp

Received (January 2010)
Revised (May 2010)

Recent work on Brain Machine Interfaces (BMI) has given promising results for developing prosthetic devices aimed at restoring motor functions in paralyzed patients. The goal of this work is to create a part mechanical, part biological robot that operates on the basis of the neural activity of rat brain cells. In our method, the rat first learns to move the robot by pressing the right and left levers in order to get food. Then, we use multi-electrode recordings to train artificial neural controllers, which are later employed to control the robot motion based on the brain activity of the rats. The results show good performance of the artificial neural network in controlling the real robot.

Keywords: BMI; Neural network; Robot.

1. Introduction

Controlling a robot with brain signals is a very challenging research problem [1-4]. This approach has proved useful for helping paralyzed or locked-in patients develop ways of communicating with the external world [5,6]. In addition, these experiments have demonstrated that animals can learn to use their brain activity to control the displacement of computer cursors [3,4] or one- to three-dimensional movements of simple and elaborate robot arms [1,2].

Many electro-biological signals can be used in connection with BMIs. Some of the more commonly adopted signals are the electromyographic (EMG), the electrooculographic (EOG), and the electroencephalographic (EEG) signals. Many research groups have shown impressive results using microelectrode arrays in motor control areas of the cortex together with adaptive algorithms, both linear and non-linear, to reconstruct a desired signal. The EEG signal corresponds to the electrical potential due to brain (neuron) activity and can be acquired on the scalp (signal amplitude usually under 100 µV) or directly on the cortex, the surface of the brain (electrocorticography, ECoG; signal amplitude of about 1-2 mV).

Despite these initial results, several issues still need to be considered. For example, although most agree that a BMI designed to reproduce arm/hand movements will require long-term and stable recordings from cortical neurons

through chronically implanted electrode arrays [7-9], there is considerable disagreement on what type of brain signal (single unit, multiunit, or field potentials [10]) would be the optimal input for such a device. At the cortical level, a number of different areas are involved in the planning and execution of voluntary natural movements, such as grasping or reaching.

The goals of this research work are (1) to apply knowledge of the rat's neuro-musculoskeletal motion control to a neurally controlled robotic system, and (2) to demonstrate that such a system is able to produce responses similar to the rats' voluntary movements in comparable experiments. This work is a step toward understanding the workings, and possibly the capabilities, of neural circuits in controlling robotic systems. In order to achieve these goals, we developed a system in which the rat controls the robot motion by pressing the right and left levers in order to get food, in scenarios that change from simple to more complicated ones. We used the recordings of eight neurons and the right and left lever signals to train an artificial neural network, which is later used to control the robot. The results indicate that the robot motions generated by the artificial neural controller and by the real data collected during the experiments were very similar.

This paper is organized as follows. The developed system for rat training is described in Section 2. The structure of the neural network is presented in Section 3. Experimental results with the neural network controlling the real robot are addressed in Section 4. Finally, a discussion of the experimental results and future work is given in Section 5.

2. Developed System

Conceptually, a BMI maps some level of neural signal into commands to control an external device, e.g. a robotic arm or a cursor. In order to train the rat to control the robot by pressing the right or left lever, we developed a system that recognizes the lever signals and converts them into robot motion commands (Fig. 1). The system is composed of an electronic circuit built around a PIC18F452 microcontroller and an ADM3202 RS-232 transceiver. The circuit is connected through an RS-232 serial cable to a PC running Matlab. Based on the input data, the e-puck robot motion is determined and sent over a Bluetooth connection. The rat's food is placed on the upper part of the e-puck robot, as shown in Fig. 2. The rat controls the e-puck robot motion in the environment and brings it close to its mouth in order to get the food. The robot uses its proximity sensor data to stop moving when it reaches the rat's place.
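For illustration, a minimal sketch of this lever-to-robot pipeline is given below. The PC-side software in this work runs in Matlab; the Python/pyserial sketch is not the implementation used here and only illustrates the data flow. The port names, baud rates, the text format of the lever messages, and the "D,<left>,<right>" wheel-speed command assumed for the e-puck's Bluetooth serial link are all assumptions, not taken from the paper.

```python
# Hypothetical PC-side loop: read lever states from the PIC18F452 circuit
# over RS-232 and forward motion commands to the e-puck over Bluetooth.
# Port names, baud rates, message format, and speeds are assumptions.
import serial  # pyserial

levers = serial.Serial("/dev/ttyUSB0", 9600, timeout=0.1)   # PIC/ADM3202 link
epuck = serial.Serial("/dev/rfcomm0", 115200, timeout=0.1)  # Bluetooth link

# Assumed mapping from (left, right) lever states to wheel speeds
COMMANDS = {
    (0, 0): (0, 0),       # no lever pressed: stop
    (1, 0): (200, 400),   # left lever pressed
    (0, 1): (400, 200),   # right lever pressed
    (1, 1): (400, 400),   # both levers pressed: straight forward
}

while True:
    line = levers.readline().decode(errors="ignore").strip()
    if not line:
        continue
    try:
        left, right = (int(v) for v in line.split(","))  # e.g. "1,0"
    except ValueError:
        continue  # ignore malformed lines
    v_left, v_right = COMMANDS[(left, right)]
    # Assumed e-puck ASCII-style command setting the two wheel speeds
    epuck.write(f"D,{v_left},{v_right}\r\n".encode())
```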

2.1. E-puck robot

In the experiments presented here, we use the e-puck robot to carry the rat's food. We selected this robot because it is small, well structured, flexible, user friendly, robust, and simple to maintain. An e-puck robot consists of a cylindrical body 75 mm in diameter (Fig. 3). It has several outputs that can execute behaviors or broadcast information into the environment.

Fig. 1. Developed system.

2.2. Rat training

In a typical experiment, food-deprived rats are trained to perform an instrumental action (such as lever pressing) to obtain a rewarding outcome (food). At the beginning, three rats (Wistar/ST, male, 10 weeks old) were trained to press the right or left lever as follows:

(i) press the right or left lever to get food supplied manually (Fig. 4a);
(ii) press the levers as above, but with their head restricted (Fig. 4b).

Later, the food is placed on the e-puck robot and the rat learns to direct the robot by pressing the right and left levers in scenarios that change from simple to more complicated ones, as follows:

(i) The robot is placed in front of the rat and it moves straight forward when the right or left lever is pressed (Fig. 5a);
(ii) The robot is placed on the right (left) side of the rat and it follows half of a U-shape trajectory when only the right (left) lever is pressed (Fig. 5b);

(iii) The robot is initially placed on the right or left side of the rat and it moves along the right or left half of a U-shape when the respective lever is pressed (Fig. 5c).

Fig. 2. Rat directing the robot by pressing the right and left levers.

Fig. 3. e-puck robot.

3. Artificial Neural Network

Predictions of the lever position based on the recordings of cortical neurons were obtained by applying an artificial neural network. We employ a Multilayer Perceptron Neural Network (MLPNN), which is a good tool for classification purposes and can approximate almost any relationship between its inputs and outputs. In our model, P(t) is a matrix of the input patterns, with each column corresponding to the brain

recorded data. T(t) is a matrix of the target outputs, containing samples of the lever position. The MLPNN weights are adjusted offline by a supervised training procedure. After training, the MLPNN can apply the acquired skills to previously unseen samples; it has good extrapolation and interpolation abilities.

Fig. 4. Rat during training. (a) Head free; (b) Head restricted.

Fig. 5. Rat during training to control the e-puck robot. (a) Straight motion; (b) Half right and half left of a U-shape; (c) U-shape.
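The data layout described above can be sketched as follows; this is only an illustration of the P(t) and T(t) matrices, with the sample count taken from Section 4.2 and the two-element lever encoding assumed.

```python
# Sketch of the training matrices described above (shapes inferred from the text):
# each column of P is one sample of 8-neuron activity, and the matching
# column of T is the target lever state (left, right) for that sample.
import numpy as np

n_samples = 320                # number of selected training samples (see Sec. 4.2)
P = np.zeros((8, n_samples))   # input patterns: 8 recorded neurons per column
T = np.zeros((2, n_samples))   # targets: [left lever, right lever] per column
```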

We used the hyperbolic tangent as the activation function of the hidden neurons and the sigmoid function for the output neurons. In practice, the values of the output neurons are not exactly 0 or 1; they vary in the range [0, 1], and their closeness to the ideal values reflects the confidence of the MLPNN. The closer the output values are to the ideal ones, the more confident the network's decision. In our implementation, the output is set to 1 if the value is above a threshold of 0.9 and to 0 otherwise.

4. Results

4.1. Training

The response rates during training to press the right and left levers are shown in Fig. 6. Fig. 6(a) and Fig. 6(b) show the frequency and the time that the lever was kept pressed for the right and left lever, respectively. The rats successfully learned to press the levers with their head restricted. The rat first learned to press the right lever (Fig. 6(a)) and then the left lever (Fig. 6(b)). Fig. 6(c) shows the left-lever response rates of the rat that had already been trained with its head restricted. The rat controlled the e-puck robot positioned on its left side in order to get food. Compared to the previous scenario, the rat learned to press the lever after three days of training. The rat first learned to press the right lever (with the robot initially placed on the right side) and then the left lever (with the robot initially placed on the left side). Finally, the rat was trained on the U-shape trajectory.

Fig. 7 shows the overall percentage of left and right lever presses for each day of the training session when the e-puck robot was initially placed on the left side of the rat. This training session started after the rat had learned to control the e-puck robot initially placed on the right side. Therefore, on the first day of training the rat pressed primarily the right lever. From the second day on, the rat pressed the left lever almost exclusively.

4.2. Neural network

After surgery, the animals underwent daily recording sessions. All the data (neural signals, reward times, and lever positions) are time synchronized. The right and left lever positions were recorded continuously throughout the session. In order to minimize the number of animals used, it is necessary to maximize the time available for BMI experiments after surgery. In our experiments, eight neurons were recorded over an interval of 30 min. Fig. 8 shows all the recorded data while the rats pressed the left and right levers. As the figure shows, some parts of the recorded brain signals are affected by noise. In order to select the best data for training the artificial neural controller, all eight neural recordings are considered together. In our model, we selected 320 samples to train the artificial neural controller; the number of training samples for each of the four lever positions is nearly the same. The neural network has 8, 40, and 2 units in the input, hidden, and output layers, respectively. The learning continued for 10000 epochs, as shown in Fig. 9.
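The training code is not given in the paper; the following is only a minimal NumPy sketch of the 8-40-2 network described above (tanh hidden units, sigmoid outputs, 0.9 decision threshold, 320 training samples, 10000 epochs). The learning rate, the weight initialization, and the random placeholder data stand in for details that the paper does not specify.

```python
# Minimal sketch of the 8-40-2 MLP described above: tanh hidden layer,
# sigmoid output layer, mean-squared-error backpropagation, and a 0.9
# threshold for binarizing the outputs. Learning rate, initialization,
# and the placeholder data are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LeverMLP:
    def __init__(self, n_in=8, n_hidden=40, n_out=2, lr=0.01):
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_out, n_hidden))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        self.h = np.tanh(self.W1 @ x + self.b1)        # hidden layer (tanh)
        self.y = sigmoid(self.W2 @ self.h + self.b2)   # output layer (sigmoid)
        return self.y

    def train_step(self, x, t):
        y = self.forward(x)
        # Backpropagate the squared error through the sigmoid and tanh layers
        d_out = (y - t) * y * (1.0 - y)
        d_hid = (self.W2.T @ d_out) * (1.0 - self.h ** 2)
        self.W2 -= self.lr * np.outer(d_out, self.h)
        self.b2 -= self.lr * d_out
        self.W1 -= self.lr * np.outer(d_hid, x)
        self.b1 -= self.lr * d_hid
        return np.mean((y - t) ** 2)

    def predict(self, x, threshold=0.9):
        # Output is 1 if above the 0.9 threshold, 0 otherwise (as in the text)
        return (self.forward(x) > threshold).astype(int)

# Placeholder training matrices: 320 columns of 8-neuron activity (P) and
# the corresponding lever states (T); the real recordings are not public.
P = rng.normal(size=(8, 320))
T = rng.integers(0, 2, size=(2, 320)).astype(float)

net = LeverMLP()
for epoch in range(10000):
    mse = np.mean([net.train_step(x, t) for x, t in zip(P.T, T.T)])
```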

The performance of the trained neural network was evaluated using brain recording data that were not used during training. The results show that the neural network model achieved an 80% prediction accuracy for these unseen brain signals.

Fig. 6. Training results. (a) Pressing the right lever with the head restricted; (b) Pressing the left lever with the head restricted; (c) Pressing the left lever to control the e-puck robot placed on the left side.

It must be noted that, for the input signals used during training, the output of the neural network was exactly the same as the recorded lever positions. The outputs of the artificial neural controller, which correspond to the left and right levers, directly control the e-puck robot motion as follows (a minimal decoding sketch is given below):

(i) No lever pressed [0 0] - right back motion;
(ii) Pressing the left lever [1 0] - right forward motion;
(iii) Pressing the right lever [0 1] - left forward motion;
(iv) Both levers pressed [1 1] - straight forward motion.

Ten different sets of brain activity data for each of the lever positions were used to evaluate the performance of the trained neural network. The video captures of the robot motion generated using the data collected from the rat and the artificial neural network are shown in Fig. 10. The figure indicates that the robot motions generated by the artificial neural controller and by the real data collected during the experiments were very similar.
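The mapping (i)-(iv) above amounts to a small decoding step on the two thresholded network outputs; the sketch below is only illustrative, and the function name is hypothetical. The actual wheel-speed commands sent to the robot are not specified in the paper.

```python
# Decode the thresholded 2-bit NN output [left, right] into the e-puck motion
# listed in (i)-(iv) above. Wheel-speed commands are left to the caller.
MOTIONS = {
    (0, 0): "right back",
    (1, 0): "right forward",
    (0, 1): "left forward",
    (1, 1): "straight forward",
}

def decode_motion(nn_output):
    """nn_output: thresholded network output as (left, right), e.g. (1, 0)."""
    return MOTIONS[tuple(int(v) for v in nn_output)]

print(decode_motion((0, 1)))  # left forward
```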

Fig. 7. Overall percentage of correct responses for each day of training when the e-puck robot was initially placed on the left side.

Fig. 8. Recorded data.

Fig. 9. MSE of the neural network during training.

The robot motions generated by the NN using as input the brain signals recorded when no lever was pressed ([0, 0]) are shown in Fig. 10(a). Two out of ten robot motions were not correct: the neural network output was [0, 1] and the robot motion was left forward instead of right back. The NN result for the neuron signals recorded when the rat pressed the left lever ([1, 0]) is shown in Fig. 10(b). The first and last robot motions are different: first the robot moves right back ([0, 0]), and the last motion is straight forward, which corresponds to the both-levers-pressed case ([1, 1]). The largest number of incorrect NN outputs was generated by the neuron signals recorded when the rat pressed the right lever. Fig. 10(c) shows that six times the output of the neural controller was correct ([0, 1], left forward motion) and four times it was not ([0, 0], right back motion). The reason is that, of the eight electrodes used to record the brain activity, six were placed in the right half and two in the left half of the cortex. With only two channels, the neural network was unable to learn the mapping from the input signals to the lever position. The robot motion generated by the brain signals recorded when the rat pressed both levers was 100% correct (Fig. 10(d)).

5. Conclusion

In this paper, we presented some preliminary results on training rats to get food by controlling a mobile robot. The results showed that the rat learned to control the robot by pressing the right and left levers in simple and complex environment settings. We utilized the recorded brain data to train an artificial neural network and evaluated its performance in controlling the mobile robot motions. There were three main components to the system: neural signal processing, a control algorithm, and a learning algorithm.

Fig. 10. Robot motion controlled by the data collected from rats and by the neural network.

The input to the system was the neural signal of the rats, and the output was the lever positions. The results showed good performance of the artificial neural network.

References

1. J. K. Chapin, R. A. Markowitz, K. A. Moxon and M. A. L. Nicolelis. Direct Real-time Control of a Robot Arm Using Signals Derived from Neuronal Population Recordings in Motor Cortex, Nature Neuroscience, 2, pp. 664-670, 1999.
2. J. Wessberg, C. R. Stambaugh, J. D. Kralik, P. D. Beck, M. Laubach, et al. Real-time Prediction of Hand Trajectory by Ensembles of Cortical Neurons in Primates, Nature, 408, pp. 361-365, 2000.
3. M. D. Serruya, N. G. Hatsopoulos, L. Paninski, M. R. Fellows and J. P. Donoghue. Instant Neural Control of a Movement Signal, Nature, 416, pp. 141-142, 2002.
4. D. M. Taylor, S. I. Tillery and A. B. Schwartz. Direct Cortical Control of 3D Neuroprosthetic Devices, Science, 296, pp. 1829-1832, 2002.
5. T. Hinterberger, et al. Neuronal Mechanisms Underlying Control of a Brain-computer Interface, Eur. J. Neurosci., 21, pp. 3169-3181, 2005.
6. A. Kubler, et al. Brain-computer Communication: Unlocking the Locked In, Psychol. Bull., 127, pp. 358-375, 2001.

7. M. A. L. Nicolelis. Actions from Thoughts, Nature, 409, pp. 403-407, 2001.
8. M. A. L. Nicolelis. Brain-machine Interfaces to Restore Motor Function and Probe Neural Circuits, Nat. Rev. Neurosci., 4, pp. 417-422, 2003.
9. J. P. Donoghue. Connecting Cortex to Machines: Recent Advances in Brain Interfaces, Nat. Neurosci. Suppl., 5, pp. 1085-1088, 2002.
10. B. Pesaran, J. S. Pezaris, M. Sahani, P. P. Mitra and R. A. Andersen. Temporal Structure in Neuronal Activity During Working Memory in Macaque Parietal Cortex, Nat. Neurosci., 5, pp. 805-811, 2002.

Genci Capi received the B.E. degree in mechanical engineering from the Polytechnic University of Tirana in 1993 and the Ph.D. degree in information systems engineering from Yamagata University in 2002. He was a Researcher at the Department of Computational Neurobiology, ATR Institute, from 2002 to 2004. In 2004, he joined the Fukuoka Institute of Technology as an Assistant Professor, and in 2006 he was promoted to Associate Professor. He is currently a Professor in the Department of Electrical and Electronic Systems Engineering, University of Toyama. His research interests include intelligent robots, BMI, multi-robot systems, humanoid robots, learning, and evolution.