
From: MAICS-97 Proceedings. Copyright 1997, AAAI (www.aaai.org). All rights reserved.

Incorporating a Connectionist Vision Module into a Fuzzy, Behavior-Based Robot Controller

Douglas S. Blank and J. Oliver Ross
Department of Computer Science, University of Arkansas
232 Science-Engineering Building, Fayetteville, AR 72701, USA
dblank@comp.uark.edu, joross@entropy.uark.edu

Abstract

This paper describes the initial steps required to incorporate an artificial neural network vision module into an established fuzzy logic, behavior-based mobile robot and controller. This efficient and robust method is demonstrated to show its effectiveness in simple real-world environments.

Introduction

Although Zadeh defined the basic operations of fuzzy set theory over thirty years ago (Zadeh, 1965), fuzzy logic-based controllers have only recently become the technique of choice for many researchers in robotics. Fuzzy logic controllers allow high-level, human-designed plans to operate alongside immediate, reactive plans in a robust manner. The key to this successful line of research has been the development of the concept of behaviors.

Behaviors

Behaviors as a method of controlling robots were inspired by Brooks' subsumption architecture (Brooks, 1986) and later generalized by Maes' descriptions of behavior-based designs (Maes, 1993). Generally, a behavior is a simple, focused perceptual trigger and an associated action. An example might look like: IF door-on-left THEN turn-left. The door-on-left is the perceptual trigger that activates the robot to turn-left. The ideas of behaviors and fuzzy logic were a perfect match: behaviors provided an abstract interface for humans to enter plans, and fuzzy logic provided a method for dealing with inexact variable values. A robot under the control of a fuzzy logic, behavior-based rule (such as IF door-on-left THEN
turn-left) will act in an appropriate manner when door-on-left has a value somewhere between absolute true and absolute false. The hope is that general, robust functioning can be attained with just a few simple behaviors mediated by a fuzzy logic controller (Saffiotti, Ruspini, and Konolige, 1993).

Learning

Our goal was to integrate learning into this well-known methodology. Although some success has been reached using fuzzy logic controllers without learning, we believe that learning will be necessary as we require robots to process more complex sensory information and to perform more complex tasks. One obvious way to incorporate learning into a fuzzy control system is to attempt to learn the fuzzy logic rules. However, this is exactly where human expertise benefits a fuzzy logic controller the most. That is, humans are able to efficiently and effectively write the abstract rules, such as IF obstacle-on-right THEN turn-left.

[Figure 1: The mobile robot with laptop controller and CCD camera.]

We believe that a much better task

for a learning module is the recognition and categorization of the perceptual triggers (i.e., obstacle-on-right). Although obstacle-on-right may be gleaned directly from low-level sonar sensors, more sophisticated perceptions cannot. For instance, consider the goals of attempting to get close to dogs, but to avoid cats. The human module in the controller can easily create the rules (i.e., IF cat THEN reverse-motors). The hard portion of the problem is coming up with the appropriate truth values for the categories cat and dog.

Our solution to this problem was to insert an artificial neural network between the low-level sensors and the fuzzy logic controller. Artificial neural networks can be trained (via back-propagation (Rumelhart, Hinton, and Williams, 1986) or some other method) to read in low-level sensors and produce activation values on output nodes. The output values, representing possibly high-level categories (like "cat"), can be used directly (or nearly directly) in a fuzzy logic controller. This methodology is quite general. For example, a network can be trained to read in 2D visual images and sonar readings, and produce output nodes representing the presence or absence of a cat. In addition, such a network's output activation will typically drop gracefully as the input image looks less like a cat. Since the output value is treated as a likelihood value in the fuzzy logic controller, this is exactly the desired behavior.

Saphira, the fuzzy logic controller, is a flexible system designed to handle behaviors as described above. In addition to having purely perceptual triggers, Saphira can also post items to a global blackboard, similar to Erman, Hayes-Roth, Lesser, and Raj Reddy's Hearsay II model (1980). Saphira is capable of handling hierarchies of behaviors like those defined by Brooks (1986). But, more importantly, Saphira is also capable of smooth behavior blending.
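This pipeline (low-level sensor values in, category activations out, with the output activation then read as a fuzzy truth value) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the tiny 4-3-1 network, the toy "sensor signatures", and all names are our assumptions, standing in for the much larger vision network described below.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyNet:
    """Minimal feed-forward network trained with plain back-propagation
    (squared error).  Its output activation is read as the fuzzy truth
    value of a category such as "cat"."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, x):
        self.h = sigmoid(x @ self.w1 + self.b1)   # hidden activations
        return sigmoid(self.h @ self.w2 + self.b2)

    def train_step(self, x, target, lr=0.5):
        y = self.forward(x)
        d_out = (y - target) * y * (1.0 - y)              # output-layer delta
        d_hid = (d_out @ self.w2.T) * self.h * (1.0 - self.h)
        self.w2 -= lr * np.outer(self.h, d_out)
        self.b2 -= lr * d_out
        self.w1 -= lr * np.outer(x, d_hid)
        self.b1 -= lr * d_hid

# Toy "sensor signatures" stand in for real sonar/camera input.
cat, not_cat = np.array([1.0, 0.0, 1.0, 0.0]), np.array([0.0, 1.0, 0.0, 1.0])
net = TinyNet(4, 3, 1)
for _ in range(2000):
    net.train_step(cat, np.array([1.0]))
    net.train_step(not_cat, np.array([0.0]))

# The trained output activation serves as the truth value of "cat",
# usable directly in a rule such as IF cat THEN reverse-motors.
truth_cat = float(net.forward(cat)[0])
```

Because the output is a sigmoid activation in [0, 1], it degrades gracefully as the input drifts away from the trained pattern, which is the property the fuzzy controller relies on.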
Behavior blending allows for robust robot control in light of competing perceptual triggers. For instance, if two competing rules were both being activated by their perceptual triggers, Saphira can often produce a behavior that is an intelligent combination of the two. Our goal was to create an artificial neural network which could be trained in a general manner to categorize objects in the visual field. The output of the network could then be used in abstract fuzzy rules in Saphira. (See Konolige, Myers, Ruspini, and Saffiotti, 1996, for a detailed description of Saphira.)

The Robot and Controller

For this research, we chose the low-cost Pioneer 1 mobile robot and the Saphira fuzzy logic, behavior-based controller from ActivMedia, Inc., in association with Real World Interface, Inc. Our version of the Pioneer 1, called RazorBot, has seven sonar sensors and a slightly wide-angle CCD color camera mounted on the front and center of the robot (see Figure 1). RazorBot has an on-board MC68HC11-based microcontroller. RazorBot weighs 17 pounds and measures 45 cm from stem to stern, 36 cm across the wheels, and 23 cm to the top of the console. It has enough battery power to run for a few hours untethered. For these experiments, a remote PC ran the fuzzy logic controller and captured video signals sent from RazorBot via radio transmitters and receivers. The video was captured by a Matrox Meteor framegrabber on the PC (see Figure 2). The PC controlled the robot by radio modems.

[Figure 2: Image captured with the Meteor video framegrabber as sent from RazorBot.]

Connectionist Vision Module

Specifically, our task was to recognize a small purple moving ball in the visual scene, and to help steer RazorBot toward the ball. (This ball is often called a Squiggleball because of its apparent random movements.) After some experimentation, we decided that we needed to keep

the visual resolution as small as possible while still allowing enough information for object recognition. We settled on an image that was 44 x 48 pixels, with three color values per pixel (RGB). That provided an image with 6336 data points. This was our input into the network (see Figure 3). The output was a simple categorization of which quadrant the ball was in; therefore, four output units were created, one for each quadrant. A feed-forward, fully connected network was then created, and back-propagation was chosen as the training method (Rumelhart, Hinton, and Williams, 1986) (see Figure 4).

[Figure 3: An image as seen from the robot's reduced color and pixel perspective (the ball is in the lower center of the scene).]

Before integration into the fuzzy logic controller, we ran many training tests attempting to learn the classification task. We captured 100 images as seen from the robot, and trained various networks on 90 images, keeping 10 to test generalization ability. Initially, we used a real number between 0 and 1 for each of the 6336 RGB data points. However, no configuration of hidden units and parameter settings allowed the network to learn the task. By rounding each data point off to 0 or 1, the network was then able to learn the task. The final network contained 10 hidden units. Momentum was set to 0.1, and all learning rates (unit and bias) were set to 0.1. The network learned to identify the quadrant that contained the ball in 147 sweeps through the 90-image corpus.

[Figure 4: The connectionist network (6336 input, 10 hidden, and 4 output units) was given images and trained to produce the quadrant containing the ball.]

To test generalization, the remaining 10 untrained images were tested, and the network got 80% correct. Further training produced better performance on the training set, but worse performance on the generalization set. (This is a familiar occurrence in training back-propagation networks, called over-training.) Now that the network was trained to produce appropriate output categorizations, it was ready to be worked into the fuzzy logic controller.

Integration

To integrate Saphira and the connectionist vision module (see Figure 5), we needed a method to create fuzzy variables that could be used to control RazorBot's motors. We decided to create a fuzzy variable for each quadrant. Each variable would hold the value indicating the network's estimation of the likelihood that that quadrant contained the ball. One measure of that likelihood was simply the output activation of the associated output unit. Let q1 be the activation of the unit representing the presence of a ball in quadrant 1. Then one can write behaviors of the form:

IF (q1 OR q3) THEN Turn Left
IF (q2 OR q4) THEN Turn Right

where q1 and q3 are the quadrants on the left side of the visual image.

[Figure 5: Schematics of processing. The on-board video camera sends an image to the radio receiver. The image is grabbed and sent to the artificial neural network. The ANN propagates the activations through its weights. The outputs are sent to the fuzzy logic controller. Finally, the controller decides the next actions and sends the commands back to the robot via radio waves.]

Simply using the output values of the network has the advantage of being easy to compute. However, the output activations do not necessarily correspond to likelihood values. We found it necessary to normalize the output activations prior to using them in Saphira. For example, the output activations for q1, q2, q3, and q4 might be 0.1, 0.0, 0.1, and 0.4, respectively. Although the network produced only a 0.4 activation on the unit associated with q4, this is by far the largest activation of the four output units. However, this activation is not great enough to produce the level of confidence needed to move the robot in Saphira. Scaling the values gives 0.2, 0.0, 0.2, and 0.8, which would be more likely to produce the desired results.

After scaling the output activations, we plugged a routine into Saphira to grab an image and propagate the activations through the network. We then wrote a simple behavior, similar to that outlined above, so that q1 through q4 were set to the scaled versions of the associated output units. The network is small enough, and the frame-grabber fast enough, to allow for many propagations through the network per second. The integrated system can successfully follow a ball rolling on the floor, while simultaneously avoiding collisions with walls and other obstacles.

Conclusions

Our initial results are promising; however, critical analysis remains to determine how this methodology compares to other techniques.
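The scaling step and the two turn rules can be sketched as below. The function names and the blended-command convention are our own illustrative choices, not Saphira's API; the numeric example mirrors the one in the text.

```python
def scale_activations(acts, ceiling=0.8):
    """Scale quadrant activations so the largest reaches `ceiling`
    (e.g., 0.1, 0.0, 0.1, 0.4 -> 0.2, 0.0, 0.2, 0.8).  Note the
    failure mode discussed in the text: with no ball present, the
    largest (noise) activation is still pushed up to the ceiling."""
    peak = max(acts)
    if peak == 0.0:
        return list(acts)
    k = ceiling / peak
    return [a * k for a in acts]

def fuzzy_or(a, b):
    # Standard fuzzy disjunction: truth of (A OR B) is max(A, B).
    return max(a, b)

def steering(q1, q2, q3, q4):
    """Evaluate IF (q1 OR q3) THEN Turn Left and IF (q2 OR q4) THEN
    Turn Right, returning a blended turn command in [-1, 1]
    (negative = left) -- a simple stand-in for Saphira's blending."""
    left = fuzzy_or(q1, q3)    # ball somewhere on the left half
    right = fuzzy_or(q2, q4)   # ball somewhere on the right half
    return right - left

q1, q2, q3, q4 = scale_activations([0.1, 0.0, 0.1, 0.4])
cmd = steering(q1, q2, q3, q4)   # positive: turn right, toward q4
```

Blending the two rules by subtracting their truth values means that equally strong left and right triggers cancel, so competing triggers degrade gracefully rather than producing an arbitrary winner.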
In addition, normalization of the output activations is only one method to link network activations to likelihood values. Our simple method does not work in all cases; for instance, if a ball is not present, the four quadrant values are still normalized. Clearly, some type of threshold regulator is needed.

Although the network did correctly categorize 80% of the novel images in the test of generalization, we would, of course, like much better performance. We believe that a much richer training set would help with generalization, as well as create output activations that would not need to be normalized. A semi-automatic method of creating a training corpus is being investigated. This method would simply require a human to "drive" RazorBot around, making appropriate moves in response to objects in the visual field. Later, the robot will replay the images and movements, learning to associate one with the other.

Currently, the visual image is probably overly boiled-down. At 44 x 48, the resolution is at the limits for even ball recognition. More complex recognition tasks may require a finer resolution. However, recall that each RGB color value was rounded off to 0 or 1. This in effect gives only 8 different colors (2^3); allowing three different values for each color data point (i.e., 0, 0.5, and 1) effectively increases the colors to 27 (3^3). We believe that this should also help recognition and generalization, while keeping the task learnable.

Finally, we would like to add a self-tuning learning component to the fuzzy logic portion of the system. This would allow the system to automatically adapt to neural network-based fuzzy variables.

References

Brooks, R. (1986). A Robust Layered Control System for a Mobile Robot. IEEE Journal of Robotics and Automation, March 1986, 14-23.

Erman, L., Hayes-Roth, F., Lesser, V. and Raj Reddy, D. (1980). The Hearsay II speech-understanding system: Integrating knowledge to resolve uncertainty. Computing Surveys, 28.

Konolige, K., Myers, K., Ruspini, E. and Saffiotti, A. (1996). The Saphira Architecture: A Design for Autonomy. Journal of Experimental and Theoretical Artificial Intelligence, Special Issue on Architectures for Physical Agents.

Kosko, B. (1992). Neural Networks and Fuzzy Systems. Englewood Cliffs, NJ: Prentice Hall.

Maes, P. (1993). Behavior-Based Artificial Intelligence. In Proceedings of the Fifteenth Annual Conference of the Cognitive Science Society, 74-83, Boulder, CO.

Rumelhart, D., Hinton, G. and Williams, R. (1986). Learning internal representations by error propagation. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Vol. 1 (J. McClelland and D. Rumelhart, eds.). Cambridge, MA: Bradford Books/MIT Press.

Saffiotti, A., Ruspini, E. and Konolige, K. (1993). Blending Reactivity and Goal-Directedness in a Fuzzy Controller. Proceedings of the Second IEEE International Conference on Fuzzy Systems, 134-139, San Francisco, CA.

Zadeh, L. (1965). Fuzzy Sets. Information and Control, 8: 338-353.