Sensations and Perceptions in Cicerobot, a Museum Guide Robot


Sensations and Perceptions in Cicerobot, a Museum Guide Robot

Antonio Chella, Irene Macaluso
Dipartimento di Ingegneria Informatica, Università di Palermo, Viale delle Scienze, building 6, 90128 Palermo, Italy

Abstract - The paper discusses a model of phenomenology based on the distinction between sensations and perceptions, and it proposes an architecture built on a comparison process between the effective sensations and the expected sensations generated by a 3D robot/environment simulator. Phenomenal perceptions are thus generated by the simulator, driven by the comparison process. The paper contributes to the consciousness research field by testing the added value of phenomenology on an effective robot architecture implemented on an operating autonomous robot, an RWI B21 offering guided tours at the Archaeological Museum of Agrigento, Italy.

I. INTRODUCTION

An autonomous robot operating in a real, unstructured environment interacts with a dynamic world populated with objects, people and, in general, other agents: people and agents may change their position and identity over time, while objects may be moved or dropped. In order to work properly, the robot should be able to grasp consciousness of the surrounding world, in the sense that it should have a phenomenal perception of its environment and it should create links between its sensory inputs and motor actions.

The phenomenological account of consciousness has been extensively analyzed in the literature. Block [4] introduced a clean distinction between P-Consciousness (Phenomenal Consciousness) and A-Consciousness (Access Consciousness). The first refers to the personal experience of a mental state, while the second refers to the functions of consciousness related to reasoning and the rational control of action and speech.

In recent years there has been growing interest in robot phenomenology, i.e., in the possibility that a robot experiences some form of sensations and perceptions. Aleksander [2,3] proposes five basic axioms of artificial phenomenology: to be an artificial phenomenological system, a robot should be able to build an inner depiction of its external perceptions, to pay attention to the relevant entities in its environment, to imagine, to predict and plan its actions, and to have some sort of emotional mechanism to evaluate the generated plans.

Taking into account several results from neuroscience, psychology and philosophy, summarized in the next section, we hypothesize that at the basis of visual phenomenology there is a continuous comparison process between the expectation of the perceived scene, obtained as a 2D projection of the 3D reconstruction of the scene, and the effective scene coming from the sensory input. The paper contributes to the consciousness research field by testing the added value of phenomenology on an effective robot architecture implemented on an operating autonomous robot, an RWI B21 offering guided tours at the Archaeological Museum of Agrigento (Fig. 1).

The paper is organized as follows: Sect. II presents some theoretical remarks about the proposed phenomenological account of consciousness; Sect. III describes the implemented robot architecture; Sect. IV discusses the planning-by-expectation capabilities of the implemented architecture; Sect. V presents the robot at work at the Archaeological Museum of Agrigento; Sect. VI discusses the approach with respect to the enactive theory of vision and presents some conclusions.

Fig. 1. The robot Cicerobot operating at the Archaeological Museum of Agrigento.

II. THEORETICAL REMARKS

Analyzing the phenomenological account of consciousness from an evolutionary point of view, Humphrey [15,16] makes a clean distinction between sensations and perceptions. Sensations are active responses generated by the body in reaction to external stimuli; they refer to the subject, they are about what is happening to me. Perceptions are mental representations related to something outside the subject; they are about what is happening out there.

Sensations and perceptions are two separate channels; a possible interaction between the two is that the perception channel may be recoded in terms of sensations and compared with the effective stimuli from the outside, in order to catch and correct perceptual errors. This process is similar to the echoing-back-to-source strategy for error detection and correction.

Gärdenfors [8] discusses the role of simulators in sensations and perceptions. He claims that sensations are immediate sensory impressions, while perceptions are built on simulators of the external world. A simulator receives as input the sensations coming from the external world, fills the gaps and may also add new information in order to generate perceptions. The perception of an object is therefore richer and more expressive than the corresponding sensation. In Gärdenfors' terms, perceptions are sensations that are reinforced with simulations.

The role of simulators in motor control has been extensively analyzed from the neuroscience point of view; see [32] for a review. Grush [13,14] proposes several cognitive architectures based on simulators ("emulators" in Grush's terms). The basic architecture is made up of a feedback loop connecting the controller, the plant to be controlled and a simulator of the plant. The loop is pseudo-closed in the sense that the feedback signal is not directly generated by the plant but by its simulator, which runs in parallel with the plant and receives as input the efferent copy of the control signal sent to the plant. In this case, the sensations are generated by the system as the output of the simulator. A more advanced architecture proposed by Grush, inspired by the work of Gerdes and Happee [10], adopts the basic schema of the Kalman filter: the residual correction generated by the comparison between the effective plant output and the emulator output is sent to the plant simulator via the Kalman gain. In turn, the simulator sends its inner variables as feedback to the controller. Here the sensations are the output of the simulator process and are of the same type as the sensory inputs, while the perceptions are the inner variables of the simulator. The simulator's inner variables are more expressive than raw sensations and may also contain information not directly perceived by the system, such as the forces occurring in the perceived scene, object-centred parameters, or the variables employed in causal reasoning [9]. Grush [12] also discusses the adoption of neural networks to learn the operations of the simulators, while Oztop et al. [27] propose more sophisticated simulator-learning techniques based on inferences about the theory of mind of others.

The hypothesis that the content of consciousness is the output of a comparator system is in line with the behavioural inhibition system (BIS) discussed by Gray [11] on the basis of deep neuropsychological analysis. The intermediate-level theory proposed by Jackendoff [19] also agrees with this hypothesis: according to Jackendoff, the correct level for phenomenal awareness is an intermediate one between the low sensory level and the higher conceptual level. Moreover, the intermediate level is characterized by the combination of top-down and bottom-up processing of information, as in the feedback loop of the architectures described above.
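To make the pseudo-closed loop concrete, the following minimal sketch (our illustration, not code from Grush or from the Cicerobot system) runs a one-dimensional plant and its emulator in parallel: the emulator predicts the sensation from an efferent copy of the command, the residual with respect to the effective sensation corrects the emulator's state through a fixed gain standing in for the Kalman gain, and the controller reads only the emulator's inner variable. The dynamics, noise level and gain are illustrative assumptions.

```python
import random

class Plant:
    """The real plant: 1-D position driven by the command, sensed noisily."""
    def __init__(self):
        self.x = 0.0

    def step(self, u):
        self.x += u
        return self.x + random.gauss(0.0, 0.5)  # effective (noisy) sensation

class Emulator:
    """Internal model run in parallel on an efferent copy of the command."""
    def __init__(self, gain=0.3):
        self.x_hat = 0.0   # inner variable: the 'perception'
        self.gain = gain   # fixed stand-in for the Kalman gain

    def step(self, u, sensation):
        self.x_hat += u                      # predict from the efferent copy
        residual = sensation - self.x_hat    # effective vs. expected sensation
        self.x_hat += self.gain * residual   # Kalman-style correction
        return self.x_hat

plant, emulator = Plant(), Emulator()
goal = 10.0
for _ in range(20):
    # Pseudo-closed loop: the controller feeds on the emulator's inner
    # variable, not on the raw sensation coming from the plant.
    u = 0.2 * (goal - emulator.x_hat)
    s = plant.step(u)
    emulator.step(u, s)
print(f"perceived position: {emulator.x_hat:.2f}")
```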
From a neurological point of view, Llinás [20] hypothesizes that the CNS is a reality-emulating system and that the role of sensory input is to set the parameters of the emulation. He also discusses [21] the role of this loop during dreaming activity.

An early implementation of a robot architecture based on simulators is due to Mel [25], who proposed a simulated robot moving in an environment populated with simple 3D objects. The robot is controlled by a neural network that learns the aspects of the objects and their relationships with the corresponding motor commands. It becomes able to simulate and to generate expectations about the object views to be expected given the motor commands, i.e., the robot learns to simulate the external environment. A subsequent system is MURPHY [26], in which a neural network controls a robot arm; the system performs off-line planning of movements by means of a learned internal simulator of the environment. Other early implementations of robots operating with internal simulators of the external environment are MetaToto [30] and the internalized plan architecture [29]; in both systems, a robot builds an inner model of the environment, reactively explored by simulated sensorimotor actions, in order to generate action plans. An effective robot able to build an internal model of the environment has been proposed by Holland and Goodman [17]: the system is based on a neural network controlling a Khepera minirobot able to simulate actions and perceptions and to anticipate perceptual activities in a simplified environment. Holland [18] speculates on the relationships between embodiment, internal models and consciousness.

III. ROBOT ARCHITECTURE

The robot architecture proposed in this paper is based on an internal 3D simulator of the robot and the environment that takes into account the previously discussed distinction between sensations and perceptions (Fig. 2). The Robot block is the robot itself, equipped with motors and a video camera. It is modelled as a block that receives as input the motor commands M and sends as output the robot sensations S, i.e., the scene acquired by the robot video camera.

Fig. 2. The robot architecture.

The Controller block controls the actuators of the robot and sends the motor commands M to the robot. The robot moves according to M and its output is the 2D pixel matrix S corresponding to the scene image acquired by the robot video camera. At the same time, an efferent copy of the motor commands is sent to the 3D Robot/Environment Simulator. The simulator is a 3D reconstruction of the robot environment together with the robot itself; it is an object-centred representation of the world in the sense of Marr [24]. The simulator receives as input the controller motor command M and simulates the corresponding motion of the robot in the 3D simulated environment. The output S′ of the simulator is the expected 2D image obtained as a projection of the simulated scene acquired by the simulated robot. In this sense, S is the effective image scene acquired by the robot and S′ is the image scene the robot expects to acquire according to the simulator. Both images are viewer-centred, in Marr's terms.

The acquired and the expected image scenes are compared by the comparator block c and the resulting error is sent back to the simulator to align the simulated robot with the real robot. At the same time, the simulator sends back to the controller all the relevant 3D information P about the robot position and its environment, in order to adjust the motor plans, as described below.

It should be noted that S and S′ are 2D image scenes; they are modal information that may be considered sensations, as they refer to what is happening to the robot, i.e., they are responses to the robot's visual stimuli referred to the robot itself. Instead, P is amodal information that may be considered the robot's perception, as it is a set of 3D information referred to what is happening out there. The image matrix S′ is the 2D recoding of the 3D perception used by the simulator to correct and align the simulated robot, while P is the interpretation of the robot's sensations by means of the 3D simulator.

The 2D reconstruction S′ of the scene is built by the robot as a projection of the 3D entities in the simulator and from the data coming from the robot sensors. This process constitutes the phenomenal experience of the robot, i.e., what the robot sees at a given instant. This kind of seeing is an active process, since it is based on a reconstruction of the inner percept in ego coordinates, but it is also driven by the external flow of information. It is the place in which global consistency is checked between the internal model and the visual data coming from the sensors. The robot acquires phenomenal evidence for what it perceives, and at the same time it interprets visual information according to its internal model; any discrepancy calls for a readjustment of the internal model. Furthermore, through this 2D image the robot has an immediate representation of the scene as it appears in front of it, useful for rapid decisions and reactive behaviours. This synthesized expected picture S′ of the world projects back into external space the geometrical information contained in the 3D simulator and, matched with the incoming sensor data, it accounts for a complete understanding of the perceptive conscious experience. There is no need for a homunculus that observes it, since it is the end result of an active reconstruction process which is altogether conscious to the robot, which sees according to its own interpretation.
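The loop of Fig. 2 can be summarized in schematic Python. The sketch below is our own rendering, not the actual implementation; the robot, simulator and controller interfaces (camera_image, render_expected_view, realign, world_state_3d, etc.) are hypothetical names for the blocks of the figure.

```python
import numpy as np

def comparator(S: np.ndarray, S_prime: np.ndarray) -> float:
    """Block c: pixelwise error between effective and expected image scenes."""
    return float(np.mean((S.astype(float) - S_prime.astype(float)) ** 2))

def control_cycle(robot, simulator, controller):
    """One pass of the sensation/perception loop of Fig. 2."""
    M = controller.next_motor_command()         # motor command
    robot.execute(M)                            # the real robot moves...
    simulator.execute(M)                        # ...and so does its simulated twin
    S = robot.camera_image()                    # sensation: 2D pixel matrix
    S_prime = simulator.render_expected_view()  # expected sensation S'
    error = comparator(S, S_prime)              # phenomenal mismatch
    simulator.realign(error)                    # align simulated robot with real one
    P = simulator.world_state_3d()              # amodal perception: 3D information
    controller.update_plan(P)                   # adjust the motor plans
```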
The phenomenal experience is therefore the stage in which the two flows of information, the internal and the external, compete for a consistent match. There is a strong analogy with the phenomenology of human perception: when one perceives the objects of a scene, one actually experiences only the surfaces in front of one, but at the same time one builds a geometric interpretation of the objects in their whole shape.

IV. PLANNING BY EXPECTATIONS

The proposed framework for phenomenal perception may be extended to allow the robot to imagine its own sequences of actions. In this perspective, planning may be performed by taking advantage of the representations in the 3D Robot/Environment Simulator. Note that we are not claiming that all kinds of planning must be performed within a simulator, but the forms of planning that are most directly related to perceptual information can benefit greatly from phenomenal perception in the described architecture.

The signal P is the perception of a situation of the world out there at time t. The simulator, by means of its expectation-based simulation engine (see below), is able to generate expectations of P at time t+1, i.e., it is able to simulate the robot action related to the motor command M generated by the controller and the relationship of the action with the external world. The preconditions of an action can be verified simply by geometric inspection of P at time t, whereas in the STRIPS planner [7] the preconditions are verified by means of logical inferences on symbolic assertions. Likewise, the effects of an action are not described by adding or deleting symbolic assertions, as in STRIPS, but by the situation resulting from the expectation of the execution of the action itself in the simulator, i.e., by the expected perception P′ at time t+1. The recognition of a certain situation by means of the perception P at time t can elicit the expectation of a subsequent situation and the generation of the expected perception P′ at time t+1.

We take into account two main sources of expectations. On the one side, expectations are generated on the basis of the structural information stored in a symbolic knowledge base of the simulator; we call such expectations linguistic. As soon as a situation is perceived which is the precondition of a certain action, the symbolic description elicits the expectation of the effect situation, i.e., it generates the expected perception P′ at time t+1. On the other side, expectations can also be generated by a purely Hebbian association between situations. Suppose that the robot has learnt that when it sees somebody pointing to the right, it must turn in that direction: the system learns to associate these situations and to perform the related action. We call this kind of expectations associative; a sketch of the mechanism is given below.
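A minimal sketch of how such associative expectations might be stored, assuming a simple co-occurrence table (our illustration, not the system's actual learning code):

```python
from collections import defaultdict

class AssociativeExpectations:
    """Hebbian co-occurrence table linking a situation to its successors."""
    def __init__(self):
        # strength[s][t] grows each time situation t follows situation s
        self.strength = defaultdict(lambda: defaultdict(float))

    def observe(self, situation, next_situation, rate=1.0):
        self.strength[situation][next_situation] += rate  # Hebbian reinforcement

    def expect(self, situation):
        """Recall the most strongly associated successor, if any."""
        successors = self.strength[situation]
        return max(successors, key=successors.get) if successors else None

assoc = AssociativeExpectations()
assoc.observe("visitor points right", "robot turns right")
assoc.observe("visitor points right", "robot turns right")
print(assoc.expect("visitor points right"))  # -> 'robot turns right'
```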

In order to explain the planning-by-expectation mechanism, let us suppose that the robot has perceived the current situation P0, e.g., it is in a certain position of a room, and let us suppose that the robot knows that its goal g is to be in a certain position of another room with a certain orientation. A set of expected perceptions {P′1, P′2, ...} of situations is generated by means of the interaction of both the linguistic and the associative modalities described above. Each P′i in this set can be recognized as the effect of some action related to a motor command Mj in a set of possible motor commands {M1, M2, ...}, where each action (and the corresponding motor command) in the set is compatible with the perception P0 of the current situation. The robot chooses a motor command Mj according to some criterion, e.g., the action whose expected effect has the minimum Euclidean distance from the goal g, or the one whose expected effect has the highest utility value. Once the action to be performed has been chosen, the robot can imagine executing it by simulating its effects in the 3D simulator; it may then update the situation and restart the mechanism of generation of expectations until the plan is complete and ready to be executed.

On the one side, linguistic expectations are the main source of deliberative robot plans: the imagination of the effect of an action is driven by the description of the action in the simulator KB. This mechanism is similar to the selection of actions in deliberative forward planners. On the other side, associative expectations are at the basis of a more reactive form of planning: in this latter case, perceived situations can reactively recall some expected effect of an action. Both modalities contribute to the full plan that is imagined by the robot when it simulates the plan by means of the simulator. When the robot becomes fully aware of the plan and of its actions, it can generate judgements about its actions and, if necessary, imagine alternative possibilities.
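The following sketch summarizes the planning loop just described, under our own assumptions: simulator.applicable_commands and simulator.imagine are hypothetical stand-ins for the geometric precondition check and for the generation of the expected perception P′, and perceptions are represented as plain vectors so that the Euclidean-distance criterion applies directly.

```python
import numpy as np

def plan_by_expectations(simulator, P0, g, max_steps=50, tol=0.1):
    """Greedy forward planning over simulated expected perceptions."""
    plan, P = [], P0
    for _ in range(max_steps):
        if np.linalg.norm(P - g) < tol:    # goal reached (in imagination)
            break
        # Motor commands whose preconditions hold in the current percept,
        # checked by geometric inspection rather than logical inference.
        candidates = simulator.applicable_commands(P)
        if not candidates:
            break
        # Expected perception P'(t+1) of each imagined action.
        expected = {M: simulator.imagine(P, M) for M in candidates}
        # Criterion: minimum Euclidean distance of the expected effect from g.
        M_best = min(expected, key=lambda M: np.linalg.norm(expected[M] - g))
        plan.append(M_best)
        P = expected[M_best]               # imagine executing it; iterate
    return plan
```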
V. THE ROBOT AT WORK

The presented ideas have been implemented in Cicerobot, an autonomous RWI B21 robot equipped with sonar, a laser rangefinder and a video camera mounted on a pan-tilt unit. The robot has been employed as a museum tour guide at the Archaeological Museum of Agrigento, Italy, offering guided tours in the Sala Giove of the museum (Fig. 1). A first session of experiments, based on a previous version of the architecture, was carried out from January to June 2005, and the results are described in [6,22]. The second session, based on the architecture described in this paper, started in March and ended in July 2006.

The task of museum guide is considered a significant case study [5] because it involves perception, self-perception, planning and human-robot interaction; the task is therefore relevant as a test bed for phenomenal consciousness. It can be divided into many subtasks operating in parallel and, at the same time, performed at the best of the robot's capabilities. Moreover, the museum is a dynamic and unpredictable environment. To summarize, the task of a museum guide is a hard one for the robot because it must tightly interact with an environment which is dynamic and unpredictable, and it must be able to rearrange its goals and tasks according to the environment itself.

Referring to the architecture in Fig. 2, the controller includes a standard behaviour-based architecture (see, e.g., Arkin [1]) equipped with standard reactive behaviours such as static and dynamic obstacle avoidance, search for free space, path following and so on. Fig. 3 shows the object-centred view from the 3D robot/environment simulator. As previously described, the task of this block is to generate the expectations of the interactions between the robot and the environment, at the basis of robot phenomenology. It should be noted that the robot also simulates itself in its environment. Fig. 4 shows a 2D image generated from the 3D simulator from the robot's point of view.

Fig. 3. The object-centred view from the 3D robot/environment simulator.

Fig. 4. The viewer-centred image from the robot's point of view.

In order to keep the simulator aligned with the external environment, the simulator engine is equipped with a stochastic algorithm, namely a particle filter (see, e.g., [31]). In brief, the simulator hypothesizes a cloud of expected possible positions of the robot. For each expected position, the corresponding expected image scene S′ is generated, as in Fig. 5 (right). The comparator then generates the error measure between the expected image scene and the effective image scene S, as in Fig. 5 (left). The error weights the expected position under consideration; in subsequent steps, only the winning expected positions, i.e., those that received the highest weights, are kept, while the others are dropped. Fig. 6 (left) shows the initial distribution of expected robot positions and Fig. 6 (right) shows the small cluster of winning positions. The simulator then receives the new motor command M related to the chosen action, as described above, and, starting from the winning hypotheses, it generates a new set of hypothesized robot positions. The filter iterates between these two steps until convergence, i.e., until the winning positions converge to a small set of moving points.
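In schematic form, one step of this alignment algorithm can be sketched as a standard particle filter (cf. [31]); the render function standing in for the simulator's 2D re-projection and the noise magnitudes are our own illustrative assumptions.

```python
import numpy as np

def particle_filter_step(particles, M, S, render):
    """particles: (N, 3) array of hypothesized robot poses (x, y, theta)."""
    # 1. Motion update: move every hypothesis by the motor command, plus noise.
    particles = particles + M + np.random.normal(0.0, 0.05, particles.shape)
    # 2. Measurement update: weight each hypothesis by the similarity between
    #    the expected image S' rendered from that pose and the effective image S.
    weights = np.empty(len(particles))
    for i, pose in enumerate(particles):
        S_prime = render(pose)             # expected image scene S'
        err = np.mean((S - S_prime) ** 2)  # comparator error
        weights[i] = np.exp(-err)          # high error -> low weight
    weights /= weights.sum()
    # 3. Resampling: keep the winning hypotheses, drop the low-weight ones.
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]
```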

Fig. 5. The 2D image output of the robot video camera (left) and the corresponding image generated by the simulator (right).

Fig. 5 shows the 2D image S output by the robot video camera (left) and the corresponding image S′ generated by the simulator (right) by re-projecting in 2D the 3D information from the current point of view of the robot. The comparator block c compares the two images of Fig. 5 and generates the error message related to the differences between the expected image scene and the effective scene. As previously stated, this error is sent back to the simulator.

Fig. 6. The operation of the particle filter: the initial distribution of expected robot positions (left) and the cluster of winning expected positions (right).

Fig. 7 (left) shows the operation of the robot during the guided tour. For comparison, we also tested the robot using the odometric information only; the results are shown in Fig. 7 (right). It should be noted that the proposed architecture, based on the phenomenological account, lets the robot operate more precisely and in a more satisfactory way. In fact, the described sensations-and-perceptions mechanism lets the robot be aware of its position and of its perceived scene; the robot is therefore able to adjust and correct its subsequent motion actions when needed. Moreover, the robot is able to imagine its future actions and is therefore able to choose the best motion actions according to the currently perceived situation.

VI. DISCUSSION AND CONCLUSIONS

A debate of recent years in the consciousness community regards the enactive theories of vision (see O'Regan and Noë [28]). The enactive approach would require that the robot accomplish its own tasks without any inner representation of itself or of the entities in the external environment: it is the environment that generates its own representation. In this way, when the robot needs some information about an entity in the environment, it simply retrieves the information by querying its own sensors by means of its attentive system. According to the enactive approach, the robot should be equipped with a pool of sensorimotor contingencies so that each entity of the environment, and also the robot itself, is not internally represented in any way; rather, the entity activates the related sensorimotor contingencies that define the interaction schemas between the robot and the entity itself. So, for example, a vase, a window, the visitors, etc. will activate the related robot sensorimotor contingencies.

While our proposed architecture is based on a 3D simulator as an inner representation of the external world, it has many points of contact with the enactive approach. In fact, the simulator has the role of a store of robot sensorimotor contingencies, which are in effect our expectations. Therefore, we agree with the enactive approach that robot phenomenology grows out of the mastery of these contingencies at the basis of the robot's task execution. The linguistic expectations are preprogrammed in the robot system by design (a sort of phylogenetic contingencies) and stored in the KB of the simulator. During its working life, the robot may acquire novel contingencies, and therefore novel expectations and novel ways of interacting with the environment, by means of Hebbian associations, as in the associative expectations. Moreover, the robot may acquire new ways of mastery, i.e., new ways to use and combine expectations, in order to generate its own goals, tasks and motivations (a sort of ontogenetic contingencies) [23].

During a standard museum visit, the robot activates its own expectations; from the point of view of phenomenology, the robot has a low degree of perceptual awareness. When something unexpected happens, for example a request from a visitor or the presence of a new object in the museum, the robot raises its degree of awareness and copes with the situation by mastering suitable expectations.

Fig. 7. The operation of the robot equipped with the full architecture (left) and with the reactive controller and odometric feedback only (right).

These unexpected situations leave a trace in the robot, allowing it to generate new expectations and/or new ways of mastering contingencies. In this way the robot, through its interaction with the environment, is able to modify its own goals or to generate new ones. A new object in the museum will generate new expectations related to the object and the subsequent modifications of the expectations related to the standard museum tour.

It should be noticed that the described model of robot perceptual phenomenology highlights open problems from the point of view of computational requirements. The architecture requires that the full 3D reconstruction of the dynamic scenes, the comparison with the scene perceived by the robot during its tasks, and the corresponding 2D rendering all be computed in real time. At the current state of the art in the computer vision and computer graphics literature, this requirement can be satisfied only in the case of simplified scenes with a few objects where all the motions are slow. However, we maintain that our proposed architecture is a good starting point for investigating robot phenomenology: as described in the paper, a robot equipped with artificial phenomenology performs complex tasks such as museum tours better and more precisely than an unconscious reactive robot.

ACKNOWLEDGMENT

The authors would like to thank Salvatore Gaglio, Peter Gärdenfors and Riccardo Manzotti for discussions about the proposed architecture, and the director and the staff of the Archaeological Museum of Agrigento for supporting the Cicerobot project.

REFERENCES

[1] R.C. Arkin, Behavior-Based Robotics. Cambridge, MA: MIT Press, 1998.
[2] I. Aleksander and B. Dunmall, "Axioms and tests for the presence of minimal consciousness in agents," Journal of Consciousness Studies, vol. 10, pp. 7-18, 2003.
[3] I. Aleksander, The World in My Mind, My Mind in the World. Exeter, UK: Imprint Academic, 2005.
[4] N. Block, "On a confusion about a function of consciousness," Behavioral and Brain Sciences, vol. 18, pp. 227-287, 1995.
[5] W. Burgard, A.B. Cremers, D. Fox, D. Hähnel, G. Lakemeyer, D. Schulz, W. Steiner, and S. Thrun, "Experiences with an interactive museum tour-guide robot," Artificial Intelligence, vol. 114, pp. 3-55, 1999.
[6] A. Chella, M. Frixione, and S. Gaglio, "Planning by imagination in Cicerobot, a robot for museum tours," in Proc. AISB 2005 Symposium on Next Generation Approaches to Machine Consciousness, University of Hertfordshire, Hatfield, UK, pp. 40-49.
[7] R.E. Fikes and N.J. Nilsson, "STRIPS: a new approach to the application of theorem proving to problem solving," Artificial Intelligence, vol. 2, pp. 189-208, 1971.
[8] P. Gärdenfors, How Homo Became Sapiens. Oxford: Oxford University Press, 2003.
[9] P. Gärdenfors, "Emulators as sources of hidden cognitive variables," Behavioral and Brain Sciences, vol. 27, p. 403, 2004.
[10] V.G.J. Gerdes and R. Happee, "The use of an internal representation in fast goal-directed movements: a modeling approach," Biological Cybernetics, vol. 70, pp. 513-524, 1994.
[11] J.A. Gray, "The contents of consciousness: a neuropsychological conjecture," Behavioral and Brain Sciences, vol. 18, pp. 659-722, 1995.
[12] R. Grush, Emulation and Cognition. Doctoral dissertation, Department of Cognitive Science and Philosophy, University of California, San Diego, 1995.
[13] R. Grush, "Wahrnehmung, Vorstellung und die sensomotorische Schleife" (English translation: "Perception, imagery, and the sensorimotor loop"), in Bewußtsein und Repräsentation, F. Esken and H.-D. Heckmann, Eds. Verlag Ferdinand Schöningh.
[14] R. Grush, "The emulation theory of representation: motor control, imagery, and perception," Behavioral and Brain Sciences, vol. 27, pp. 377-442, 2004.
[15] N. Humphrey, A History of the Mind. New York: Simon & Schuster, 1992.
[16] N. Humphrey, "How to solve the mind-body problem," Journal of Consciousness Studies, vol. 7, pp. 5-20, 2000.
[17] O. Holland and R. Goodman, "Robots with internal models: a route to machine consciousness?" Journal of Consciousness Studies, vol. 10, pp. 77-109, 2003.
[18] O. Holland, "The future of embodied artificial intelligence: machine consciousness?" in Embodied Artificial Intelligence, F. Iida et al., Eds. Berlin, Heidelberg: Springer-Verlag, 2004.
[19] R. Jackendoff, Consciousness and the Computational Mind. Cambridge, MA: MIT Press, 1987.
[20] R. Llinás, "Consciousness and the thalamocortical loop," International Congress Series, vol. 1250, pp. 409-416, 2003.
[21] R. Llinás and D. Paré, "Of dreaming and wakefulness," Neuroscience, vol. 44, pp. 521-535, 1991.
[22] I. Macaluso, E. Ardizzone, A. Chella, M. Cossentino, A. Gentile, R. Gradino, I. Infantino, M. Liotta, R. Rizzo, and G. Scardino, "Experiences with CiceRobot, a museum guide cognitive robot," in AI*IA 2005, S. Bandini and S. Manzoni, Eds. Berlin, Heidelberg: Springer-Verlag, 2005, pp. 474-482.
[23] R. Manzotti and V. Tagliasco, "From behaviour-based robots to motivation-based robots," Robotics and Autonomous Systems, vol. 51, pp. 175-190, 2005.
[24] D. Marr, Vision. New York: W.H. Freeman, 1982.
[25] B.W. Mel, "A connectionist learning model for 3-dimensional mental rotation, zoom, and pan," in Proc. 8th Ann. Conf. of the Cognitive Science Society, 1986, pp. 562-571.
[26] B.W. Mel, Connectionist Robot Motion Planning: A Neurally-Inspired Approach to Visually-Guided Reaching. Cambridge, MA: Academic Press, 1990.
[27] E. Oztop, D. Wolpert, and M. Kawato, "Mental state inference using visual control parameters," Cognitive Brain Research, vol. 22, pp. 129-151, 2005.
[28] J.K. O'Regan and A. Noë, "A sensorimotor account of vision and visual consciousness," Behavioral and Brain Sciences, vol. 24, pp. 939-973, 2001.
[29] D.W. Payton, "Internalized plans: a representation for action resources," Robotics and Autonomous Systems, vol. 6, pp. 89-103, 1990.
[30] L.A. Stein, "Imagination and situated cognition," MIT AI Memo No. 1277, 1991.
[31] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. Cambridge, MA: MIT Press, 2005.
[32] D.M. Wolpert and Z. Ghahramani, "Computational principles of movement neuroscience," Nature Neuroscience Supplement, vol. 3, pp. 1212-1217, 2000.