COGNITIVE MODEL OF MOBILE ROBOT WORKSPACE

Prof. dr. sc. Mladen Crneković, University of Zagreb, FSB, I. Lučića 5, 10000 Zagreb
Prof. dr. sc. Davor Zorc, University of Zagreb, FSB, I. Lučića 5, 10000 Zagreb
Mladen Sučević, MEng, University of Zagreb, FSB, I. Lučića 5, 10000 Zagreb

Keywords: mobile robot, space perception, workspace cognitive model

Abstract

This paper aims to answer the question of which minimal set of topologic marks and their properties is sufficient for independent path planning of a robot from a start to a goal position. The answer is supplied by a space model classified as a cognitive space model because it resembles a human's model of space. The cognitive space model can be divided into parts called districts. At any moment the mobile robot is able to describe what it sees from its current position. This description consists of the identification and properties of objects and events. In addition, global space properties and events are calculated. The object estimated as closest to the robot has a special status, because a collision with the closest obstacle is the most probable danger. Any visible object/obstacle can be chosen as a focus object and thus given special attention. During the motion the goal is continuously monitored and its status is updated.

1. MOTIVATION

A human's perception of the surrounding space is not expressed in numbers. We describe distances, lengths and relations in terms that are imprecise and ambiguous, yet these descriptions reflect the "real" world very well. For example, to describe a position and orientation, a person does not use coordinate systems. The position and orientation of an object are described by its relation to the other objects that surround it, and the description depends on the position and orientation of the person producing it. The main benefit of this approach is that complex navigation devices are not necessary; as far as we know, biological systems have never developed such devices. Instead, a cognitive model of the world has been developed. Although the cognitive model is not as accurate as a numerical approach, it can be realized in a much simpler way and is very effective, as human evolution shows.

Anyone who wants to program a mobile robot today must have specialist knowledge, because a robot's perception of the world that surrounds it is quite different from a human's perception of the same world. As long as this difference exists, the application of robots will be limited. Robots will work in manufacturing as industrial robots, where it is possible to cover the high costs of their maintenance and programming, and in special applications (military, underwater research, space and planetary research), where the economic component is not dominant. But for now it is not feasible to use robots in homes, where budgets and expertise are limited. If you have a robot and ask an average person to program it in a variant of BASIC, you cannot expect acceptance and understanding if the task is expressed as "Go to the coordinate (1572, 2390), then move on the radius 6327 around the center (4972, 3150), with referent coordinate system RKS1784X2." But if the same task is expressed as "Come to the entrance of the house behind you, and then go around the house", it will be acceptable and understandable to the majority of ordinary people.
Of course, a task defined in such a way is very imprecise and admits a great number of solutions. During its execution, other processes will be activated, such as recalling previous experience of performing the same or a similar task, in order to compare and control the process. This paper investigates how to describe a robot's environment so that it resembles a human's description, in order to accomplish a symbolic goal of robot movement. The description of the robot's environment is cognitive, with no exact definitions, scales or coordinate systems. The main hypothesis is that it is possible to define a finite number of environment properties that enable the identification of the robot's position and orientation (space description) and the planning and control of the robot's motion to a desired goal.

2. PREVIOUS WORK

Perhaps the best overview of this area is given in [1]. On 40 pages, with 121 literature references, the authors systematically explain the ideas of space mapping. The basic classification distinguishes metric maps and topologic maps. In past work, most researchers preferred metric maps, for two reasons. The first is that path-searching methods were adapted to the metric map of space; the second is that the measuring devices on the robot gave numerical measurement results which could easily be included in the metric model of the space.

Although the metric description of space has not been abandoned, there have been different answers to the question: "How do we keep in mind the world we are moving in?" Some ideas come from the animal world. By observing animals and humans we can be quite sure that even very simple organisms build models of the world in which they live and move. In [2] this is called an "internal world model of navigation", and in [4] it is called a "mental space". Because these models are realized biologically, it can be concluded that they belong to the class of topological models. Instead of defining each space element, only characteristic points and their interconnections are marked. Paths are defined by the sequence and types of movements from one place to another. In that way, a simple and short description of moving through the space is obtained. The main advantage of topological space mapping is that we need neither metric sensors nor the conversion of their results into a referent coordinate system. In fact, we do not need any referent coordinate system at all; we need only referent topologic marks. What we do need are methods and procedures that can extract topologic properties from the available information (most often visual data) and relate them to a space model. Furthermore, the question is what constitutes a set of good topologic marks that describe a space well and are at the same time free from unnecessary details and redundancy.

In [3] additional artificial marks are added to the world. These marks are used only for identification and are not a part of any natural process. Although this approach seems impractical, if we look at the world around us we will see that it is full of such special marks (traffic signs, finger-posts, advertising signs, etc.). But where such special marks do not exist, and sometimes they cannot exist, we need to use objects in the space (walls, doors, passages, etc.) as topologic marks. The work [5] introduces an idea called "view-based navigation". The decision on a movement is based on the information from the actual view instead of on a world model (a map). Several pieces of basic information are extracted from the current picture, and according to their mutual relationship and their relation to the goal (estimation of angles and distances), the next step of the movement is decided. The main motive for this kind of reasoning is the fact that even creatures with obviously limited mental abilities (insects, for example) can plan, move and reach a simple goal. The problem of car driving is dealt with in [6]. The presented algorithm is divided into six steps; the fifth step deals with the identification of objects, their properties and events. The whole space description is organized for the process of car driving, therefore adequate space objects are selected (vehicles, traffic signs, horizontal signs, etc.).
It is estimated that for successful car driving a space model should have up to 1,000 elements, and the number of object properties should be up to 7,000. The whole process of car driving then has approximately 1,000 situations and 10,000 model states.

3. INTRODUCTION

The purpose of this paper is to find a set of environment properties that permits the identification of a mobile robot's position and orientation, with the intention of planning the robot's path to a goal. Since the physical realization of that task would require massive financial investment and would lead to additional technical problems beyond the limits of this research, the method is verified by process simulation. To simplify the simulation, the whole process is verified in two-dimensional space. Obstacles are set as closed polygons; importantly for path solving, these polygons may be concave. Details on obstacles are defined by the polygon vertices and sides and can be identified in the process of environment recognition. In a technical realization, a vision system would have the task of scanning the environment and recognizing the objects and the details on them. The mobile robot is circular, with dimensions comparable to the obstacles and to the free passages among them. The robot can move forward and backward, left and right, and turn left and right. During the robot movement simulation, the possibility of a collision in the next robot step is checked, and the step is not allowed if a collision is predicted. The robot movement simulation is carried out in an adequate coordinate system by the methods of numerical mathematics.
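To make this setup concrete, the following minimal sketch shows one way the polygonal obstacles and the per-step collision prediction could be realized. Python is used for illustration only; the names (Obstacle, step_would_collide, etc.) are illustrative assumptions, not the authors' simulation code.

```python
# Minimal sketch of the simulated world described above; names and
# structure are assumptions, not the authors' implementation.
import math

class Obstacle:
    """A closed polygon given by an ordered list of vertices; the last
    vertex is implicitly connected to the first, so concave forms are
    allowed."""
    def __init__(self, ident, points):
        self.ident = ident      # local identification sign (a number)
        self.points = points    # [(x, y), ...] in simulation units

    def sides(self):
        """Yield the polygon sides; side i runs from vertex i to vertex i+1."""
        n = len(self.points)
        for i in range(n):
            yield self.points[i], self.points[(i + 1) % n]

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def step_would_collide(next_pos, robot_radius, obstacles):
    """Collision prediction for the next step: the round robot may not
    touch any polygon side (checking sides is enough as long as the
    robot never starts inside an obstacle)."""
    return any(point_segment_distance(next_pos, a, b) <= robot_radius
               for obs in obstacles for a, b in obs.sides())
```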

Figure 1: Space where the robot moves, set by obstacle identification and divided into districts

4. SPACE DESCRIPTION

For the simulation purpose, the space where the robot moves must be known. If there is some obstacle in the robot's path which is not known to the robot, the robot will see such an obstacle but will not recognize it. Each obstacle has its own local identification sign (in this case simply a number), but any name can be assigned to each number (car, house, tree, etc.). An obstacle is defined as a polygon by a set of points. These points are connected in the order in which they are given and make a closed form (the last one is connected to the first one), so a polygon may have concave parts. Concave forms on objects make the environment description and path finding more difficult. Figure 1 shows a typical environment in which the robot will move. The robot movement always starts from the figure center (the home position), which is also the origin of the coordinate system used for the simulation; the robot's initial state is therefore known.

The goal can be defined in two ways. The simpler way is by some object or some detail on it; for example, the goal can be vertex 3 on obstacle 5, or side 1 on object 8. But the goal can also be defined in an abstract way, e.g. the space "between" obstacles 5 and 8; in that case it is much more difficult to decide that the goal has been reached. The space objects/obstacles and the goal are set in the space configuration file.

During the robot movement simulation it is necessary to check the possibility of a collision between the robot and any obstacle, and this check has to be performed before the next step is executed. The collision is checked against the exact mathematical model, not against the picture representing the model. If there are many obstacles in the space, collision checking becomes expensive. To make the task easier, the whole space is divided into smaller units called districts. The district dimension is constant and is a compromise between the amount of required calculation and the computer memory. It is necessary to determine which obstacles belong to each district; the robot then checks for collision only with the obstacles belonging to the same district as the robot itself. Additional collision checking is performed with the obstacles belonging to the districts that surround the robot's district, because of transitions from one district to another.
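The district bookkeeping can be sketched as follows, reusing the Obstacle class from the previous sketch. The district dimension and all helper names are assumptions; obstacles are registered in every district their bounding box touches, and the collision check then runs only over the obstacles returned by nearby_obstacles.

```python
# Sketch of the district grid that keeps collision checks cheap; the
# district dimension and all names are assumptions.
from collections import defaultdict

DISTRICT = 200  # constant district dimension (a calculation/memory compromise)

def district_of(x, y):
    return (int(x // DISTRICT), int(y // DISTRICT))

def build_district_index(obstacles):
    """Register each obstacle in every district its bounding box touches."""
    index = defaultdict(list)
    for obs in obstacles:
        xs = [p[0] for p in obs.points]
        ys = [p[1] for p in obs.points]
        for dx in range(int(min(xs) // DISTRICT), int(max(xs) // DISTRICT) + 1):
            for dy in range(int(min(ys) // DISTRICT), int(max(ys) // DISTRICT) + 1):
                index[(dx, dy)].append(obs)
    return index

def nearby_obstacles(index, robot_pos):
    """Obstacles in the robot's district and in the eight surrounding
    districts, covering transitions from one district to another."""
    cx, cy = district_of(*robot_pos)
    seen, result = set(), []
    for ox in (-1, 0, 1):
        for oy in (-1, 0, 1):
            for obs in index.get((cx + ox, cy + oy), []):
                if obs.ident not in seen:
                    seen.add(obs.ident)
                    result.append(obs)
    return result
```

The per-step check of Section 3 then runs over nearby_obstacles(index, robot_pos) instead of over the whole obstacle list.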

Figure 2: Description of the robot situation

5. SITUATION DESCRIPTION

Figure 2 shows a typical situation in the robot space. The robot is always in the figure center, and the figure always shows the space in front of the robot. As the robot moves, this "front" space changes too, so the figure always shows what the robot sees of the entire space. The robot movement is defined and limited by its design and dynamics; in this case the robot has two independently driven wheels and one caster wheel.

The robot can see its environment up to some distance from its current position. The viewing range is described by several values shown in the group "View range" in Fig. 2; the distance range values are Close, Mid, Far and Very Far. The amounts of these and other parameters are set in the space configuration file and are expressed relative to the robot dimension. The viewing range can be changed during the simulation, and the range marks can be shown or hidden with the "Draw Ranges" box. The simulation parameters, such as the robot coordinates (x, y), orientation (fi) and geographic orientation mark (Dir), are shown in the ROBOT group, where one can also see in which district the robot currently is (District) and which obstacles are within its viewing range (I see).

After each robot step, the environment is identified by a kind of radar scan. This scanning is a substitute for the vision system of a real mobile robot. The scanning proceeds from the right to the left side of the robot's viewing area in increments of 5°. The scanning distance is defined and limited by the "View range" setting, Fig. 3. The result of the environment scanning is, for each direction, the obstacle distance, the obstacle identification (name) and the identification of the obstacle detail (side mark). At the bottom of Fig. 2 one can see the result of the object/obstacle identification for each viewing direction. The accuracy of the distance estimation depends on the real distance: the closer an obstacle is, the better its distance estimate, and vice versa. This is how a human estimates distances.
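The radar-like scan can be sketched as ray casting against the polygon sides. The 5° increment and the per-direction result (distance, obstacle identification, side mark) follow the text; the geometry helper, the field-of-view parameter and all names are assumptions. The Obstacle class of the earlier sketch is reused.

```python
# Sketch of the radar scan that substitutes for a vision system; all
# names are assumptions. Angles follow the paper's convention: 0 is in
# front of the robot, left is positive, right is negative.
import math

SCAN_STEP_DEG = 5  # angular increment of the scan

def ray_segment_hit(origin, angle, a, b):
    """Distance along the ray (origin, angle) to segment a-b, or None."""
    ox, oy = origin
    dx, dy = math.cos(angle), math.sin(angle)
    (ax, ay), (bx, by) = a, b
    ex, ey = bx - ax, by - ay
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:          # ray parallel to the side
        return None
    t = ((ax - ox) * ey - (ay - oy) * ex) / denom   # along the ray
    u = ((ax - ox) * dy - (ay - oy) * dx) / denom   # along the segment
    return t if t >= 0 and 0.0 <= u <= 1.0 else None

def scan_environment(robot_pos, robot_heading, fov_deg, view_range, obstacles):
    """Scan from the right edge (negative directions) to the left edge
    of the viewing area. Each direction yields (distance, obstacle
    identification, side mark), or None when nothing is in range."""
    readings = []
    half = fov_deg // 2
    for off in range(-half, half + 1, SCAN_STEP_DEG):
        angle = robot_heading + math.radians(off)
        best = None
        for obs in obstacles:
            for side_id, (a, b) in enumerate(obs.sides(), start=1):
                d = ray_segment_hit(robot_pos, angle, a, b)
                if d is not None and d <= view_range and (best is None or d < best[0]):
                    best = (d, obs.ident, side_id)
        readings.append(best)
    return readings
```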

Figure 3: The environment scanning

When all the data from the scanning process have been collected, a list of the obstacles that are "in front" of the robot, i.e. that it can see, is formed. Additionally, the data from the environment scanning are used to build the properties of each object and to define the properties of the whole situation at the global level. In Figure 3 some of these properties can be seen for three objects.

The first specific object/obstacle is the nearest one, because we always pay attention to it: the possibility of colliding with the nearest obstacle is the greatest. The properties of the nearest obstacle are shown in the group labeled "NEAREST OBSTACLE" and are updated automatically at each robot step. As we move, we almost always also pay attention to some special object in the environment, whether we use it as a global mark or as a finger-post to the goal. The identification and properties of such an obstacle are shown in the group labeled "FOCUS OBSTACLE"; it can be selected at any time by clicking it with the left mouse button. The third specific object in the space is the goal; its properties are shown in the group labeled "GOAL". The goal reaching status (the last line in the "GOAL" group) can take one of the following values: NOT VISIBLE, OBJECT VISIBLE, VISIBLE FACE (i.e. the defined detail on the goal object is visible) and REACHED.
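A possible realization of this per-step situation update, working on the readings produced by the scan sketch above, is given below. The mapping to the goal status values is our reading of the text (REACHED would additionally require a closeness test), so the function is an assumed sketch, not the authors' implementation.

```python
# Sketch of the per-step situation update built from scan_environment()
# above; the goal-status mapping is an assumed reading of the text.
def update_situation(readings, goal_id, goal_face):
    """readings: list of (distance, obstacle id, side id) or None."""
    hits = [r for r in readings if r is not None]
    visible = sorted({obs_id for _, obs_id, _ in hits})     # the "I see" list
    nearest = min(hits, key=lambda r: r[0], default=None)   # updated every step
    goal_hits = [r for r in hits if r[1] == goal_id]
    if not goal_hits:
        status = "NOT VISIBLE"
    elif any(side == goal_face for _, _, side in goal_hits):
        status = "VISIBLE FACE"  # REACHED would also need a closeness test
    else:
        status = "OBJECT VISIBLE"
    return {"visible": visible, "nearest": nearest, "goal_status": status}
```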

6. OBJECT AND SITUATION PROPERTIES

Each object in the robot's viewing space can be described by a set of properties. These properties express the relation between the robot and the object/obstacle. In a real situation they would be derived from vision system information, realized by a system of independent agents. Table 1 defines the set of object properties with respect to the mobile robot's position and orientation; properties marked * also carry a numeric estimation. The name of each property explains its sense, but the property "Face" needs additional explanation. If the goal is set only by an identification, it is too general; some detail (mark) on the goal object must be set to make the goal definition more precise. Each object/polygon consists of sides and vertices, and these details are used for the more precise goal definition. Because polygon vertices are much smaller than polygon sides (vertices are points), it is very difficult to detect them by the scanning approach of Fig. 3; the only details on an object that can practically be detected are the polygon sides. Each polygon vertex is defined by a number, and each side carries the same number as the vertex before it. An example of polygon detail identification (for a polygon with no concave parts) is shown in Fig. 4. Direction 0 means "in front of the robot"; directions to the left are counted as positive and directions to the right as negative.

Table 1: Object properties and their sets of possible values (* = property with a numeric estimation)

Object property    Set of values
Identification     <name>
Distance *         very close, close, medium, far, very far
Direction *        very left, left, front, right, very right, back left, back, back right
Size *             very small, small, medium, big, very big, surrounds us
Face               <object details>
Visibility         all, left side, central part, right side, by parts, not visible
FOV position       on left edge, on right edge, on left and right edge, not on edge
Overlapped         <obstacle identification>
Left pass *        no pass, very narrow, narrow, medium, wide, open
Right pass *       no pass, very narrow, narrow, medium, wide, open
Approach           free, attention, not possible
Speed              rest, very slow, slow, medium fast, fast, very fast
Speed direction    toward us, toward us right, toward us left, from us, from us right, from us left
Collision danger   no danger, small, medium, great, very great

Figure 4: Detail identification on a polygon. The scan table of the figure pairs each scan direction index with the side seen in that direction:
Face:      2  2  2  2  3  3  3  3  3  3  3  3  3
Direction: 5  4  3  2  1  0 -1 -2 -3 -4 -5 -6 -7

From this kind of description it is possible to determine, for example, the direction of vertex 3: it lies where side 2 changes to side 3, i.e. in the direction 1.5 × 5° = 7.5° (5° is the scan resolution). The direction of a whole side is determined by the direction of its geometric center. If a polygon side is too long, its geometric center describes its direction too roughly; in such a case additional vertices can be placed on the side, dividing it into smaller (shorter) parts more suitable for a precise direction definition. For determining the direction of any currently visible object detail, a function/agent called FindDirection(Obstacle, Mark, Type) can be used. From the object detail identification it is also possible to find the directions of the left and right edges of the object; this information is important if we decide to move past the object. According to Fig. 4, the left edge of the obstacle is in the direction 5.5 × 5° = 27.5°, and the right edge is in the direction -7.5 × 5° = -37.5°.
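A FindDirection-style agent operating on the scan table of Fig. 4 can be sketched as follows. The function name follows the paper; its body and the (direction index, side mark) data layout are assumptions. For the data of Fig. 4 it reproduces the worked value above: vertex 3 at 7.5°.

```python
# Sketch of a FindDirection-style agent; face_row lists the scan result
# as (direction index, side mark) pairs, left to right, as in Fig. 4.
SCAN_STEP_DEG = 5

def find_direction(face_row, mark, kind):
    """kind == "side":   direction of the side's geometric center.
    kind == "vertex": direction where the previous side changes to side
    number `mark` (vertices are points and cannot be scanned directly)."""
    if kind == "side":
        idxs = [i for i, s in face_row if s == mark]
        return (sum(idxs) / len(idxs)) * SCAN_STEP_DEG if idxs else None
    if kind == "vertex":
        for (i1, s1), (i2, s2) in zip(face_row, face_row[1:]):
            if s1 != s2 and s2 == mark:
                return ((i1 + i2) / 2) * SCAN_STEP_DEG
    return None

# For the scan table of Fig. 4 this reproduces the worked example:
row = list(zip(range(5, -8, -1), [2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3]))
assert find_direction(row, 3, "vertex") == 7.5   # vertex 3 at 1.5 x 5 deg
```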

Table 2: Properties and events of the global situation

Global property    Set of values
View obstacles     <obstacle identifications>
Nearest obstacle   <obstacle identification>
Focus obstacle     <obstacle identification>
Goal obstacle      <obstacle identification>

Global event       Set of values
Goal status        not visible, visible object, visible face, reached
Objects appear     <obstacle identifications>
Objects disappear  <obstacle identifications>

If the robot wants to know its current position, the estimate is derived from the known space model. In addition to recognizing the relation between the mobile robot and each object, it is necessary to know the relations between the obstacles that the robot sees. Some of these relations can be read from the obstacle properties (Visibility, Left pass, etc.), but this is not enough. That is why relation properties between two or more obstacles are also defined, together with the properties and events of the global situation, Table 2.

7. EXAMPLE

Let the robot space be defined by 21 objects, as shown in Fig. 1. All definitions of viewing ranges, robot dimensions, etc. are set and known. Table 3 lists the properties of some objects/obstacles together with the global properties and events. The mobile robot position is (50, 141, 150), which means that the robot looks from district (0, 0) to the south. The mobile robot radius is 16 units, and the view range is set to Far, which means 21 robot radii, i.e. 336 units.

From the obstacle properties it is visible that there is a great danger of collision with obstacle 5, which is on the robot's left side at a very close distance (estimated 16 units, or one robot radius). But since the goal is object 2, which is on the right side, one can conclude that the robot may turn toward the goal and approach it. The other two visible obstacles (3 and 4) are relatively far away and at this moment have no influence on the movement decision. Since the goal is on the right edge of the robot's viewing space (FOV, field of view), i.e. it is not fully visible, all conclusions drawn from it are less reliable. To raise the confidence of the conclusions, the robot should rotate to the right before any movement. This rotation has no effect on the distances from the robot to the obstacles, so the collision danger does not change; but after the rotation the whole goal is visible and any conclusion is better founded. From the global properties and events it is clear that, compared with the previous robot step, obstacle 4 has appeared in the robot's viewing space. It is at mid distance (estimated 184 units) from the robot, and only its right side is visible (side 4 and vertex 1); the rest of object 4 is overlapped by object 5. A direct approach to object 4 is not possible. If the robot wants to move around object 4 starting from its right side, there is a narrow pass (estimated 25°). The collision danger with obstacle 4 is small.

Table 3: Objects, global properties and events for the situation in Fig. 2

Object property    Object 2          Object 3      Object 4        Object 5
Identification     2                 3             4               5
Distance           very close / 24   mid / 176     mid / 184       very close / 16
Direction          very right / -63  front / -13   left / 30       very left / 62
Size               mid / 60          small / 30    very small / 5  mid / 60
Face               111111111222      333333        4               333333333333
Visibility         all               all           right side      all
FOV position       on right edge     not on edge   not on edge     on left edge
Overlapped         0                 0             5               0
Left pass          wide / 65         narrow / 25   no pass         no pass
Right pass         no pass           no pass       narrow / 25     wide / 65
Approach           free              free          not possible    free
Speed              0                 0             0               0
Speed direction    0                 0             0               0
Collision danger   great             small         small           very great

Global properties
View obstacles     2, 3, 4, 5
Nearest obstacle   5
Focus obstacle     3
Goal obstacle      2 / side 1

Global events
Goal status        face visible
Objects appear     4
Objects disappear  (none)
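The Objects appear / Objects disappear events of Table 2 amount to set differences between the "I see" lists of two consecutive steps; below is a sketch under that assumption, checked against the example above, where obstacle 4 has just appeared (the previous step's visibility set is our assumption).

```python
# Sketch of the global events of Table 2, assuming they are computed by
# comparing the visible-obstacle lists of two consecutive robot steps.
def global_events(prev_visible, now_visible):
    prev_set, now_set = set(prev_visible), set(now_visible)
    return {
        "objects_appear":    sorted(now_set - prev_set),
        "objects_disappear": sorted(prev_set - now_set),
    }

# In the example of Section 7 the previous step presumably saw obstacles
# {2, 3, 5} and the current one sees {2, 3, 4, 5}, so obstacle 4 appears:
assert global_events({2, 3, 5}, {2, 3, 4, 5})["objects_appear"] == [4]
```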
8. CONCLUSION AND FURTHER WORK

For building a good topologic map of a space it is necessary to extract each object and describe its adequate properties. Besides that, an estimation of the global situation and of the events in the robot's viewing range is very useful. Such a space model permits planning the robot's path to the goal, and the description of the robot's movement can be short and not too precise. For the movement execution, the mobile robot needs additional instructions about the types of moving. As with humans, the global movement instructions do not change significantly, but the types of moving can change and improve over time through some kind of learning process. In that way the robot's movement becomes more like human movement, and its space perception more like human space perception. In such a situation the interaction between a robot and a human becomes very simple, and the robot is readily accepted by users without expert knowledge, because the question "How does it work?" no longer needs to be asked. In addition, according to the proposed model, a robot would be able to estimate its position in the space; this estimate would be expressed by topologic details together with the district and the geographic orientation.

Further work in this area includes automatic robot path planning according to the space properties, events and goal reaching; for that purpose some modifications of the current space model will be required. The end goal of this work is a model in which the robot moves among obstacles to the goal in the same way as a human does.

9. LITERATURE

[1] D. Filliat, J.A. Meyer: Map-based navigation in mobile robots: I. A review of localization strategies, Cognitive Systems Research 4 (2003), 243-282
[2] A. Guillot, J.A. Meyer: The animat contribution to cognitive systems research, Cognitive Systems Research 2 (2001), 157-165
[3] M. Mata, J.M. Armingol, A. de la Escalera, M.A. Salichs: Learning visual landmarks for mobile robot navigation, 15th IFAC World Congress, Barcelona, Spain, 2002
[4] M. Crneković, M. Sučević, D. Brezak, J. Kasać: Cognitive Robotics and Robot Path Planning, CIM05 - Computer Integrated Manufacturing and High Speed Machining, Lumbarda, 2005, pp. III 15
[5] T. Wagner, U. Visser, O. Herzog: Egocentric qualitative spatial knowledge representation for physical robots, Robotics and Autonomous Systems 49 (2004), 25-42
[6] T. Barbera, J. Albus, E. Messina, C. Schlenoff, J. Horst: How task analysis can be used to derive and organize the knowledge for the control of autonomous vehicles, Robotics and Autonomous Systems 49 (2004), 67-78

This research is NOT supported by the Croatian Ministry of Science, Education and Sport.