Limits of a Distributed Intelligent Networked Device in the Intelligence Space

Gyula Max, Peter Szemes
Budapest University of Technology and Economics, H-1521 Budapest, P.O. Box 91, HUNGARY
Tel: +36 1 463 2870, Fax: +36 1 463 2871, e-mail: max@aut.bme.hu, Szemes@hlab.iis.u-tokyo.ac.jp

Abstract: The paper analyses an existing DIND (Distributed Intelligent Networked Device) in an Intelligent Space (ispace) that has ubiquitous sensory intelligence, including sensors, cameras, microphones, haptic devices (for physical contact) and actuators, with a ubiquitous computing background. The devices use high-speed network communication to exchange information. The various devices are made for welfare support and communicate with each other autonomously using ubiquitous computing intelligence. With the aid of the devices supported by the DINDs, the Intelligent Space can guide and protect humans in a crowded environment. The paper tries to find the boundaries of the present Intelligent Space system in order to map its possibilities.

1 Brief History of the Intelligent Space

The Hashimoto Laboratory at the University of Tokyo has been developing the 'Intelligent Space' since 1996 [1]. At the beginning it consisted of two sets of vision cameras and computers with home-made 3D tracking software, written in C and Tcl/Tk under Linux. Later, a large (100-inch) video projector was added to the Intelligent Space as an actuator. Mobile robots were placed in the Intelligent Space both to support people and to be supported. Sets of vision cameras and computers were arranged around an entire room, and the room thus became the Intelligent Space.

Conventionally, the trend is to increase the intelligence of a robot operating in a limited area. The Intelligent Space concept is the opposite of this trend: the surrounding space has the sensors and the intelligence instead of the robot, so a robot without any sensors or intelligence of its own can operate in an Intelligent Space. In the conventional solution the robot measures, calculates and decides. The heart of the ispace concept is that the robots need not measure, calculate or make decisions; they just carry out commands, getting their information from the distributed devices of the Ubiquitous Sensory Intelligence, which is realised by Distributed Intelligent Networked Devices (DINDs).

The Intelligent Space consists not only of sensors, cameras and robots but also of humans. In the Intelligent Space, DINDs monitor the space, acquire data and share them through the network. Since the robots in the ispace are equipped with wireless network devices, the DINDs and the robots together form a network.

The basic concept of the Intelligent Space has been extended during its development. The ispace is a system for supporting the people in it, and the events that happen in it are understood. However, to support people physically, the Intelligent Space needs robots to handle real objects. Mobile robots become the physical agents of the Intelligent Space and execute tasks in the physical domain to support the people in the space; such tasks include moving objects, providing help to aged or disabled persons, etc. Thus the Intelligent Space is an environmental system that supports the people in it both electronically and physically. Another interesting application is that the room can serve as a high-level, context-sensitive interface to the robots. The Intelligent Space is a platform on which diverse technologies can be installed.

2 Basic Elements of Ubiquitous Sensory Intelligence

In Fig.2-1 three interesting elements of the current Intelligent Space with Ubiquitous Sensory Intelligence are selected and briefly described:

- Distributed Intelligent Network Device (DIND)
- Virtual Room
- Ubiquitous Human Machine Interface (UHMI)

Fig.2-1. Basic Elements of Ubiquitous Sensory Intelligence

2.1 Distributed Intelligent Network Device

We can use the following definition: a space becomes intelligent when Distributed Intelligent Network Devices (DINDs) are installed in it [2]. The DIND is the fundamental element of the Intelligent Space. It consists of three basic elements:

- sensors: a camera with a microphone
- processor: a computer
- communication device: LAN

The DIND uses these elements to achieve four functions:

- the sensor monitors the dynamic environment, which contains people and robots
- the processor processes the sensed data and makes decisions
- the LAN organizes the communication among the elements
- the DIND communicates with other DINDs and robots through the network

In the actual system, where the number of sensors is above 20, six Sony EVI D30 pan-tilt CCD cameras with general Bt848-based image capture boards are adopted as vision sensors [3]. For the processor, industrial standard Pentium III 500 MHz PCs are used, and general 100baseT LAN cards are used as the network devices. The robots are able to use the resources of the DINDs as their own parts; robots with their own sensors may in turn be considered mobile DINDs. The sketch below illustrates this sense-process-communicate cycle.
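
A minimal sketch of one DIND node follows. All names, the UDP transport and the JSON message format are illustrative assumptions for this paper's discussion, not the original implementation.

    # Sketch of the DIND cycle: sense, process locally, share over the LAN.
    import json
    import socket
    import time

    class DINDNode:
        def __init__(self, node_id, peers):
            self.node_id = node_id      # identifier of this DIND
            self.peers = peers          # (host, port) of other DINDs/robots
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

        def sense(self):
            # Placeholder for grabbing a frame from the capture board.
            return {"timestamp": time.time(), "objects": []}

        def process(self, observation):
            # Placeholder for local image processing and decision making.
            return {"node": self.node_id, "decision": "none", "obs": observation}

        def share(self, message):
            # Broadcast the local decision to the other DINDs over the LAN.
            payload = json.dumps(message).encode()
            for host, port in self.peers:
                self.sock.sendto(payload, (host, port))

        def run_once(self):
            self.share(self.process(self.sense()))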

2.2 Virtual Room

The aim of the Virtual Room (VR) research project is to recreate the environment of the physical experimental space (Fig.2-2.) in order to study different motion control and vision algorithms for a given robot before real-world implementation. The room currently contains the following objects:

- passive objects: desks, chairs
- active objects: three robot agents
- sensors: CCD cameras
- actuators: large screen

Fig. 2-2. Physical realization of the Virtual Room

2.3 Ubiquitous Human Machine Interface

There are three mobile robots in the current Intelligent Space. The most interesting one is a special mobile human-machine interface [4]. There are three basic communication channels that people use in daily conversation: audio, visual and haptic. All three channels are implemented on the UHMI. The human user in the Intelligent Space has an aim in mind which he wants to realize using different types of commands. Some commands are associated with certain parts of the human body, and the UHMI has special devices to make connections with those parts. A video camera and a TV screen are mounted on the UHMI for visual communication, a speaker and a microphone realize the audio communication, and a haptic interface is mounted on the robot to realize the physical connection. The UHMI can be seen in Fig.2-3. The UHMI is able to move to the user, or it can guide him; a very special application could be the guidance of blind or deaf people.

Fig. 2-3. A Ubiquitous Human Machine Interface (labelled components: monitor with speaker, pan-tilt CCD camera, microphone, haptic interface, mobile robot platform; motivation: personal communication and guiding)

3 What Can Be Done In Intelligent Space?

3.1 3D Positioning of Humans

Our aim is to support humans in our Intelligent Space, and in order to support them the ispace must first recognize them. Recognition of a human is done in two steps [5]:

- the area or shape of the human is separated from the background (Fig.3-1.)
- the features of the human, such as head, hands, feet and eyes, are located (Fig.3-1.)

Taking the images of the three pairs of cameras (see Fig.2-1 and Fig.2-2.), the 3D position of the human can be calculated. The scanned areas of the parallel camera pairs overlap. To calculate 3D positions from several camera views, point correspondences are needed, and establishing these correspondences directly from the shape of the human is difficult. Instead, the heads and hands of the human beings are found first, and their centres are used for matching. A minimal sketch of the triangulation step for one camera pair follows.
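
Where matched centres (e.g. of a detected head) are available in both images of a parallel pair, the 3D point can be recovered with standard pinhole stereo geometry. The sketch below assumes pixel coordinates measured from the principal point; the focal length and baseline defaults are illustrative values, not calibration data of the actual installation.

    # Depth from disparity for one parallel camera pair (pinhole model).
    def triangulate(u_left, v_left, u_right, f=800.0, baseline=0.5):
        """Return (X, Y, Z) in camera coordinates from matched pixel centres.

        u_left, u_right : horizontal pixel coordinates of the matched feature
                          in each image, relative to the principal point
        v_left          : vertical pixel coordinate in the left image
        f               : focal length in pixels (assumed value)
        baseline        : camera separation in metres (assumed value)
        """
        disparity = u_left - u_right
        if disparity <= 0:
            raise ValueError("feature must be matched with positive disparity")
        Z = f * baseline / disparity      # depth from disparity
        X = u_left * Z / f                # lateral position
        Y = v_left * Z / f                # vertical position
        return X, Y, Z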

A second motivation to further analyse the shape is that adaptive background separation in complex scenes also detects recently displaced objects. The recognition algorithms are implemented in three different software modules of the Intelligent Space: the Camera Server, the 3D Reconstruction Module and the Calibration Client. The error of the estimated position of an object changes with the distance from the camera and with the camera pose, and it is influenced by several further factors (the performance of each camera, the method of image processing, etc.). A Kalman filter is therefore applied to smooth the measured data; a minimal sketch of such a smoother is given after Fig. 3-1.

Fig. 3-1. Separation of human beings from the background
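
The following sketch shows the idea of the smoothing step for a single coordinate. The process and measurement noise values q and r are illustrative assumptions, not the tuned values of the real system.

    # Minimal 1-D Kalman smoother for a measured position.
    def kalman_smooth(measurements, q=1e-3, r=1e-1):
        x, p = measurements[0], 1.0       # initial state and covariance
        smoothed = [x]
        for z in measurements[1:]:
            p = p + q                     # predict (static motion model)
            k = p / (p + r)               # Kalman gain
            x = x + k * (z - x)           # update with measurement z
            p = (1 - k) * p
            smoothed.append(x)
        return smoothed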

4 Technical environment

As Fig.2-1 shows, the sensors are located above the space. In the actual system two pairs of Sony EVI D30 pan-tilt CCD cameras hang parallel from the ceiling, and a third pair is turned by 90 degrees. They are connected to the general Bt848-based image capture boards adopted as sensors in industrial standard Pentium III 500 MHz PCs, and general 100baseT LAN cards are used as network devices. According to the concept of the Intelligent Space, the sensors must collect information and control the robots, so their limits must be known. Even for the simplest systems we want to know how fast the system is, or how many cameras are needed to observe events exactly. What do we call events? Do we have enough cameras? How do we place the cameras to get the most information with the fewest cameras? If the number of cameras increases, do we get more information, and does the processing time decrease? Do we have enough time to evaluate the images in a real-time process? Do the cameras process the images on the spot, or send them to a central computer? What measurements do we make with them, and what do we have to transfer to the other computers?

Behind all of these questions lies a single one: where are the bottlenecks of our system?

4.1 Cameras

In our system the cameras of a pair are mounted parallel, imitating human eyes. This arrangement is efficient for creating images, measuring near distances and evaluating colours, but inefficient for distinguishing human beings from objects. As noted in Section 3.1, exact positioning is difficult: it needs too much calculation because of the overlapping views and the point correspondences of several cameras. Do we have to evaluate the whole 3D image? Must all cameras work all the time?

If PCs with 500 MHz processors are used and the average cost is 8 clock cycles per operation, then 62.5 million operations can be done per second. A normal camera takes 25 images per second, so 2.5 million operations are available per frame. According to the user's manual of the Sony EVI D30 camera (Fig.4-1.), it produces images of 786 x 492 pixels. Looking at the technical parameters of the general Bt848-based image capture board [6], the card is fast enough to send and receive the data of the camera. If all pixels are to be evaluated, only about 6.5 operations are available per pixel (the short calculation below restates this budget). Is this enough for the evaluation, leaving aside the slow RS-232 control link of the camera?
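
The per-pixel budget can be checked in a few lines; all numbers come from the text above, including the paper's own 8 clocks per operation average.

    # The per-pixel processing budget, restated as arithmetic.
    cpu_hz        = 500e6      # Pentium III clock
    clocks_per_op = 8          # average clock cycles per operation
    fps           = 25         # camera frame rate
    width, height = 786, 492   # Sony EVI D30 image size

    ops_per_frame = cpu_hz / clocks_per_op / fps   # 2.5 million
    pixels        = width * height                 # 386,712
    ops_per_pixel = ops_per_frame / pixels         # ~6.5
    print(round(ops_per_pixel, 1))                 # 6.5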

Take an example: our aim is to recognize persons who have collapsed in a metro station. The information arrives on the monitors of the guards' room; in case of emergency a signal must be given, and the staff decide the next steps. Since it is not this system that makes the decision and executes it, the signal may be faulty (human decision making is not avoided). To get a good result, the whole picture coming from a camera need not be evaluated. Our search method could be the following: search for a human face above the ground, up to 40 cm; the other parts of the image are not interesting. One image is not enough to obtain relevant information. Since two parallel cameras are focused on the event, two images can be taken. If these two pictures give almost the same result, can the process be ended? Not necessarily. Imagine a bag on the ground with a colour newspaper leaning against its side: the cameras would recognise a human lying on the ground. The number of dimensions must be increased, so one camera is not enough, and neither are two parallel ones. Now imagine a collapsed man lying on the ground: generally he shows good contrast from two directions, while the third direction is covered by hair. For a good result, therefore, at least two cameras are needed for the recognition, turned 90 degrees to each other like the axes of a 2D coordinate system.

As the example shows, the bottom part of the images is the interesting one. First the person has to be located, and to decide whether a human is lying on the floor it is enough to find a head. Every human head has a special colour spectrum, and this spectrum must be found. Since a head is big enough at this distance (approx. 2-3 m), not the whole bottom part of the image must be tested; if a human head is found, the surroundings of that region belong to the head too. The bottom part of the images is split into strips of 2-4 cm; if none of the strips contains the human spectrum, then no lying person is on the floor. Using this search method, in the first step only 786 x 20 (15,720) pixels are examined during one sample period instead of 786 x 492 (386,712), which leaves approx. 160 operations per pixel. A sketch of this strip search is given below.

Fig.4-1. Sony EVI D30 camera and its environment
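
The strip search might look roughly as follows. The image representation, the hit threshold and the colour-spectrum test are illustrative placeholders, not the original implementation.

    # Scan only horizontal strips from the bottom of the image for pixels
    # matching a human (skin/hair) colour spectrum.
    def find_lying_head(image, strip_height=20, strips=1):
        """image: list of rows, each row a list of (r, g, b) pixels, row 0 at
        the top. Returns True if any bottom strip contains enough pixels with
        a human colour spectrum."""
        height = len(image)
        for s in range(strips):
            top = height - (s + 1) * strip_height
            strip = image[max(top, 0):height - s * strip_height]
            hits = sum(1 for row in strip for px in row if is_human_spectrum(px))
            if hits > 50:                  # threshold is an assumption
                return True                # probable head: inspect this region
        return False

    def is_human_spectrum(px):
        # Crude placeholder test: skin-like if red clearly dominates blue.
        r, g, b = px
        return r > 95 and r > b + 20 and r > g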

4.2 LAN

The sensors and their peripherals are connected to the LAN by general 100baseT LAN cards. These cards supply a transfer speed of 100 Mbit/s, which is the absolute speed of the LAN; if framing and the delays of the transfer protocol are considered, the usable speed is below 12.5 MB/s. As Fig.4-1 shows, one image is over 386,000 pixels. Assuming 256 colours (one byte) per pixel, this means 25 x 386 kB of data per second per camera, which is almost the full capacity of the transfer channel. The transfer channel could therefore be the bottleneck of the system. Because of the limited capacity of the transfer channel, central processing is unimaginable; the images must be evaluated near the cameras. This can be done, but then the cameras must inform each other about the events. No news is good news, but even "no news" means a non-zero information flow. If an event happens, then at least two cameras must cooperate (see Section 4.1), and the coordinate transformations, overlapping and pattern matching must be done in a very short period.

Conclusions

Our aim was to find the limits of a Distributed Intelligent Networked Device in the Intelligence Space. To find the limits, the bottleneck of the system must be found. In the ispace the tasks are distributed: the sensors connected to the DINDs are responsible for finding the information sources, and once found, the information has to be collected and evaluated. During these procedures the local computers and the LAN are used. Given the technical environment, sampling and image recognition are feasible on the local computers. If the result of the evaluation is positive, communication must be started among the DIND devices, and this communication is the bottleneck of the system. Remember: the more devices use the transfer channel, the lower the effective transfer rate. Fortunately there are several methods to reduce the transfer time through the channel (a rough comparison follows the list):

- image compression
  o advantage: the usage time of the transfer channel is reduced significantly
  o disadvantage: compression and decompression take time
- double transfer channels
  o advantage: the transfer rate is doubled
  o disadvantage: increased development costs
- image partitioning
  o advantage: only parts of the images must be transferred
  o disadvantage: takes time
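
As a rough illustration of these trade-offs, the sketch below compares per-frame transfer times under the three options. The compression ratio, codec delay and partition fraction are assumed values for illustration only, not measurements.

    # Rough comparison of per-frame transfer times for the three methods.
    FRAME_BYTES = 786 * 492        # one 8-bit frame, ~386 kB
    CHANNEL_BPS = 12.5e6           # usable LAN throughput, ~12.5 MB/s

    def t_plain():
        return FRAME_BYTES / CHANNEL_BPS

    def t_compressed(ratio=5.0, codec_delay=0.010):
        # smaller payload, but compression/decompression cost extra time
        return FRAME_BYTES / ratio / CHANNEL_BPS + codec_delay

    def t_double_channel():
        return FRAME_BYTES / (2 * CHANNEL_BPS)

    def t_partitioned(fraction=0.04):
        # e.g. transfer only the bottom strips (15,720 of 386,712 pixels)
        return FRAME_BYTES * fraction / CHANNEL_BPS

    for name, t in [("plain", t_plain()), ("compressed", t_compressed()),
                    ("double channel", t_double_channel()),
                    ("partitioned", t_partitioned())]:
        print(f"{name:>14}: {1000 * t:.1f} ms")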

This paper dealt with finding the limits and bottlenecks of an existing DIND in the ispace. We focused on the starting phase of the process, i.e. when the information must be collected. The automation of these procedures is complicated because of the many participants, the huge amount of information, and the length of the calculation processes. Since the recognition methods are either not fast enough or not reliable enough, human intervention is still unavoidable; while this remains true, full process automation is too difficult. If human decision making is not to be avoided, then the task of the ispace is just to call attention to the events. To make the process faster, the tasks have to be split among the sensors while an event happens: one sensor is responsible for the recognition of heads, another for the hands or the fingers. Remember that these procedures are only the starting phases of a complete task: after the recognition phase the answers must be found and executed, and carrying them out requires resources and time again.

References

[1] http://dfs.iis.u-tokyo.ac.jp/~leejooho/ispace/
[2] J.-H. Lee and H. Hashimoto, "Intelligent Space - Its Concept and Contents", Advanced Robotics Journal, Vol. 16, No. 4, 2002.
[3] T. Akiyama, J.-H. Lee and H. Hashimoto, "Evaluation of CCD Camera Arrangement for Positioning System in Intelligent Space", International Symposium on Artificial Life and Robotics, 2001.
[4] P. Korondi and H. Hashimoto, "Intelligent Space, as an Integrated Intelligent System", keynote paper of the International Conference on Electrical Drives and Power Electronics, Proceedings pp. 24-31, 2003.
[5] K. Morioka, J.-H. Lee and H. Hashimoto, "Human Centered Robotics in Intelligent Space", 2002 IEEE International Conference on Robotics & Automation (ICRA'02), pp. 2010-2015, May 2002.
[6] Brooktree Division, Rockwell Semiconductor Systems, Inc.: Bt848/848A/849A Single-Chip Video Capture for PCI, February 1997. apps@brooktree.com