
UNIVERSIDAD CARLOS III DE MADRID
ESCUELA POLITÉCNICA SUPERIOR

BACHELOR'S THESIS
DEGREE IN COMMUNICATION SYSTEMS ENGINEERING

CONTROL CENTRALIZADO DE FLOTAS DE ROBOTS
CENTRALIZED CONTROL FOR ROBOT FLEETS

ABSTRACT

AUTHOR: ADRIÁN JIMÉNEZ CÁMARA
SUPERVISOR: JAVIER V. GÓMEZ GONZÁLEZ
SEPTEMBER 2012

1. Introduction

Nowadays there exist very powerful and robust path planning algorithms in robotics. Despite this, applying them in an experimental environment is far from simple and takes considerable effort. Systems involving many individual robots, each producing a wealth of sensor data, increase the complexity and turn decision making into a difficult problem. Simplifying the problem is always a good idea, and a stable system is the base on which more complex ones can later be built.

The objective is to design a system combining artificial intelligence (AI) and robotics for experimental work on path planning: in an experimental environment, to identify and localize a fleet of robots through a webcam, and to control the fleet by radio-frequency communication. The initial problem is reduced to one in which each robot sees the remaining objects as obstacles. These obstacles can be passive, like a wall, or active, like another robot of the fleet, but the nature of the obstacle does not matter: when planning paths, both must be avoided.

2. State of the Art

Mobile robots emerged as a tool for exploring areas inaccessible to human beings because they are remote, dangerous, or expensive to reach. Some tasks are only possible for robots and, with a few rare exceptions, there is only one robot in the working area. Fleets of robots are the most common arrangement used in space exploration, warfare, rescue missions, or data gathering in unexplored territories. One of the most important subjects when working with robot fleets is synthesizing cooperative behaviors among them. Cooperative robotics has been an active research field in recent years due to the importance of organization when carrying out tasks.
Using a group of robots working cooperatively to execute tasks not only increases the robustness and efficiency of task execution but also makes possible tasks that are a priori impossible for a single robot. When working with robot fleets, two approaches present themselves:

- Centralized control: A central unit is responsible for controlling, supervising, and deciding the actions of the robots in the fleet. All path planning and decision making takes place in the central unit or computer. A communication system must be established between the central unit and the robots in the fleet. The advantages of centralized control are that all conflicts concerning the robots are easily resolved by the central unit and that orders can be transmitted directly by this unit to the fleet. The drawback is that a failure in the central unit brings the whole system down. The importance of centralized control can be seen in real-time applications where cooperative work is necessary to achieve a common goal [1].

- Autonomous robots: In this approach the robots do not depend on any central unit; they rely on themselves. Through their sensors they must be able to characterize the environment and make their own decisions, such as which movements to perform to accomplish their tasks. Currently, path planning algorithms for autonomous robots are based on fuzzy logic [2]. Communication is also needed here so that each robot can talk to the others in the fleet. In general these robots are large, carry multiple sensors, and integrate self-localization and self-planning systems. The main advantage of this approach is that if one robot breaks down, it is no problem at all for the rest of the fleet: the other robots will treat it as a passive obstacle and avoid it as if it were a rock. The disadvantage is that this approach is extremely complicated, because many variables must be controlled. The robots have to be prepared to deal with any circumstance and to think independently, making the best decisions not only about movement but also about wireless communication with the rest of the fleet. At present, the tendency is to reduce the dependence on a centralized system in order to make fleets more autonomous when dealing with unexpected obstacles or events [3].

Since guiding a mobile robot involves localizing it in a given environment, we decided to build for this project a centralized system with vision algorithms that analyze the working area and detect the robots of the fleet together with their position and orientation. The same computer then drives the robots wirelessly by radio-frequency communication.
The vision algorithm should be capable of tracking the movement and representing the coordinates of the robots under real-time conditions.

3. The Project

The objective is to build a centralized system for experimental purposes. The requirements we demand of the system are robustness, an intuitive interface for the final user, low cost, the ability to identify several robots, and a low rate of false positives during detection. Some restrictions have to be considered, since we work in a limited area: the prototypes must be simple and small in order to obtain good results.

The project is divided into two main parts: the vision system, with the algorithm that detects the robots, and the electronics and construction of the robots themselves. Both parts are based on Open Source tools. We believe that Open Source platforms help spread knowledge: these projects are accessible worldwide and everyone can benefit from them. This encourages other people to develop and publish, so that in the end large communities grow with the sole objective of teaching and sharing knowledge at no cost. Since one of the objectives of the project is to build a low-cost system, we decided that the best approach was to base it on these tools.

3.1 Vision

This part is devoted to the development of the algorithm that tracks the robots. The algorithm is based on color identification. Each robot carries on its top a label consisting of three smaller color labels. This allows the system to know both the position of each robot in the working area and its orientation. The algorithm has three main parts:

- Calibration. The first part of the algorithm is calibration. As we are detecting colors, this step is absolutely necessary to guarantee good results. The user is asked to place, in a designated area, a card of the color identifying the first robot; the same is then done for the second color, which identifies the second robot. The calibration step is shown in figure 3.1.

Figure 3.1: Calibration phase
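A minimal sketch of what this calibration step might compute is shown below. It is pure Python and the exact method, the function name, and the `tolerance` margin are assumptions for illustration, not taken from the thesis: per-channel color bounds are derived from the pixels sampled on the calibration card.

```python
def calibrate_color(samples, tolerance=15):
    """Compute per-channel (min, max) bounds from sampled RGB pixels.

    samples: list of (r, g, b) tuples taken from the calibration card.
    tolerance: margin added around the observed range to absorb small
    lighting variations (an assumed parameter, not from the thesis).
    """
    bounds = []
    for channel in range(3):
        values = [pixel[channel] for pixel in samples]
        low = max(0, min(values) - tolerance)
        high = min(255, max(values) + tolerance)
        bounds.append((low, high))
    return bounds  # [(r_min, r_max), (g_min, g_max), (b_min, b_max)]

# Example: pixels sampled from a green card
green = calibrate_color([(20, 200, 30), (25, 210, 35), (18, 190, 28)])
# green == [(3, 40), (175, 225), (13, 50)]
```

The bounds produced here are what the segmentation step would later use to decide whether a pixel belongs to a robot's label.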

- Segmentation. The next step is to discard all useless information in the image provided by the webcam. Using the values obtained during calibration, we perform a segmentation that yields a binarized image: white pixels indicate that the calibrated color is present at that point of the working area, black pixels that it is not. Figure 3.2 shows the image provided by the webcam with some extra information that we explain in the next part of the algorithm. Figure 3.3 shows the binarized image that results from applying this second part of the algorithm to the previous figure when detecting the green color.

Figure 3.2: Label identification
Figure 3.3: Green color segmentation

- Labeling. The final part of the algorithm searches for the pixels that make up each individual label. This is done by grouping sets of pixels that are close to each other. Applying this step to figure 3.3 gives the result in figure 3.2, where all green labels are correctly identified. Note that the blue labels are identified as well: the second color calibrated was blue, and it later identifies the second robot.

3.2 Electronics

This part is devoted to the design, construction, and programming of a mobile robot capable of communicating with a computer and interpreting movement orders for its teleoperation. The chosen design is a simple one based on a chassis and two wheels. The third supporting point of the robot is a 16 mm marble fitted into the structure, which provides mobility.
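The segmentation and labeling steps of the vision algorithm described above can be sketched as follows. This is a pure-Python illustration with hypothetical helper names; a real implementation would rely on an image-processing library, and the thesis does not give these details.

```python
from collections import deque

def segment(image, bounds):
    """Binarize an RGB image: 1 where every channel lies inside its
    calibrated (min, max) range, 0 elsewhere."""
    return [[1 if all(lo <= px[c] <= hi
                      for c, (lo, hi) in enumerate(bounds)) else 0
             for px in row]
            for row in image]

def label(mask):
    """Group adjacent white pixels (4-connectivity) into labels.
    Returns a list of pixel groups, one per detected label."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    groups = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                group, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    group.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                groups.append(group)
    return groups
```

For example, a tiny frame with two green patches yields one binary mask and two pixel groups, one per label; in the real system each group's pixel count and position distinguish the labels of the two robots.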

The electronics that provide the structure with mobility and communication are based on the Arduino development platform. An Arduino UNO board, with an ATmega328P microcontroller, is the brain of the robot: it transmits movement orders to the servos and communicates with the computer. Radio-frequency communication is provided by MaxStream XBee modules based on ZigBee technology. These modules, mounted on an Arduino shield, are connected directly to the board and give the robot wireless communication. Figure 3.4 shows two completely built robots.

Figure 3.4: Robots

4. Results

In this chapter we explain the results obtained. Besides tracking, the algorithm draws two virtual maps of the position of the robots in the working area and the space occupied by them. In figure 4.1 the color labels are represented by circles, and the triangle joining them indicates the presence of a robot. A small point between the circles marks the centroid of the robot, its theoretical center, and a black line shows the direction in which the robot is heading. Figure 4.2 shows another virtual map representing the boundaries of the experimental area and the space occupied by the robots carrying the labels.
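The thesis does not detail how the centroid and heading are derived from the three color labels; one plausible scheme, assuming (hypothetically) that one of the three labels marks the front of the robot, is:

```python
import math

def robot_pose(front, rear_left, rear_right):
    """Estimate a robot's pose from its three label centers.

    front, rear_left, rear_right: (x, y) centers of the color labels,
    in image coordinates with the origin at the upper left corner, as
    in the thesis. The "front" label is an assumed convention, not a
    documented detail. Returns the centroid and the heading angle in
    radians.
    """
    points = (front, rear_left, rear_right)
    cx = sum(p[0] for p in points) / 3.0
    cy = sum(p[1] for p in points) / 3.0
    # Heading: direction from the centroid toward the front label.
    theta = math.atan2(front[1] - cy, front[0] - cx)
    return (cx, cy), theta

# Front label straight to the right of the rear pair:
centre, theta = robot_pose((6, 1), (0, 0), (0, 2))
# centre == (2.0, 1.0), theta == 0.0
```

The centroid gives the point drawn between the circles in figure 4.1, and the angle gives the black heading line.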

Figure 4.1: Position and orientation of the robots
Figure 4.2: Space occupied by the robots

The coordinates of the centroid are also printed in the terminal, both as pixel coordinates and as real coordinates of the working area. Note that the origin of coordinates is located at the upper left corner of the image.

Figure 4.3: Centroid coordinates of each of the robots

Several tests showed that lighting conditions are essential for the correct performance of the algorithm. A well-lit area improves detection, yielding tracking percentages above 90%. This figure is measured as the ratio between the number of frames in which both robots are tracked and the total number of frames. The following figures show how the algorithm tracks the robots while they move; the robots carry on top the same labels used to test the algorithm.

Figure 4.4: Frame 1 of the movement sequence and results
Figure 4.5: Frame 2 of the movement sequence and results
Figure 4.6: Frame 3 of the movement sequence and results

Processing time was measured to see how long the algorithm takes to analyze and process each frame provided by the camera. The processing time turns out to be independent of the movement of the labels: it is around 60 ms per frame whether the labels are static or moving around the working area. This makes sense, since each frame is processed individually. Figure 4.7 shows a histogram of the processing time of the algorithm.
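The pixel-to-real conversion mentioned above could be a simple linear mapping with the origin at the upper left corner. The scale factors and working-area dimensions below are assumed values for illustration; the thesis does not state them.

```python
def pixel_to_real(px, py, image_size, area_size):
    """Map pixel coordinates to real working-area coordinates.

    image_size: (width, height) of the camera frame in pixels.
    area_size: (width, height) of the working area in cm (assumed
    units). The origin is the upper left corner of the image, as in
    the thesis.
    """
    sx = area_size[0] / image_size[0]
    sy = area_size[1] / image_size[1]
    return (px * sx, py * sy)

# A centroid at pixel (320, 240) in a 640x480 frame over a
# hypothetical 200 cm x 150 cm working area:
x, y = pixel_to_real(320, 240, (640, 480), (200, 150))
# (x, y) == (100.0, 75.0)
```

A uniform scale per axis assumes the camera looks straight down at the area; correcting for perspective would require a homography instead.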

Figure 4.7: Processing time for each frame

5. Conclusions

The objectives achieved with the project are the following:
- Creating an experimental platform with a vision system capable of localizing robots in a working area.
- Building the prototypes.
- Integrating Arduino boards into the robots, providing them with movement and communication through servos and radio-frequency modules.
- Establishing serial communication between the Arduino board and the computer.
- Teleoperating the robots wirelessly.

Although centralized control is an already developed solution for controlling robot fleets, this project proves that it is possible to build a reliable, low-cost system based on Open Source tools. With a vision system consisting of a webcam and a computer, we achieve tracking percentages above 90%. This work is intended to be the basis on which more complex systems are built in the future.
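The serial protocol between the computer and the Arduino is not described in this abstract. Purely as an illustration of how such teleoperation commands might be framed, a hypothetical one-byte-per-field encoding could look like this:

```python
# Hypothetical command frame: [start byte, robot id, command, checksum].
# This framing is an illustration only; the actual protocol used with
# the XBee modules in the thesis is not documented in the abstract.
START = 0x7E

COMMANDS = {"forward": 0x01, "backward": 0x02,
            "left": 0x03, "right": 0x04, "stop": 0x05}

def encode_command(robot_id, command):
    """Build a 4-byte frame; the checksum is a simple XOR of the
    payload so the Arduino sketch can reject corrupted frames."""
    code = COMMANDS[command]
    checksum = robot_id ^ code
    return bytes([START, robot_id, code, checksum])

frame = encode_command(1, "forward")
# frame bytes: START, id=1, cmd=0x01, checksum=0x00
```

On the computer side such a frame would be written to the XBee's serial port (e.g. with pySerial); on the Arduino side a matching sketch would decode it and drive the servos.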

The Arduino platform has proven to be an excellent solution for mobile robot construction. The ease of connecting sensors and modules to these boards removes obstacles for developers, making the learning experience even more enjoyable.

Future projects based on this one will focus on path planning. One of the maps produced by the tracking algorithm shows the areas occupied by the robots in the working environment as well as the area boundaries. This information is very useful when planning trajectories, because the occupied space is known directly: a trajectory from one point to another must avoid these areas in order not to crash into walls or other robots of the fleet.

References

[1] N. Weiss, L. Hildebrand. "An Exemplary Robot Soccer Vision System", CLAWAR/EURON Workshop on Robots in Entertainment, Leisure and Hobby, 2004.
[2] V. M. Peri. "Fuzzy Logic Controller for an Autonomous Robot", master's thesis, Department of Electrical and Computer Engineering, Cleveland State University, 2002.
[3] R. Alami, S. Fleury, M. Herrb, F. Ingrand, F. Robert. "Multi Robot Cooperation in the Martha Project", IEEE Robotics & Automation Magazine, vol. 5, pp. 36-47, March 1998.