CS6510 Dissertation. User Guided Behavior-based Robotic System. Final Report. Submitted. Dr Andy Chun, Associate Professor


CS6510 Dissertation
User Guided Behavior-based Robotic System
Final Report
Submitted To
Dr Andy Chun, Associate Professor
Department of Computer Science
City University of Hong Kong
By
Ho Lok Ping, Wilson (Student ID: )
Date: 7 December 2004

ACKNOWLEDGEMENTS

Academic work is not achieved by a mere individual; rather, it is the art of communication and collaboration. When I first joined the MSCS course offered by the City University of Hong Kong, I neither imagined that I would write a thesis about robotics, nor even dreamt of building real robots. However, it all happened over the subsequent years through meeting and working with people. Now I am grateful for what I have done during my academic years, and for my thesis work. Therefore, I would like to express my gratitude to the following people for their support and assistance in pursuing my academic career. Firstly, I would like to thank Dr. Andy Chun for giving me the opportunity to work on the robotic project. He has provided me with everything I needed to complete my thesis work and my academic career, including invaluable academic and technological advice. Similarly, I cannot help thanking Mr. Johnny Lung from the Robotics Laboratory of the City University of Hong Kong for being kind enough to offer me an opportunity to visit their robotic development work using Lego Mindstorms. I thank my friends and fellow students, particularly Mr. Albert Chung, who also worked on another robotic project. Without his help and collaboration, my academic pursuit would not have ended successfully. Finally, I would like to thank my family, who have always supported me in anything I need.

Table of Contents

ACKNOWLEDGEMENTS...1
LIST OF PICTURES...4
I. INTRODUCTION
  1.1 Background
  1.2 Objectives
  1.3 Usefulness of the Topic
  1.4 Potential Outcome...7
II. RELATED WORK
  2.1 Evolution Robotics Robot Control Centre (RCC)
  2.2 Robots by Other Robotics Company
  2.3 Robot Waitress...10
III. SYSTEM MODELING AND STRUCTURE
  3.1 Robot Control Architectures
  3.2 Incremental Design
  3.3 System Overview
  3.4 Concurrent Design
  3.5 Static Structure
  3.6 Main Class Diagrams
  3.7 System Architecture
  3.8 Simple Robotic Behaviors
  3.9 Evolution Robotics ER1 Robot...21
IV. METHODOLOGY AND ALGORITHMS USED IN THE DESIGN / IMPLEMENTATION OF THE SYSTEM
  4.1 Client / Server Architectures
  4.2 Real-time Remote Access Control
  4.3 Socket Programming
  4.4 Hardware Design
    4.4.1 Robot Kit
    4.4.2 Camera, IR sensors, Gripper and Miscellaneous
    4.4.3 Survey
  4.5 Software
    4.5.1 Image Recognition
    4.5.2 Sound Recognition

    4.5.3 Obstacle Avoidance
    4.5.4 Navigation Model
  Total Cost for the Robot Development
    Hardware
    Software
  Software Design
    Robot Control Architecture
    Communication Protocol
    Strabo Path Translator
    Path Editor
    Converter
    ER1 Robot Server Program
    Robotic Command Centre...39
V. ANALYSIS OF ALGORITHM / SYSTEM
  ER1 Robot Prototype
  Limitations and Possible Solution
  Experimental Setup
  Experimental Results
  Discussion...51
VI. CONCLUSIONS...54
VII. REFERENCES...55
APPENDICES...59
  Appendix A Project Planning key dates...59
  Appendix B Robotic Command Centre API

LIST OF PICTURES

Figure 1 Robot Waitress - Mei Mei...10
Figure 2 Concurrent Design...14
Figure 3 Package Diagram...15
Figure 4 Class Diagram - Network...16
Figure 5 Class Diagram - State...16
Figure 6 Class Diagram - Command...17
Figure 7 Class Diagram - Interface...17
Figure 8 Layered System Architecture...19
Figure 9 Module Diagram...20
Figure 10 Subsumption Network Design Diagram - A Basic Obstacle Avoidance Robot...20
Figure 11 Evolution Robotics Robot Control Centre...21
Figure 12 Stimulus-Response Diagram for our subsumption-based Robot Waiter...22
Figure 13 Communication Diagram of ER1 Robot and Robotic Command Centre...24
Figure 14 ER1 Robot from Evolution Robotics Holding a Can of Coffee...26
Figure 15 ER1 Robot's Gripper Arm...26
Figure 16 Customized ER1 Robot...27
Figure 17 Logitech QuickCam Pro 4000 (Left), IR Sensors (Centre), Gripper (Right)...28
Figure 18 Bundled WebCam...29
Figure 19 Strabo TM Pathfinder main screen...32
Figure 20 Dijkstra path searches

Figure 21 A* (Astar) path searches...33
Figure 22 Connection Screen...40
Figure 23 Path Editor Screen...40
Figure 24 Behavior Screen...41
Figure 25 Remote Control Screen...42
Figure 26 Camera Setting Screen...42
Figure 27 Notes Screen...43
Figure 28 About Screen...43
Figure 29 Problem of Holding Paper-made Coffee Cup...45
Figure 30 Rubber-made Coffee Cup is the minimum requirement of the holding media...45
Figure 31 Canned Coffee does not pose any problem with the ER1 Gripper...45
Figure 32 Partial Environment Setup...47
Figure 33 Partial Environment Setup - Strabo TM...48
Figure 34 Right Arrow Sign for Homing...50
Figure 35 "Home" Sign for Homing...50
Figure 36 ER1 Robot with Torch Light in the Dark...51
Figure 37 Captured Image of Front View from ER1 with Torch Light...51
Figure 38 NorthStar Detector (Left) & NorthStar IR Projector (Right)...52
Figure 39 NorthStar in Operation

I. INTRODUCTION

1.1 Background

Traditionally, robots have helped people finish many pre-defined, autonomous jobs. At the turn of the 21st century, implementations of different kinds of practices have appeared. More recently, robots have even taken a more solid part in space exploration, an unknown environment that is extremely dangerous for humans. In view of this, we would like to gain a deeper understanding of robotic topics. We know that the development of intelligent robots is rather slow compared with other technologies, such as micro-processors, whose speed doubles every 18 months. However, we hope that the knowledge and experience we gained can be significant in the later development and exploration of robotic topics for our next generation.

1.2 Objectives

In this project, we would like to build a robot which leverages a behavior-based approach with a user-friendly guided feature. In so doing, we have examined two consumer robots of different brands: the first is the Lego Mindstorms TM Robot and the other is the ER1 Robot by Evolution Robotics, Inc.; both react with the environment using different pre-defined behaviors. With this hands-on experience, we are going to make a more formal robotic project: a robot with a localization system using the Evolution Robotics ER1 Robot as the framework.

1.3 Usefulness of the Topic

This paper will leverage modern technologies to fulfill the requirement. A wireless 802.11g network will be the foundation between the Robotic Command Centre and the ER1 Robot itself. Different robotic behaviors will be examined, along with the interaction among them. Last but not least, robotic localization and path-finding abilities will be incorporated in the robot to build a semi-autonomous robot.

Developing a computer program is generally a time-consuming task, and developing a robot control program to deal with a machine embedded in the physical world is even more challenging than common computer programs that only deal with abstract entities. To evaluate the performance of the tasks specified in a program, no matter what the tasks are, the software must be integrated into a robot and tested in the physical environment. Therefore, the robot, the program, and perhaps the environment must be arranged for the complete evaluation.

1.4 Potential Outcome

We will create a prototype of a robot which can leverage different behaviors with a remote control facility. The software will make use of socket programming techniques, Microsoft Visual C#, Microsoft Visual Basic.NET, Strabo Pathfinder TM by Fusion Robotics, and some other open source components where available. The proposed framework is to design and build a robotic system that is easy to manipulate and easy to expand for future study.

In the development of the robot, we consider roughly two kinds of tasks: safety-oriented tasks and navigation-oriented tasks. The safety-oriented tasks include

behaviors such as collision detection to ensure collision-free navigation for the robot. This group of behaviors generally acts in a reactive manner. The navigation-oriented tasks, on the other hand, involve relatively higher-level tasks compared to the safety-oriented tasks. Behaviors such as environmental mapping and route planning are typical examples of navigation-oriented tasks. Neither group of tasks is inherently more important than the other; however, the degree of autonomy of a robot may affect the prioritization of tasks. For example, developing a semi-autonomous robot usually leaves high-level decisions to the robot and thus prioritizes the safety-oriented tasks. This thesis is founded on an aspiration of building a semi-autonomous intelligent robot. Therefore, the safety-oriented tasks are considered the first priority. A mobile robot is used as a test bed in a structured indoor environment for experimenting with the robotic system with localization and path-finding abilities.

II. RELATED WORK

Thousands of pieces of robot-related work have been done in the past decades: AI, behaviors, mechanical robot arms, chess opponents, robot architectures, simulation, circuits, design, implementation, etc. However, consumer-level robots with localization and path-finding facilities, and related work on them, are very difficult to find.

2.1 Evolution Robotics Robot Control Centre (RCC)

Although the Evolution Robotics Robot Control Centre is easy to use, it lacks localization and path-finding facilities, which hinders its overall function and restricts the ER1 Robot to a very elementary level, limiting its potential. That is also one of the objectives for building the Robotic Command Centre for the ER1 Robot.

2.2 Robots by Other Robotics Company

Other research projects by ActivMedia Robotics, LLC, such as those developed using the AmigoBot TM with an embedded system, are of high cost, normally more than HKD30,000. In view of this, we want to lower the cost of robot development for the benefit of the public at large; our target price is HKD20,000. Moreover, these robots are more for researchers than for consumers. The systems developed with them are too difficult to change without a large code rebuild, which is a major barrier for common users.

2.3 Robot Waitress

There is a robot currently under experiment which I would also like to implement: a waitress robot in a Chinese dress, called "Mei Mei," which carries a couple of glasses of water to a customer's table outside Tokyo. The robot moves to tables using an infrared sensor and serves water and delivers menus to customers with a bow and words of welcome, as Figure 1 shows 1.

Figure 1 Robot Waitress - Mei Mei

However, we would like to build a robotic system which can not only deliver a cup of coffee to the client's table, as Mei Mei does, but can also be easily modified by common users for a changing environment, with localization and path-finding abilities.

1 TLC LifeUnscripted, 2003,

III. SYSTEM MODELING AND STRUCTURE

3.1 Robot Control Architectures

Robot control architectures, including the Deliberative Architecture, the Reactive Architecture, and the Behavior-based Architecture (mainly the Subsumption Architecture), will be studied. There are four broad types of robot control architectures (Mataric (1997)) 2:

Deliberative
Purely Reactive
Hybrid
Behavior-based

In short, deliberative strategies use a centralized model of the environment to plan complete paths to goals. Planning is typically done prior to the robot acting and may even be done off-line. These strategies can be slow to generate plans and brittle should the environment change (model updating and replanning are necessary, so action may be delayed). For uncertain environments (due to sensor or effector noise, or a changing environment), constant replanning may be necessary.

2 Matt Mitchell, "Robot Control",

Purely reactive architectures are stimulus/response type architectures. These are typically pre-programmed condition-actions. Since there is no central world model, there is no planning or internal search for solution paths.

Hybrids combine aspects of both purely reactive and purely deliberative strategies. One approach is to have a reactive low-level controller and a high-level deliberative planner. Often the low-level controller ensures the immediate safety of the agent, while the high-level planner determines paths to goals.

One type of hybrid approach is the Behavior-based approach. These are more complex than the simple condition/actions or lookup tables of purely reactive systems. One such behavioral architecture builds upon the reactive subsumption architecture 3. The Subsumption Architecture was proposed by Brooks (1986). He argues that many developers of robots use a model that runs from inputs to outputs through stages of modeling, planning and task execution. This type of architecture involves a single synchronous thread of execution from input to output. The modules in such a system are based on functional components in the system (e.g. planning component, mapping component). Brooks proposed that robot control systems should instead be decomposed according to task-achieving behaviors or competencies. This means that instead of viewing processing as a sequence of processes moving from inputs to outputs, control is seen as a stack of competencies achieved through layers of control 4.

3 Karl Williams, Insectronics, McGraw-Hill, pp , 2003
4 Ronald C. Arkin, Behavior-based Robotics, The MIT Press,

3.2 Incremental Design

The ultimate goal of our robotic experiments is to build a robot which can be used in the future study of a robotic system. In order to build such a robust and compatible program, we must build up the program as a complete system with a set of complete behaviors, which enables the robot to be tested in the real-world environment. Rodney A. Brooks at the MIT AI Laboratory suggested in his famous article that building complex robots (he calls them creatures) which coexist in the world with humans must be done incrementally, in the same manner as biological evolution (Brooks 1991). For instance, a single-cell amoeba which wanders the world without any goal and a human who exhibits intelligent behaviors are both complete biological systems, although there is a difference in the degree of intelligence. Over an astronomical time span, biological evolution on earth started from the lower intelligence of amoebas and has now arrived at human-level intelligence. Brooks' idea of building a robot mimics the process of evolution. The concept of this incremental design helps the entire project of building an intelligent system to advance toward the goal steadily, one step at a time.

3.3 System Overview

The Robotic Command Centre must be able to open and close network connections, send command sequences, track and display images, etc. based on user input through a graphical user interface.

3.4 Concurrent Design

The Robotic Command Centre will have three parts that can run concurrently (Figure 2):

Network - sends and receives data from the network
Interface - receives input from the user
Control - tracks robot state, organizes actions of other parts

Each of these components will run in its own thread, allowing each to continue functioning without waiting for other components.

Figure 2 Concurrent Design

Each of these parts can be characterized as a package that implements a specific aspect of the software. Within each of these packages there will be one or more classes to implement the required functionality.

3.5 Static Structure

As mentioned in the System Overview, the software will be separated into three
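As a rough illustration of this three-thread design, the sketch below runs hypothetical Network and Control workers in separate threads that exchange messages through queues (in Python rather than the project's Visual C#; the class, queue and command names are our own invention, not the actual Robotic Command Centre code):

```python
import queue
import threading

# Hypothetical message queues linking the concurrent parts.
net_out = queue.Queue()   # Control -> Network: commands to transmit
net_in = queue.Queue()    # Network -> Control: feedback from the robot

def network_worker():
    # Would normally read/write a socket; here it just acknowledges commands.
    while True:
        cmd = net_out.get()
        if cmd is None:  # shutdown signal
            break
        net_in.put(f"ack:{cmd}")

def control_worker(commands, results):
    # Tracks robot state and coordinates the other parts.
    for cmd in commands:
        net_out.put(cmd)
        results.append(net_in.get())
    net_out.put(None)  # tell the network thread to stop

results = []
net = threading.Thread(target=network_worker)
ctrl = threading.Thread(target=control_worker, args=(["move 100", "stop"], results))
net.start(); ctrl.start()
net.join(); ctrl.join()
print(results)  # ['ack:move 100', 'ack:stop']
```

Because each part blocks only on its own queue, the Interface thread (omitted here) could keep accepting user input while Network waits on the wire, which is the point of the concurrent design.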

packages. The packages are shown in Figure 3. The classes which each package contains are shown within each package. The classes are described in more detail in the class diagram section, which shows the data members and functions for each class. There are many subclasses of Command that all override the same functions and are not presented on the class diagram.

Figure 3 Package Diagram

3.6 Main Class Diagrams

Below are the main class diagrams for the Robotic Command Centre; they interact with each other as seen in the package diagram above. Figure 4 shows the Network Class Diagram. Communication between the Robotic Command Centre and the ER1 Robot starts with the network connection: the client computer

has to open the socket for connection with the correct IP address and port number in order to connect successfully.

Figure 4 Class Diagram - Network

The Robotic Command Centre will frequently enquire about the status of the gripper arm, the infrared (IR) sensors and the present connection status. Please note that there are three IR sensors located in different locations on the robot; their respective values are affected by the closeness of the object and other factors, including sunlight and other ambient factors. Figure 5 shows the State Class Diagram.

Figure 5 Class Diagram - State

The user controls the ER1 robot by sending commands using the Robotic Command Centre. Besides sending commands one by one, the Robotic Command Centre enables the user to send several commands in the list box and to save the commands in a file for later retrieval.

Figure 6 Class Diagram - Command

The interface between the Robotic Command Centre and the ER1 Robot is the Graphical User Interface. A carefully organized and user-friendly interface created under the Microsoft .NET framework enhances usability for users. Figure 7 shows the different elements in the interface.

Figure 7 Class Diagram - Interface

3.7 System Architecture

The system architecture is an abstract design that organizes the system components. In the recent robotic literature, most autonomous robots employ a layered architecture. There are roughly two ways of decomposing the system into layers: functional-based layers and behavior-based layers. Nowadays, the trend in layered architecture is Brooks' subsumption architecture, in which the system is decomposed into task-oriented behaviors (Brooks 1986). In the subsumption architecture, the independent behaviors exercise their tasks (from sensing to acting) in parallel. Therefore, the failure of one behavior does not interrupt the entire system execution. The independence of behaviors also gives the capability of easily adding more

behaviors to the system in an incremental manner. Each behavior can either suppress or inhibit the input/output of other behaviors to interact with the environment, which causes the emergence of a high-level intelligent behavior without giving the robot specific instructions of what to do to achieve that particular behavior. Also, the absence of a central reasoning protocol, no symbolic representation of the world model, and the direct control of actuators by a behavior are well-known distinctive characteristics of the subsumption architecture (Brooks 1991). Although each behavior is independent, the ability to influence another behavior eventually makes the system very complicated, and adding another behavior may thus require enormous effort. In addition, because of the emergent characteristic of behavior-based systems, the complexity of analyzing the result of emergent behaviors may also cause a problem in modifying and configuring the system.

The classical hierarchical approach had, on the other hand, dominated the robotic trend for decades until the rise of the subsumption architecture. Unlike the behavior-based decomposition of the subsumption architecture, the traditional layered architecture decomposes the system into functional modules such as sense, plan, and act. This type of architecture has the advantage of having easily separable functional modules that are associated with an intuitive paradigm in designing the hierarchical architecture. However, it is often noted that the system is hardly modifiable once the hierarchy is defined, since the functionality of modules is limited to contributing to certain behaviors (Liscano et al. 1995).

Figure 8 Layered System Architecture (layers from top to bottom: Applications - Vision / Navigation / Interaction; Client Developed Software / Robot Control Centre; Architecture API; OS & Hardware)

The robotic system architecture used in this thesis (Figure 8) consists of two layers, taking advantage of the former two types. Basically, the system has several behavior-based-like structures. Each structure is composed of two functional layers, the Hardware Layer and the Component Layer. The Hardware Layer is a collection of modules communicating with the robot's hardware devices such as a camera, infrared sensors and motors. The Hardware Layer is implemented with Microsoft Visual C# .NET since the ER1 kit is provided with the development environment that specifies the language. The SDK (Software Development Kit) already contains libraries to help in accessing the hardware components of the robot, which reduces the amount of redundant effort. This layer functions as a bridge between the upper-level layer and the hardware. The Component Layer contains the intermediate functional modules which constitute the higher-level behaviors. The Component Layer is implemented with Microsoft Visual C#.

3.8 Simple Robotic Behaviors

Below is an example of a common robot using the Subsumption Architecture.

Figure 9 Module Diagram (sensors feed the "avoid obstacle" and "move forward" modules, which drive the motors)

The preliminary design of the robot implements forward movement; when an external stimulus is encountered, such as a hit on either the left or right touch sensor, the robot turns in the opposite direction. This is a simple model and can be used in a robot like the Lego Mindstorms TM Robot, so only two modules are formulated, as Figure 9 shows.

Figure 10 Subsumption Network Design Diagram - A Basic Obstacle Avoidance Robot (the left and right touch sensors feed a collision-resolve AFSM, which can suppress the wander / move forward AFSMs and trigger turn left / turn right)

Figure 10 shows the augmented finite state machines (AFSM) of the basic robotic design; it is further elaborated as follows:

Wander AFSM: It is the first part of the robot movement, until another stimulus changes its preliminary direction.

Move Forward AFSM: The robot will move forward unless it is suppressed by either the left or right touch sensor.

Turn Left / Turn Right AFSM: It suppresses the Move Forward AFSM and moves the robot in the reverse direction when either the Turn Left or Turn Right AFSM has been triggered.

Collision Resolve AFSM: It takes inputs from the left and right touch sensors and determines which sensor is touched. It will trigger the reverse direction of the robot movement.

3.9 Evolution Robotics ER1 Robot

A chain of behaviors can be constructed using the Evolution Robotics Robot Control Centre (RCC).

Figure 11 Evolution Robotics Robot Control Centre

Besides the simple robot behavior, we would like to implement a more complete robot which grabs the coffee, delivers it to the destination and homes to the source. The following behaviors are involved:

Wandering: move in a random direction for some time.
Seeking coffee: Find and move to the target coffee.
Grabbing coffee: When the target coffee is found, close the gripper.
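The suppression mechanism of the AFSMs above can be sketched in a few lines of code. This is only an illustrative model (in Python, with invented names; the actual robots were programmed differently): each behavior either proposes an action or yields, and the highest-priority behavior with an opinion suppresses the layers below it.

```python
# Priority-ordered behaviors for the basic obstacle avoider (highest first).
# Each behavior returns an action string, or None to yield to lower layers.

def collision_resolve(left_hit, right_hit):
    # Suppresses wander/move-forward: turn away from the touched sensor.
    if left_hit:
        return "turn right"
    if right_hit:
        return "turn left"
    return None

def move_forward(left_hit, right_hit):
    # Default lower-priority layer: keep moving.
    return "move forward"

BEHAVIORS = [collision_resolve, move_forward]

def arbitrate(left_hit, right_hit):
    # The highest-priority behavior with an opinion wins (subsumption-style).
    for behavior in BEHAVIORS:
        action = behavior(left_hit, right_hit)
        if action is not None:
            return action

print(arbitrate(False, False))  # move forward
print(arbitrate(True, False))   # turn right
```

Adding another layer (e.g. a wander behavior below move-forward) only means appending to the priority list, which mirrors the incremental-design argument made earlier.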

Seeking required table: Locate the table for coffee delivery.
Coffee delivery: Put down the coffee on the designated table.
Homing: Return to the source (starting point).

Figure 12 illustrates the Stimulus-Response (SR) diagram for this set of behaviors. Like the previous example, priority-based arbitration is the coordination mechanism, and the robot executes only one behavioral rule at any time. Note in particular that when the robot senses the attractor, wandering is suppressed, and when the attractor is grabbed, homing then suppresses grabbing; this is the mechanism of the subsumption architecture.

Figure 12 Stimulus-Response Diagram for our subsumption-based Robot Waiter (priority from highest to lowest: homing, deliver coffee, seek required table, grab coffee, seek coffee, wander)

These are the main system modeling and structures; the methods and implementation details for the robots are described in the next section.

IV. METHODOLOGY AND ALGORITHMS USED IN THE DESIGN / IMPLEMENTATION OF THE SYSTEM

The main objectives of the project are to study the current technologies in the robotics arena and to build a prototype mobile robot. The prototype should feature some forms of behavior, such as localization, path-finding and homing. We have examined the following robots in our project: the Lego Mindstorms TM Robot and the ER1 Robot from Evolution Robotics. Both can use a current high-level language to further enhance the robotic behaviors to suit one's particular needs, such as Microsoft Visual C#, Microsoft Visual Basic.NET, Python, etc.

4.1 Client / Server Architectures

The term client/server was first used in the 1980s in reference to personal computers (PCs) on a network. The actual client/server model started gaining acceptance in the late 1980s. The client/server software architecture is a versatile, message-based and modular infrastructure that is intended to improve usability, flexibility, interoperability, and scalability compared to centralized, mainframe, time-sharing computing.

A client is defined as a requester of services and a server is defined as a provider of services. A single machine can be both a client and a server depending on the

software configuration. We will use the client/server architecture for our robotic system development.

Figure 13 Communication Diagram of ER1 Robot and Robotic Command Centre

Two communication modules have been created, one for the ER1 Robot and one for the desktop computer (Robotic Command Centre). They communicate using the same port number. The ER1 Robot will receive commands from, and send feedback to, the Robotic Command Centre. The desktop computer runs another service on port 81, which is the Strabo TM Pathfinder for path navigation, as further described below.

4.2 Real-time Remote Access Control

With the creation of the client software (Robotic Command Centre) and the server program located in the ER1 Robot, real-time remote-access control can be achieved. The client can send robot commands one by one to the server (ER1 Robot), and the server will enforce the action accordingly.

4.3 Socket Programming

To develop client/server applications in the TCP/IP domain, we will make use of socket programming, in which the client and server communicate through their assigned sockets. Its fundamental concepts include network addressing, well-known services, sockets and ports.

4.4 Hardware Design

At first glance, the ER1 Robot is just a robot skeleton and seems to be nothing more than that; however, once it has been assembled with a notebook computer, it can start working. The attractive point of using the ER1 Robot rather than others is that it is easy to build and can make use of an existing notebook computer as its brain.
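To illustrate the kind of command/feedback exchange described in sections 4.2 and 4.3, here is a minimal TCP socket sketch (in Python; the command strings, port number and "ok" reply format are invented for illustration and are not the actual ER1 protocol):

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9000  # hypothetical robot-server address

def robot_server():
    # Server side (would run on the robot's notebook): accept one client,
    # read newline-terminated commands, and acknowledge each one.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn, conn.makefile("rw") as f:
            for line in f:
                cmd = line.strip()
                if cmd == "quit":
                    break
                f.write(f"ok {cmd}\n")
                f.flush()

threading.Thread(target=robot_server, daemon=True).start()
time.sleep(0.5)  # give the server a moment to start listening

# Client side (the Robotic Command Centre) sends commands one by one.
replies = []
with socket.create_connection((HOST, PORT)) as cli, cli.makefile("rw") as f:
    for cmd in ["move 50", "gripper close", "quit"]:
        f.write(cmd + "\n")
        f.flush()
        if cmd != "quit":
            replies.append(f.readline().strip())
print(replies)  # ['ok move 50', 'ok gripper close']
```

A newline-terminated text protocol like this keeps the client and server loosely coupled: either end can be rewritten in another language (e.g. the C# of the real system) as long as the framing is respected.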

Figure 14 ER1 Robot from Evolution Robotics Holding a Can of Coffee

It comes with several peripherals: one camera and two motors. If we purchase an expansion kit, there is also a gripper arm and three more infrared sensors for better obstacle-avoidance navigation.

Figure 15 ER1 Robot's Gripper Arm

4.4.1 Robot Kit

The hardware used in this experiment is a commercial robot kit called the ER1 Robot by Evolution Robotics. The robot kit includes the control software, aluminum beams and plastic connectors to build a chassis, two assembled scooter wheels powered by two stepper motors, one 360-degree rotating caster wheel, a power module, a battery (12V 5.4A), and a web camera. The experimental robot also carries additional

accessories: three infrared sensors and extra beams and connectors for reinforcement. A laptop computer, an IBM ThinkPad X40 notebook computer (Intel Mobile Centrino processor 1.1GHz with 768 MB RAM) with an extended battery installed, which can run for 7.5 hours according to the specification, is used as the controller device, and Windows XP Professional is loaded as the operating system.

Figure 16 Customized ER1 Robot

The bundled software that comes with the kit provides various tools for users to operate the robot with a simple interface, such as computer vision, hearing, speech, networking, remote control, and some autonomous behaviors. However, the furnished high-level behaviors have no flexibility for customization at the algorithmic level, which in many cases requires programming for modifications. Therefore, the experiments have been conducted without using the bundled software.

Unlike the software, the hardware of the ER1 robot kit empowers users to customize the robot for their objectives. The reconfigurable chassis enables us to design a purposive mobile robot, and extensions (extra cameras, sensors and grippers) can easily be added to the system if necessary. The purpose of this experiment is to build a robot as a test bed for the future localization and path-finding project.

4.4.2 Camera, IR sensors, Gripper and Miscellaneous

Figure 17 Logitech QuickCam Pro 4000 (Left), IR Sensors (Centre), Gripper (Right)

In this experiment, three infrared (IR) sensors and a single web camera are used to gather information about the environment. Figure 16 depicts the arrangement of sensors installed on the robot. The camera, a Logitech QuickCam Pro 4000 (Figure 17, Left), is mounted at the front of the robot capturing the front view. The 320 x bit RGB image is updated and saved in memory at the rate of 2 frames per second by default. The camera is connected to the PC through a USB (Universal Serial Bus) port. Behaviors such as collision detection and obstacle avoidance are designed to perform tasks based on the information given by the three IR sensors. Besides, the gripper is used to grab objects, e.g. a cup of tea, a canned soft drink, coffee, etc. The gripper has an IR sensor: when an object enters the inner part of the gripper, it closes automatically. However, there is no pressure sensor, which means that soft objects may have problems when being gripped.
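As a toy illustration of how three IR readings could drive the collision-detection behavior just described, the following sketch (Python; the threshold value, reading scale and steering policy are our own assumptions, not the ER1's actual calibration) flags which side is blocked and picks an avoidance action:

```python
# Hypothetical IR-based collision check: readings grow as obstacles get closer.
# The threshold is an assumed calibration value, not taken from the ER1 manual.
THRESHOLD = 80

def check_obstacles(left, centre, right, threshold=THRESHOLD):
    # Returns the set of blocked directions for the avoidance behavior.
    blocked = set()
    if left >= threshold:
        blocked.add("left")
    if centre >= threshold:
        blocked.add("centre")
    if right >= threshold:
        blocked.add("right")
    return blocked

def avoidance_action(blocked):
    # Simple policy: steer away from the blocked side, stop if boxed in.
    if not blocked:
        return "move forward"
    if blocked == {"left"}:
        return "turn right"
    if blocked == {"right"}:
        return "turn left"
    return "stop"

print(avoidance_action(check_obstacles(20, 10, 95)))  # turn left
```

In practice the readings would also be filtered against the ambient-light effects noted earlier, since raw IR values drift with sunlight.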

4.4.3 Survey

We have replaced the original ER1 bundled camera (Figure 18) with the Logitech QuickCam Pro 4000 for improved image quality and more accurate object recognition. We conducted an object recognition test in the daytime using the 320 x bit RGB image resolution at over 25 feet, and it proved successful in identifying the object.

Figure 18 Bundled WebCam

Although IR sensors are not as accurate as sonar and laser sensors, with respect to cost performance, IR sensors are the major solution in mobile robotics. In mobile robot navigation, infrared sensors are mostly used in safety-oriented tasks such as collision detection and obstacle avoidance because of their faster response time and lower cost (Benet et al. 2002).

4.5 Software

The ER1 has its own GUI control program called the Robot Control Centre (RCC), which is a very easy-to-use behavior construction application. If-then constructs are easy to build, and users can set over 100 behaviors which act sequentially one after the other. Besides

that, we can make use of ERSP, a set of APIs for the ER1, to further tailor its behavior, which leverages Microsoft Visual C# as the extension to the RCC. The ER1 Robot is good for the following behaviors, though there is still some room to enhance its effectiveness:

4.5.1 Image Recognition

The ER1's ability for image/object recognition is undoubtedly one of the main features behind its success. Even a large corporation like Sony has made use of the ERSP software development kit for the AIBO's image recognition ability.

4.5.2 Sound Recognition

The ER1 can recognize a "word" or "phrase" as a trigger for an action defined in its behavior settings. Uses can include notifying its master about an unknown comer when it sees someone entering the home while the master is away at work.

4.5.3 Obstacle Avoidance

The ER1 has the ability to avoid collisions with obstacles and recognize objects while moving. However, in order to further enhance the obstacle avoidance ability, it is recommended to install a second camera solely for obstacle avoidance purposes.

4.5.4 Navigation Model

The ER1 has the ability to navigate in the available space together with the obstacle avoidance ability introduced above. A simple view-based navigation uses pictures of different locations so the robot can easily move to a given location. However, there is no internal map to hold its location for path-following navigation.

Robin R. Murphy's Introduction to AI Robotics (The MIT Press, 2000) covers many different areas of robotics, in particular the navigation models we used for reference [5]. A complete review of the ER1 Robot can be found in the footnotes [6].

We make use of Strabo™ Pathfinder as the navigation server in our ER1 Robot implementation. Strabo™ Pathfinder is Fusion Robotics's solution to robot navigation. It combines advanced artificial intelligence path-finding algorithms with sophisticated wireless networking capabilities to allow the robot to navigate in the real world without having to grope around in it [7]. Strabo™ Pathfinder is an HTTP-compliant server whose sole mission is to provide navigational aid to robots via HTTP calls. We use the 802.11g wireless adapter embedded in the IBM ThinkPad notebook computer together with access points, with Strabo™ running on a server. Maps created in Strabo™ Pathfinder can be used by the robot to generate a list of directions that virtually any robot can follow. Strabo™ also listens for the robot on an assigned port number; this is 80 at install time, and we have re-assigned it to 81 to eliminate a potential conflict with the web server.

Strabo's maps assume the top of the map faces north, the standard convention used in the Western world for centuries. The top left corner is always numbered 1,1.

5 Robin R. Murphy, Introduction to AI Robotics, The MIT Press, 2000
6 Evolution Robotics, 2002, Accessed 2002 Nov 29
7 Strabo Pathfinder
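Because Strabo™ Pathfinder answers ordinary HTTP calls, the client side of a path request can be sketched in a few lines. This is an illustrative sketch only: the "/path" endpoint, the parameter names, and the host address below are assumptions for illustration, not Strabo's documented API.

```python
from urllib.parse import urlencode

# Illustrative sketch: a robot client asking an HTTP path server for a route.
# The endpoint and parameter names are assumed, not Strabo's documented API;
# the host address is hypothetical.
def build_path_request(host, port, start, goal):
    query = urlencode({
        "startx": start[0], "starty": start[1],
        "endx": goal[0], "endy": goal[1],
    })
    # Port 81 in our setup: re-assigned from the install-time default of 80
    # to avoid a conflict with the web server on the same machine.
    return f"http://{host}:{port}/path?{query}"

url = build_path_request("192.168.0.10", 81, (3, 2), (9, 3))
print(url)
# The reply could then be fetched with, for example:
#   import urllib.request
#   directions = urllib.request.urlopen(url).read().decode()
```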

Notice how the tile designation changes in the lower left-hand corner as you move the mouse around the map. This helps you build very precise maps.

Figure 19 Strabo™ Pathfinder main screen

There are two path search algorithms supported by Strabo™ Pathfinder: Dijkstra path searches and A* (A-star) path searches. These are the two most popular path-finding algorithms, used both in GPS navigation and in video game navigation systems. They both get to the same place, Table 8, but they determine their paths in quite different ways, as shown below.

Figure 20 Dijkstra path search

Figure 21 A* (A-star) path search

Herding

Herding is a term for the ability to nudge the robot onto a particular path as it travels. The A* algorithm tries to find the easiest path to its destination. By using difficult, hard, normal and easy tiles in different combinations, A* can be steered toward the easiest path. Given a choice of easy or difficult space, A* will usually choose the easy route. We call this herding.
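The herding effect can be reproduced with a generic A* sketch. This is not Strabo's implementation, just a minimal search over a small weighted grid in which "easy" tiles cost 1 and "difficult" tiles cost 5, which is enough to pull the chosen path onto the cheap row.

```python
import heapq

# A small A* sketch on a weighted grid, illustrating "herding": tiles carry
# different movement costs (easy = 1, difficult = 5), and A* drifts toward
# the cheap tiles.  The grid and the costs are illustrative, not Strabo's
# own map format.
def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan-distance heuristic, admissible for 4-way moves
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]
    best = {start: 0}
    while frontier:
        _, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + grid[nr][nc]  # cost of entering the neighbor tile
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(
                        frontier,
                        (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]),
                    )
    return None

# Row 0 is "difficult" (cost 5) except the endpoints; row 1 is "easy".
grid = [
    [1, 5, 5, 5, 1],
    [1, 1, 1, 1, 1],
]
path = astar(grid, (0, 0), (0, 4))
print(path)  # herds through the easy row instead of going straight across
```

Swapping the heuristic for a constant zero turns the same search into Dijkstra's algorithm, which is the other mode Strabo™ offers.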

4.6 Total Cost for the Robot Development

We tried every possible method to lower the cost of the robot development without sacrificing the quality of the product. The breakdown of the development items is given below. The total is around HKD20,000, only about HKD4,000 more than the Sony AIBO ERS-7 robot dog, so it is quite attractive for the consumer.

Hardware

Hardware Item                                                        Price
ER1 Robot and value-added accessories                                HKD7,000
IBM ThinkPad X40 Notebook Computer (Intel Centrino™)
  with 768MB RAM and extended battery                                HKD11,…
Logitech QuickCam Pro 4000 and torch                                 HKD750
Total:                                                               HKD19,…

Software

The software used comprises the bundled ER1 software, the Robot Control Centre (RCC), together with Socket API programming for further experiments; Strabo™ Pathfinder; and Microsoft Visual Studio .NET (Visual C# and Visual Basic .NET). Only Strabo™ Pathfinder had to be purchased.

Software Item                                                        Price
Strabo™ Pathfinder (USD 49.95)                                       HKD390
Total:                                                               HKD390

The total cost of the robot prototype is HKD19,…

4.7 Software Design

Robot Control Architecture

The object-oriented design technique is used for the design of the robot software. This allows for a flexible design, enabling new objects to be implemented easily. The design also applies information hiding techniques to reduce the complexity of the code implementation.

The system is designed to be flexible and easily changed later. For these purposes abstraction, polymorphism and information hiding have been utilized. As can be seen from Section 3, UML diagrams have been created to provide an easy understanding of the interactions and dependencies between the classes. The class diagram shows an outline of how the classes depend on each other and how each will be used. The package diagram illustrates which classes are grouped together for a common purpose.

In order to execute multiple tasks on a single processing unit, the robot control architecture must be carefully designed so that the robot chooses the right action among many candidates. The classical hierarchical architecture and Brooks' subsumption architecture have already been discussed with respect to system organization; in this section, we discuss issues within the robot control spectrum rather than the system design. Control methods theoretically lie between two extremes, the planner-based centralized approach and the decentralized, purely reactive approach (Mataric 1992). The former makes a global decision on the robot's action by building a complete internal model of the environment using a-priori knowledge and perceived data. The reactive approach, on the other hand, normally maintains no internal model and locally decides the robot's action based on the sensor inputs using simple if-then rules.

In the recent robotics literature, non-extreme control models such as hybrid and behavior-based systems have gained popularity because of their moderation, which makes them relatively applicable to realistic situations that usually require both real-time sensitivity and planning capability. Various methodologies (e.g. behavior-based, blackboard, and agent-based systems) are found in many projects on mobile robot navigation. In terms of the control mechanism, the subsumption architecture seems valid and attractive because of its decentralized parallelism and its instantaneous decision-making process. However, behavior-based autonomous robots are hardly seen beyond research domains because of their structural complexity (designating the inhibition and suppression among multiple behaviors can be a complex and messy job) and the difficulty of verification (due to the decentralized nature, the robot may express highly unexpected, emergent behaviors, which makes it difficult to analyze the robot's behavior patterns). Besides, since a truly distributed model requires multiple processing units, the concept does not completely match our objective of using a commercial robot kit as the robot's framework. Therefore, the behavior-based system may not be the perfect model for building this robot control program.

The strength of the blackboard architecture in robot navigation is its adaptability for applications that need to make dynamic control decisions. However, because of the presence of a global database, reactivity to a dynamic environment may not be instantaneous. Also, the existence of a control module (sometimes called an inference engine) implies that blackboard systems are not as robust and reliable as behavior-based

systems. Once the control module stops functioning, the whole system collapses. A subsumption system, on the other hand, keeps operating with a malfunctioning behavior (or agent) unless all behaviors stop functioning at the same time. Meanwhile, multi-agent systems are winning votes as a revolutionary method of controlling an autonomous robot. A number of multi-agent control systems are found in the recent AI literature (Soler et al. 2000; Sierra, López de Màntaras, and Busquets 2001). These systems are basically an extended form of the blackboard system, since multi-agent systems share some characteristics with blackboard systems. For example, a multi-agent system has a collection of agents (called knowledge sources (KSs) in a blackboard system) which collaborate in problem solving, forming a group of cooperating experts.

The goal of this thesis is to design and implement a robust and easily expandable robot control system with localization and path-finding abilities, starting with the commercial robot kit as a test bed. The system takes advantage of different contemporary technologies and some form of behavior-based approach.

Communication Protocol

We have defined a simple communication protocol between the ER1 Robot and the Robotic Command Centre. In short, if we want the robot to move six feet, we simply send the command "move 6 feet", and so on.

Strabo Path Translator

Strabo's directions and steps are translated into valid ER1 Robot movement commands by calling this module. It takes into consideration that an accurate path and step count are still essential for the ER1 Robot to move to the target location. The translator module also takes the heading of the ER1 Robot into account so that it will not run in the wrong direction, and the steps returned are parsed into ER1 movement commands. For example, the following navigation string is returned by Strabo to a client computer; the first bracket is the start point, the middle part gives the directions and steps, and the final bracket is the destination point.

[3,2] [S,S,E,E,E,E,E,E,N] [9,3]

Supposing the robot is facing south and the unit of movement is feet, this is interpreted as the ER1 movement command sequence: move 1 feet, move 1 feet, rotate -90 degree, move 1 feet, move 1 feet, move 1 feet, move 1 feet, move 1 feet, move 1 feet, rotate -90 degree, move 1 feet.

Path Editor Converter

This is a point-to-point drawing pad inspired by the Canvas implementation for Microsoft Visual C# by Eduard Baranovsky (2004) [8]. By drawing different points in the image box, we can set up a valid path for ER1 Robot movement. The unit of movement is based on screen pixels, but we can define different measurements, e.g. inches or feet, to suit different needs. It can also generate degrees of movement based on the coordinates between any two subsequent points.

8 Canvas implementation for C#, Eduard Baranovsky, 2004

Together with a torch at night, the robot can behave as a rescue pioneer; in the not-too-distant future, it might even serve as a Mars rover. One drawback is that it is quite difficult to draw a very accurate path for the ER1 Robot using only mouse point-and-click.

ER1 Robot Server Program

This program is located on the ER1 Robot's notebook computer and listens on port 9000 for communication with the Robotic Command Centre. Both sides speak the same communication protocol. Upon receiving and finishing any command, the ER1 Robot sends an ACK to the Robotic Command Centre for acknowledgment.

Robotic Command Centre

This is the main program, which connects to the ER1 Robot on port 9000. The main program has the following features, as shown on the screens. The left pane is a fixed screen; it shows the photo captured from the robot's camera, the connection status, the gripper arm status and the IR sensor status.
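The Strabo Path Translator described earlier can be sketched in a few lines. Compass headings are taken as N = 0, E = 90, S = 180 and W = 270 degrees, and each turn is normalized so that, for example, turning from south to east comes out as "rotate -90 degree", matching the worked example above; the exact command strings follow our simple protocol.

```python
# A sketch of the Strabo-to-ER1 translation described above.  The heading
# convention (N=0, E=90, S=180, W=270, negative rotation for a left turn)
# is chosen to reproduce the "rotate -90 degree" worked example; the
# command strings follow the thesis's simple text protocol.
HEADING = {"N": 0, "E": 90, "S": 180, "W": 270}

def translate(steps, facing, unit="feet"):
    commands = []
    current = HEADING[facing]
    for step in steps:
        target = HEADING[step]
        turn = (target - current + 180) % 360 - 180  # normalize to [-180, 180)
        if turn != 0:
            commands.append(f"rotate {turn} degree")
            current = target
        commands.append(f"move 1 {unit}")
    return commands

# The navigation string "[3,2] [S,S,E,E,E,E,E,E,N] [9,3]" with the robot
# initially facing south:
cmds = translate(list("SSEEEEEEN"), facing="S")
print(cmds)
```

Running this yields the same eleven-command sequence as the worked example in the Strabo Path Translator section.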

Figure 22 Connection Screen

The connection screen is the first step in connecting the Robotic Command Centre to the ER1 Robot: input the correct IP address of the ER1 station and the correct port number, then press the Connect button to connect, or the Disconnect button to end the current session, as Figure 22 shows.

Figure 23 Path Editor Screen

The Path Editor can generate point-to-point directions for ER1 Robot movement. It is still at an experimental stage, but it may well be used for exploring new land and for rescue purposes. Users can select their unit in the map scale.

Figure 24 Behavior Screen

The Behavior Editor screen is the heart of the autonomous robot. The screen is grouped into different behaviors, including Strabo™ Pathfinder, the movement behavior, the object recognition behavior and the gripper arm. Commands drawn with the Path Editor are also added to the Enter Sequence box. You can input a new command in the New Command field, or delete a command from the list by pressing the Delete button. Users can make a new sequence, save it, or load a previously saved sequence. To stop the robot, press the Stop Sequence button.

By combining this with Strabo™ Pathfinder, the user can retrieve the correct distance to a designated waypoint, such as Table 1. Strabo™ Pathfinder returns the valid directions and steps to complete the movement, and a sequence recompiles the return value into ER1 commands.
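The point-to-point directions that the Path Editor generates reduce to a heading angle between consecutive drawn points, which can be computed with atan2. The sketch below assumes a 0-degrees-is-up (north) convention and flips the screen-pixel y axis, which points downward; these conventions are assumptions for illustration, not necessarily the editor's exact ones.

```python
import math

# How a "degree of movement" between two subsequent drawn points can be
# derived with atan2.  Screen pixels have y growing downward, so the y
# difference is negated; 0 degrees pointing up (map north) is an assumed
# convention, not necessarily the Path Editor's exact one.
def heading_degrees(p1, p2):
    dx = p2[0] - p1[0]
    dy = p1[1] - p2[1]  # flip: the screen y axis points down
    # atan2(dx, dy) measures the angle clockwise from "up".
    return math.degrees(math.atan2(dx, dy)) % 360

print(heading_degrees((0, 0), (0, -10)))  # up the screen: north, 0 degrees
print(heading_degrees((0, 0), (10, 0)))   # right: east, 90 degrees
print(heading_degrees((0, 0), (0, 10)))   # down: south, 180 degrees
```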

Figure 25 Remote Control Screen

Users can also remote-control the robot if they wish. This includes direct movement of the robot, the wheel rotation setting, the IR sensor setting and gripper control.

Figure 26 Camera Setting Screen

Camera resolution and capture settings can be configured under the Camera tab.

Figure 27 Notes Screen

The Notes screen provides a basic command reminder to help users formulate different behaviors.

Figure 28 About Screen

The About screen describes this program.

V. ANALYSIS OF ALGORITHM / SYSTEM

We set out to develop a robot prototype for localization and path-finding using the ER1 Robot as the framework. Emerging technologies in the forthcoming years will include increasing use of speech recognition and generation, new sensor technologies that allow users to touch, feel and smell objects in virtual worlds, and improved gesture recognition. We hope that our studies on this subject will inspire others to follow and enhance them.

5.1 ER1 Robot Prototype

We were able to fulfill the designated tasks through trial and error. Although the robot prototype improves with every iteration, we noticed the following shortcomings while developing the system.

5.2 Limitations and Possible Solutions

1. The Python Software Development Kit lacks the capability to read low-value IR input, so obstacle avoidance cannot be enforced with it. In view of this, we shifted our development effort to Microsoft Visual C# and Microsoft Visual Basic .NET.

2. The gripper arm has no pressure sensor, so if we use a cup made of soft paper, the cup comes under high pressure and is squeezed into an oval shape, as shown below. Tests with a plastic cup and canned media did not pose any problem.

Figure 29 Problem of Holding a Paper-made Coffee Cup

Figure 30 A Rubber-made Coffee Cup is the minimum requirement for the held media

Figure 31 Canned Coffee does not pose any problem with the ER1 Gripper

3. Image recognition is affected by light and ambient effects. When the light is insufficient and the robot needs to search for its target, a looping problem sometimes occurs. In view of this, we replaced the bundled webcam with a high-quality Logitech QuickCam Pro 4000, which gives satisfactory results.

4. The robot would not move while the laptop was being charged. Also, due to the severe power consumption of the wireless network, the original IBM ThinkPad X24 notebook computer could not last more than one hour, which made frequent charging too cumbersome. We therefore ordered a new IBM ThinkPad X40 notebook computer, which has built-in wireless LAN and can last more than seven hours according to the specification.

5. We noticed that navigation is an important element for the robot and for future research in the coming years, because our testing and findings from different sources show that there is still plenty of room for enhancement. Because of this, Strabo™ Pathfinder was purchased for easy manipulation of the robot's navigation behavior.

6. We do not have an electronic compass to give an accurate directional facility; when using Strabo™ Pathfinder, we need to place the robot facing absolute north or east in our implementation.

5.3 Experimental Setup

We broke the testing of our robot into two categories: basic functionality and usability. The basic functionality goals were to have the ER1 Robot perform basic actions from our user interface. These basic actions were establishing a connection, backwards and forwards movement, rotation, and gripper arm control. All of these are essential for any further development of the Robotic Command Centre.

Our usability goals were the set of goals that make controlling the robot simpler and easier for the robot controller. It would be time-consuming for the user to have to release every command one at a time after deciding what to do. Instead, we created a command queue, a list of user commands that the robot user would like to send; it is also essential for localization through Strabo™ Pathfinder.

Due to the limited space in my home, the experiments were conducted on a narrow, straight corridor in an office-like indoor environment with a relatively dim lighting condition. The corridor extends about 7 feet in length and 6 feet in width. In a narrow opening like a corridor, the robot moves at a speed slower than an average walking speed.

The movement behavior and the object recognition behavior are the essential elements that must be executed during the navigation experiment. Collision detection by the three IR sensors is executed in another thread.

Figure 32 Partial Environment Setup

The most important mission of the experiments is to analyze and verify the performance of the control system, as well as to accomplish a successful navigation. Therefore, the robot is evaluated against each of the following criteria.

1. Mechanisms (targets: collision detection and object recognition)
   a. Robot with collision detection
   b. Full feature (collision detection and object recognition)
2. Robot control system
   a. Modularity and usability
   b. Safety (system safety and robustness)

For the purpose of evaluating the performance with respect to safety, in each experiment the robot runs in a corridor with obstacles, as in Figure 32. Because of the dead end in the corridor, an extra control subroutine was added to the movement behavior, in which the robot slows down or stops entirely to avoid collision with the wall. Object recognition with a direction sign for homing (Figure 32) has also been tested to see how the robot tackles these problems.
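The command queue mentioned above can be sketched in a few lines. This is an illustrative sketch, not the Robotic Command Centre's actual code: commands are released one at a time, and the next command is held back until the robot's server program replies with an ACK, as in our protocol. The transport is injected as a function so the sketch runs without a robot; on the real system it would be a socket send/receive on port 9000.

```python
from collections import deque

# Illustrative sketch of the command queue: send queued commands in order,
# waiting for the robot's ACK after each one before releasing the next.
# The transport function stands in for the socket link on port 9000.
def run_queue(commands, transport):
    queue = deque(commands)
    completed = []
    while queue:
        command = queue.popleft()
        reply = transport(command)  # e.g. socket send + recv on the real robot
        if reply != "ACK":
            break                   # stop the whole sequence on any failure
        completed.append(command)
    return completed

def fake_robot(command):
    # Stand-in for the ER1 server program: acknowledge every command.
    return "ACK"

done = run_queue(["move 6 feet", "rotate -90 degree", "move 1 feet"], fake_robot)
print(done)
```

Stopping the sequence on a missing ACK mirrors the Stop Sequence behavior of the Behavior Editor screen: no further commands are released once something goes wrong.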

Figure 33 Partial Environment Setup - Strabo™

5.4 Experimental Results

Overall, the robot showed both desired and problematic behaviors. In the matter of collision detection, the robot was able to avoid collisions with obstacles and walls: the collision detection mechanism maneuvered the vehicle around and navigated it to the end of the hallway without collisions. There were also some problems, however. First of all, the behavior was often distracted by ambient light, which retarded the robot's navigation. As a possible solution, we lowered all the sensor ranges to 1.5 feet to improve the sensor readings used for collision detection. However, a sensor calibration will often be required in each and every unique environment.

The path generated by Strabo™ Pathfinder does not take a coarse floor plan into consideration. As a result, there is a minor deviation most of the time, though these deviations do not pose a serious problem in terms of navigation safety. Although an improvement still needs to be made with respect to the accuracy of selecting a valid path to the target destination, the main problem is how to handle unknown terrain outside the normal workspace.

In principle, the robotic command system has no central brain, and any information posted on the Robotic Command Centre must be handled and interpreted by the respective devices. In the current system, the movement behavior is the only device dealing with shared information that is reflected on the robot's actuators.

The images obtained by the system camera show that the mechanism identifies the object almost perfectly. However, the ratio of correctness drops when identifying the object at increasing distance. In fact, it is extremely difficult to always make correct judgments in a dynamic scene without giving an appropriate amount of hints. That is the simplest solution, with one drawback: adding more parameters may deprive the robot of its adaptability to environments. There may be more solutions. For instance, we increased the image capture resolution to 640 x 480 RGB color and obtained the desired result.

There were some interesting behaviors exhibited by the robot. The robot was originally designed to do only three things: avoid collisions, travel to a destination, and recognize objects for homing. However, the robot also demonstrated unexpected movements: obstacle avoidance and smart homing. Originally, I wanted to re-create the reverse direction for robot homing, but because of deviations in robot movement, homing may not be very accurate, owing to deviations accumulated from false calculation by the wheels on an uneven floor. We instead use two pictures for guidance and re-positioning of the robot, and this has proved quite successful in homing on several occasions. Further testing found that with the new Logitech QuickCam Pro 4000, the robot can recognize an object over 25 feet away (the maximum length of my home), with improved image quality and a faster transfer speed for the RCC's object recognition.

Figure 34 Right Arrow Sign for Homing

Figure 35 "Home" Sign for Homing

As mentioned in the previous chapters, the objective of this experiment was to design and build a robot control program that is easy to use and easy to expand (modify) for future study. To begin with, the control system succeeded in facilitating modularity and usability. The complete modularization of the class hierarchy made for an effortless implementation, and the intelligible user interface, with all navigational parameters adjustable, enabled smooth experimental processes. The GUI (Graphical User Interface), written with Microsoft Visual Studio .NET, is shown in Section 4.7.

Besides the localization and path-finding abilities of the robot, with the combined effort of the Remote Control and the Path Editor we can design a pre-programmed behavior for

robot movement and other actions. This is particularly useful for danger zones or the exploration of new land (e.g. on Mars or other planets).

Figure 36 ER1 Robot with Torch Light in the Dark

Figure 37 Captured Image of the Front View from the ER1 with Torch Light

5.5 Discussion

The collision avoidance behavior, the object recognition behavior, the navigational behavior, and the system responsible for the hardware components all cooperate within the robotic control system. The robot and the control system were presented and analyzed in the experiments. The robot did not exhibit perfectly the desired performance, but the layered approach in the design criteria has proved its feasibility for mobile robot navigation. The problems faced during the experiments relate more to calibration against an environment and parameter adjustment on the agents than to the fundamental design criteria of the control system. The proposed layered architecture enables the control system to be easily expandable, and the user-friendly GUI and user-modifiable Strabo™ Pathfinder maps let the system cope more easily with different user requirements.

Although there is some deviation from the target position due to the uneven environment, the remedy of using the attached camera to re-position the robot with the sign-board method has proven successful for homing. With the implementation of the collision mechanism, the robot demonstrated safe navigation in a hallway using a set of IR proximity sensors.

As ongoing research, possible solutions are being implemented to compensate for this problem. For example, Evolution Robotics' NorthStar technology addresses this long-standing problem. It uses a small, inexpensive sensor (usually placed inside the commercial robot) and an infrared, encrypted light device that plugs into a wall outlet to help a robot not only navigate from room to room, but actually know which room it is in.

Figure 38 NorthStar Detector (Left) & NorthStar IR Projector (Right)

The NorthStar detector uses triangulation to measure a product's exact position and heading in relation to IR light spots projected onto the ceiling (or any visible surface). Because each IR light spot has a unique signature, the detector can instantly and unambiguously determine where the product is. Because the NorthStar detector directly measures a product's position and heading, the localization result is intrinsically robust. A NorthStar-enabled product does not require prior training or mapping to measure its position. There is no need for expensive computational

capabilities. The system is insensitive to changes in the environment, such as moving furniture or people, and works in complete darkness.

Figure 39 NorthStar in Operation

With NorthStar we can build an indoor GPS-like robot, similar to a robot using a GPS facility outdoors. However, NorthStar only becomes available in the first quarter of 2005. Further study is sought to design an agent that actually performs landmark-based navigation, extending machine vision techniques in collaboration with the NorthStar technology. It is foreseeable that using a robot to navigate different areas of the home with accurate positioning, returning images or carrying out other tasks, can be achieved in the not-too-distant future. The price of the hardware and software for NorthStar is 1,495 US dollars.


More information

Service Robots in an Intelligent House

Service Robots in an Intelligent House Service Robots in an Intelligent House Jesus Savage Bio-Robotics Laboratory biorobotics.fi-p.unam.mx School of Engineering Autonomous National University of Mexico UNAM 2017 OUTLINE Introduction A System

More information

DEVELOPMENT OF A ROBOID COMPONENT FOR PLAYER/STAGE ROBOT SIMULATOR

DEVELOPMENT OF A ROBOID COMPONENT FOR PLAYER/STAGE ROBOT SIMULATOR Proceedings of IC-NIDC2009 DEVELOPMENT OF A ROBOID COMPONENT FOR PLAYER/STAGE ROBOT SIMULATOR Jun Won Lim 1, Sanghoon Lee 2,Il Hong Suh 1, and Kyung Jin Kim 3 1 Dept. Of Electronics and Computer Engineering,

More information

Multi-Robot Cooperative System For Object Detection

Multi-Robot Cooperative System For Object Detection Multi-Robot Cooperative System For Object Detection Duaa Abdel-Fattah Mehiar AL-Khawarizmi international collage Duaa.mehiar@kawarizmi.com Abstract- The present study proposes a multi-agent system based

More information

Intro to Intelligent Robotics EXAM Spring 2008, Page 1 of 9

Intro to Intelligent Robotics EXAM Spring 2008, Page 1 of 9 Intro to Intelligent Robotics EXAM Spring 2008, Page 1 of 9 Student Name: Student ID # UOSA Statement of Academic Integrity On my honor I affirm that I have neither given nor received inappropriate aid

More information

Control Arbitration. Oct 12, 2005 RSS II Una-May O Reilly

Control Arbitration. Oct 12, 2005 RSS II Una-May O Reilly Control Arbitration Oct 12, 2005 RSS II Una-May O Reilly Agenda I. Subsumption Architecture as an example of a behavior-based architecture. Focus in terms of how control is arbitrated II. Arbiters and

More information

Initial Report on Wheelesley: A Robotic Wheelchair System

Initial Report on Wheelesley: A Robotic Wheelchair System Initial Report on Wheelesley: A Robotic Wheelchair System Holly A. Yanco *, Anna Hazel, Alison Peacock, Suzanna Smith, and Harriet Wintermute Department of Computer Science Wellesley College Wellesley,

More information

ReVRSR: Remote Virtual Reality for Service Robots

ReVRSR: Remote Virtual Reality for Service Robots ReVRSR: Remote Virtual Reality for Service Robots Amel Hassan, Ahmed Ehab Gado, Faizan Muhammad March 17, 2018 Abstract This project aims to bring a service robot s perspective to a human user. We believe

More information

Lab book. Exploring Robotics (CORC3303)

Lab book. Exploring Robotics (CORC3303) Lab book Exploring Robotics (CORC3303) Dept of Computer and Information Science Brooklyn College of the City University of New York updated: Fall 2011 / Professor Elizabeth Sklar UNIT A Lab, part 1 : Robot

More information

An Open Robot Simulator Environment

An Open Robot Simulator Environment An Open Robot Simulator Environment Toshiyuki Ishimura, Takeshi Kato, Kentaro Oda, and Takeshi Ohashi Dept. of Artificial Intelligence, Kyushu Institute of Technology isshi@mickey.ai.kyutech.ac.jp Abstract.

More information

CURIE Academy, Summer 2014 Lab 2: Computer Engineering Software Perspective Sign-Off Sheet

CURIE Academy, Summer 2014 Lab 2: Computer Engineering Software Perspective Sign-Off Sheet Lab : Computer Engineering Software Perspective Sign-Off Sheet NAME: NAME: DATE: Sign-Off Milestone TA Initials Part 1.A Part 1.B Part.A Part.B Part.C Part 3.A Part 3.B Part 3.C Test Simple Addition Program

More information

STRATEGO EXPERT SYSTEM SHELL

STRATEGO EXPERT SYSTEM SHELL STRATEGO EXPERT SYSTEM SHELL Casper Treijtel and Leon Rothkrantz Faculty of Information Technology and Systems Delft University of Technology Mekelweg 4 2628 CD Delft University of Technology E-mail: L.J.M.Rothkrantz@cs.tudelft.nl

More information

Cognitive robots and emotional intelligence Cloud robotics Ethical, legal and social issues of robotic Construction robots Human activities in many

Cognitive robots and emotional intelligence Cloud robotics Ethical, legal and social issues of robotic Construction robots Human activities in many Preface The jubilee 25th International Conference on Robotics in Alpe-Adria-Danube Region, RAAD 2016 was held in the conference centre of the Best Western Hotel M, Belgrade, Serbia, from 30 June to 2 July

More information

Kinect Interface for UC-win/Road: Application to Tele-operation of Small Robots

Kinect Interface for UC-win/Road: Application to Tele-operation of Small Robots Kinect Interface for UC-win/Road: Application to Tele-operation of Small Robots Hafid NINISS Forum8 - Robot Development Team Abstract: The purpose of this work is to develop a man-machine interface for

More information

Collaborative Robotic Navigation Using EZ-Robots

Collaborative Robotic Navigation Using EZ-Robots , October 19-21, 2016, San Francisco, USA Collaborative Robotic Navigation Using EZ-Robots G. Huang, R. Childers, J. Hilton and Y. Sun Abstract - Robots and their applications are becoming more and more

More information

ARCHITECTURE AND MODEL OF DATA INTEGRATION BETWEEN MANAGEMENT SYSTEMS AND AGRICULTURAL MACHINES FOR PRECISION AGRICULTURE

ARCHITECTURE AND MODEL OF DATA INTEGRATION BETWEEN MANAGEMENT SYSTEMS AND AGRICULTURAL MACHINES FOR PRECISION AGRICULTURE ARCHITECTURE AND MODEL OF DATA INTEGRATION BETWEEN MANAGEMENT SYSTEMS AND AGRICULTURAL MACHINES FOR PRECISION AGRICULTURE W. C. Lopes, R. R. D. Pereira, M. L. Tronco, A. J. V. Porto NepAS [Center for Teaching

More information

Robot Architectures. Prof. Yanco , Fall 2011

Robot Architectures. Prof. Yanco , Fall 2011 Robot Architectures Prof. Holly Yanco 91.451 Fall 2011 Architectures, Slide 1 Three Types of Robot Architectures From Murphy 2000 Architectures, Slide 2 Hierarchical Organization is Horizontal From Murphy

More information

IMPLEMENTING MULTIPLE ROBOT ARCHITECTURES USING MOBILE AGENTS

IMPLEMENTING MULTIPLE ROBOT ARCHITECTURES USING MOBILE AGENTS IMPLEMENTING MULTIPLE ROBOT ARCHITECTURES USING MOBILE AGENTS L. M. Cragg and H. Hu Department of Computer Science, University of Essex, Wivenhoe Park, Colchester, CO4 3SQ E-mail: {lmcrag, hhu}@essex.ac.uk

More information

AN HYBRID LOCOMOTION SERVICE ROBOT FOR INDOOR SCENARIOS 1

AN HYBRID LOCOMOTION SERVICE ROBOT FOR INDOOR SCENARIOS 1 AN HYBRID LOCOMOTION SERVICE ROBOT FOR INDOOR SCENARIOS 1 Jorge Paiva Luís Tavares João Silva Sequeira Institute for Systems and Robotics Institute for Systems and Robotics Instituto Superior Técnico,

More information

Hierarchical Controller for Robotic Soccer

Hierarchical Controller for Robotic Soccer Hierarchical Controller for Robotic Soccer Byron Knoll Cognitive Systems 402 April 13, 2008 ABSTRACT RoboCup is an initiative aimed at advancing Artificial Intelligence (AI) and robotics research. This

More information

* Intelli Robotic Wheel Chair for Specialty Operations & Physically Challenged

* Intelli Robotic Wheel Chair for Specialty Operations & Physically Challenged ADVANCED ROBOTICS SOLUTIONS * Intelli Mobile Robot for Multi Specialty Operations * Advanced Robotic Pick and Place Arm and Hand System * Automatic Color Sensing Robot using PC * AI Based Image Capturing

More information

Semi-Autonomous Parking for Enhanced Safety and Efficiency

Semi-Autonomous Parking for Enhanced Safety and Efficiency Technical Report 105 Semi-Autonomous Parking for Enhanced Safety and Efficiency Sriram Vishwanath WNCG June 2017 Data-Supported Transportation Operations & Planning Center (D-STOP) A Tier 1 USDOT University

More information

II. ROBOT SYSTEMS ENGINEERING

II. ROBOT SYSTEMS ENGINEERING Mobile Robots: Successes and Challenges in Artificial Intelligence Jitendra Joshi (Research Scholar), Keshav Dev Gupta (Assistant Professor), Nidhi Sharma (Assistant Professor), Kinnari Jangid (Assistant

More information

Overview Agents, environments, typical components

Overview Agents, environments, typical components Overview Agents, environments, typical components CSC752 Autonomous Robotic Systems Ubbo Visser Department of Computer Science University of Miami January 23, 2017 Outline 1 Autonomous robots 2 Agents

More information

NCCT IEEE PROJECTS ADVANCED ROBOTICS SOLUTIONS. Latest Projects, in various Domains. Promise for the Best Projects

NCCT IEEE PROJECTS ADVANCED ROBOTICS SOLUTIONS. Latest Projects, in various Domains. Promise for the Best Projects NCCT Promise for the Best Projects IEEE PROJECTS in various Domains Latest Projects, 2009-2010 ADVANCED ROBOTICS SOLUTIONS EMBEDDED SYSTEM PROJECTS Microcontrollers VLSI DSP Matlab Robotics ADVANCED ROBOTICS

More information

Prof. Emil M. Petriu 17 January 2005 CEG 4392 Computer Systems Design Project (Winter 2005)

Prof. Emil M. Petriu 17 January 2005 CEG 4392 Computer Systems Design Project (Winter 2005) Project title: Optical Path Tracking Mobile Robot with Object Picking Project number: 1 A mobile robot controlled by the Altera UP -2 board and/or the HC12 microprocessor will have to pick up and drop

More information

Lab 8: Introduction to the e-puck Robot

Lab 8: Introduction to the e-puck Robot Lab 8: Introduction to the e-puck Robot This laboratory requires the following equipment: C development tools (gcc, make, etc.) C30 programming tools for the e-puck robot The development tree which is

More information

Robot Architectures. Prof. Holly Yanco Spring 2014

Robot Architectures. Prof. Holly Yanco Spring 2014 Robot Architectures Prof. Holly Yanco 91.450 Spring 2014 Three Types of Robot Architectures From Murphy 2000 Hierarchical Organization is Horizontal From Murphy 2000 Horizontal Behaviors: Accomplish Steps

More information

Proseminar Roboter und Aktivmedien. Outline of today s lecture. Acknowledgments. Educational robots achievements and challenging

Proseminar Roboter und Aktivmedien. Outline of today s lecture. Acknowledgments. Educational robots achievements and challenging Proseminar Roboter und Aktivmedien Educational robots achievements and challenging Lecturer Lecturer Houxiang Houxiang Zhang Zhang TAMS, TAMS, Department Department of of Informatics Informatics University

More information

in the New Zealand Curriculum

in the New Zealand Curriculum Technology in the New Zealand Curriculum We ve revised the Technology learning area to strengthen the positioning of digital technologies in the New Zealand Curriculum. The goal of this change is to ensure

More information

UNIT VI. Current approaches to programming are classified as into two major categories:

UNIT VI. Current approaches to programming are classified as into two major categories: Unit VI 1 UNIT VI ROBOT PROGRAMMING A robot program may be defined as a path in space to be followed by the manipulator, combined with the peripheral actions that support the work cycle. Peripheral actions

More information

Chapter 1. Robots and Programs

Chapter 1. Robots and Programs Chapter 1 Robots and Programs 1 2 Chapter 1 Robots and Programs Introduction Without a program, a robot is just an assembly of electronic and mechanical components. This book shows you how to give it a

More information

Mindstorms NXT. mindstorms.lego.com

Mindstorms NXT. mindstorms.lego.com Mindstorms NXT mindstorms.lego.com A3B99RO Robots: course organization At the beginning of the semester the students are divided into small teams (2 to 3 students). Each team uses the basic set of the

More information

INTELLIGENT GUIDANCE IN A VIRTUAL UNIVERSITY

INTELLIGENT GUIDANCE IN A VIRTUAL UNIVERSITY INTELLIGENT GUIDANCE IN A VIRTUAL UNIVERSITY T. Panayiotopoulos,, N. Zacharis, S. Vosinakis Department of Computer Science, University of Piraeus, 80 Karaoli & Dimitriou str. 18534 Piraeus, Greece themisp@unipi.gr,

More information

Artificial Neural Network based Mobile Robot Navigation

Artificial Neural Network based Mobile Robot Navigation Artificial Neural Network based Mobile Robot Navigation István Engedy Budapest University of Technology and Economics, Department of Measurement and Information Systems, Magyar tudósok körútja 2. H-1117,

More information

MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL REALITY TECHNOLOGIES

MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL REALITY TECHNOLOGIES INTERNATIONAL CONFERENCE ON ENGINEERING AND PRODUCT DESIGN EDUCATION 4 & 5 SEPTEMBER 2008, UNIVERSITAT POLITECNICA DE CATALUNYA, BARCELONA, SPAIN MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL

More information

Team Project: A Surveillant Robot System

Team Project: A Surveillant Robot System Team Project: A Surveillant Robot System SW & HW Test Plan Little Red Team Chankyu Park (Michel) Seonah Lee (Sarah) Qingyuan Shi (Lisa) Chengzhou Li JunMei Li Kai Lin Software Lists SW Lists for Surveillant

More information

Randomized Motion Planning for Groups of Nonholonomic Robots

Randomized Motion Planning for Groups of Nonholonomic Robots Randomized Motion Planning for Groups of Nonholonomic Robots Christopher M Clark chrisc@sun-valleystanfordedu Stephen Rock rock@sun-valleystanfordedu Department of Aeronautics & Astronautics Stanford University

More information

Subsumption Architecture in Swarm Robotics. Cuong Nguyen Viet 16/11/2015

Subsumption Architecture in Swarm Robotics. Cuong Nguyen Viet 16/11/2015 Subsumption Architecture in Swarm Robotics Cuong Nguyen Viet 16/11/2015 1 Table of content Motivation Subsumption Architecture Background Architecture decomposition Implementation Swarm robotics Swarm

More information

Incorporating a Software System for Robotics Control and Coordination in Mechatronics Curriculum and Research

Incorporating a Software System for Robotics Control and Coordination in Mechatronics Curriculum and Research Paper ID #15300 Incorporating a Software System for Robotics Control and Coordination in Mechatronics Curriculum and Research Dr. Maged Mikhail, Purdue University - Calumet Dr. Maged B. Mikhail, Assistant

More information

Creating a 3D environment map from 2D camera images in robotics

Creating a 3D environment map from 2D camera images in robotics Creating a 3D environment map from 2D camera images in robotics J.P. Niemantsverdriet jelle@niemantsverdriet.nl 4th June 2003 Timorstraat 6A 9715 LE Groningen student number: 0919462 internal advisor:

More information

A Hybrid Planning Approach for Robots in Search and Rescue

A Hybrid Planning Approach for Robots in Search and Rescue A Hybrid Planning Approach for Robots in Search and Rescue Sanem Sariel Istanbul Technical University, Computer Engineering Department Maslak TR-34469 Istanbul, Turkey. sariel@cs.itu.edu.tr ABSTRACT In

More information

Learning serious knowledge while "playing"with robots

Learning serious knowledge while playingwith robots 6 th International Conference on Applied Informatics Eger, Hungary, January 27 31, 2004. Learning serious knowledge while "playing"with robots Zoltán Istenes Department of Software Technology and Methodology,

More information

Traffic Control for a Swarm of Robots: Avoiding Group Conflicts

Traffic Control for a Swarm of Robots: Avoiding Group Conflicts Traffic Control for a Swarm of Robots: Avoiding Group Conflicts Leandro Soriano Marcolino and Luiz Chaimowicz Abstract A very common problem in the navigation of robotic swarms is when groups of robots

More information

Using Reactive Deliberation for Real-Time Control of Soccer-Playing Robots

Using Reactive Deliberation for Real-Time Control of Soccer-Playing Robots Using Reactive Deliberation for Real-Time Control of Soccer-Playing Robots Yu Zhang and Alan K. Mackworth Department of Computer Science, University of British Columbia, Vancouver B.C. V6T 1Z4, Canada,

More information

Team Breaking Bat Architecture Design Specification. Virtual Slugger

Team Breaking Bat Architecture Design Specification. Virtual Slugger Department of Computer Science and Engineering The University of Texas at Arlington Team Breaking Bat Architecture Design Specification Virtual Slugger Team Members: Sean Gibeault Brandon Auwaerter Ehidiamen

More information

Capstone Python Project Features

Capstone Python Project Features Capstone Python Project Features CSSE 120, Introduction to Software Development General instructions: The following assumes a 3-person team. If you are a 2-person team, see your instructor for how to deal

More information

Computer Science as a Discipline

Computer Science as a Discipline Computer Science as a Discipline 1 Computer Science some people argue that computer science is not a science in the same sense that biology and chemistry are the interdisciplinary nature of computer science

More information

Robotics using Lego Mindstorms EV3 (Intermediate)

Robotics using Lego Mindstorms EV3 (Intermediate) Robotics using Lego Mindstorms EV3 (Intermediate) Facebook.com/roboticsgateway @roboticsgateway Robotics using EV3 Are we ready to go Roboticists? Does each group have at least one laptop? Do you have

More information

Collective Robotics. Marcin Pilat

Collective Robotics. Marcin Pilat Collective Robotics Marcin Pilat Introduction Painting a room Complex behaviors: Perceptions, deductions, motivations, choices Robotics: Past: single robot Future: multiple, simple robots working in teams

More information

understanding sensors

understanding sensors The LEGO MINDSTORMS EV3 set includes three types of sensors: Touch, Color, and Infrared. You can use these sensors to make your robot respond to its environment. For example, you can program your robot

More information

Real-time Cooperative Behavior for Tactical Mobile Robot Teams. September 10, 1998 Ronald C. Arkin and Thomas R. Collins Georgia Tech

Real-time Cooperative Behavior for Tactical Mobile Robot Teams. September 10, 1998 Ronald C. Arkin and Thomas R. Collins Georgia Tech Real-time Cooperative Behavior for Tactical Mobile Robot Teams September 10, 1998 Ronald C. Arkin and Thomas R. Collins Georgia Tech Objectives Build upon previous work with multiagent robotic behaviors

More information

AN AUTONOMOUS SIMULATION BASED SYSTEM FOR ROBOTIC SERVICES IN PARTIALLY KNOWN ENVIRONMENTS

AN AUTONOMOUS SIMULATION BASED SYSTEM FOR ROBOTIC SERVICES IN PARTIALLY KNOWN ENVIRONMENTS AN AUTONOMOUS SIMULATION BASED SYSTEM FOR ROBOTIC SERVICES IN PARTIALLY KNOWN ENVIRONMENTS Eva Cipi, PhD in Computer Engineering University of Vlora, Albania Abstract This paper is focused on presenting

More information

INTRODUCTION TO GAME AI

INTRODUCTION TO GAME AI CS 387: GAME AI INTRODUCTION TO GAME AI 3/31/2016 Instructor: Santiago Ontañón santi@cs.drexel.edu Class website: https://www.cs.drexel.edu/~santi/teaching/2016/cs387/intro.html Outline Game Engines Perception

More information

1 Lab + Hwk 4: Introduction to the e-puck Robot

1 Lab + Hwk 4: Introduction to the e-puck Robot 1 Lab + Hwk 4: Introduction to the e-puck Robot This laboratory requires the following: (The development tools are already installed on the DISAL virtual machine (Ubuntu Linux) in GR B0 01): C development

More information

Humanoid robot. Honda's ASIMO, an example of a humanoid robot

Humanoid robot. Honda's ASIMO, an example of a humanoid robot Humanoid robot Honda's ASIMO, an example of a humanoid robot A humanoid robot is a robot with its overall appearance based on that of the human body, allowing interaction with made-for-human tools or environments.

More information

Department of Computer Science and Engineering The Chinese University of Hong Kong. Year Final Year Project

Department of Computer Science and Engineering The Chinese University of Hong Kong. Year Final Year Project Digital Interactive Game Interface Table Apps for ipad Supervised by: Professor Michael R. Lyu Student: Ng Ka Hung (1009615714) Chan Hing Faat (1009618344) Year 2011 2012 Final Year Project Department

More information

UNIT-III LIFE-CYCLE PHASES

UNIT-III LIFE-CYCLE PHASES INTRODUCTION: UNIT-III LIFE-CYCLE PHASES - If there is a well defined separation between research and development activities and production activities then the software is said to be in successful development

More information

The Khepera Robot and the krobot Class: A Platform for Introducing Robotics in the Undergraduate Curriculum i

The Khepera Robot and the krobot Class: A Platform for Introducing Robotics in the Undergraduate Curriculum i The Khepera Robot and the krobot Class: A Platform for Introducing Robotics in the Undergraduate Curriculum i Robert M. Harlan David B. Levine Shelley McClarigan Computer Science Department St. Bonaventure

More information

On-demand printable robots

On-demand printable robots On-demand printable robots Ankur Mehta Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology 3 Computational problem? 4 Physical problem? There s a robot for that.

More information

Application Areas of AI Artificial intelligence is divided into different branches which are mentioned below:

Application Areas of AI   Artificial intelligence is divided into different branches which are mentioned below: Week 2 - o Expert Systems o Natural Language Processing (NLP) o Computer Vision o Speech Recognition And Generation o Robotics o Neural Network o Virtual Reality APPLICATION AREAS OF ARTIFICIAL INTELLIGENCE

More information

Indiana K-12 Computer Science Standards

Indiana K-12 Computer Science Standards Indiana K-12 Computer Science Standards What is Computer Science? Computer science is the study of computers and algorithmic processes, including their principles, their hardware and software designs,

More information

Exercise 2. Point-to-Point Programs EXERCISE OBJECTIVE

Exercise 2. Point-to-Point Programs EXERCISE OBJECTIVE Exercise 2 Point-to-Point Programs EXERCISE OBJECTIVE In this exercise, you will learn various important terms used in the robotics field. You will also be introduced to position and control points, and

More information

Note: Objective: Prelab: ME 5286 Robotics Labs Lab 1: Hello Cobot World Duration: 2 Weeks (1/28/2019 2/08/2019)

Note: Objective: Prelab: ME 5286 Robotics Labs Lab 1: Hello Cobot World Duration: 2 Weeks (1/28/2019 2/08/2019) ME 5286 Robotics Labs Lab 1: Hello Cobot World Duration: 2 Weeks (1/28/2019 2/08/2019) Note: At least two people must be present in the lab when operating the UR5 robot. Upload a selfie of you, your partner,

More information

Distributed Vision System: A Perceptual Information Infrastructure for Robot Navigation

Distributed Vision System: A Perceptual Information Infrastructure for Robot Navigation Distributed Vision System: A Perceptual Information Infrastructure for Robot Navigation Hiroshi Ishiguro Department of Information Science, Kyoto University Sakyo-ku, Kyoto 606-01, Japan E-mail: ishiguro@kuis.kyoto-u.ac.jp

More information

Distributed Intelligence in Autonomous Robotics. Assignment #1 Out: Thursday, January 16, 2003 Due: Tuesday, January 28, 2003

Distributed Intelligence in Autonomous Robotics. Assignment #1 Out: Thursday, January 16, 2003 Due: Tuesday, January 28, 2003 Distributed Intelligence in Autonomous Robotics Assignment #1 Out: Thursday, January 16, 2003 Due: Tuesday, January 28, 2003 The purpose of this assignment is to build familiarity with the Nomad200 robotic

More information

Outline. Agents and environments Rationality PEAS (Performance measure, Environment, Actuators, Sensors) Environment types Agent types

Outline. Agents and environments Rationality PEAS (Performance measure, Environment, Actuators, Sensors) Environment types Agent types Intelligent Agents Outline Agents and environments Rationality PEAS (Performance measure, Environment, Actuators, Sensors) Environment types Agent types Agents An agent is anything that can be viewed as

More information

A Mobile Robot Behavior Based Navigation Architecture using a Linear Graph of Passages as Landmarks for Path Definition

A Mobile Robot Behavior Based Navigation Architecture using a Linear Graph of Passages as Landmarks for Path Definition A Mobile Robot Behavior Based Navigation Architecture using a Linear Graph of Passages as Landmarks for Path Definition LUBNEN NAME MOUSSI and MARCONI KOLM MADRID DSCE FEEC UNICAMP Av Albert Einstein,

More information

A Kinect-based 3D hand-gesture interface for 3D databases

A Kinect-based 3D hand-gesture interface for 3D databases A Kinect-based 3D hand-gesture interface for 3D databases Abstract. The use of natural interfaces improves significantly aspects related to human-computer interaction and consequently the productivity

More information

Turtlebot Laser Tag. Jason Grant, Joe Thompson {jgrant3, University of Notre Dame Notre Dame, IN 46556

Turtlebot Laser Tag. Jason Grant, Joe Thompson {jgrant3, University of Notre Dame Notre Dame, IN 46556 Turtlebot Laser Tag Turtlebot Laser Tag was a collaborative project between Team 1 and Team 7 to create an interactive and autonomous game of laser tag. Turtlebots communicated through a central ROS server

More information

Outline. Paradigms for interaction. Introduction. Chapter 5 : Paradigms. Introduction Paradigms for interaction (15)

Outline. Paradigms for interaction. Introduction. Chapter 5 : Paradigms. Introduction Paradigms for interaction (15) Outline 01076568 Human Computer Interaction Chapter 5 : Paradigms Introduction Paradigms for interaction (15) ดร.ชมพ น ท จ นจาคาม [kjchompo@gmail.com] สาขาว ชาว ศวกรรมคอมพ วเตอร คณะว ศวกรรมศาสตร สถาบ นเทคโนโลย

More information

An Agent-Based Architecture for an Adaptive Human-Robot Interface

An Agent-Based Architecture for an Adaptive Human-Robot Interface An Agent-Based Architecture for an Adaptive Human-Robot Interface Kazuhiko Kawamura, Phongchai Nilas, Kazuhiko Muguruma, Julie A. Adams, and Chen Zhou Center for Intelligent Systems Vanderbilt University

More information

A Lego-Based Soccer-Playing Robot Competition For Teaching Design

A Lego-Based Soccer-Playing Robot Competition For Teaching Design Session 2620 A Lego-Based Soccer-Playing Robot Competition For Teaching Design Ronald A. Lessard Norwich University Abstract Course Objectives in the ME382 Instrumentation Laboratory at Norwich University

More information

Multi-Agent Planning

Multi-Agent Planning 25 PRICAI 2000 Workshop on Teams with Adjustable Autonomy PRICAI 2000 Workshop on Teams with Adjustable Autonomy Position Paper Designing an architecture for adjustably autonomous robot teams David Kortenkamp

More information

An Overview of the Mimesis Architecture: Integrating Intelligent Narrative Control into an Existing Gaming Environment

An Overview of the Mimesis Architecture: Integrating Intelligent Narrative Control into an Existing Gaming Environment An Overview of the Mimesis Architecture: Integrating Intelligent Narrative Control into an Existing Gaming Environment R. Michael Young Liquid Narrative Research Group Department of Computer Science NC

More information

Lab 7: Introduction to Webots and Sensor Modeling

Lab 7: Introduction to Webots and Sensor Modeling Lab 7: Introduction to Webots and Sensor Modeling This laboratory requires the following software: Webots simulator C development tools (gcc, make, etc.) The laboratory duration is approximately two hours.

More information