Decentralised Cooperative Control of a Team of Homogeneous Robots for Payload Transportation


Decentralised Cooperative Control of a Team of Homogeneous Robots for Payload Transportation

by

Ronal Singh

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Computing Science

Copyright 2013 by Ronal Singh

School of Computing, Information and Mathematical Sciences
Faculty of Science, Technology and Environment
The University of the South Pacific
Suva, Fiji Islands

February, 2013

Declaration of Originality

I hereby solemnly and sincerely declare that the work presented in this thesis is, to the best of my knowledge, my original work, except for citations that have been duly acknowledged in the text. I further attest that the material presented herein has not been submitted previously, either in whole or in part, for a degree at any academic institution.

Ronal Singh
Student ID: S
February, 2013

Dedicated to my family

Contents

Declaration of Originality
List of Figures
List of Tables
Abstract
Acknowledgement

Chapter 1: Introduction
    1.1 Cooperative Transportation of Objects
    1.2 Collision Avoidance
    1.3 Task Definition
    1.4 Research Objectives
        Distributed Cooperative Control Algorithm
        Obstacle Avoidance via Negotiation
        Scalable and Robust Cooperative Control Strategy
    Thesis Organisation
Chapter 2: Background and Related Work
    Control Architectures for Robotic Systems
    Cooperative Control
        Principles of Cooperative Control
        Cooperative Control Architectures
    Inter-Robot Communication
    Synchronisation in Group Behaviour
    Homogeneous and Heterogeneous Teams
    Approaches to Cooperative Transport
    Our Approach
    Summary
Chapter 3: Behaviour-based Cooperative Control for Cooperative Transport
    Behaviour-based Robot Control
    Elementary Behaviours
        Go to Goal
        Avoid Obstacle
        Social Mediation
    3.3 Cooperative Transport Behaviour
    Motion Controller
    Summary
Chapter 4: Design of the Experimental Environment
    Design of the Simulated Mobile Robots
    Design of the Robotic Simulator
    Implementation of the Robotics Simulator
    Cooperative Controls
    Performance Indicators
    Summary
Chapter 5: Results and Discussion
    Limitations and Errors of the System
    Performance Indicator 1: Success Rate
    Performance Indicator 2: Scalability
    Performance Indicator 3: Robustness
    Summary
Chapter 6: Conclusion and future work
    Summary
    Future work
References
Appendix A
Appendix B
Appendix C

List of Figures

Figure 1.1: Examples of group behaviour: (a) cooperative transport by ants; (b) birds flying in a formation; (c) swarming behaviour of fish.
Figure 1.2: Components of Cooperative Control.
Figure 1.3: Cooperative transport by pushing or pulling: (a) box-pushing task in Kube & Zhang (1993); (b) object being pushed/dragged by swarm-bots in Groß et al. (2006).
Figure 1.4: Multi-robot systems carrying an object: (a) 3-robot team carrying an object in Soares et al. (2007); (b) two humanoid robots transporting an object in Inoue et al. (2004).
Figure 1.5: Conflicting headings chosen by two robots.
Figure 1.6: Negotiated team heading for two robots with different perceptions.
Figure 2.1: Types of multi-robot teams: (a) homogeneous team comprising members with the same hardware and motion control (Trianni & Dorigo, 2006); (b) heterogeneous team with different hardware and motion control laws for each robot (Sugar & Kumar, 2002).
Figure 2.2: Unsuccessful negotiation of the team heading by the social mediation behaviour of Ferrante et al. (2010a).
Figure 2.3: Social Mediation behaviour presented in Ferrante et al. (2010a).
Figure 2.4: Cooperative Transport behaviour presented in Ferrante et al. (2010a).
Figure 3.1: Robot's behaviour-based controller.
Figure 3.2: Algorithm of the Avoid Obstacle behaviour.
Figure 3.3: Algorithm of the Social Mediation behaviour.
Figure 3.4: Rules to compute the degree of neighbour's influence.
Figure 3.5: Algorithm for the Cooperative Transport behaviour.
Figure 3.6: Combining the output of the Go to Goal and Avoid Obstacle behaviours: (a) graphical view of the scenario; (b) vectors representing the goal and obstacle directions; (c) vectors representing attraction to the goal and repulsion from the obstacle; (d) computing the final direction for the robot (assuming w is 0.5).
Figure 3.7: Weighted update of the robot's heading: (a) robot's current and proposed headings; (b) robot's actual heading given the weighting factor.
Figure 3.8: The control system of the robot.
Figure 3.9: The fuzzy logic based motion controller.
Figure 4.1: Schematic and simulated views of the robot: (a) schematic view; (b) graphic view showing the important mathematical components necessary to track the location of the robot; (c) and (d) the simulated robot created using the JBullet Physics Engine. The white block on top of the robot simulates the 6DOF joint used to attach to the carried object.
Figure 4.2: Perception and action abilities of the simulated robot.
Figure 4.3: 6DOF joints: (a) linear displacement and rotation of a 6DOF joint; (b) spring-like flexible 6DOF joint attaching the object to the robot chassis, with linear displacement and rotation. Arrows indicate rotation and linear displacement along the three axes.
Figure 4.4: The states of the motion controller.
Figure 4.5: The FIS for the robot's motion controller.
Figure 4.6: Frame of reference used by each robot to identify its placement in the team.
Figure 4.7: Robots choose individual headings to achieve the desired team heading.
Figure 4.8: Rules of the FLC used by the robot's motion controller.
Figure 4.9: Illustrating the computation of robot placement (θ_loc).
Figure 4.10: Membership function for the linguistic variable robot placement (θ_loc).
Figure 4.11: Membership function for the linguistic variable desired heading (θ_social).
Figure 4.12: Membership function for the linguistic variable steering angle (θ_steering).
Figure 4.13: Membership function for the linguistic variable velocity (v).
Figure 4.14: Basic architecture of the Java based Robotics Simulator.
Figure 4.15: Components of the Java based Simulator.
Figure 4.16: Arena 1 with the obstacle configuration, the team's starting location, and the five goal locations.
Figure 5.1: Test environment used to realise the limitations and errors of the cooperative control.
Figure 5.2: Communication graph for the team used to study convergence properties.
Figure 5.3: Convergence of the social mediation angle with usual parameters.
Figure 5.4: Convergence of the social mediation angle with increasing update delay.
Figure 5.5: Convergence of the social mediation angle with increasing mediation cycles: (a) with an update delay of 50 control steps; (b) with an update delay of 80 control steps; (c) with an update delay of 500 control steps. A line of best fit is shown in each graph.
Figure 5.6: Convergence of the social mediation angle with an update delay of 500 steps and 15 social mediation cycles.
Figure 5.7: The degree of cooperation between the robots in Experiment. Zero on the y-axis indicates that the two robots have turned their front wheels in opposite directions, while one indicates that the wheels of the two robots are turned in the same direction, that is, the robots are moving in the same direction.
Figure 5.8: Convergence of an individual robot's heading to the social mediation angle in Experiment: (a) between Control Steps 0 and 60; (b) between Control Steps 700 and.
Figure 5.9: Standard deviation of the social mediation angle before and after the wheel of Robot 2 collides with the obstacle in run.
Figure 5.10: Standard deviation of the social mediation angle immediately before the collision in run.
Figure 5.11: The degree of cooperation between the robots in run 403. A value of 1 on the y-axis indicates that the robots are moving in the same direction.
Figure 5.12: Team's trajectory for tests conducted using Arena.
Figure 5.13: Team's trajectory for tests conducted using Arena.
Figure 5.14: Team's trajectory for tests conducted using Arena.
Figure 5.15: Team's trajectory for tests conducted using Arena.
Figure 5.16: Team's trajectory for tests conducted using Arena.
Figure 5.17: First arena used for the scalability test.
Figure 5.18: Task completion time for a load of 500 kg as the number of robots in the team is increased. A line of best fit is shown in the graph. The error bars represent one standard deviation of the completion time.
Figure 5.19: The heading error as the number of robots in the team is increased. A line of best fit is shown in the graph. The error bars represent one standard deviation of the heading error.
Figure 5.20: Second arena used for the scalability test.
Figure 5.21: (a) The task completion time for different team sizes; (b) the heading error for different team sizes. The error bars represent one standard deviation of the completion time and heading error in the respective graphs.
Figure 5.22: Trajectory taken by the 6-robot team as the number of robots with motion control failure increases.
Figure 5.23: (a) Time taken by the 6-robot team to complete the task as the number of failed robots increases; (b) number of collisions experienced by the 6-robot team as the number of failed robots increases.
Figure 5.24: Trajectory taken by the 6-robot team with an increasing number of robots that are unable to control their motion and communicate with their neighbours.
Figure 5.25: (a) Time taken by the 6-robot team to complete the task as the number of failed robots increases; (b) number of collisions experienced by the 6-robot team as the number of failed robots increases.
Figure 5.26: Task completion times for the 6-robot team with motion control and hardware failures. This graph shows the difference between the completion times when the robots experience the different levels of failure.
Figure 5.27: Path deviations as the number of robots with complete failure increases. The solid black line shows the trajectory of the centre of the carried object for teams containing failed robots, while the dotted line shows the trajectory of the centre of the carried object of the fully functional 6-robot team: (a) one robot with complete failure; (b) two robots with complete failure; (c) three robots with complete failure. The root mean square error and maximum deviation between the paths are also noted.
Figure A.1: Dimensions of the robotic platform used to test the cooperative control.

List of Tables

Table 3.1: Description of the steps of the Cooperative Transport behaviour.
Table 4.1: Linguistic variables used for the robot placement angle (θ_loc).
Table 4.2: Linguistic variables used for the desired heading (θ_social).
Table 4.3: Linguistic variables used for the steering angle (θ_steering).
Table 5.1: Standard deviation of the social mediation angle with increasing heading update delay.
Table 5.2: Standard deviation of the social mediation angle with increasing minimum mediation cycles for different heading update delays.
Table 5.3: Data for runs 101 to 505 used to measure the success rate.
Table A.1: Simulator parameters that govern the behaviour of the robots.
Table A.2: Specifications of the computers used for conducting tests with the Robotics Simulator.
Table C.1: Average heading error for different team sizes in the first set of scalability tests.
Table C.2: Average task completion times for different load sizes carried by various teams in the first set of scalability tests.
Table C.3: Average heading error for different team sizes in the second set of scalability tests.
Table C.4: Average task completion times for various teams in the second set of scalability tests.

Abstract

Research in cooperative robotics has intensified in the last two to three decades. A team of cooperating robots presents many advantages, such as the ability to solve difficult tasks that are beyond a single autonomous robot, efficiency, and robustness. However, achieving a desired cooperative behaviour, such as cooperative transportation, has its share of challenges, including coherent coordination of the robots' behaviours, a dynamic communication topology, and failure of robotic platforms.

This thesis presents a distributed behaviour-based cooperative control that enables transportation of objects in environments containing static obstacles. The controller of each robot is decomposed into two components, the Cooperative Transport Behaviour and the Motion Controller. While the Motion Controller depends on the hardware of the robots involved in the cooperative task, the Cooperative Transport Behaviour operates at a much higher level of abstraction and is independent of the robotic platform. The Cooperative Transport Behaviour decides the team's heading, while the Motion Controller determines the steering angle of the front wheels and the velocity of each robot in order to move the team in the direction decided by the Cooperative Transport Behaviour.

The Cooperative Transport Behaviour comprises three elementary behaviours combined using the subsumption architecture. The elementary behaviours are Go to Goal, Avoid Obstacle, and Social Mediation, chosen for their sufficiency in achieving cooperative transportation. The Go to Goal and Avoid Obstacle behaviours safely navigate the robot to the goal location. The Social Mediation behaviour allows the team to negotiate the team's heading. However, the robots were not able to converge to a precise value of the team's heading during the social mediation process. The Cooperative Transport Behaviour therefore performed a weighted update of the robot's heading such that only a portion of the social mediation angle was used to change the robot's current heading. This approach limited the effects of the slightly differing social mediation angles and enabled a smooth transition from the current to the new heading. The Motion Controller of the robot was designed using fuzzy logic, which not only mitigated the slight differences in the social mediation angle observed during the social mediation process but also enabled a high degree of cooperation between the robots.

The proposed cooperative control was tested in simulation. Several experiments demonstrate not only the success of the proposed control but also its scalability and robustness. The results indicate that the proposed behaviour-based cooperative control, which is capable of object transportation in environments containing static obstacles, is decentralised, scalable, robust, and local.

Acknowledgement

This research work would not have been possible without the help and support of a number of important people in my life. I would like to express my most sincere gratitude to all of them.

I would like to express my sincere gratitude to my principal supervisor, Dr. Sunil Lal. Dr. Lal devoted a tremendous amount of time instructing, mentoring, and helping me. His encouragement has been invaluable. He also enlightened me on the methods of scientific research.

I would like to express my heartfelt thanks to Associate Professor Jito Vanualailai for introducing me to the world of robotics research. Without his thoughts, guidance, and support, I would not be working in this amazing field.

I would like to thank my parents, wife, and sister for their never-ending support during my research. They provided me with the necessary help and encouragement.

I would also like to thank my colleagues from the School of Computing, Information, and Mathematical Sciences for valuable discussions. I would like to extend a special thanks to Mr. Anurag Sharma for his guidance and helpful suggestions. I would also like to thank Mr. Dinesh Kumar, Mr. Dinesh Rao, Mr. Shonal Singh, and Mr. Jai Raj for useful discussions.

Last but not least, I would like to express my most humble and sincere thanks to the University of the South Pacific for sponsoring this research project. I would like to thank the headship of the Faculty of Science, Technology and Environment for their help and support.

Chapter 1: Introduction

Research in collective robotics has intensified in the past few decades. In particular, cooperation in multi-robot systems has become a major research area for collective robotics researchers. In collective robotics, a team of autonomous robots solves tasks via cooperation or non-cooperation (Kube & Zhang, 1993). In non-cooperative tasks, such as searching and sorting, the team may employ a divide-and-conquer approach to reduce the execution time, although a single robot, given a sufficient amount of time, could perform the task alone. Conversely, cooperative tasks, for example cooperative load transport, require the robots to exhibit cooperative behaviour because a single robot cannot accomplish the task. This work focuses on tasks that require cooperation. The term cooperative robotics will be used to describe multi-robot systems that solve tasks via cooperation.

Cooperative multi-robot systems present many advantages. Firstly, when tasks are inherently too complex or impossible for a single autonomous robot to accomplish, such as moving or pushing a very heavy object, a team of mobile robots can accomplish the task cooperatively. Secondly, a team of robots can achieve a better perception of the environment, improving the accuracy of the decision making of not only the individual robots but also the team as a whole. Thirdly, a team of cooperating robots can realise performance gains; for example, in a reconnaissance task, a team of robots can explore and/or map an area faster than a single robot. Furthermore, a simple robot is typically not only cheaper and easier to construct but also more flexible and robust than a single powerful task-specific robot. Finally, in a heterogeneous team, the more capable robots can aid the less capable ones; for example, a robot may capitalise on the advanced sensing capabilities of its neighbouring robots.

A multi-robot system solving a task cooperatively needs to exhibit cooperative group behaviour. Cooperative group behaviour is defined in Cao et al. (1997) as follows:

"Given some task specified by a designer, a multiple-robot system displays cooperative behaviour if, due to some underlying mechanism (the mechanism of cooperation), there is an increase in the total utility of the system."

Cooperative group behaviour is ubiquitous in social insects and animals (Figure 1.1). The interesting aspect of the group behaviour exhibited by insects and animals is that the control of each individual in the group is distributed, asynchronous, and local. Moreover, the group behaviour is scalable and robust. Consider the cooperative transport of prey by a team of ants. Each ant makes its own decision based on its own local perception. There is no leader ant that guides all or part of the team, which makes the control scheme distributed. Ants can join and leave the transportation task, and the departure or arrival of ants does not stop the cooperative transport (unless very few ants are involved), making the overall control strategy robust. The capacity of each ant is limited, but as a team, ants exhibit complex group behaviours such as foraging and cooperative transportation of prey.

Figure 1.1: Examples of group behaviour: (a) cooperative transport by ants; (b) birds flying in a formation; (c) swarming behaviour of fish.

Mimicking the cooperative behaviour exhibited by social insects is a challenging task. Firstly, local perception capability introduces the challenge of creating a global and consistent awareness of the environment for each member of the team. A uniform knowledge of the environment results in identical and quicker decision-making. Secondly, the communication topology used by the team may itself be dynamic, that is, a robot's neighbours may change, or there may be delays in communication. Finally,

the distributed motion control laws that rely on local communication need to activate the behaviour or action of each robot in the team coherently for the team to exhibit cooperative behaviour. A cooperative control strategy aiming to achieve cooperative behaviour needs to overcome these challenges.

The ability of social insects to achieve group behaviour via distributed control laws in a simple, local, robust, and scalable manner has inspired the design of cooperative control strategies for multi-robot systems involved in cooperative tasks. Cooperative control is intrinsic in cooperative robotics: for a multi-robot system to exhibit cooperative behaviour, a cooperative control strategy becomes imperative, as it provides the mechanism of cooperation. In multi-robot systems, cooperative control deals with achieving collective group behaviour for a group of robots having homogeneous control, limited or no local interaction, local sensing capabilities, and limited processing capability (Shamma, 2008). Although there is no specific requirement for the control to be homogeneous, this thesis focuses on cooperative behaviour for robots having homogeneous control.

Matarić (2007) provides an excellent overview of control architectures for robotic systems. The architectures can be categorised as deliberative, reactive, hybrid, or behaviour-based. Chapter 2 of this thesis introduces these architectures. This research work uses behaviour-based control.

Cooperative control has four important aspects: the group objective, the agents/robots, the information topology, and the agents' motion control algorithms (Bai et al., 2011). The group objective refers to the task of the multi-robot system, for example, flocking (Reynolds, 1987; Olfati-Saber, 2006; Sharma et al., 2009), cooperative transportation of objects (Z.-D. Wang et al., 2004; Pereira et al., 2004; Lin et al., 2009; Ferrante et al., 2010a), and formation maintenance (Fax & Murray, 2004).
The capabilities of the robots dictate the mechanism of cooperation and the tasks that the robot team can solve. A robot having both sensors and a vision system to detect obstacles in the environment is more capable than a robot that has sensors only, and the control strategy can exploit this advanced perception capability when deciding how to navigate the robot in the environment. The information topology describes the method used by the robots to exchange messages, that is, to communicate. The agent's motion control algorithm, which

controls the behaviour of the individual robot, should be homogeneous and distributed.

Figure 1.2: Components of Cooperative Control.

The group objective studied in this thesis is cooperative transportation of objects. A team of three-wheeled mobile robots are the agents that support the group objective, and explicit communication between neighbouring robots serves as the communication topology. Each robot's controller is composed of a set of elementary behaviours, and the robot's low-level motion control algorithm, which controls the robot's steering angle and wheel motor forces, utilises a fuzzy logic based controller. Chapter 4 clarifies the role of the fuzzy logic based controller.

Researchers have approached cooperative control from a number of perspectives, which can be broadly classified as behaviour-based, potential-field based, and graph theory based approaches. In the behaviour-based approach, each robot is equipped with a set of elementary behaviours, and via behaviour arbitration or fusion, a subset of these behaviours is executed by the robots to achieve the desired cooperative behaviour (Brooks, 1986; Kube et al., 1993; Matarić, 1995; Parker, 1998; Lal et al., 2008). In the potential field approach, commonly used in flocking and formation control studies, a task-related potential function (objective function) is defined, and the robots' control laws are then derived from it. Lyapunov stability theory may be applied to prove the stability properties (Olfati-Saber, 2006; Sharma et al., 2009). Information exchange between robots with only local interaction capabilities introduces the complexity of converging to a uniform

task-related state, which is used to guide the robot team to complete the assigned task. The graph theory based approach represents the inter-robot information flow using directed or undirected graphs. Algebraic graph theory is used to study the consensus properties of the cooperative control, for example, convergence under a dynamic communication topology (Fax & Murray, 2004; Olfati-Saber et al., 2007). This thesis uses the behaviour-based approach to design the cooperative controller that facilitates cooperative transportation.

1.1 Cooperative Transportation of Objects

The term cooperative transport describes a team of autonomous mobile robots involved in the transportation of objects, possibly in the presence of static or dynamic obstacles of the same or different shapes and sizes. There are two common transport methods studied in the literature: pushing and carrying. The most common method is pushing the object during the transportation task (Figure 1.3). Most of the literature studies a cooperative box-pushing task (Kube et al., 1993; Matarić et al., 1995; Groß et al., 2006), in which the robots apply pushing or pulling forces to transport the box.

Figure 1.3: Cooperative transport by pushing or pulling: (a) box-pushing task in Kube & Zhang (1993); (b) object being pushed/dragged by swarm-bots in Groß et al. (2006).

Conversely, the robots may be required to carry the object to the goal location (Figure 1.4). In this type of transport, the robots have to attach to the transported object before transporting it to the goal location (Sugar & Kumar, 2002; Inoue et al., 2004; Fu et al., 2006; Soares et al., 2007; Ferrante et al., 2010a).
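The behaviour arbitration mentioned earlier in this chapter, in which a higher-priority behaviour subsumes lower-priority ones, can be sketched minimally as follows. This is an illustrative sketch only, not the thesis's implementation; all class and method names are my own:

```java
// Minimal subsumption-style arbitration: the first active layer wins.
// Illustrative only; the thesis's actual behaviours (Go to Goal,
// Avoid Obstacle, Social Mediation) are defined in Chapter 3.
import java.util.List;

interface Behaviour {
    boolean isActive();       // does this behaviour want control right now?
    double proposeHeading();  // heading (radians) this behaviour suggests
}

class SubsumptionArbiter {
    private final List<Behaviour> layers; // ordered highest priority first

    SubsumptionArbiter(List<Behaviour> layers) {
        this.layers = layers;
    }

    /** Returns the heading of the highest-priority active layer,
     *  or the fallback (e.g. the current heading) if none is active. */
    double arbitrate(double fallbackHeading) {
        for (Behaviour b : layers) {
            if (b.isActive()) {
                return b.proposeHeading();
            }
        }
        return fallbackHeading;
    }
}
```

With an Avoid Obstacle layer placed above a Go to Goal layer, for example, the avoidance heading would win whenever an obstacle is sensed, and the goal-seeking heading otherwise.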

Figure 1.4: Multi-robot systems carrying an object: (a) 3-robot team carrying an object in Soares et al. (2007); (b) two humanoid robots transporting an object in Inoue et al. (2004).

In comparison, carrying an object requires stronger coordination than pushing. For example, when pushing an object, if one robot becomes slower than the others, the slower robot may lose contact with the object, and it can later speed up and rejoin the transportation task without a major impact on the task. However, if the team is carrying the object, each robot has to maintain the right velocity and heading; faster robots will drag a slower robot, and the slower robot will cause the team to slow down.

1.2 Collision Avoidance

Safe navigation is also crucial in cooperative transportation. The team of robots may be required to avoid static or dynamic obstacles while maintaining a strict formation during the transportation task. Therefore, collision avoidance forms an important theme for multi-robot teams involved in cooperative transportation. Obstacle detection involves realising that an obstacle is present in the environment and that the robot has to avoid it. Obstacle detection, which is generally facilitated via sensors placed on the robot, determines the important aspects of the obstacle, such as its location/direction and its distance from the robot. Obstacle avoidance then considers avoiding the detected obstacles. The primary goal of obstacle avoidance is to empower the mobile robot to cater for obstacles in its path.

The collision avoidance methodology depends on environment awareness. If global knowledge of the environment is available, deliberative control can incorporate path planning. Deliberative control enables the robots to plan their movements and

assumes complete or partial global knowledge of the environment. Path planning fundamentally requires planning a trajectory for a mobile robot that enables it to navigate safely from some start location to a goal location, and path planners can easily incorporate obstacle avoidance during planning. Another technique is to build a representation or map of the environment as the robot moves through it; this was done by a robot named Toto and is explained in detail in Matarić (2007). However, this research work does not assume global knowledge of the environment, that is, the robots do not have any prior information about the obstacles present in the environment. The robots detect the obstacles via their on-board sensors as they navigate in the environment, but do not attempt to create a map for later use. Without a priori knowledge of the environment, except for the goal location, path planning is not applicable to the work studied in this thesis.

Without global knowledge of the environment, this research work has to utilise local collision avoidance, that is, avoid collisions as obstacles are detected. Reactive controllers employ this approach to collision avoidance. A number of local obstacle avoidance algorithms exist in the literature, including Bug 1 and Bug 2 (Lumelsky & Stepanov, 1987), the Vector Field Histogram (VFH) (Borenstein & Koren, 1991), and Bubble-Band (Quinlan & Khatib, 1993). A detailed review of obstacle avoidance techniques can be found in Siegwart & Nourbakhsh (2004).

Negotiation in Collision Avoidance

Using either of the two techniques stated in the previous section, that is, global or local path planning, a single robot makes a decision on how to avoid an obstacle and can then carefully execute a series of movements that implement that decision. However, obstacle avoidance in a multi-robot team involved in cooperative transport has one more layer of decision-making.
It is possible for the robots to have differing perceptions of the environment. Different perceptions in scenarios that require a uniform perception of the environment will result in contradictory decisions. Consider two robots involved in cooperative transportation in an environment containing obstacles. Figure 1.5 illustrates a scenario that may introduce varying environment perceptions. In this example, the robots are attached to the carried object and are required to transport the object to the goal area, which is marked as Goal.

In this scenario, Robot 2 can perceive both the goal and an obstacle, while Robot 1 can only perceive the goal.

Figure 1.5: Conflicting headings chosen by two robots.

Figure 1.5 clearly shows that the two robots are attempting to go in different directions. Evidently, the conflicting headings chosen by the robots are a direct result of their different environment perceptions. Robot 2 can perceive the obstacle and is trying to avoid colliding with it, while Robot 1 is simply trying to go to the assigned goal area. To overcome this problem, a negotiation protocol employing explicit communication mediates the heading direction of the robot team. Figure 1.6 shows the negotiated team heading for the robot team shown in Figure 1.5.

Figure 1.6: Negotiated team heading for two robots with different perceptions.

1.3 Task Definition

In this thesis, a team of autonomous three-wheeled mobile robots transports an object by carrying it from a start location to a static goal location in an environment containing static obstacles.
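One simple way to realise a negotiated team heading like the one in Figure 1.6 is a circular mean of the headings proposed by the robots. The thesis's actual protocol (the Social Mediation behaviour of Chapter 3) negotiates iteratively over local communication links, so the one-shot average below only conveys the idea; the class and method names are mine:

```java
// Illustrative sketch: one-shot circular mean of proposed headings.
// The thesis's Social Mediation behaviour is iterative and local;
// this is only a simplified picture of the negotiated outcome.
final class HeadingNegotiation {
    private HeadingNegotiation() {}

    /** Circular mean of headings given in radians. */
    static double negotiate(double[] proposedHeadings) {
        double sumSin = 0.0, sumCos = 0.0;
        for (double h : proposedHeadings) {
            sumSin += Math.sin(h);
            sumCos += Math.cos(h);
        }
        // atan2 of the summed unit vectors handles wrap-around correctly,
        // e.g. averaging headings of 350 and 10 degrees gives 0 degrees.
        return Math.atan2(sumSin, sumCos);
    }
}
```

For the two robots of Figure 1.5, each would feed its locally chosen heading into such a negotiation, and both would then steer toward the common result rather than toward their conflicting individual headings.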

The proposed cooperative control facilitating cooperative transportation of objects is designed based on the following assumptions:

a) The robotic agents and their control are homogeneous;
b) Local communication is utilised;
c) The control is scalable; and
d) The control is robust, that is, it functions satisfactorily in the presence of failures of robotic agents.

The following assumptions have been made about the transportation task and the environment:

1. The robots start pre-attached to the transported object. This research work does not consider assembly of the robots around the object; an example of work that looks at assembly of robots around the object prior to transportation is Groß et al. (2006).
2. The robots have no prior knowledge of the environment and the obstacles present in it. The robots perceive the obstacles via their sensing capabilities. As such, the robots have no way of planning a path to the goal area.
3. A common environment cue acting as a point of reference for all robots is present. This cue could be in the form of a light source placed at the goal location. All robots use this environment cue as a reference point when localising themselves.
4. A beacon placed on the transported object acts as a frame of reference for the robot team. Using this frame of reference, each robot can identify its placement in the team during the cooperative transport task. The robot's motion control, which is discussed later, requires the robot to know its placement in the team, for example, whether the robot is in front of or behind the reference point relative to the direction of motion.
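The front-of/behind decision in assumption 4 amounts to projecting the beacon-to-robot vector onto the team's direction of motion. The sketch below illustrates that idea only; the thesis's actual placement computation (the placement angle used by the motion controller in Chapter 4) is richer, and all names here are mine:

```java
// Illustrative sketch of assumption 4: is a robot in front of or behind
// the beacon, relative to the team's direction of motion? Not the
// thesis's actual placement computation; names are hypothetical.
final class Placement {
    private Placement() {}

    /**
     * Positive result: the robot is ahead of the beacon along the motion
     * direction; negative: behind it; near zero: abreast of the beacon.
     * motionHeading is in radians, positions in world coordinates.
     */
    static double alongTrackOffset(double robotX, double robotY,
                                   double beaconX, double beaconY,
                                   double motionHeading) {
        double dx = robotX - beaconX;
        double dy = robotY - beaconY;
        // Project the beacon-to-robot vector onto the unit motion vector.
        return dx * Math.cos(motionHeading) + dy * Math.sin(motionHeading);
    }
}
```

Each robot could evaluate such a projection locally from its sensed beacon position, giving it the placement information that its motion controller needs without any global map.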

1.4 Research Objectives

This thesis studies cooperative transportation. Section 1.3 outlined the task that the robot team has to complete; this section highlights the key objectives of the thesis. The proposed cooperative control should have a mechanism that enables robots in the team to negotiate the team heading during obstacle avoidance. Although negotiation is an important aspect, the proposed cooperative control should also be distributed, robust, and scalable. Therefore, the objectives of this thesis are designed to incorporate these aspects. Chapter 2 presents a detailed account of all the design choices.

1.4.1 Distributed Cooperative Control Algorithm

This thesis presents a distributed control strategy accommodating transportation of an object by a team of autonomous mobile robots having homogeneous control. This transportation task should be possible with obstacles present in the environment. A distributed control promises robot autonomy.

1.4.2 Obstacle Avoidance via Negotiation

When obstacles are present in the environment, it is possible for robots to have varying perceptions of the environment, which lead to conflicting decisions. A negotiation-based obstacle avoidance scheme is able to neutralise the effects of these varying perceptions. Since negotiation via explicit communication is an expensive exercise, particularly with an increasing number of robots that have only local communication capacity, the aim is to minimise both the state information exchanged between the robots and the frequency of communication.

1.4.3 Scalable and Robust Cooperative Control Strategy

Apart from being distributed, the control strategy facilitating transportation of the object needs to accommodate a reasonably large number of robots, that is, the proposed cooperative transport control should be scalable in terms of team size. Another very important property to study is the robustness of the control. Robot failures are possible, and there can be different levels of failure.
For example, a robot may not be able to communicate with other robots, or a robot may be able to communicate but

not enforce the required motion because of hardware failure. The control strategy should enable object transportation even if a reasonable number of robots fail.

1.5 Thesis Organisation

Chapter 2 provides the relevant background on cooperative control for the cooperative transportation task. A review of existing literature is provided, along with justification for the choices made in the design of the cooperative control proposed in this thesis. Chapter 3 describes in detail the cooperative control chosen for the cooperative transportation task. This chapter also discusses the elementary behaviours of the behaviour-based robot controller and the negotiation protocol that allows the robots to use explicit communication to decide the team heading. Chapter 4 presents an environment to test our cooperative control, and describes the robot and its capabilities. A number of different arenas were set up to test the proposed cooperative controller, and the three performance indicators that allow us to measure the performance of the cooperative control are discussed. Chapter 5 presents the results of the tests that measure the three performance indicators outlined in Chapter 4. The actual arenas used to compute the success rate and to test the scalability and robustness of the control are presented. The mechanical limitations that prevent the robots from following the agreed team heading precisely are highlighted, and the errors in negotiation of the team heading, together with remedies to lower or eliminate these errors, are considered. Fuzzy logic was used to design the low-level motion controller of the robots so that it can handle the uncertain team headings. Chapter 6 concludes with a summary of this research work and identifies possible future work.

Chapter 2: Background and Related Work

Cooperative control strategies consider coordinating the motion of the robots in order to achieve a desired group behaviour. Researchers have looked at different aspects of cooperative control capable of exhibiting the desired group behaviour. The following sections discuss these aspects, bearing in mind the vast literature that exists in this domain. This chapter considers the basic principles that guide the design of the proposed cooperative control before presenting existing cooperative control architectures utilised by multi-robot systems, and also looks at inter-robot communication mechanisms, types of robot teams, and cooperative control strategies for cooperative load transport. First, the types of control studied for navigation of single robots are discussed, before presenting the types of control employed to coordinate a group of autonomous mobile robots.

2.1 Control Architectures for Robotic Systems

The design of the controller for a single robotic system can be classified as deliberative, reactive, hybrid, or behaviour-based (Matarić, 2007).

2.1.1 Deliberative Controller

A deliberative controller assumes knowledge of the environment that the robot occupies; with global knowledge of the environment, path planning can be employed to navigate the robot within it. The approaches taken by path planners can be classified into three categories (Siegwart & Nourbakhsh, 2004): the cell decomposition method (Hiroshi et al., 1990), the roadmap method (Amato & Wu, 1996), and the potential field method (Warren, 1989). Path planning can be either offline or online. In offline planning, the robot's path planner plans the path from the start location to the goal location before the robot starts moving. The robot is simply required to follow the given path to reach its goal.
Offline path planning requires the environment and the goal to be static, which may not be possible in every scenario. Therefore, online path planning is utilised for dynamic environments and/or goals. Online path planning requires the robot to replan its path if it cannot follow the initially planned path, for example, when a

previously unseen obstacle appears in the planned path. Although online planning seems attractive, it has some problems that hinder its practicality:
i. if re-planning takes longer than the time available while the robot moves, the robot will have to stop and wait for planning to complete;
ii. a fair amount of storage is required to represent the robot's environment; and
iii. the information used to represent the environment has to be accurate and current.

2.1.2 Reactive Controller

Reactive controllers do not assume knowledge of the environment and therefore cannot do path planning. There is a very tight coupling between the robot's sensors and its actuators, and this coupling is specified at design time by the designer in the form of rules that map the state of the robot to the corresponding action(s). In case multiple actions are triggered for a particular state, the robot can use arbitration or fusion to determine the final action. A well-known reactive architecture is the Subsumption Architecture (Brooks, 1986). An example of a reactive controller was presented in Matarić et al. (1995), where two six-legged robots were required to cooperatively push a box. The primary drawback of the reactive controller is that it becomes extremely difficult to identify and map all possible robot states to appropriate action(s) for complex robotic systems. It is also challenging for the designer to identify the priority of the actions for arbitration or fusion. Another limitation of purely reactive controllers is that, without any representation of the environment or memory of previous actions or states, it becomes easy for the robot to repeat an unwanted action. The reason is simple: since reactive controllers have no long-term memory of previous actions, there is no way for the control to realise that it is making a mistake repeatedly.
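The rule-to-action mapping with priority-based arbitration described above can be sketched in a few lines. This is a hedged, minimal illustration (the rule set, state fields, and action names are invented for the example, not taken from any cited controller):

```python
# A minimal sketch of reactive arbitration: rules map the sensed state
# directly to actions, and a fixed priority picks one action when
# several rules fire at once.

RULES = [
    # (priority, condition, action) - lower number = higher priority
    (0, lambda s: s["obstacle_near"], "stop_and_turn"),
    (1, lambda s: not s["goal_visible"], "search"),
    (2, lambda s: s["goal_visible"], "go_to_goal"),
]

def arbitrate(state):
    """Return the action of the highest-priority rule that fires."""
    fired = [(p, a) for p, cond, a in RULES if cond(state)]
    return min(fired)[1] if fired else "idle"
```

With an obstacle nearby, `stop_and_turn` wins even when the goal is visible; choosing these priorities by hand is exactly the design burden the text describes.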
The hybrid controller addresses the limitations of the deliberative and reactive controllers.

2.1.3 Hybrid Controller

A hybrid controller takes the best of the reactive and deliberative approaches. This type of controller utilises a three-layer architecture comprising a planning layer, a reactive layer, and a middle layer. The middle layer implements the logic that determines the utility of the deliberative and reactive aspects. This makes the middle layer design very

challenging as well as specific to a particular problem. The challenge with the design of the middle layer is finding the right balance between the amount of deliberation and reactivity. Some examples of hybrid controllers are SSS (Connell, 1992), AuRA (Arkin & Balch, 1997), and 3T (Bonasso et al., 1997).

2.1.4 Behaviour-based Controller

A behaviour-based controller is designed by combining a set of basis or elementary behaviours, which are largely inspired by insects. Matarić (2007) and Matarić (1990) employed behaviour-based control for the robot named Toto to create a representation of the environment and then navigate using the representation it created. The behaviours of a behaviour-based control are time-extended, that is, a behaviour takes time to realise, and this is what differentiates it from the actions of a reactive control, which are realised immediately. For example, it takes longer to achieve the "avoid obstacle" behaviour than to execute a simple stop-and-turn action of a reactive control. However, behaviours allow the robot to implement logic that determines an effective way to behave in a given scenario, rather than the simple, ad-hoc actions of a purely reactive controller. This research work utilises a behaviour-based control, which is discussed in more detail later in this chapter. The robot's navigation system is designed using the behaviour-based approach. There are two reasons for choosing this type of control. Firstly, it has been successfully applied to cooperative transport applications. Secondly, a behaviour-based control can be made either reactive or deliberative, and there is no need for a middle layer as in the hybrid architecture.

2.2 Cooperative Control

The previous sections reviewed the control architectures employed by single robotic systems. This section provides an overview of cooperative control architectures.
While the control architectures employed by single robotic systems coordinate the actions of a single robot or an assemblage of robots treated as a single robot, cooperative control architectures look at coordinating the actions of multiple robots that cannot be treated as a single assemblage.

Cooperative control deals with achieving collective group behaviour for a group of robots having homogeneous control, limited or no local interaction, local sensing capabilities, and limited processing capability (Shamma, 2008). The control of an individual robot can be deliberative, reactive, hybrid, or behaviour-based, as outlined in Section 2.1 (Control Architectures for Robotic Systems).

2.3 Principles of Cooperative Control

The design of the cooperative control strategy in this thesis is guided by the principles of scalability, robustness, and locality. The control strategy that coordinates the motion of a multi-robot system has to be scalable. Scalability refers to the control strategy's ability to cater for an increasing number of robots: an increase in the task size or complexity, together with a proportional increase in the number of robots in the team, should result in comparable performance. Robustness measures the control strategy's capacity to handle robot failures. For example, assuming a wireless network facilitates inter-robot communication, a robust control strategy enables the team to accomplish the task even with one or more robots unable to communicate with others in the group. Locality ensures that robots rely on data gathered via their own sensors and on communication with neighbours; that is, the team does not employ a fully connected network in which each robot can communicate with every other robot. A fully connected communication topology would hinder the control's scalability because of its all-to-all connectivity requirement.

2.4 Cooperative Control Architectures

Cooperative control has numerous applications, but this thesis focuses on cooperative control for the load transportation task. The cooperative control architectures for cooperative transport can be classified as centralised or decentralised (Cao et al., 1997).
The decentralised architecture can be further decomposed into leader/follower and leaderless architectures.

2.4.1 Centralised Architectures

A centralised architecture involves a central authority or station guiding the multi-robot team. This architecture results when employing a leader/follower strategy or when deploying a fully connected communication topology that enables all-to-all communication. In the classical leader/follower or master/slave strategy, the leader plans the team's trajectory and broadcasts sufficient information regarding the trajectory for the followers to implement it. See Fu et al. (2006) for a classical leader/follower strategy, and Soares et al. (2007), Sugar & Kumar (2002), Lin et al. (2009), and Inoue et al. (2004) for variations of the leader/follower strategy, which are explained in the next section. A single point of failure and inconsistent perception of the environment are inherent disadvantages of such architectures. If the leader robot fails, the entire robot team may fail to accomplish the task. A leader robot may not have a complete perception of the environment, and this may lead to biased or incorrect decisions taken by the team. To overcome the issue of a single point of failure, teams may contain multiple leaders. Ren & Sorensen (2008) have studied teams with single and multiple leaders in formation control strategies. Dynamic team leaders allow team members to contribute effectively to the decision-making process because each leader is chosen based on how effectively the new leader can guide the team. Fu et al. (2006) employed dynamic leader role assignment for obstacle avoidance during a cooperative transportation task: a robot assumed the leader's role if it was closest to the obstacle.

2.4.2 Decentralised Leader/Follower Architectures

The decentralised leader/follower architecture sits between the two extremes of centralised and decentralised architectures. Its main aim is to detach the motion control of the individual robots from the overall coordination methodology.
The leader in this architecture indicates aspects of the planned trajectory to the followers, while the robots are required to decide their individual motions such that the planned trajectory is realised. Lin et al. (2009) used two four-wheeled robots to cooperatively transport a bar using a master/slave configuration. Force sensors provided the implicit communication between the master and slave, and each robot is equipped with

ultrasonic transmitters to localise itself using the ultrasonic receivers in the environment. Mathematical constraints based on geometry, speed, and force govern the motion control of the master and the slave. Although this work used the master/slave configuration, the slave independently decided its motion based on its own perception of the environment. Inoue et al. (2004) utilised the master/slave control strategy for two humanoid robots transporting an object to a goal area. The slave robot corrected the mutual position shifts that occur during the cooperative transportation task using learning algorithms such as a Classifier System or Q-Learning. Only the slave robot was required to correct its position using the position of the master robot, which it recognised using a vision system. In this master/slave configuration, the motion control of the robots is decentralised. In Sugar & Kumar (2002), a leader/follower control strategy facilitated cooperative transport of objects for a heterogeneous robot team consisting of two or three robots. The leader planned the trajectory for the task and broadcast the trajectory information to the followers. Using the information provided by the leader, the followers then independently planned and executed their individual control modules to follow the planned trajectory. The role of leader may rotate among the team members for adaptability. Soares et al. (2007) designed a distributed control architecture using attractor dynamics to enable a team of robots to transport an object to a target location in an environment containing static and dynamic obstacles. In this leader/follower control scheme, the followers use the leader's heading and velocity information to determine individually the correct behaviour to execute. In order for a follower to plan its trajectory, it needs only two robots as reference points.
Therefore, this control strategy is also termed a leadership-neighbourhood decentralised motion control strategy. Despite these attempts to fine-tune the centralised leader/follower strategies, some key issues remain with the decentralised leader/follower architecture. For example, in a team with redundant leaders, the team needs to choose the next leader if one leader fails. This is a non-trivial problem because, if a leader fails in a multi-leader

team, the followers have to first detect that the leader is unavailable and then choose or elect the next leader. It is not only complex to manage multi-leader teams but also difficult to envision scalability when there is control coupling between team members, that is, when the actions of non-leader robots are heavily dependent on the actions of the leader.

2.4.3 Decentralised Leaderless Architectures

A decentralised leaderless architecture does not require any one robot to act as the central station. All robots are autonomous; each robot dictates its own motion. To ensure scalability, and considering technological constraints such as the range of the communication system, each robot makes its decisions based on its own observations and the observations of its neighbours. A robot's neighbours are the set of all robots that it is able to communicate with. This neighbourhood is determined either by the reach of the communication system used by the robots or by the perception ability of the robot's vision system. A robot's state is the collection of values that dictate the behaviour of the robot. The aim is to decide the state of each robot such that the behaviour of each robot contributes to the group objective, which here is cooperative transportation of objects. One of the key challenges in the decentralised architecture is the distribution of each robot's state information to all robots in the team. The state information is necessary for the multi-robot team to reach a common decision, and the local communication capability of the robots prevents a robot from reaching all members of the team. Therefore, dissemination and fusion of the state information of all robots is a major challenge in the decentralised architecture. However, the key advantage of the decentralised architecture is that it naturally allows scalability and robustness. Ferrante et al. (2010a) decomposed each robot's global cooperative transport behaviour into three sub-behaviours: Obstacle Avoidance, Go to Target, and Social Mediation. The Social Mediation behaviour enables negotiation of the direction of motion, which forms the state information of each robot, while the robots control their individual motions according to the agreed direction using a homogeneous motion control algorithm.
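One simple way to picture how social mediation can reconcile conflicting desired directions is a circular mean of a robot's own heading with those reported by its neighbours. This is a simplified stand-in for illustration only, not Ferrante et al.'s actual rule:

```python
import math

def mediated_heading(own_heading, neighbour_headings):
    """Blend a robot's desired heading with its neighbours' headings
    using a circular mean, so robots with different perceptions drift
    towards a single team heading. Angles are in radians.
    """
    headings = [own_heading] + list(neighbour_headings)
    # Sum unit vectors and take the angle of the resultant; this
    # handles wrap-around at +/-pi, unlike a plain arithmetic mean.
    x = sum(math.cos(h) for h in headings)
    y = sum(math.sin(h) for h in headings)
    return math.atan2(y, x)
```

For instance, a robot heading straight for the goal (0 rad) whose neighbour wants to turn away from an obstacle (pi/2 rad) would adopt the compromise heading pi/4.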

Trianni & Dorigo (2006) utilised an artificial neural network based robot controller to control a group of autonomous robots that were attached to each other and were wandering in an environment containing holes. The robots' artificial neural network controllers learnt the obstacle/hole avoidance behaviour. A robot in danger of falling into a hole communicated implicitly with other robots via the traction force sensor. If the traction force sensor reading exceeded a particular threshold, the robot changed its direction of motion. If all robots complete the change of direction uniformly, the team moves away from the danger, that is, the hole. Although the decentralised leaderless architecture overcomes the shortcomings of the centralised and decentralised leader/follower architectures, it has its share of challenges, which are very similar to those faced when attempting to mimic the cooperative behaviour of social insects. Some of these challenges are:
1. Uniform perception of the environment - with only local perception capability, it becomes challenging to create a global and consistent awareness of the environment for each member of the team. Uniform knowledge of the environment would naturally result in identical and quicker decision-making.
2. Dynamic information topology - the communication topology used by the team may itself be dynamic, that is, a robot's neighbours may change over time.
3. Uniform action or behaviour activation - distributed motion control laws that rely on local communication need to activate the behaviours or actions of all robots coherently for the team to exhibit cooperative behaviour.
This research work utilises the decentralised leaderless cooperative control architecture. The reasons are clear:
1. There is no single point of failure;
2. There is no complexity of managing multiple leaders; and
3. The control is scalable, that is, it can handle a large number of robots.
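The neighbourhood notion underlying the locality principle, where a robot's neighbours are whichever robots fall within the reach of its communication system, can be sketched as follows. The circular communication footprint and the function name are assumptions made for the example:

```python
import math

def neighbours(robot_id, positions, comm_range):
    """Return the neighbour set of a robot: every other robot lying
    within the (assumed circular) reach of its communication system.

    positions: mapping {robot_id: (x, y)} in a common frame.
    comm_range: communication radius, in the same units as positions.
    """
    ox, oy = positions[robot_id]
    return {
        rid
        for rid, (x, y) in positions.items()
        if rid != robot_id and math.hypot(x - ox, y - oy) <= comm_range
    }
```

Because this set is recomputed from current positions, it also captures the "dynamic information topology" challenge above: as the team moves, a robot's neighbour set changes.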
2.5 Inter-Robot Communication

Irrespective of the multi-robot system's control architecture, communication plays a crucial role in a cooperative multi-robot system. In cooperative transport, it is necessary for team members to coordinate their actions, and information exchange is at the heart of motion coordination for object transportation. Communication can

happen in two ways: implicitly or explicitly. Implicit or indirect communication requires the robots to exchange information via sensing or environment cues. A robot may rely on its sensing abilities to gather information about its environment (Campo et al., 2006). Implicit communication also happens when robots exert push/pull forces on each other; Trianni & Dorigo (2006) had the robots use traction forces to determine the direction of motion during a cooperative hole-avoidance task. Explicit communication involves a team of robots exchanging messages with each other via a communication system, with the aim of improving the accuracy of the decisions made by the team in a cooperative transportation task. The communication system is realised using a number of tools. The most common tool employed for inter-robot communication is a wireless network (Ferrante et al., 2010a; Sugar & Kumar, 2002; Matarić et al., 1995), over which the robots exchange state information in order to coordinate their motions. Another tool is sound signals: Trianni & Dorigo (2006) had the robots use sound signals to indicate the presence of obstacles, that is, holes in the arena, to other robots in the team. Although works such as Trianni & Dorigo (2006) report that implicit communication is sufficient for cooperative transportation tasks, explicit communication has been used either to accomplish a task that could not be accomplished without communication (Matarić, 1995) or to improve the efficiency of the task (Trianni & Dorigo, 2006). In Matarić (1995), two six-legged robots were tasked with pushing a box cooperatively to a goal location. Without communication, the action of one robot was quickly undone by the action of the other. However, when explicit communication was used to synchronise the actions of the robots, the pair was able to cooperatively push the box to the goal location.
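To make the notion of exchanged state information concrete, the following sketch shows the kind of compact message robots might broadcast locally under explicit communication. The fields and serialisation are illustrative assumptions, not a protocol documented in any of the cited works:

```python
from dataclasses import dataclass

@dataclass
class StateMessage:
    """A hypothetical per-robot state broadcast for heading negotiation."""
    sender_id: int
    desired_heading: float   # radians, in the shared reference frame
    obstacle_seen: bool      # whether the sender currently perceives an obstacle

def encode(msg):
    """Serialise compactly for a bandwidth-limited local broadcast."""
    return f"{msg.sender_id},{msg.desired_heading:.4f},{int(msg.obstacle_seen)}"

def decode(payload):
    sid, head, obs = payload.split(",")
    return StateMessage(int(sid), float(head), bool(int(obs)))
```

Keeping such messages to a few fields reflects the objective stated earlier of minimising both the state information exchanged and the frequency of communication.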
Trianni & Dorigo (2006), one of the works closest to this thesis, studied both implicit and explicit communication, and reported that explicit communication enabled the robot team to demonstrate a quicker response in obstacle avoidance. Explicit communication aids the proposed cooperative control and forms its information topology for the following reasons:

1. The available technology has been used successfully (Roberts et al., 2009; Ferrante et al., 2010a); and
2. Explicit communication has been shown to improve obstacle avoidance (Trianni & Dorigo, 2006; Matarić, 1995).

2.6 Synchronisation in Group Behaviour

Strong synchronisation is important in cooperative tasks that require strong coordination. Strong coordination is undoubtedly important in cooperative transport because the team members have to maintain a very strict formation; weak coordination suffices for tasks such as flocking. Synchronisation can be achieved via synchronous or asynchronous cooperative control. Synchronous control requires the robots to follow a particular order of task execution, or there exists control coupling between the robots' motion controllers. Consider the work in Matarić et al. (1995), discussed in the previous section. In that cooperative strategy, the robot team was controlled via a synchronised turn-taking protocol in which each robot took turns pushing the box. The synchronised protocol, aided by explicit communication, enabled the robot pair to push the box to the goal area in all tested instances. In asynchronous control, the robots are autonomous and control their motions independently. Asynchronous control can be aided by both explicit and implicit communication. Trianni & Dorigo (2006) used traction force sensors to synchronise the actions of the robots for environment exploration and hole avoidance; explicit communication using sound signals was used to improve the response time of hole or obstacle avoidance. Campo et al. (2006) employed a vision system that allowed a robot to recognise the direction of motion of its neighbouring robots and adjust its own motion in a cooperative transportation task. Ferrante et al. (2010a) used explicit local communication between robots to negotiate the direction of motion in a cooperative transportation task.
The negotiated, common direction resulted in a synchronised motion of the team. The key difference between synchronous and asynchronous control for coordinating, that is, synchronising, the motion of the robots is the control coupling between the robots' motion controllers. In the synchronous approach, the motion controller of one robot

directly influences the motion controller of another robot, and as such, one robot directly dictates the actions of another. In asynchronous control, however, the robot is autonomous: it decides its motion based on its own perception and is not dictated the exact actions to take by another robot. Although other robots provide valuable input, they cannot dictate the actions of any other robot. Asynchronous control was chosen because it clearly makes the system scalable: it prevents a high degree of control coupling between the low-level motion controllers of the robots. Therefore, asynchronous control becomes a natural choice.

2.7 Homogeneous and Heterogeneous Teams

Depending on the capabilities of the individual robots in the team, a multi-robot system is classified as homogeneous or heterogeneous. Two aspects of each robot determine the classification. The first aspect is the robot's hardware: one or a combination of the following may enable the robot's environment perception: proximity sensors, a vision system, a sound system, and a communication system. The second aspect is the robot's motion control algorithms. In a homogeneous team, all robots have the same hardware and motion controller. In contrast, a heterogeneous team comprises at least one member having different hardware or a different motion control algorithm. Figure 2.1 shows examples of both types of teams. Most of the work considered in this thesis studies cooperative transportation using homogeneous robots, for example, Ferrante et al. (2010a), Trianni & Dorigo (2006), and Matarić et al. (1995). However, Sugar & Kumar (2002) and Matarić (1995) study heterogeneous robot teams for cooperative transport. All robots in this research work have the same control architecture and hardware, making the team homogeneous. There is no technical basis for this choice.
Since most of the research work, including the key literature that forms the basis of this thesis, used homogeneous teams, a homogeneous team is employed for the cooperative transportation task.

2.8 Approaches to Cooperative Transport

Researchers have investigated a number of different cooperative control strategies for cooperative tasks. The control strategies can be broadly classified into three categories: behaviour-based approaches, potential-field based approaches, and graph theory based approaches. This research work utilises behaviour-based cooperative control for cooperative transportation of an object. Chapter 1 provides a brief introduction and related references on the other two approaches.

Figure 2.1: Types of multi-robot teams: (a) homogeneous team comprising team members with the same hardware and motion control (Trianni & Dorigo, 2006); (b) heterogeneous team with different hardware and motion control laws for each robot (Sugar & Kumar, 2002).

2.8.1 Behaviour-based Approach

The behaviour-based approach equips each robot with a set of elementary behaviours. To achieve the desired cooperative behaviour, the robots execute a subset of these behaviours through behaviour arbitration. The books Arkin (1998) and Matarić (2007) provide an excellent background on behaviour-based controllers; Chapter 6 of Floreano & Mattiussi (2008) is also worth reading on this subject.

2.8.1.1 Behaviour Selection

Social insects have largely inspired the choice of the elementary behaviours (Kube & Zhang, 1993b; Matarić, 1995). However, having selected the elementary behaviours, the next challenging task is behaviour arbitration or fusion, which looks at the

selection and activation of the elementary behaviours that lead to the desired group behaviour.

2.8.1.2 Behaviour Arbitration and Fusion

Brooks (1986) introduced the Subsumption Architecture. Each robot is visualised as having a set of competencies organised in layers. The architecture requires a bottom-up design approach in which the first competency or layer is perfected before the second layer is added on top of it; subsequent layers can be added similarly. The layers can be combined to form a single action: higher-level layers achieve their objectives either by making use of the lower-level layers or by suppressing them completely. The layers communicate asynchronously. The key challenge is choosing the priorities of the layers, that is, the order in which the competencies are layered. Parker (1998) extended the strict subsumption architecture to form the ALLIANCE architecture by introducing behaviour-sets composed of the elementary or basis behaviours. Two motivational behaviours, robot impatience and robot acquiescence, enable activation of the behaviour-set appropriate for a high-level task. Impatience motivates a robot to take over a task to help the team achieve it, while acquiescence motivates the robot to leave the currently active task and involve itself in a task to which it can effectively contribute. ALLIANCE was proposed to address fault tolerance and adaptability. Matarić et al. (1995) propose the concept of basis behaviours and show that combining these behaviours results in desired global behaviours. A set of five basis behaviours is proposed, and the selection of the basis behaviours that result in a global behaviour, such as flocking, is achieved using special combination operators, termed complementary or contradictory.
The complementary operator allows composition of behaviours that can be executed concurrently while contradictory operator allows execution of behaviours that are mutually exclusive and can execute one at a time. The concept of basis behaviours was bio-inspired and the authors showed that the composition of the basis behaviours using the above operators allowed the designer to predict the group behaviour. Kube et al. (1993) utilised four behaviours, Avoid, Goal, Slow, and Find in a boxpushing task. The behaviour arbitration is achieved using trained Adaptive Logic 24

Networks (ALNs). ALNs offer an easier design choice because the complexity of behaviour arbitration increases with the number of behaviours. The alternative, the strict subsumption architecture, requires the designer to decide the priorities of the behaviours, that is, the position of each behaviour in the layered architecture.

2.9 Our Approach

This research work utilises a decentralised, leaderless cooperative control architecture. The cooperative control chosen is asynchronous and aided by explicit communication. The robot's navigation system is behaviour-based. Furthermore, all robots have the same control architecture and hardware, making the team homogeneous.

Since a decentralised leaderless control architecture is used, there is a need for a mechanism that enables the team of robots to decide the direction of motion of the team, that is, the team's heading, during cooperative transportation. A negotiation protocol is used to enable the team to agree on the team's heading as the team progresses towards the goal. This negotiation protocol also accommodates collision avoidance.

Although a number of relevant studies have been cited in this chapter and Chapter 1, this section looks at Ferrante et al. (2010b) in particular. Ferrante et al. (2010a) provides the algorithms and other technical details related to Ferrante et al. (2010b). The reason for focusing on this particular research is that it investigated cooperative transportation using attributes very similar to the ones investigated in this thesis. Some of these similar attributes are:

1. A behaviour-based control architecture is used for the robots;
2. A leaderless cooperative control architecture is used to achieve cooperative transportation of objects; and
3. A negotiation protocol that uses explicit communication is used to mediate the direction of motion of the team.

In Ferrante et al.
(2010b), a group of foot-bot robots transports an irregularly shaped hand-bot from a start location to a target location while avoiding cuboid-shaped obstacles, each arbitrarily placed and oriented. The foot-bots are equipped with a communication system, sensors, and actuators that allow them to perceive the obstacles and the target,

communicate with neighbouring robots, and control the motion of the robot by adjusting the speeds of its right and left wheels. The global Cooperative Transport behaviour proposed in that work is decomposed into three sub-behaviours: Obstacle Avoidance, Go to Target, and Social Mediation.

The cooperative transport control (Figure 2.4) starts by perceiving the target and the obstacle. Assuming that the robot can perceive the target, it checks whether it is facing the target; not facing the target is known as misalignment. The danger of misalignment is that the carried object may hinder the robot from perceiving a close obstacle. If the robot is misaligned or an obstacle has been perceived, the robot becomes anti-social; otherwise it remains social. When there is misalignment only, the robot tries to face the target by setting its angle of motion towards the target location. However, when the robot perceives an obstacle and is misaligned, obstacle avoidance takes priority over misalignment correction. Since an obstacle is perceived, the robot tries to adjust its angle of motion so that it avoids the obstacle. This angle is a weighted angle that takes into account the target location and the obstacle location. When the robot is very close to an obstacle, this weight is adjusted so that priority is given to obstacle avoidance while still factoring in the target location.

After each robot has decided its angle of motion, the social mediation loop begins. The social mediation control algorithm is given in Figure 2.3. In this loop, the robots exchange heading information with their neighbours and decide the team's heading, which is also referred to as the system's angle of motion. This system's angle of motion is weighted to avoid abrupt motion and is used by a proportional control function to move the system in the desired direction. Because the behaviours were well designed, this research work adopted the behaviours proposed in Ferrante et al.
(2010a), with major changes to the internal workings of the behaviours. There are, however, differences between what has been achieved in this thesis and the work done in Ferrante et al. (2010a). Some of these differences are:

1. The concept of dual states (social and stubborn) was removed; instead, the robot-to-obstacle distance information is used during the social mediation behaviour. This enables robots perceiving the environment differently, for

example, when two robots detect different obstacles, to consider each other's perceptions when deciding the team's heading;

2. An iterative version of the social mediation behaviour enables a common but approximate system-level heading to be agreed upon by the robot team using local interactions only;

3. A three-wheeled, vehicle-like robot is used in this research work, while in Ferrante et al. (2010a) the authors used s-bots, which have a rotating turret. The rotating turret allowed the team to make major changes to the team's heading conveniently; and

4. A fuzzy logic based low-level motion controller was used for the robot. This controller takes the agreed team heading as input and decides the steering angle and wheel-motor forces of the robot.

To understand the need for the above amendments, there is a need to first look at the existing negotiation mechanism implemented by the Social Mediation behaviour in Ferrante et al. (2010a). The algorithms for the Social Mediation and Cooperative Transport behaviours are provided in Figure 2.3 and Figure 2.4 respectively. Notice that in steps 7 to 9 of Algorithm 2, the robot's state is set to stubborn if the robot perceives an obstacle or is misaligned towards the goal. In Algorithm 1, which implements the social mediation behaviour, the robot sends its intended direction of motion to its neighbouring robots. An interesting scenario is illustrated in Figure 2.2.

Figure 2.2: Unsuccessful negotiation of the team heading by the social mediation behaviour of Ferrante et al. (2010a). Two robots carry an object towards the goal; Robot 1 heads away from Obstacle 2 while Robot 2 heads away from Obstacle 1.

According to Algorithm 2, both robots in Figure 2.2, Robot 1 and Robot 2, will be in the stubborn state following obstacle detection. Robot 2 is detecting Obstacle 1 while Robot 1 is detecting Obstacle 2. Assuming both robots can see the goal or target, each robot computes its own avoidance heading in an attempt to avoid the obstacle that it perceived. When social mediation is performed according to Algorithm 1, both robots will send out their own headings. There is no attempt to incorporate the view, that is, the heading information, of the neighbouring robot. Therefore, Robots 1 and 2 will never be able to mediate the team's heading, because each is simply sending its own heading to the other robot.

Figure 2.3: Social Mediation behaviour presented in Ferrante et al. (2010a).

This thesis will provide an alternative way of negotiating the team's heading. The new version of both behaviours will be introduced in Chapter 3.

Figure 2.4: Cooperative Transport behaviour presented in Ferrante et al. (2010a).

2.10 Summary

This research work utilises a decentralised leaderless cooperative control architecture for the cooperative transportation task. The subsumption architecture is used to model the behaviour of each robot, and the control of each robot in the team is homogeneous. Although this work follows the work of Matarić (1995), which studied the combination of basis behaviours for different group behaviours, it does not consider combining behaviours to yield different group behaviours. This thesis studies only one group behaviour, that is, cooperative transportation of objects by a multi-robot system. Therefore, the elementary behaviours necessary for the

cooperative transportation task, and the activation and order of execution of these behaviours, are discussed.

The reason for choosing the behaviour-based approach to design the robot's controller is that a reactive controller can easily be made into a deliberative one. Unlike the complexity involved in designing the middle layer of a hybrid architecture, a simple behaviour that can remember the robot's states and the environment can accommodate path planning, thus making the robot's reactive controller a deliberative one.

This thesis investigates a negotiation protocol relying on explicit communication via a wireless network to enable the robot team to agree on the team's heading. The negotiation protocol also handles obstacle avoidance. Although the state information describing the state of a robot can be very complex, the proposed negotiation scheme uses only the state information that describes the current heading of the robot, the Cooperative Behaviour's state (Chapter 3 explains the state values and their uses), and the robot-to-obstacle distance for the nearest obstacle. By reducing the complexity of the state information, it is conjectured that the proposed cooperative control will be easily implementable for heterogeneous teams.

Chapter 3 presents the behaviour-based controller of the robot that is able to facilitate cooperative transport and accommodate obstacle avoidance via social mediation. The chapter also provides a detailed account of the robot's behaviours and, finally, an insight into the low-level motion controller of the robots.

Chapter 3: Behaviour-based Cooperative Control for Cooperative Transport

The primary purpose of this chapter is to provide a detailed account of the proposed cooperative control for the cooperative transportation task studied in this thesis. The Cooperative Transport behaviour makes use of the elementary behaviours Go to Goal, Avoid Obstacle, and Social Mediation. This chapter explains each elementary behaviour of the robot's behaviour-based control, including the negotiation mechanism used by the robots to decide the team's heading during the cooperative transportation task. This chapter also discusses the design of the robot's low-level motion controllers. However, it is important to note that the low-level motion controllers are independent of the cooperative control, which enables us to perform cooperative control of any robotic team. The robot's hardware, for example its sensors and actuators, determines the capabilities of the robot. Therefore, the low-level motion controllers need to be crafted with the robot's hardware in mind. Because the low-level motion controllers are hardware-dependent, this chapter only provides a framework for the robot's motion controllers; Chapter 4 will provide an example of the fuzzy logic enabled motion controllers when the robots used to test the proposed cooperative control are introduced.

3.1 Behaviour-based Robot Control

The robot's controller, which is designed based on the subsumption architecture, has three behaviours. The choice of these behaviours was inspired by the work done in Ferrante et al. (2010a), and they are selected solely based on the task of the robot team. The three behaviours drive the simplest robot that can be involved in a cooperative transportation task; that is, if any one of the three behaviours is removed, the cooperative transportation task will not be realised. A single robot needs to safely navigate to the goal.
As such, the robot needs a behaviour that will enable it to go to the goal and another behaviour that will allow it to avoid collisions. As each robot navigates the environment, it decides the best heading for itself based on the location of the goal and the presence of obstacles. As

illustrated by Figure 1.5, there are chances of team members choosing different or conflicting headings. Therefore, the robot needs a behaviour that will enable it to agree on the heading of the team, that is, the direction in which the team should be moving.

The Go to Goal behaviour is responsible for moving the robot towards the goal, while the Avoid Obstacle behaviour enables the robot to avoid obstacles. The Social Mediation behaviour empowers the robots to negotiate the team's heading. The organisation of these three behaviours is shown in Figure 3.1. The robot receives its input via the on-board sensors as well as the wireless communication system that the robots use to communicate with their neighbours. The communication input represents the decision of the robot's neighbour. In our case, the decision constitutes the heading chosen by the neighbour, its distance to the nearest obstacle, and its state. The communication input is received by the Social Mediation behaviour and is shown separately from the sensory inputs to emphasise that it is received from the neighbouring robots.

Figure 3.1: Robot's behaviour-based controller. Communication and sensory inputs feed the Social Mediation, Avoid Obstacle, and Go to Goal behaviours, which drive the actuator outputs.

These behaviours have been selected based on their sufficiency in achieving the cooperative transportation task. It is easily possible to add more behaviours to complement the robot. For example, a build-map behaviour can be added to allow the robot to create a representation of the environment. This added behaviour can contribute to the work done by Go to Goal and Avoid Obstacle by providing additional

information about the environment that the robot has already visited. To create such representations, the robot will certainly need more storage and processing power.

3.2 Elementary Behaviours

This section explains each of the three behaviours forming the cooperative transport behaviour in detail before explaining the cooperative transport behaviour itself.

3.2.1 Go to Goal

The purpose of this behaviour is to determine the direction of the goal location. The perceived goal location is utilised by the cooperative transport behaviour to determine the final heading of the robot. Depending on the behaviour design and the task requirements, the behaviour can accommodate static as well as dynamic goals. However, the primary purpose of this behaviour is to direct the robot towards the goal.

3.2.2 Avoid Obstacle

The importance of safe navigation can never be overemphasised. The Avoid Obstacle behaviour allows the robot to avoid collisions. This behaviour can be implemented in a number of ways. A number of techniques for obstacle avoidance have been studied in the literature, including Bug 1 and Bug 2 (Lumelsky & Stepanov, 1987), the Vector Field Histogram (VFH) (Borenstein & Koren, 1991), and Bubble-Band (Quinlan & Khatib, 1993). A detailed review of obstacle avoidance techniques can be found in Siegwart & Nourbakhsh (2004). The Bug algorithms and VFH are discussed briefly, as these are most applicable to the capabilities of the robots that have been used to test the proposed cooperative control.

The Bug algorithms require the robot to contour the obstacle that it encounters and then decide the best exit point such that the robot can move towards the goal. Bug 1 employs exactly this technique; however, it is very inefficient. The Bug 2 algorithm requires the robot to contour the obstacle only until it can resume its journey to the goal. Thus, Bug 2 results in faster obstacle avoidance. A number of extensions to the Bug algorithms also exist.
A key limitation of the Bug algorithms is that they rely only on the latest sensor readings, which may not provide enough information for robust obstacle avoidance.
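The Bug 2 exit test can be made concrete with a short sketch. Everything here is illustrative rather than taken from the thesis: the function name, the tuple representation of points, and the tolerance `eps` are assumptions.

```python
import math

def should_leave_boundary(pos, hit_point, start, goal, eps=1e-6):
    """Bug 2 leave condition (a sketch): stop contouring the obstacle once
    the robot is back on the start-goal line (the m-line) at a point
    strictly closer to the goal than where it first hit the obstacle."""
    (sx, sy), (gx, gy) = start, goal

    # Perpendicular distance from pos to the infinite line through start-goal.
    cross = (gx - sx) * (pos[1] - sy) - (gy - sy) * (pos[0] - sx)
    dist_to_mline = abs(cross) / math.hypot(gx - sx, gy - sy)

    on_mline = dist_to_mline < eps
    closer_than_hit = math.dist(pos, goal) < math.dist(hit_point, goal)
    return on_mline and closer_than_hit

# On the m-line and closer to the goal than the hit point: leave the boundary.
print(should_leave_boundary((5.0, 0.0), (2.0, 0.0), (0.0, 0.0), (10.0, 0.0)))  # True
```

Bug 1, by contrast, contours the whole obstacle before choosing an exit point, which is why it is described as very inefficient above.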

The VFH algorithms use a series of recent sensor readings to create a local occupancy map around the robot. This map is presented as a polar histogram with the angle at which an obstacle was detected along the x-axis and the probability that an obstacle exists at that angle along the y-axis. Given the histogram, all openings large enough for the robot are identified, and cost functions are then used to determine the best of all possible steering angles available to the robot.

The obstacle avoidance behaviour designed for our proposed cooperative control favours VFH-like obstacle avoidance. The cooperative transport behaviour requires the obstacle avoidance behaviour to choose a single heading, which is then used by the cooperative transport behaviour to determine the final heading of the robot. If one considers the VFH technique, the proposed obstacle avoidance behaviour will need the angle with the highest obstacle probability. The Avoid Obstacle behaviour will then avoid the obstacle present at that angle after computing the distance to it. A very basic algorithm for Avoid Obstacle is given in Figure 3.2.

1: θ_obs, obstaclePerceived, d_obs ← ScanForObstacles()
2: obstruction ← Detect_Obstruction()
3:
4: If obstruction = true Then
5:     state = S_backtrack
6: Else
7:     state = S_forward
8: End If

Figure 3.2: Algorithm of the Avoid Obstacle behaviour.

The Avoid Obstacle algorithm first requires the robot to perceive obstacles via ScanForObstacles(), which can implement any obstacle detection and avoidance technique, for example VFH. The output of this step indicates whether any obstacle was detected (obstaclePerceived). If obstacles are perceived, the distance (d_obs) and angle (θ_obs) to the closest obstacle are determined.
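As a side note, the opening-selection step of a VFH-style ScanForObstacles() could be sketched as follows. The threshold, the minimum opening width, and the first-fit choice of opening are all illustrative assumptions; a full VFH implementation would instead score every candidate opening with cost functions, as described above.

```python
def pick_steering(histogram, threshold=0.3, min_opening=3):
    """VFH-style selection (a sketch): histogram[i] holds the obstacle
    probability for angular sector i. Sectors below `threshold` count as
    free; return the centre sector of the first opening that is at least
    `min_opening` sectors wide, or None if no opening is large enough."""
    free = [p < threshold for p in histogram]
    start = None
    for i, is_free in enumerate(free + [False]):  # sentinel flushes the last run
        if is_free and start is None:
            start = i                        # an opening begins here
        elif not is_free and start is not None:
            if i - start >= min_opening:
                return (start + i - 1) // 2  # centre sector of the opening
            start = None                     # opening too narrow; keep scanning
    return None

# Sectors 1-3 are free and wide enough: steer through their centre, sector 2.
print(pick_steering([0.9, 0.1, 0.1, 0.1, 0.9]))  # 2
```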
Following the check for the presence of obstacles via ScanForObstacles(), Detect_Obstruction() determines whether the robot is about to collide with an obstacle, that is, whether the robot is so close that it will definitely collide with the

obstacle. If an obstruction is detected, the robot's state is set to backtrack; otherwise it is set to move forward. The backtrack state indicates to the cooperative transport behaviour that the robot cannot continue on the planned path. It is the responsibility of the cooperative transport behaviour to implement the backtrack strategy. Some possibilities include random motion for a random amount of time or a planned backwards movement of the team. The Social Mediation behaviour, which manages communication with the neighbouring robots, has a mechanism that allows a robot to inform its neighbours about its decision to backtrack.

3.2.3 Social Mediation

One of the critical responsibilities of the cooperative transport behaviour is to ensure that all robots have the same global or team heading. If robots have conflicting team headings, it becomes extremely difficult or impossible for them to exhibit cooperative behaviour. As an analogy, consider a scenario where you are holding one end of a rope and your friend is holding the other. If you pull the rope towards yourself while your friend pulls it towards himself with equal force, the rope will not move in any direction. Similarly, imagine one robot trying to go south while the other heads north! Robots pulling in opposite directions will not be able to change the team's heading. Therefore, it is important that all robots have the same team heading and move in the direction of the agreed team heading.

The responsibility of the Social Mediation behaviour is to have all robots agree on a common desired heading angle (θ_social), which becomes the global or team heading angle. This behaviour is required because different team members may perceive the environment differently. As explained in Chapter 2, in Ferrante et al. (2010a) each robot assumes one of two states: social or stubborn.
When more than one robot in the team is in the stubborn state, for example when two different robots perceive two different obstacles, reaching a common socially mediated angle (θ_social) becomes impossible. To overcome this problem, a rule-based social mediation behaviour has been designed.

50 1: If state=s backtrack Then 2: 3: Else 4: 5: End If 6: 7: LOOP FOR n times 8: Receive { 1, 2,, k } 9: Receive {MedState 1, MedState 2,,MedState k } 10: Receive {w 1, w 2,,w k } 11: = 1, 2,, k, social 12: Compute ( 1, 2,, k) k i=0 13: social i e j i 14:END LOOP 15: 16:LOOP FOR ( 1, 2,, k ) 17: If MedS ate k = backtrack Then 18: 19: state=s backtrack 20: Else If Med state backtrack and state=s backtrack Then 21: 22: End If 23:ENDLOOP Figure 3.3: Algorithm of the Social Mediation Behaviour. Note: The steps are numbered in the algorithm for easy referencing in the following paragraphs. The algorithm for the new social mediation behaviour is given in Figure 3.3. This behaviour requires each robot to acquire the heading information of its neighbours as well as the neighbour s weight w neighbour 0,1, which is the measure of the neighbour s distance to the nearest obstacle, and social mediation state MedState neighbour backtrack, forward. The weight, w neighbour, is used to compute the degree of influence ( 0,1 ), which measures the neighbour s contribution in the computation of the socially mediated angle ( social ) of a particular robot. Let us work through the algorithm to see how this behaviour is expected to work. The social mediation first considers whether there is an obstruction in the current path (steps 1-5). Recall that a robot can be put in S backtrack or S move_forward state by the Avoid Obstacle behaviour. The S backtrack state indicates to the cooperative transport behaviour that the robot cannot follow the planned path, that is, the agreed team heading. To convey this message to the neighbouring robots, each robot transmits the MedState neighbour backtrack, forward to its neighbours. Instead of transmitting the 36

MedState, a robot could perform another round of mediation solely to inform the neighbours about the decision to backtrack; notice, however, that the latter approach induces more communication-related costs.

Following this check (steps 1-5), the robot computes the average social mediation angle θ_social in steps 7-14. In steps 8-10, the robot receives the social mediation angle, social mediation state, and weight information from its neighbours. In step 11, the robot's own social mediation angle is included in the set of all social mediation angles. The degree by which the robot is influenced by each neighbour's mediation angle (ρ_k) is computed in step 12 using the rule-based scheme given in Figure 3.4, which is discussed shortly. Step 13 computes the average of all social mediation angles. The average is the summation of the vectors that represent the social mediation angles; the average social mediation angle is therefore the angle of the resultant vector. The calculation of the social mediation angle (θ_social) is repeated n times (step 7) so that the common θ_social diffuses to all team members.

Consider the example of two friends trying to meet for a cup of coffee, and assume they have only two possible times, 10am or 3pm, to choose from. If both have the same time in mind, the negotiation contains only two steps:

1. First person: I can meet at 10am.
2. Second person: I can also meet at 10am.

After step two, both have agreed on the coffee time. When the two friends do not have the same time in mind, more than two steps are required. To make the situation clear, consider the following conversation:

1. First person: I can meet at 10am.
2. Second person: I can meet at 3pm.
3. First person: I can also meet at 3pm.

Notice that one more step is required to come to an agreement, and this agreement was reached when only two options were available.
The team's heading is a continuous value; as a result, it will take the robots longer to come to an agreement on the team's heading.

The repetition of steps 8-13 is needed particularly when the team size increases and the robots are capable of local interaction only. Note, however, that the robot's motion controller works independently of the Social Mediation behaviour. Therefore, it is possible for the motion controller to be using a desired heading (θ_social) that is still being negotiated. It is challenging to balance mediation time and motion control: if mediation takes a very long time, the robot would have to stop, wait for the mediation process to end, and only then use the agreed team heading to plan its motion. This is impractical because the robots are intended to be in continuous motion.

Fax & Murray (2004), Ren & Beard (2004), Ren & Beard (2005), Olfati-Saber et al. (2007), and M. S. Stanković et al. (2008) provide reviews and theoretical foundations of consensus algorithms for multi-agent systems with or without dynamic information topology. Knorna et al. (2011) studied consensus algorithms for a network of agents. It is observed that when agents update their local states asynchronously and not all agents have access to the global value, for example the team heading in our case, converging to the desired value takes much longer than when state updates are performed synchronously and all agents have access to the global value. Although this thesis does not intend to propose a consensus algorithm that improves these results, it is important to recognise the challenge at hand. This thesis will discuss how the motion control strategy has been designed to balance the convergence time and the robot's motor-force update frequency. Ren & Beard (2008) is yet another excellent reading on distributed consensus.

In the final section of the algorithm (from step 16), the robot checks whether any neighbour is trying to backtrack, and if so, the robot changes its state to backtrack as well.
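The step 13 average can be sketched as follows, assuming the degrees of influence ρ_i have already been computed. Summing unit vectors e^{jθ} and taking the angle of the resultant avoids the wrap-around problems of averaging angles arithmetically; the function and variable names are illustrative, not from the thesis.

```python
import cmath
import math

def mediate_heading(theta_self, neighbour_thetas, rhos):
    """One social-mediation update: the weighted circular mean of headings.
    theta_self: this robot's current mediated angle (radians).
    neighbour_thetas: headings received from the k neighbours.
    rhos: degree of influence rho_i in [0, 1] for each neighbour."""
    # The robot's own heading enters the sum with full weight; each
    # neighbour contributes a unit vector scaled by its influence.
    resultant = cmath.exp(1j * theta_self)
    for theta, rho in zip(neighbour_thetas, rhos):
        resultant += rho * cmath.exp(1j * theta)
    return cmath.phase(resultant)  # angle of the resultant vector

# Two fully trusted neighbours heading along 0 rad pull a robot at pi/2
# towards them: the mediated angle is atan2(1, 2), about 0.46 rad.
print(round(mediate_heading(math.pi / 2, [0.0, 0.0], [1.0, 1.0]), 3))  # 0.464
```

Repeating this update n times is what lets headings computed from local neighbourhoods diffuse across the whole team, as the algorithm intends.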
The next section looks at the degree of influence (ρ) a neighbour has on a particular robot's decision, that is, its heading angle. The degree of influence has been used to replace the two states, stubborn and social, of Ferrante et al. (2010a). As noted earlier in Section 2.9, the two states prevented constructive negotiation. Therefore, the concept of states is replaced with a more flexible, continuous, and inclusive mechanism: the degree of influence. An algorithm is provided in Figure 3.4.

A simple rule-based technique is used for computing the degree of influence; however, other techniques, such as fuzzy logic, could also be employed. The weight values w_neighbour and w_self measure how close a robot is to an obstacle. Two parameters, each in [0, 1], act as thresholds: the first (written τ in Figure 3.4) determines when to use the neighbour's social mediation angle, and the second sets the amount of influence the neighbour has in the computation of the average social mediation angle.

1:  If w_neighbour > w_self Then
2:      If w_self = 0 Then
3:          ρ = 1
4:      Else If w_neighbour > τ Then
5:          ρ = …
6:      Else
7:          ρ = …
8:      End If
9:  Else
10:     If w_self > τ Then
11:         ρ = 0
12:     Else If w_neighbour = 0 And w_self = 0 Then
13:         ρ = 1
14:     Else
15:         ρ = 1 − …
16:     End If
17: End If

Figure 3.4: Rules to compute ρ (the degree of the neighbour's influence).

The algorithm first checks the neighbour's weight. If the neighbour's weight is non-zero and the robot's own weight is zero, the neighbour is trying to avoid an obstacle; in this case, the robot adopts the neighbour's social mediation angle (steps 1-3). In the other cases, the robot always includes a certain degree of the neighbour's social mediation angle in the computation of the average social mediation angle (steps 4-8). The opposite is done in steps 9-16: if the robot's weight is greater than its neighbour's, it either does not consider the neighbour's social mediation angle (step 11) or is influenced by it only to a certain degree (step 15). The algorithm favours robots that are closer to an obstacle in order to prioritise obstacle avoidance.

3.2.4 Social Mediation Behaviour: Discussion

The paragraphs above describe the Social Mediation behaviour. The primary purpose of this behaviour is to aid the robots in negotiating the team heading. The behaviour

has been designed to allow the neighbours to have a degree of influence when a particular robot is computing its social mediation angle. In addition, the Social Mediation behaviour has a mechanism to inform the neighbours that the agreed social mediation angle cannot be implemented, that is, that the robot cannot follow it. The next section discusses the Cooperative Transport behaviour, which is responsible for making use of the above three behaviours to achieve cooperative transportation of objects.

3.3 Cooperative Transport Behaviour

This section looks at the algorithm for the Cooperative Transport behaviour, which is the group behaviour studied in this research. The Cooperative Transport behaviour makes use of the Go to Goal, Avoid Obstacle, and Social Mediation behaviours in order to achieve cooperative transportation.

1:  θ_target, targetPerceived ← PerceiveTarget()
2:  θ_obs, obstaclePerceived, d_obs ← PerceiveObstacle()
3:  isTargetMisaligned ← IsTargetMisaligned()
4:
5:  If isTargetMisaligned And Not obstaclePerceived Then
6:      θ_social = θ_target
7:  End If
8:  If obstaclePerceived Then
9:      If targetPerceived Then
10:         w_self ← −d_obs / d_max + 1
11:         θ_obs_avoid ← ∠( w_self e^{j(θ_obs + π)} + (1 − w_self) e^{jθ_target} )
12:     Else
13:         θ_obs_avoid ← ∠ e^{j(θ_obs + π)}
14:     End If
15:     θ_social ← ∠( α e^{jθ_social} + (1 − α) e^{jθ_obs_avoid} )
16: Else If targetPerceived Then
17:     θ_social = θ_target
18: End If
19: SocialMediationControlLoop()

Figure 3.5: Algorithm for the Cooperative Transport behaviour.

The Cooperative Transport behaviour is implemented using the algorithm given in Figure 3.5. Table 3.1 explains the important steps of the algorithm.

Step 1: The target or goal is perceived via the Go to Goal behaviour. In this step, the output of the Go to Goal behaviour is utilised by the Cooperative Transport behaviour.

Step 2: The Avoid Obstacle behaviour aids the Cooperative Transport behaviour by perceiving obstacles and providing their details. If multiple obstacles are perceived, the closest obstacle is chosen for avoidance. The system moves towards the target area when no obstacles are perceived.

Step 3: It is a requirement that the team continuously face the goal area. This requirement prevents the blind side of the carried object from hitting obstacles. The IsTargetMisaligned() routine checks for misalignment. If the robot's current heading (θ_social) and the target angle (θ_target) differ by more than a predefined threshold, the robot is said to be misaligned.

Steps 5-6: In case of misalignment and no obstacle being perceived, the algorithm forces the robot to move towards the target. This means that if an obstacle is detected, obstacle avoidance is given priority.

Steps 8-11: If the robot perceives an obstacle, it computes the avoidance angle (θ_obs_avoid) after considering whether it can perceive the target. If the target is perceived, a simple attraction/repulsion scheme is used to compute the avoidance angle: the robot is attracted towards the target and repelled from the obstacle. The magnitude of attraction or repulsion is determined by the weight w, which measures the robot's proximity to the obstacle. The value of w is large when the robot is very close to the obstacle and small when it is far away.

Step 13: On the other hand, if the robot does not perceive the target, that is, it perceives only the obstacle, the robot attempts to move in the direction directly opposite the obstacle.

15     If the obstacle is perceived, the robot updates its current heading (θ_social) by a factor α ∈ (0, 1). By using a weighted update of the heading angle, the robot can adapt to changing decisions and achieve a smooth transition from the current to the new heading.

16-18  If only the target is perceived, the robot moves towards the target.

19     The social mediation control loop is executed to negotiate the system's heading. The motion controller of the robot will attempt to adjust the steering wheels to achieve the required heading (θ_social).

Table 3.1: Description of the steps of the Cooperative Transport Behaviour.

3.3.1 Behaviour Arbitration

The Go to Goal and the Avoid Obstacle are complementary behaviours. Therefore, the outputs of these behaviours are combined and used by the social mediation behaviour. Section (see page 24) has already introduced the concept of behaviour arbitration and provided some examples. Arkin (1998) provides an in-depth review of the arbitration or fusion process. To combine the outputs of the two behaviours, Go to Goal and Avoid Obstacle, a simple summation is used. This simplicity is possible because the output of both behaviours can be treated as vectors, and as such, the resultant is a vector. Equation (3-1) computes the resultant of the vectors that direct the robot towards the goal and move the robot away from the obstacle. The value of π is added to the angle at which the obstacle is present to implement repulsion from the obstacle. Therefore, equation (3-1) models repulsion from the obstacle and attraction towards the goal or target.

θ_obs_avoid = arg(w_self · e^{j(θ_obs+π)} + (1 - w_self) · e^{j·θ_target})    (3-1)

The illustrations in Figure 3.6 show a simple scenario and the computation of the robot's heading using the outputs of the Go to Goal and the Avoid Obstacle behaviours.

3.3.2 Computing the Weight w

The robot's weight is a critical variable in the decision-making process. It is one of the state variables exchanged during social mediation. The objective of the weight (w ∈ [0, 1]) is to measure the robot's proximity to the obstacle. The weight is zero if the robot does not perceive any obstacle and will be 1 only when the robot is on top of the obstacle, that is, there is absolutely no space/distance between the obstacle and the robot. The weight is calculated using equation (3-3):

w = -d_obs/d_max + 1    (3-3)

where d_obs is the distance to the obstacle and d_max represents the sensor's range.

3.3.3 Weighted Update of Robot's Heading

Achieving smooth motion is one of the key objectives of the robot's motion controllers. There will be many instances during the cooperative transportation task where the robot will be required to change its heading to avoid obstacles. As such, abrupt motion must be avoided. To achieve smooth motion of the robots, the cooperative transport behaviour employs a weighted update of the robot's heading (see step 15 in Figure 3.5). The weighted update is achieved using equation (3-2):

θ_social = arg(α · e^{j·θ_social} + (1 - α) · e^{j·θ_obs_avoid})    (3-2)

An illustration is used to explain the weighted update. See Figure 3.7 for the weighted update of the heading of the robot shown in Figure 3.6.
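Equations (3-3) and (3-2) can be sketched directly in Java; the names below are illustrative, not the thesis implementation:

```java
public class HeadingUpdate {
    // Equation (3-3): proximity weight, 1 when the robot touches the
    // obstacle (d_obs = 0) and 0 at the edge of the sensor range (d_max).
    public static double weight(double dObs, double dMax) {
        return -dObs / dMax + 1.0;
    }

    // Equation (3-2): weighted update of the heading. Blending unit
    // vectors rather than raw angles avoids wrap-around problems at
    // the ±pi boundary. Angles in radians, alpha in (0, 1).
    public static double blend(double thetaSocial, double thetaObsAvoid, double alpha) {
        double x = alpha * Math.cos(thetaSocial) + (1 - alpha) * Math.cos(thetaObsAvoid);
        double y = alpha * Math.sin(thetaSocial) + (1 - alpha) * Math.sin(thetaObsAvoid);
        return Math.atan2(y, x);
    }
}
```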

[Figure: vector diagrams; only the captions are recoverable.]

Figure 3.6: Combining the output of Go to Goal and Avoid Obstacle behaviours. (a) provides a graphical view of the scenario; (b) shows the vectors representing the goal and obstacle directions; (c) vectors representing attraction to the goal and repulsion from the obstacle; (d) computing the final direction for the robot (assume w is 0.5).

Figure 3.7: Weighted update of the robot's heading. (a) robot's current and proposed headings; (b) robot's actual heading given the weighting factor.

3.4 Motion Controller

The robot's control system is decomposed into two components: the Cooperative Transport behaviour and the Motion Controller (Figure 3.8). The cooperative transport behaviour decides the heading of the robot while the motion controller is responsible for applying the necessary motor forces and steering angle to the wheels.

Figure 3.8: The control system of the robot.

In an ideal case, the robot's motion controller receives the social mediation angle that has been agreed on by all team members, that is, the social mediation behaviour has been able to convey a uniform team heading throughout the team. Although this is desired and would happen during the transportation task, reaching an exact team heading and disseminating this particular team heading to all team members within a reasonable time is challenging. The main issue is the frequency at which the robots update their motor forces and steering angles. If the robots operate at a frequency that is far lower than the communication frequency used by the social mediation behaviour, it would be possible to have all members of the team agree on the same team heading. However, this is very unrealistic, and even if it were realised, it would make the robots extremely slow. Therefore, balancing the operational frequency of the robot against the social mediation latency means that the robots will be operating with an approximate team heading at any given time. In fact, approximate consensus has been well studied in the literature (Dolev et al., 1986; N. A. Lynch, 1996). Approximate consensus aims to have the desired global value of all robots within a small range of each other. This means the robots are not required to converge to or agree on a precise value or state. The designer will determine the tolerance level, that is, the amount of difference between the state information of two robots that can be tolerated.
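The tolerance check behind approximate consensus can be sketched as follows. This is a sketch under the simplifying assumption that the headings do not straddle the ±π wrap; the names are illustrative:

```java
public class ApproximateConsensus {
    // True when every pair of headings differs by at most tol, the
    // designer-chosen tolerance described in the text.
    public static boolean withinTolerance(double[] headings, double tol) {
        double min = headings[0], max = headings[0];
        for (double h : headings) {
            min = Math.min(min, h);
            max = Math.max(max, h);
        }
        return (max - min) <= tol;
    }
}
```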

Fuzzy logic is used to design the low-level motion controller of the robot so that it can handle the differences in the team headings. Fuzzy logic has the ability to handle approximate and ambiguous data (Zadeh, 1965; Zadeh, 1988; Hellmann, 2001). This is exactly the capability that is needed for robots dealing with an approximate team heading. As shown in Figure 3.8, the motion controller receives the social mediation angle, that is, the new direction for the robot. The motion controller then executes a fuzzy logic controller (FLC) to determine the forces to apply to each actuator. Finally, the motion controller applies the forces to the required actuators. The three processes performed by the motion controller are shown in Figure 3.9.

Receive Heading → Run Fuzzy Logic Controller → Apply Fuzzy Forces to Actuators

Figure 3.9: The fuzzy logic based motion controller.

As explained earlier, the FLC that needs to be designed is heavily dependent on the robot that is used for the task. Chapter 4 details the design of a fuzzy logic based motion controller for the robot that has been used to test the proposed cooperative control.

3.5 Summary

This chapter presented the proposed behaviour-based cooperative control that enables transportation of objects. The parent Cooperative Transport behaviour is composed of the child behaviours: Go to Goal, Avoid Obstacle, and Social Mediation. The Go to Goal behaviour is responsible for directing the robot towards the goal while the Avoid Obstacle behaviour is responsible for detecting and

avoiding obstacles in the environment. The outputs of these two complementary behaviours are combined by representing each output as a vector and taking the sum of the vectors. The resultant vector is analysed to determine the new heading of the robot, and this heading becomes the input of the Social Mediation behaviour. The Social Mediation behaviour liaises with neighbouring robots with the aim of reaching a consensus on the heading of the team. This agreed heading becomes the team's heading. Although the aim is for all the robots to use the same team heading to plan their motion, consensus does take time. Therefore, there is a need to balance the speed at which the robot updates its motor forces against the latency involved with social mediation. It is realised that it may not always be possible for all robots to have the same team heading. This means that the robot may move even though the social mediation process has not ended. As such, the team heading value for the individual robots may differ, that is, robots may have an approximate team heading. Because the team heading will be approximate, there is a need to plan the motion of the robot to compensate for the slight differences in the team heading value maintained by different robots. This thesis proposes the use of fuzzy logic for designing the motion controller of the robot. The fuzzy logic based controller will be able to handle approximate team headings, that is, minor differences in the headings of the robots. Chapter 4 describes the setup of the environment that was used to test the behaviour-based cooperative control. The chapter specifies the design of the simulated robot that has been used for the transportation task and provides a detailed description of the fuzzy logic based low-level robot controllers. The performance indicators for the cooperative control are also presented. The performance indicators, which measure the performance of the proposed cooperative control, will be used to determine whether the research objectives were achieved.

Chapter 4: Design of the Experimental Environment

This chapter provides the details of the environment that was used to test the cooperative control proposed in Chapter 3. The cooperative control was tested in simulation. The implementation details of the robot that was used for the transportation task, including the robot's fuzzy logic based motion controller, are provided before the design of the Java based Robotics Simulator is discussed. This chapter also discusses the inter-robot communication scheme and concludes by describing the performance indicators for the cooperative control. The performance indicators, which measure the performance of the proposed cooperative control, will be used to determine whether the research objectives were achieved.

4.1 Design of the Simulated Mobile Robots

The simulated mobile robots used in the cooperative transportation task are a class of two-driving-wheel mobile robots utilising a caster wheel for stability. A schematic and simulated view of the robot is given in Figure 4.1. The dimensions of the robot depicted in Figure 4.1(d) have been selected for industrial grade pick and place applications. However, it is possible to scale the robots with comparable performance in different environments.

4.1.1 Robot Kinematics

In 2D, the global position of the robot (r_pos) can be described in the world frame (the dynamic world in the JBullet Physics Engine (Dvorak, 2008)) relative to a point on the chassis of the robot using:

r_pos = [x, y, θ]^T    (4-1)

The robot's motion can then be described by:

ṙ_pos = [u·cos θ − v·sin θ, u·sin θ + v·cos θ, ω]^T    (4-2)

where u and v are the components of the linear velocity and ω is the angular velocity. The JBullet Physics Engine's Vector3f class allows the simulator to obtain the robot's motion data in the simulator that has been created for this research work.

Figure 4.1: Schematic and simulated view of the robot: (a) schematic view; (b) graphic view showing the important mathematical components necessary to track the location of the robot; (c) and (d) the simulated robot created using the JBullet Physics Engine (length = 2 m, width = 1 m, height = 0.5 m, mass = 500 kg). The white block on top of the robot simulates the 6DOF joint used to attach to the carried object.

4.1.2 Robot Capabilities

Each robot has a set of sensors for environment perception and actuators to control its motion (Figure 4.2). Each robot has sensors to detect obstacles. A simulated range sensor, which is placed in front of the robot, enables obstacle detection within -π/2 ≤ θ ≤ π/2. A light sensor is utilised to detect the light source behind the target area. A communication system facilitating inter-robot communication is simulated. The simplex communication system enables a robot to communicate with a maximum of four neighbouring robots. Although in practice a robot may be able to communicate with more than four robots, the restriction will enable the tests to determine whether the cooperative control is scalable. Six actuators are used to control the motion of the robot. Two actuators control the steering angle of the two front wheels, another two actuators apply force to the motors that control the rotation of the wheels, and the remaining two actuators apply braking force to the two wheels.
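The kinematics of Section 4.1.1 can be advanced numerically with a simple Euler step. This is a sketch assuming the standard planar model stated above; the names are illustrative:

```java
public class PlanarKinematics {
    // One Euler integration step: u and v are the body-frame linear
    // velocity components, omega the angular velocity, theta the heading.
    // Returns the new pose {x, y, theta}.
    public static double[] step(double x, double y, double theta,
                                double u, double v, double omega, double dt) {
        double nx = x + (u * Math.cos(theta) - v * Math.sin(theta)) * dt;
        double ny = y + (u * Math.sin(theta) + v * Math.cos(theta)) * dt;
        return new double[]{nx, ny, theta + omega * dt};
    }
}
```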

The two front wheels are controlled simultaneously. This means that the same steering angle, motor force, and brake force are applied to both wheels. Therefore, this is not a differential drive system.

Figure 4.2: Perception and action abilities of the simulated robot (communication system, sensors, and actuators).

4.1.3 6DOF Joint for Holding the Carried Object

The robots attach to the transported object using a six-degree-of-freedom (6DOF) joint that allows a small translational displacement and a small rotation along the three axes (Figure 4.3).

Figure 4.3: 6DOF joints: (a) linear displacement and rotation of a 6DOF joint; (b) spring-like flexible 6DOF joint attaching the object to the robot chassis, allowing linear displacement and rotation. Arrows indicate rotation and linear displacement along the three axes.

Although our goal is to achieve a smooth and precise trajectory for the multi-robot team, this may not be possible in an actual environment. Because of slightly differing velocities or heading angles, it may not be possible for the robots to maintain a fixed inter-robot distance throughout the task. Therefore, the proposed flexible joint will accommodate these minor and probably inevitable alignment errors. Such joints are natural. Consider two persons carrying a table. It is not possible for both individuals to maintain the same hand position throughout the cooperative transportation task. Because of push and pull effects, uneven surfaces and obstacles, both individuals will constantly realign their hands to maintain a good grip on the table. For this reason, it is believed that the use of the proposed 6DOF joint is necessary.

4.1.4 Design of the Robot's Motion Controller

The motion controller of the robot controls the steering angles (θ_steering) of the two front wheels and applies the necessary engine force so that the robot can maintain the required velocity (v). The motion controller is designed to exhibit four behaviours: move_forward, reverse, wait, and recover. The required behaviours are executed based on the controller's state, which is partially determined by the cooperative transport behaviour (Figure 3.8). The robot's motion controller can be in any one of the states shown in Figure 4.4.

Figure 4.4: The states of the motion controller.

The cooperative transport behaviour, via the social mediation behaviour explained in Section 3.2.3, supplies the social mediation angle and its state to the motion controller. This socially mediated angle is called the desired heading angle of the robot. The state transitions are facilitated by considering the state of the cooperative transport behaviour. The forward state activates the move_forward state, while the backtrack state triggers the reverse state. Note that in

the cooperative transport behaviour the term backtrack was used, while in the motion controller the term reverse is used to indicate the robot's inability to proceed in the agreed direction. This is because the cooperative transport behaviour is independent of the low-level motion control. Therefore, the cooperative transport behaviour's backtrack state can be implemented by the robot's motion controller depending on the mechanical capabilities of the robot. Mondada et al. (2003) and Dorigo et al. (2004) used the s-bot, which has the ability to turn its turret. Therefore, the s-bot's backtrack state would trigger either the rotation of the turret or reverse motion. The robots that have been used here have only one option, and that is to stop and reverse. The term reverse was chosen to describe the motion controller's state when the robot is reversing.

Controller's move_forward state

When in the move_forward state, the motion controller utilises a Mamdani-type fuzzy logic based control scheme to establish the steering angle of the wheels (θ_steering) and the velocity (v) using the desired heading (θ_social) and the robot placement (θ_loc) (Figure 4.5). The fuzzy logic based controller requires the robot to have knowledge of its placement in the team and the desired heading. A robot could be in front leading the transportation task or at the back following the robots in front. A robot can be in any one of the following four regions: front-right, front-left, rear-right, or rear-left (Figure 4.6).

Figure 4.5: The FIS for the robot's motion controller (inputs: θ_social and θ_loc; outputs: θ_steering and v).

Figure 4.6: Frame-of-reference used by the robots to identify their placement in the team.

The robot uses a frame-of-reference located anywhere on the transported object to determine its placement (θ_loc). A beacon placed on the transported object can serve as the frame-of-reference. All robots must perceive the beacon when ascertaining their placement (θ_loc). The requirement of knowing the placement of the robot within the team is specific to the robot team that is being utilised. If the s-bots were used, knowing the placement of the robot would not be a requirement because the s-bots have a rotating turret that enables them to change heading by simply rotating the turret. In the case of the robots used in this research work, the robots that are leading the transportation task will act differently from the robots that are trailing. See Figure 4.7 for an illustration of a scenario that shows how the trailing robots (Robots 3 and 4) complement the leading ones (Robots 1 and 2) when the team is making a left turn.

Figure 4.7: Robots choose individual headings to achieve the desired team heading.

Robots 1 and 2 will naturally choose their steering angles such that they head in the direction of the desired team heading. However, Robots 3 and 4 will be moving in the opposite direction. This combination of movements results in a rotation-like movement of the team and, consequently, a quicker alignment towards the team's desired heading. This also results in a quicker misalignment correction. The robot's current desired heading (θ_social) and its placement (θ_loc) are used to determine the steering angle (θ_steering) and expected velocity (v) using the rules in Figure 4.8, which have been crafted to move the robots behind the frame-of-reference in the opposite direction to the robots in front of it when turning, as illustrated in Figure 4.7. Notice that the trailing robots have the same rules regardless of whether they are on the left or right. The reason for this is that it has been conjectured that the trailing robots should behave in a similar manner to complement the leading robots. Also, notice that the leading robot that is on the outer edge of the turning path, for example Robot 2 in Figure 4.7, will travel further. Therefore, its velocity and steering angle will be slightly different from those of the robots that are closer to the fulcrum, for example Robot 1 in Figure 4.7. That is, Robot 1 is expected to move less and act as the pivot of the turning system. The rules reduce the velocity of the robots when making turns while trying to achieve maximum velocity when moving straight ahead.

θ_social:   LR (Large-right)   SR (Small-right)   ST (Straight)   SL (Small-left)   LL (Large-left)
θ_loc       v / θ_steering     v / θ_steering     v / θ_steering  v / θ_steering    v / θ_steering
FR          S / LR             M / SR             F / ST          F / SL            M / LL
FL          M / LR             F / SR             F / ST          M / SL            F / LL
RR          M / LL             F / SL             F / ST          F / SR            M / LR
RL          M / LL             F / SL             F / ST          F / SR            M / LR

Figure 4.8: Rules of the FLC used by the robot's motion controller. Key for velocity (v): S = Slow; M = Medium; F = Fast.
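The rule base of Figure 4.8 is small enough to express as a lookup table. The sketch below encodes it as velocity/steering pairs, assuming the row and column reading given in the text; the class name is illustrative:

```java
public class FlcRuleTable {
    // Rows: FR, FL, RR, RL (theta_loc regions).
    // Columns: LR, SR, ST, SL, LL (theta_social terms).
    // Each entry is "velocity/steering" using the key of Figure 4.8.
    private static final String[][] RULES = {
        {"S/LR", "M/SR", "F/ST", "F/SL", "M/LL"}, // FR
        {"M/LR", "F/SR", "F/ST", "M/SL", "F/LL"}, // FL
        {"M/LL", "F/SL", "F/ST", "F/SR", "M/LR"}, // RR
        {"M/LL", "F/SL", "F/ST", "F/SR", "M/LR"}, // RL
    };

    public static String rule(int regionRow, int headingCol) {
        return RULES[regionRow][headingCol];
    }
}
```

Note how the RR and RL rows are identical and mirror the steering terms of the front rows, producing the rotation-like team motion of Figure 4.7.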

The membership functions for the four linguistic variables are presented in the following sections.

Robot Placement (θ_loc)

The robot can be in any one of the regions shown in Figure 4.6. The placement angle can take any value between 0° and 360°. The robot placement angle (θ_loc) is computed relative to the robot, that is, it is the angle at which the beacon is located relative to the robot. See Figure 4.9 for an illustration. Table 4.1 shows the relationship between the robot placement angle (θ_loc) and the robot's region. Using Table 4.1, the membership function for the robot placement angle (θ_loc) was created.

Robot Placement Angle (θ_loc)    Robot's Region
0° to 90°                        Rear-right
90° to 180°                      Front-right
180° to 270°                     Front-left
270° to 360°                     Rear-left

Table 4.1: Linguistic variables used for robot placement angle (θ_loc).

Figure 4.9: Illustrating the computation of robot placement (θ_loc). In the illustration, Robot 1 perceives the beacon at θ_loc ≈ 220° and is therefore in the front-left region.
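The mapping of Table 4.1 from placement angle to region is a simple range test; a sketch with illustrative names:

```java
public class RobotPlacement {
    // Map the beacon bearing theta_loc (degrees, 0-360) to the regions
    // of Table 4.1.
    public static String region(double thetaLoc) {
        if (thetaLoc < 90.0)  return "rear-right";
        if (thetaLoc < 180.0) return "front-right";
        if (thetaLoc < 270.0) return "front-left";
        return "rear-left";
    }
}
```

For the example of Figure 4.9, a beacon bearing of 220° places the robot in the front-left region.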

Figure 4.10: Membership function for the linguistic variable Robot Placement (θ_loc) (terms Rear-Right, Front-Right, Front-Left, Rear-Left).

Social Mediation (Desired) Heading (θ_social)

The desired heading represents the new heading of the robot, and this can be between -60° and +60°, that is, -60° ≤ θ_social ≤ 60°. This means that from the robot's current heading, the robot can turn left or right by a maximum of 60°. This restriction has been placed because the steering wheel angles (θ_steering) are constrained to -20° ≤ θ_steering ≤ 20°. With the robot's ability to turn left or right by only 20°, it is impossible for the robot to make a very large turn instantaneously. The reason for not mapping the steering angle and the heading of the robot such that both are constrained by 20° is that allowing the desired heading angle a wider range makes it possible to get more range for the linguistic variables chosen for the desired heading. Table 4.2 describes the significance of each linguistic variable chosen for the desired heading and Figure 4.11 provides the membership function.

Linguistic Variable   Range                      Comments
LL (Large Left)       -60° ≤ θ_social ≤ -30°     Robot is making a large left turn.
SL (Small Left)       -30° ≤ θ_social ≤ 0°       Robot is making a small left turn.
ST (Straight)         -5° ≤ θ_social ≤ 5°        Robot is going straight, that is, continuing on its current path.
SR (Small Right)      0° ≤ θ_social ≤ 30°        Robot is making a small right turn.
LR (Large Right)      30° ≤ θ_social ≤ 60°       Robot is making a large right turn.

Table 4.2: Linguistic variables used for desired heading (θ_social).

Figure 4.11: Membership function for the linguistic variable desired heading (θ_social) (terms LL, SL, ST, SR, LR).

Steering Angle (θ_steering)

The robot can steer its wheels by a maximum of ±20°. Table 4.3 explains the linguistic variables used for the steering angle and Figure 4.12 shows the membership function for the steering angle.
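Mamdani controllers such as this one typically use triangular or trapezoidal membership functions over ranges like those in Table 4.2. The exact shapes in Figure 4.11 are not recoverable from the text, so the following is an illustrative triangular membership only:

```java
public class TriangularMembership {
    // Degree of membership of x in a triangle with feet a and c and
    // peak b (a < b < c). Returns a value in [0, 1].
    public static double mu(double x, double a, double b, double c) {
        if (x <= a || x >= c) return 0.0;
        return x <= b ? (x - a) / (b - a) : (c - x) / (c - b);
    }
}
```

For example, a triangle over the LL range peaks at its centre and falls to zero at the range boundaries.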

Linguistic Variable   Range                        Comments
LL (Large Left)       -20° ≤ θ_steering ≤ -10°     Steering angle is adjusted such that the robot makes a large left turn.
SL (Small Left)       -12° ≤ θ_steering ≤ 0°       Steering angle is adjusted such that the robot makes a small left turn.
ST (Straight)         -2° ≤ θ_steering ≤ 2°        Steering angle is adjusted such that the robot keeps moving straight, that is, on its current path.
SR (Small Right)      0° ≤ θ_steering ≤ 12°        Steering angle is adjusted such that the robot makes a small right turn.
LR (Large Right)      10° ≤ θ_steering ≤ 20°       Steering angle is adjusted such that the robot makes a large right turn.

Table 4.3: Linguistic variables used for steering angle (θ_steering).

Figure 4.12: Membership function for the linguistic variable steering angle (θ_steering) (terms LL, SL, ST, SR, LR).

Robot Velocity (v)

It is crucial to control the velocity of the robot together with its heading. When the robot is making turns, particularly sharp turns when avoiding collisions, the velocity of the robot has to be reduced. The average velocity of the robot was set to 5 km/h and the maximum velocity to 10 km/h. In the actual experiments, the robots' average and maximum velocities were less than these values, as documented in Chapter 5. Figure 4.13 shows the membership function for the linguistic variable velocity (v).

Figure 4.13: Membership function for the linguistic variable velocity (v) (terms Slow, Medium, Fast).

Controller's reverse state

When in the reverse state, the robot changes its direction of motion, moving opposite to its current direction of motion. The steering angle is decided using the fuzzy logic controller described earlier. However, the robot assumes a virtual target (θ_target_virtual) when executing the social mediation behaviour. This facilitates a safe reverse, that is, the social mediation behaviour is utilised to avoid collision with obstacles when reversing. The location of the virtual target is computed as shown in equation (4-1):

θ_target_virtual = θ_target + π    (4-1)

Controller's wait and recover states

The wait state requires the robot to stop after reversing, and the recover state moves the robot in a direction that is opposite to the direction in which the robot was moving immediately before the reverse state was triggered, thus exhibiting a simple backtrack motion. For example, assume that the robot's steering angle was -16°, that is, the robot was making a large left turn when the reverse state was triggered. When the motion controller enters the recover state, it will set the steering angle to +16° so that the robot makes a right turn. The reverse state is maintained for a small predefined time. There were two major options for the implementation of the recover state's behaviour. One implementation of the recover behaviour has been described in the previous paragraph, while the other was to implement a random motion. The problem with random motion is that it can cause the robot to continually approach the obstacle, that

is, cause the robot to oscillate in front of the obstacle, requiring a very long time for the robot to avoid the obstacle. Therefore, the approach taken by this research is more structured and deterministic. In the absence of global knowledge, that is, prior knowledge of the environment, the wait and recover states and their associated behaviours are required to prevent the robot team from being trapped in an infinite loop of forward and backward motion when avoiding obstacles.

4.2 Design of the Robotic Simulator

A number of free and commercial robotic simulators are available (Craighead et al., 2007). However, a Java based simulator has been designed and implemented. The basic architecture of the simulator is shown in Figure 4.14. The Simulator Control Window (SCW), which is the main class of the simulator, is responsible for producing the simulation window (animation) that shows the motion of the robots, the arena, and the obstacles. The arena, obstacles, robots, and the transported object are registered in this window. Each robot is implemented by the JBullet RaycastVehicle class, and associated with each robot is a Java thread, which controls everything that the robot does, including social mediation. Essentially, each robot in the simulator is a Java thread. The primary reason for implementing the simulator using threads is that it makes the state updates asynchronous and thus mimics the real-world environment closely. However, it is understood completely that the simulator cannot achieve true parallelism. The other key advantage of the architecture is that it allows the designer to add robots to the environment with extreme ease. All that is required is to add a new JBullet RaycastVehicle instance, add a new Java thread instance to the robot, and finally add the robot to the environment.
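The thread-per-robot architecture described above can be sketched as follows. This is a sketch only: the control step is a placeholder, not the thesis control loop, and the names are illustrative:

```java
public class RobotTeamDemo {
    static class Robot implements Runnable {
        final int id;
        volatile double heading = 0.0;
        Robot(int id) { this.id = id; }
        // Placeholder control step; in the simulator this is where
        // perception, social mediation, and motion control would run.
        public void run() { heading = 0.1 * id; }
    }

    // Start one thread per robot, wait for all to finish, and return
    // the sum of the headings (just to observe the threads' work).
    public static double runTeam(int n) {
        Robot[] team = new Robot[n];
        Thread[] threads = new Thread[n];
        for (int i = 0; i < n; i++) {
            team[i] = new Robot(i);
            threads[i] = new Thread(team[i]);
            threads[i].start();
        }
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            try {
                threads[i].join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            sum += team[i].heading;
        }
        return sum;
    }
}
```

Adding a robot to the team is just a matter of constructing another Robot and Thread pair, mirroring the ease-of-extension argument made above.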

Figure 4.14: Basic architecture of the Java based Robotics Simulator (JBullet RaycastVehicle instances for Robots 1 to n, each with its own Java thread and inter-thread communication; the environment definition comprising obstacles, the transported object, and the goal; all registered with the Simulator Control Window (SCW)).

Each robot is made up of seven key methods (modules/components). These are:

1. Perceive Obstacle: to detect obstacles in the environment and extract the details, that is, heading and distance, of the obstacle. This method implements the Avoid Obstacle behaviour.
2. Perceive Target: to detect the heading of the target/goal. This method implements the Go to Goal behaviour.
3. Social Mediation: to negotiate the team's heading. This method implements the Social Mediation behaviour.
4. Communication: to abstract the wireless communication between robots.
5. Misalignment: to check for misalignment of the robot.

6. Cooperative Transport: to implement the Cooperative Transport behaviour. This method is implemented by the thread's run() method.
7. Motion Control: to implement the motion controller of the robot. The fuzzy logic based motion controller is discussed in detail in this chapter.

4.3 Implementation of the Robotics Simulator

This section briefly discusses the implementation of some important aspects of the simulator before the simulated robot is discussed in detail.

4.3.1 Development Environment

A number of components are required to create a simulation. The components that are required to create the Java-based simulator are shown in Figure 4.15.

Figure 4.15: Components of the Java based Simulator (physics library: JBullet; programming language: Java; graphics library: LWJGL).

4.3.2 NetBeans

The simulator was created using NetBeans. NetBeans, which is a product of Oracle Corporation (Oracle Corporation, 2010), is a very popular Integrated Development Environment (IDE) for Java. It is a very convenient tool and proved to be extremely useful for this project because there was a need to integrate many libraries to implement the simulator. NetBeans provides an environment for seamless integration and use of multiple third-party libraries.

4.3.3 JBullet Physics Library

The Java based simulator incorporates the JBullet Physics Library (Dvorak, 2008) to simulate the rigid body dynamics. JBullet is the Java port of the popular Bullet Physics Engine (Coumans, 2005). The JBullet library also includes the Light Weight Java Game Library (LWJGL) (LWJGL Team, 2006).

4.3.4 Light Weight Java Game Library (LWJGL)

The simulator makes use of the LWJGL, an open source library for game development in Java.

4.3.5 JFuzzyLogic

The robot's motion controller is designed using fuzzy logic. The Java based simulator uses the jFuzzyLogic 2.0 library (Cingolani, 2006) to implement the fuzzy inference system.

4.3.6 Robot Localisation

Localisation is crucial in navigation. Siegwart & Nourbakhsh (2004) and Matarić (2007) provide a comprehensive coverage of the topic. JBullet's Vector3f class allows easy localisation of the robot. The common environment cue that was assumed present in the arena is recorded with the Simulator Control Window (SCW) before the simulation runs are made. As a robot moves in the simulated arena, it uses the Vector3f class's vector operations to localise itself using the common environment cue as the reference point.

4.3.7 Simulation of Wireless Communication

This research work abstracted the wireless communication between the robots, and some assumptions have been made when designing the test environment. These assumptions are:

1. The designer specifies the neighbours of the robots. The robots would detect neighbours automatically in an actual environment, for example, by using HELLO messages;
2. Each robot can have a maximum of four neighbours; and
3. A dedicated wireless link exists between a robot and its neighbours.

4.3.8 Sensor Simulation

Before the experiments begin, the designer enters the location (x, y, and z coordinates) of the obstacles into the SCW. When a particular obstacle is within the sensor range of the robot, the robot can extract the details of that obstacle via the SCW. If multiple obstacles are detected by the sensor, the robot extracts the details of the obstacle that is the closest. Occasionally, a small random error is introduced to the coordinates of the obstacle to ensure that the robot can handle imprecise sensor readings.

4.4 Cooperative Control's Performance Indicators

This section discusses how the performance of the proposed cooperative control has been measured. The specific test cases and the results of the tests will be presented in Chapter 5. The proposed cooperative control algorithm was tested for its ability to perform the transportation task, its scalability, and its robustness. To do this, a number of different environments/arenas were generated and multiple tests were conducted. The following sections explain the generation of the arenas and how these were used to test the success rate, scalability, and robustness.

4.4.1 Generating the arena and the obstacles

To test the cooperative transport behaviour, a number of different arenas containing a number of obstacles were generated. The obstacles were generated randomly, and it was ensured that the inter-obstacle distances were sufficient for the team to navigate between the obstacles. All obstacles were cylindrical and of the same dimensions: each is 1 meter in diameter and 2 meters in height, and weighs 50 kg.

4.4.2 Indicator 1: Success Rate

The success rate measures the cooperative control's effectiveness at enabling the robot team to complete the transportation task. The metric used to measure the success rate was the robot team's ability to reach the goal location within a pre-defined time without any collisions.
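The success-rate metric just described reduces to a simple percentage over the completed runs. A minimal sketch (the class name is hypothetical):

```java
// Success rate as a percentage of runs that reached the goal within the
// pre-defined time without any collisions.
public class SuccessRate {
    public static double successRate(int successfulRuns, int totalRuns) {
        // Multiply by 100.0 first to avoid integer division.
        return 100.0 * successfulRuns / totalRuns;
    }
}
```

For example, 23 successful runs out of 25 gives a success rate of 92%.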

Twenty-five obstacle courses were generated in order to measure the success rate. Five different arenas were generated, and each arena contained a different number and configuration of obstacles. The configuration, that is, the locations of the obstacles within the arena, was generated randomly. For each of the five arenas, five different goal locations were selected and a 3-robot team was required to carry an object from the initial location to the goal location. For each goal location within a particular arena, a total of three runs were done. The objective of repeating the runs was to ensure repeatability of the results. Figure 4.16 shows a sample arena. The black filled circles are the obstacles, and the team's starting position and the five goal locations are also marked.

Figure 4.16: Arena 1 with the obstacle configuration, team's starting location, and the five goal locations.

The success rate (in percentage) was measured using equation (4-2):

success rate = (number of successful runs without collisions / total number of runs) x 100 (4-2)

The success rate determines the success of the negotiation behaviour and the distributed cooperative control, that is, the corresponding research objectives.

4.4.3 Indicator 2: Scalability

Scalability measures the proposed cooperative control's ability to handle an increasing number of robots in the team. To test for scalability, first a team of up to twelve robots was required to complete the transportation. The experiment started with a

team of three robots transporting an object from the start to the goal location. The team size was increased by one robot at a time, up to a 12-robot team. Further experiments were conducted in another arena using a team of up to fifteen robots. The following metrics measure the scalability of the control algorithm:
1. Number of collisions: With an increasing number of robots in the team, ideally the robot team should be able to complete the task without colliding with obstacles.
2. Completion Time: With an increasing number of robots, ideally the task completion time should remain comparable across team sizes.
3. The Mean Absolute Heading Error: measures the success of each robot in moving in the direction of the desired heading. The significance of this measure is that it incorporates the convergence error as well as the errors induced by the push-and-pull effects that prevent the robot from following the desired team heading precisely. The convergence error measures the effectiveness of the social mediation behaviour in achieving perfect negotiation. Therefore, the heading error is a holistic measure of all errors of the system, that is, it is inclusive of errors from the high-level cooperative transport behaviour down to the low-level motion controller. The heading error is the absolute difference of the robot's actual heading (θa) and desired heading (θd) at any given time or control step. At any given control step, the desired heading is determined by the motion controller using the team heading provided by the cooperative transport behaviour. Ideally, the desired and actual headings should be the same, that is, the robot is able to move in the direction of the desired heading precisely. The heading error is computed using Equation (4-3):

e = |θa - θd| (4-3)

Ideally, the variation between the heading errors across different team sizes should be minimal. The scalability test partially measures the satisfaction of the corresponding research objective.
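The heading error and its mean over a run can be sketched as below. Wrapping the difference into [0°, 180°] is an assumption of this sketch (the text states only the absolute difference), and the class name is hypothetical.

```java
// Heading error |actual - desired| per control step (wrapped into [0, 180],
// an assumption of this sketch) and the mean absolute heading error over a
// run. All angles are in degrees.
public class HeadingError {
    public static double headingError(double actual, double desired) {
        double e = Math.abs(actual - desired) % 360.0;
        // 350 deg vs 10 deg is a 20 deg error, not 340 deg.
        return e > 180.0 ? 360.0 - e : e;
    }

    public static double meanAbsoluteHeadingError(double[] actual, double[] desired) {
        double sum = 0.0;
        for (int i = 0; i < actual.length; i++) {
            sum += headingError(actual[i], desired[i]);
        }
        return sum / actual.length;
    }
}
```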

4.4.4 Indicator 3: Robustness

Robustness was tested to see how well the cooperative control was able to handle robot failures. The following failures are simulated:
1. Motion Control Failure - This failure hinders the robot from steering itself and applying engine force. It is assumed that the wheels of the robot become free when the robot fails. However, inter-robot communication is still possible.
2. Communication Failure - This failure disables inter-robot communication, that is, a robot is not able to communicate with its neighbours. However, the robot is still able to control its motion.

A 6-robot team was required to complete a transport task successfully. Using the 6-robot team, each of the two failures mentioned above was simulated and the following metrics were used to measure the cooperative control's effectiveness:
1. Number of collisions: With an increasing number of failing robots, ideally the robot team should complete the task without colliding with obstacles.
2. Completion Time: With an increasing number of failing robots, ideally the task completion time should be similar.
3. Deviation from path: With an increasing number of failing robots, the robot team should follow a path similar to the path taken by the fault-free 6-robot team that formed the control experiment for the robustness test. The Root Mean Square Error (RMSE) of the path deviations was computed and ideally, the RMSE should be minimal.

The robustness test partially measures the satisfaction of the corresponding research objective.

4.5 Summary

The proposed cooperative control algorithm is independent of the types of robots that are used for the transportation task. This chapter documented the robot team that was used for the cooperative transportation task. It discussed the three-wheel robot that has been used for the transportation task, including the robot's capabilities and its fuzzy logic based motion controller. The design of the Java-based simulator that has been used to test the cooperative control algorithm has been presented, along with the development environment used to create the simulator. The cooperative control's performance indicators in terms of the success rate, the scalability, and the robustness of the proposed cooperative control have been presented. In essence, these performance indicators measure the research objectives that were stated in Chapter 1. The success rate determines the success of the distributed cooperative control and the negotiation behaviour, while the scalability and robustness tests measure the satisfaction of the remaining research objectives. Chapter 5 presents the specific test cases and the results of the tests performed according to the details provided in Section 4.4. The chapter also provides an explanation of the results to determine whether the research objectives outlined in Section 1.4 have been achieved.

Chapter 5

Results and Discussion

This chapter presents the test cases that measure the performance of the proposed cooperative control. The control's performance is measured using its success rate, scalability, and robustness. These performance indicators from Section 4.4 are:
1. Indicator 1: Success Rate - The success rate measures the cooperative control's effectiveness at enabling the robot team to complete the transportation task. The metric used to measure the success rate was the robot team's ability to reach the goal location within a pre-defined time without any collisions.
2. Indicator 2: Scalability - The scalability test required the cooperative control algorithm to be able to handle an increasing number of robots associated with increased task complexity. An increase in the load size and a proportional increase in the number of robots in the team should result in similar performance in terms of task completion time for different load sizes.
3. Indicator 3: Robustness - Robustness was tested to see how well the cooperative control was able to handle robot failures.

Before presenting the results of the experiments conducted for the above three indicators, some limitations and errors that hinder the robots from achieving perfection are discussed. It is important to understand the system's limitations and errors, as this allows us to explain the results of the experiments that were conducted to measure the three performance indicators.

5.1 Limitations and Errors of the System

In an ideal case, there would be no difference between the theoretical and real worlds. However, achieving such an environment is a challenging task, and our system

is no exception. This section highlights the limitations and errors that prevent the algorithm from achieving perfection. The investigations target two major areas: firstly, the social mediation, and secondly, the ability of each robot to follow the agreed heading perfectly. The constructed test scenario, which is shown in Figure 5.1, required the 3-robot team to overcome one obstacle while going from the start to the goal location. The grey dashed rectangles show the position and orientation of the team at every 300 control steps. To simplify the representation of the system, only the perimeter of the system is shown in Figure 5.1.

Figure 5.1: Test environment used to realise the limitations and errors of the cooperative control.

5.1.1 Converging to a Common Social Mediation Angle

In the best-case scenario, all robots agree to one global team heading, that is, all robots have the same social mediation angle. Although this is desired, achieving it may impose constraints on how fast the system completes its task. The results show that near perfect convergence can be achieved given sufficient time for the system to perform social mediation. There are many parameters used by the simulator (see Appendix A for a detailed list), but the important ones are:
1. Engine Force - This controls the velocity of the robot.
2. Minimum Mediation Cycles (MMC) - This controls the number of iterations of the social mediation control loop.
3. Heading Update Delay (HUD) - This controls the amount of time the robot can spend in social mediation before the motion controller implements the agreed social mediation angle.
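One way these three parameters could interact is sketched in the control-loop skeleton below. This is only an interpretation of the parameter descriptions above, not the thesis implementation: the mediation step is a runnable placeholder, and all names and default values are illustrative.

```java
// Illustrative sketch of how Engine Force, MMC, and HUD could gate a robot's
// control loop: each control step runs at least MMC mediation iterations, and
// the negotiated heading is committed to the motion controller only every HUD
// control steps. Purely hypothetical; not the thesis code.
public class MediationLoop {
    double engineForce = 100.0;  // forward drive force (a proxy for velocity)
    int mmc = 9;                 // Minimum Mediation Cycles per control step
    int hud = 50;                // Heading Update Delay, in control steps

    // Placeholder for one negotiation round with the neighbours; here it
    // simply decays the proposal toward zero so the sketch is executable.
    double mediateOnce(double proposal) {
        return proposal * 0.5;
    }

    /** Runs the loop for `steps` control steps and returns the heading
     *  last committed to the motion controller. */
    public double run(int steps, double initialProposal) {
        double committed = 0.0;
        double proposal = initialProposal;
        for (int step = 0; step < steps; step++) {
            for (int i = 0; i < mmc; i++) {   // at least MMC mediation rounds
                proposal = mediateOnce(proposal);
            }
            if (step % hud == 0) {            // commit only every HUD steps
                committed = proposal;         // motion controller follows this
            }
        }
        return committed;
    }
}
```

Under this reading, a larger HUD gives the team more mediation rounds per committed heading, which is consistent with the convergence experiments reported below.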

The communication graph for the test setup is shown in Figure 5.2. Notice that Robot 1 and Robot 3 cannot communicate. This has been done intentionally to set up an environment where a robot cannot communicate with some of the robots in the team.

Figure 5.2: Communication graph for the team used to study convergence properties. Robots 1 and 2 can communicate with each other, and Robots 2 and 3 can communicate.

Experiment: Convergence of Social Mediation Angle with Usual System Parameters

The usual parameters of the system are:
1. Engine Force - 100
2. Minimum Mediation Cycles (MMC)
3. Heading Update Delay (HUD) - 0

The chart in Figure 5.3 shows the standard deviation of the social mediation angle for the test environment in Figure 5.1. The standard deviation is a measure of the variation of the social mediation angle between robots. In an ideal case, that is, with perfect negotiation, the standard deviation should be zero.

Note: The reason for choosing the standard deviation as a measure is to allow us to study the variation of each robot's social mediation angle (team heading) from the average social mediation angle. In an ideal case, each robot's social mediation angle is the same as the average social mediation angle, so the standard deviation of the social mediation angle is zero, that is, there is perfect negotiation.
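The measure itself is the per-control-step standard deviation of the robots' social mediation angles. A minimal sketch follows; the population form of the standard deviation is assumed here, since the thesis does not state whether the population or sample form is used, and the class name is hypothetical.

```java
// Standard deviation of the robots' social mediation angles at one control
// step. Zero means the robots have negotiated a common team heading.
public class MediationSpread {
    public static double stdDev(double[] angles) {
        double mean = 0.0;
        for (double a : angles) mean += a;
        mean /= angles.length;
        double var = 0.0;
        for (double a : angles) var += (a - mean) * (a - mean);
        return Math.sqrt(var / angles.length);  // population form (assumed)
    }
}
```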

Figure 5.3: Convergence of social mediation angle with usual parameters (standard deviation of the social mediation angle, in degrees, against control steps).

During the control steps between 500 and 1500, the team is avoiding an obstacle, while after approximately the 1800th control step, the team is changing direction towards the goal location. The results indicate that when the team has to make a major change to the heading, more time is required by the team to converge to a common social mediation angle. A major change in the heading requires the team to change the heading by more than 30°.

In an attempt to reduce the standard deviation, further experiments were conducted. These experiments were aimed at giving the team more time to carry out negotiation of the team heading, that is, the social mediation angle. The next section documents the results of these experiments.

Experiment: Convergence of Social Mediation Angle with Varying System Parameters

The values of the following three parameters were adjusted such that the team gets more time to negotiate the team heading before the individual robots' controllers use the agreed or negotiated heading to decide the robot's steering wheel angle and the wheel motor forces. The three parameters are:
1. Engine Force
2. Minimum Mediation Cycles (MMC)

3. Heading Update Delay (HUD)

Each experiment was repeated three times and the average values are documented. The engine force was reduced from the normal value of 100 to 50 to reduce the velocity of the robot. The velocity needs to be reduced because the team will spend more time in social mediation and, if the system is moving too fast, collisions with obstacles become more frequent. The value of the engine force was determined via simulations.

Experiment (a)

The first set of experiments studied the impact of the Heading Update Delay on the standard deviation of the social mediation angle. Table 5.1 tabulates the results of the runs while Figure 5.4 graphs the results. For these experiments, the engine force was 50 while the Minimum Mediation Cycles was set to 9.

Table 5.1: Standard deviation of social mediation angle with increasing heading update delay (columns: run number, heading update delay in control steps, average standard deviation in degrees).

The graph in Figure 5.4 shows that with increasing Heading Update Delay, the standard deviation of the social mediation angle reduces proportionally.

Experiment (b)

The standard deviation was studied with an increasing number of minimum mediation cycles while the engine force was kept constant. Experiments were conducted for different update delay values, and for each update delay value, different minimum mediation cycles were chosen. Table 5.2 presents the average standard deviation from all runs and Figure 5.5 documents the results using graphs.

Figure 5.4: Convergence of social mediation angle with increasing update delay.

Table 5.2: Standard deviation of social mediation angle with increasing minimum mediation cycles for different heading update delays (columns: heading update delay in control steps, minimum mediation cycles, average standard deviation of social mediation in degrees).

Figure 5.5: Convergence of social mediation angle with increasing mediation cycles: (a) with update delay of 50 control steps; (b) with update delay of 80 control steps; (c) with update delay of 500 control steps. Line of best fit is shown in each graph.

Figure 5.6 shows the standard deviation for the entire run. The Heading Update Delay was 500 control cycles and the number of social mediation cycles was 15. Every 500 control steps, the robot team's motion controllers used the agreed social mediation angle. For this reason, the graph resembles a piecewise function.

Figure 5.6: Convergence of social mediation angle with update delay of 500 steps and 15 social mediation cycles.

The results indicate that for particular combinations of engine force, update delay, and social mediation cycles, the standard deviation decreases (Figure 5.5 and Figure 5.6). Although an inversely proportional relationship between the standard deviation of the socially mediated angle and the heading update delay was not noted, for particular combinations of the engine force, update delay, and social mediation cycles there was a reduction in the standard deviation of the socially mediated angle.

Experiment: Discussion

The experiments were aimed at understanding how to reduce the standard deviation of the social mediation angle. In an ideal case, the standard deviation will be zero, that is, all robots agree to the same value of the social mediation angle. Through the experiments, it was learnt that the following combinations could result in near perfect mediation when the team makes major changes to its heading:

1. Lower engine force: The velocity of the team has to be reduced. When more time is provided for social mediation, the team needs to move slowly enough to allow the team to complete the mediation process.
2. High mediation cycles: The robots need to repeat the mediation control loop a reasonably large number of times.
3. Heading Update Delay: The team needs to be given a reasonable amount of time to mediate the team's heading. This is particularly true when the team is in the process of making a major change to its heading. However, setting the Heading Update Delay to a very large value would be unsafe because it does not allow the team to change its heading frequently, increasing the chances of collisions. Spending too much time on mediation also requires the team to move slowly, thus reducing the velocity of the system.

The next section presents the results that demonstrate the fuzzy logic enabled motion controller's ability to mitigate a large standard deviation in the social mediation angle.

Experiment: Mitigating the High Standard Deviation of Social Mediation Angle

This set of experiments studied the heading information of each robot in the team. The direction of the front wheels of each robot was examined to see whether the robots were moving in the same direction. It was important to know whether the robots were moving in conflicting directions, for example, one of the two leading robots moving towards the right while the other moves towards the left. Such conflicting headings cause the two robots to pull in opposite directions and thus have no positive effect on changing the team's heading. The values of the steering angles of the robots cannot be compared directly because they depend on the robot's position within the team and the type of manoeuvre that the team is making. However, if the two leading robots are considered, both should be moving in the same direction at all times.

Figure 5.7(b) plots the steering wheel direction of the two leading robots to see whether the front wheels of both robots are turned in the same direction. A zero value on the y-axis for a particular control step indicates that the wheels of both

robots are turned in opposite directions, while a value of one indicates that the robots are moving in the same direction. The graph focuses on the heading direction of the wheels between control steps 600 and 1200 from the experiment conducted earlier (Figure 5.3). In this experiment, the team was avoiding an obstacle in its path between control steps 600 and 1200. Figure 5.7(a) reveals that when the team is avoiding obstacles during these control steps, the standard deviations of the social mediation angles are high. However, as shown in Figure 5.7(b), the fuzzy logic enabled motion controller was able to mitigate the high standard deviation in the team's heading, resulting in a high degree of cooperation between the robots.

Immediately before the 700th control step in Figure 5.7(b), Robot 1 detected the obstacle and changed its direction in order to avoid the obstacle. It took seven (7) control steps for this information to disseminate to Robot 2. As a result, there is a conflict for seven control steps, that is, 120 milliseconds, before both robots finally start to move in the same direction. After about 1150 control steps, the team is making a major change to its direction in an attempt to go towards the goal location after avoiding the obstacle. In all other instances where the two robots' front wheels were turned in opposite directions, the conflict lasted between one and three control cycles, that is, 15 to 50 ms.

The most important aspect to note in Figure 5.7(a) and Figure 5.7(b) is that although the standard deviation is high during the obstacle avoidance phase, the fuzzy logic enabled motion controller avoided any major conflict in the directions chosen by the robots. During control cycles 600 to 1200, the robots were heading in the same direction 93% of the time. Furthermore, the conflict in the directions chosen by the robots occurred nine distinct times and, in most cases, this conflict did not last for more than three control cycles. Therefore, the motion controller of the robots is able to mitigate the high standard deviation of the social mediation angle.
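The 0/1 agreement indicator and the fraction of agreeing control steps can be sketched as follows. Interpreting "moving in the same direction" as the two leading robots' steering angles having the same sign is an assumption of this sketch, and the names are hypothetical.

```java
// Degree of cooperation between the two leading robots: 1 at a control step
// when their front wheels are turned the same way, 0 otherwise. "Same way"
// is taken here to mean matching steering-angle sign (an assumption).
public class Cooperation {
    public static int agree(double steer1, double steer2) {
        return Math.signum(steer1) == Math.signum(steer2) ? 1 : 0;
    }

    /** Fraction of control steps on which both robots steered the same way. */
    public static double agreementRate(double[] s1, double[] s2) {
        int agreeing = 0;
        for (int i = 0; i < s1.length; i++) {
            agreeing += agree(s1[i], s2[i]);
        }
        return (double) agreeing / s1.length;
    }
}
```

The 93% figure reported above corresponds to an agreement rate of 0.93 over control steps 600 to 1200.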

Figure 5.7: The degree of cooperation between the robots: (a) standard deviation of social mediation angle; (b) degree of cooperation during obstacle avoidance. Zero on the y-axis of (b) indicates that both robots have turned their front wheels in opposite directions, while one indicates that the wheels of the two robots are turned in the same direction, that is, the robots are moving in the same direction.

5.1.2 Limitation of Converging the Robot's Heading to the Social Mediation Angle

The previous section (5.1.1) presented the analysis of the convergence of the social mediation angle. The experiments aimed at learning how the standard deviation can be reduced to achieve a near perfect negotiation of the team's heading by the robots, and how the fuzzy logic enabled motion controller mitigated the high standard deviations in the social mediation angle. This section studies the team's ability to follow the agreed social mediation angle. A number of different experiments were conducted to answer two questions:

1. How long does it take each robot to converge to the agreed team heading?
2. Is the robot able to converge to the agreed team heading (social mediation angle) precisely, that is, without any error?

Experiment

This experiment used the data collected from the earlier convergence experiment. Analysis was done on data from two distinct sections of the experiment. The first section was the very beginning of the experiment, when the team began moving towards the goal location. At this stage, the team could only perceive the goal location and not the obstacle. The second section of the experiment was when the team began avoiding the obstacle.

Section 1: Control Cycles 0 to 60

In this section of the experiment, the team was just starting to move towards the goal location. Figure 5.8(a) graphs the difference between the desired heading of the robot, that is, the robot's social mediation angle, and the robot's actual heading. The error on the y-axis shows the difference between the social mediation angle and the robot's actual heading, while the x-axis shows the control steps.

Section 2: Control Cycles 700 to 800

In this section of the experiment, the team was just starting to avoid an obstacle. Figure 5.8(b) graphs the difference between the desired heading of the robot, that is, the robot's social mediation angle, and the robot's actual heading between control steps 700 and 800.

Figure 5.8: Convergence of individual robot's heading to the social mediation angle for Robots 1, 2, and 3: (a) between control steps 0 and 60; (b) between control steps 700 and 800.

Figure 5.8 helps us answer the two questions stated above:
1. It takes approximately 60 control steps, which is equivalent to 1 second, for the robots to converge to the social mediation angle when the robot is not making any major change to its heading. It takes the team a much longer time to converge to the social mediation angle when a major change to the heading

is required. A major change in the heading of the robot requires the team to change the heading by more than 30°.
2. The robots are not able to converge to the desired heading (social mediation angle) precisely. The difference between the actual heading and the social mediation angle is higher when the team is making a major change to its heading. However, when the change is not major, the robot's actual heading is very close to the agreed social mediation angle, that is, the team's heading. The absolute mean error and absolute mean percentage error of the agreed social mediation angle between control steps 0 and 60 were 4.45° and 2.46% respectively, while the absolute mean percentage error between control steps 700 and 800 was 8.75%.

The difficulty in converging the robot's heading to the social mediation angle without errors is attributed to the following factors:
1. The push-and-pull effects from other robots. The robots have slight differences in the agreed social mediation angle, which causes the robots to exert push-and-pull forces on each other. Therefore, the slightly differing social mediation angles contribute to the problem of the robot failing to converge precisely to the social mediation angle.
2. The mechanical constraints placed on the system. The robot can turn its steering to a maximum of 20°. This limitation causes the robots to take longer to converge to the social mediation angle.

The combination of the push-and-pull forces exerted by the robots on each other and the mechanical constraints makes it challenging for the team to follow the team's heading precisely.

5.1.3 Discussion on Limitations and Errors of the System

Sections 5.1.1 and 5.1.2 discussed two important factors that affect the accuracy of the system in following the agreed team heading. The errors in the social mediation process and the mechanical limitations do not allow the team to converge to the desired heading precisely. The proposed cooperative control was tested against the performance indicators outlined in Section 4.4 in light of the above-mentioned errors and limitations in our system.
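The absolute mean error and absolute mean percentage error reported in this section can be computed as sketched below. The thesis does not give the exact formulas, so the standard MAE/MAPE definitions are assumed, and the class name is hypothetical.

```java
// Summary statistics over a run: mean absolute error and mean absolute
// percentage error between actual headings and the agreed social mediation
// angles (standard MAE/MAPE definitions, assumed here). Angles in degrees.
public class HeadingStats {
    public static double meanAbsoluteError(double[] actual, double[] desired) {
        double sum = 0.0;
        for (int i = 0; i < actual.length; i++) {
            sum += Math.abs(actual[i] - desired[i]);
        }
        return sum / actual.length;
    }

    public static double meanAbsolutePercentageError(double[] actual, double[] desired) {
        double sum = 0.0;
        for (int i = 0; i < actual.length; i++) {
            // Assumes desired headings are non-zero (MAPE is undefined at 0).
            sum += Math.abs((actual[i] - desired[i]) / desired[i]);
        }
        return 100.0 * sum / actual.length;
    }
}
```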

5.2 Performance Indicator 1: Success Rate

The success rate reflects the ability of the cooperative control to enable a 3-robot team to complete the assigned transportation task in different environments. This section presents each of the five arenas, with the start location, the five goal locations in each arena, and the placement of the obstacles within the arena. The goal locations in the arenas are independent of each other. Five goal locations in five arenas resulted in twenty-five different obstacle courses. Each run is documented using a graph showing the trajectory of the team from the start to the goal location, and after all the runs are documented using graphs, a table documents other important aspects of the experiments. The communication graph of the 3-robot team is the same as the communication graph of the team used in the previous experiments, shown in Figure 5.2. Each obstacle is 1 meter in diameter and 2 meters in height, and weighs 100 kg. The dimensions of the 500 kg load were 5 x 3.5 x 0.25 meters (Length x Width x Height). Each robot weighs 500 kg.

To ensure easy and uniform interpretation of the graphs, the results of each run are presented using the following notations and conventions:
1. There are six graphs for each arena. The first graph shows the arena, the five goal locations that the robot team has to reach, and the configuration of the obstacles. The remaining five graphs show the path taken by the team to each of the five goal locations.
2. Each run has been assigned a unique run number, which is later used in the tabulated results and discussion. The run number allows us to refer to specific runs when presenting the results.
3. The top right-hand corner of each graph shows the run number and the average duration of the run in minutes.
4. A solid line from start to goal shows the position of the centre of the carried object during the transportation task.
5. The start and end positions and orientations of the team are shown using solid black rectangles, while during the transportation task the positions and orientations of the team are shown using dashed grey rectangles. The graphs do not plot the paths taken by each robot because this makes the graph

98 difficult to read. The size of the plotted rectangle represents the dimensions of the system, that is, it represents the outer edges of the system. 6. The grey rectangles represent the position and orientation of the robot team at intervals of 300 control steps. 7. The size of the obstacles or the system may appear different for different arenas simply because of the different zoom levels used, but in fact, the arenas and the obstacles are of the same size. Arena 1: Discussion Figure 5.12 shows the first arena, Arena 1. In all cases, the robot team was able to reach the goal location. You will notice that in certain sections of the graph, the concentration of the grey rectangles is high (Figure 5.12 (b)). This is because the team has spent more time in those areas of the arena trying to avoid obstacles and in doing so the team made numerous backtrack or reverse and forward movements. In run 105 (Figure 5.12 (f)), the robot s wheel brushes the obstacle as the team was making a very sharp turn in an attempt to avoid the obstacle. To make sure that the social mediation was working correctly and the overall cooperative control algorithm did not contribute to the collision, an analysis was done using the data from the run. The analysis considered the outcome of the social mediation behaviour. Figure 5.9 documents the standard deviation of the agreed social mediation angle of the team for about 500 control cycles immediately before and after the robot's wheel came in contact with the obstacle. Because the standard deviation was close to zero for most of time, it can be safely concluded that the robots agreed to a common heading. The team was required to make a very sharp turn in order to avoid colliding with the obstacle. Earlier sections documented that it takes time for the team to converge to the desired social mediation angle. 
Therefore, it was concluded that the minor collision in run 105 was due to the delay in the convergence of the social mediation angle, the resulting push-and-pull forces, and the reaction time of the robots. A formal definition of a minor collision is provided when presenting the success rates. The reaction time is the time that it takes for a robot to determine that another robot in the team is trying to avoid an obstacle.
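The consensus check described above can be sketched as follows. This is an illustrative computation, not the thesis code: the log layout and the use of a circular standard deviation (to handle angle wrap-around) are assumptions; a per-step dispersion close to zero indicates that the robots have agreed on a common heading.

```python
import math

def circular_std_deg(angles_deg):
    """Circular standard deviation (degrees) of a set of headings.

    Based on the mean resultant length R; for tightly clustered angles this
    is close to the ordinary standard deviation, but it handles wrap-around
    (e.g. 359 deg vs 1 deg) correctly.
    """
    n = len(angles_deg)
    s = sum(math.sin(math.radians(a)) for a in angles_deg) / n
    c = sum(math.cos(math.radians(a)) for a in angles_deg) / n
    r = math.hypot(s, c)  # mean resultant length, in [0, 1]
    return math.degrees(math.sqrt(-2.0 * math.log(r))) if r > 0 else float("inf")

# Hypothetical log: the agreed social mediation angle reported by each of
# three robots at four consecutive control steps.
log = [
    [45.0, 45.2, 44.9],   # consensus reached: dispersion close to zero
    [45.1, 45.0, 45.1],
    [30.0, 52.0, 44.0],   # mid-negotiation: robots still disagree
    [359.0, 1.0, 0.5],    # wrap-around case handled correctly
]
per_step_std = [circular_std_deg(step) for step in log]
```

Plotting `per_step_std` against the control step reproduces the kind of trace shown in Figure 5.9.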

Figure 5.9: Standard deviation of the social mediation angle before and after the wheel of Robot 2 collides with the obstacle in run 105.

Arena 2

Figure 5.13 shows the results obtained using Arena 2. All runs in this arena were successful, that is, the assigned goals were reached and no collisions were noted.

Arena 3

Figure 5.14 shows the results obtained using Arena 3. In this arena, all runs were successful, that is, the assigned goals were reached and no collisions were noted.

Arena 4

Figure 5.15 shows the results obtained for Arena 4. A collision was noted in run 403 while all other runs were successful. We looked at the possible causes of the collision in run 403. The standard deviation of the social mediation angle was investigated and is presented in Figure 5.10, which shows that there was no major disagreement in the team's heading. We then focused on whether the robots were heading in the same direction. Attention was given to the direction of the steering wheels of the robots to see whether they were turned in the direction that they were supposed to be. Figure 5.11 presents the results. It is evident that there is no conflict in the heading of the robots. Therefore, the minor collision in run 403 was caused by the pull-and-push forces exerted by the robots on each other and the reaction time of the robots.

Figure 5.10: Standard deviation of the social mediation angle immediately before the collision in run 403.

Figure 5.11: The degree of cooperation between the robots in run 403. A value of 1 on the y-axis indicates that the robots are moving in the same direction.

Arena 5

In this arena, all runs except run 505 were successful. The results are documented in Figure 5.16. In run 505, Robot 1's front wheel brushed past the obstacle. As documented for run 105, this minor collision was a direct result of the push-and-pull forces.
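The text does not give a formula for the degree-of-cooperation metric plotted in Figure 5.11, beyond stating that a value of 1 means the robots are moving in the same direction. One plausible construction, shown here purely as a hedged sketch (the metric and its name are illustrative assumptions, not the thesis definition), is the mean cosine of each robot's heading relative to the team's circular mean heading:

```python
import math

def degree_of_cooperation(headings_deg):
    """Mean cosine of each robot's heading relative to the team's mean
    heading direction: equals 1 when all robots point the same way and
    drops towards 0 as the headings diverge."""
    s = sum(math.sin(math.radians(a)) for a in headings_deg)
    c = sum(math.cos(math.radians(a)) for a in headings_deg)
    mean = math.atan2(s, c)  # circular mean heading of the team
    return sum(math.cos(math.radians(a) - mean)
               for a in headings_deg) / len(headings_deg)

aligned = degree_of_cooperation([90.0, 90.0, 90.0])    # all same direction
diverging = degree_of_cooperation([0.0, 90.0, 180.0])  # headings in conflict
```

Any metric with this shape would produce a trace like Figure 5.11: flat at 1 while the robots steer together, dipping whenever their headings conflict.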

Figure 5.12: Team's trajectory for tests conducted using Arena 1. Panels show runs 101 (2:58 min), 102 (0:57 min), 103 (1:42 min), 104 (1:25 min) and 105 (1:44 min); annotations mark where the motion controller executes a backtrack/reverse to avoid an obstacle, and where the robot's wheel touches the obstacle in run 105.

Figure 5.13: Team's trajectory for tests conducted using Arena 2. Panels show runs 201 (2:15 min), 202 (0:55 min), 203 (0:59 min), 204 (1:24 min) and 205 (1:58 min).

Figure 5.14: Team's trajectory for tests conducted using Arena 3. Panels show runs 301 (1:19 min), 302 (1:04 min), 303 (4:09 min), 304 (3:38 min) and 305 (1:21 min).

Figure 5.15: Team's trajectory for tests conducted using Arena 4. Panels show runs 401 (2:20 min), 402 (2:01 min), 403 (1:15 min, with the collision annotated), 404 (1:03 min) and 405 (1:58 min).

Figure 5.16: Team's trajectory for tests conducted using Arena 5. Panels show runs 501 (3:31 min), 502 (2:17 min), 503 (1:55 min), 504 (1:04 min) and 505 (2:27 min, with the point where the robot's wheel touches the obstacle annotated).

Run#   Average Completion Time (minutes)   Number of Collisions   Completed Transportation Task?
101    2:58                                0                      Yes
102    0:57                                0                      Yes
103    1:42                                0                      Yes
104    1:25                                0                      Yes
105    1:44                                1                      Yes
201    2:15                                0                      Yes
202    0:55                                0                      Yes
203    0:59                                0                      Yes
204    1:24                                0                      Yes
205    1:58                                0                      Yes
301    1:19                                0                      Yes
302    1:04                                0                      Yes
303    4:09                                0                      Yes
304    3:38                                0                      Yes
305    1:21                                0                      Yes
401    2:20                                0                      Yes
402    2:01                                0                      Yes
403    1:15                                1                      No
404    1:03                                0                      Yes
405    1:58                                0                      Yes
501    3:31                                0                      Yes
502    2:17                                0                      Yes
503    1:55                                0                      Yes
504    1:04                                0                      Yes
505    2:27                                1                      Yes

Table 5.3: Data for runs 101 to 505 that was used to measure the success rate.
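Given run records in the shape of Table 5.3, the two success rates defined in the next section might be computed as in the following minimal sketch. The tuple layout and function names are illustrative, and the four-run sample is a subset of the table, used only to show the two measures diverging:

```python
# Hypothetical run log in the shape of Table 5.3:
# (run id, number of minor collisions, task completed?).
runs = [
    (101, 0, True),   # clean, successful run
    (105, 1, True),   # minor collision, task still completed
    (403, 1, False),  # collision, task not completed
    (505, 1, True),   # minor collision, task still completed
]

def success_rate_without_collisions(runs):
    """Equation (4-2): runs that completed with no collisions at all."""
    ok = sum(1 for _, collisions, completed in runs
             if completed and collisions == 0)
    return 100.0 * ok / len(runs)

def success_rate_with_minor_collisions(runs):
    """Equation (5-1): runs that completed, even if a minor collision occurred."""
    ok = sum(1 for _, _, completed in runs if completed)
    return 100.0 * ok / len(runs)
```

On this sample the first measure yields 25% and the second 75%, illustrating why the thesis reports both.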

Success Rate

The success rate has been determined by considering the number of collisions that occurred during the transportation task and whether the transportation task was completed. Therefore, two success rates will be presented: one measure includes minor collisions while the other measures the success rate without any collisions.

Success Rate without Collisions

Equation (4-2) was used to compute the success rate without any collisions:

success rate = (number of successful runs without collisions / total number of runs) x 100    (4-2)

Note: Each experiment was repeated three times. Therefore, in total, seventy-five runs measured the success rate.

Success Rate with Minor Collisions

A collision is minor when:

1) Only the wheel of the robot comes in contact with the obstacle, that is, the body of the robot is at no point in time in contact with the obstacle;

2) The volume deformation of the wheel that is in contact with the obstacle is less than 5% (it is assumed that the wheels of the robot can deform); and

3) The obstacle that the robot collides with does not move.

The robot's wheel brushing past the obstacle is a minor collision because it did not stop the transportation task and there is no, or extremely little, chance of damage to the robots. The minor collisions are almost negligible. Minor collisions were observed in runs 105 (Arena 1) and 505 (Arena 5). The success rate was computed using equation (5-1).

success rate = (number of successful runs, including runs with only minor collisions / total number of runs) x 100    (5-1)

Section 5.1 (Limitations and Errors of the System) explained the reasons for the failures or collisions.

5.3 Performance Indicator 2: Scalability

Scalability tests determine how well the control algorithm performs with varying team sizes. Quite naturally, a configuration with a smaller number of robots has a smaller turning radius than a collection with a large number of robots. Therefore, to eliminate the influence of physical limitations on the performance of the algorithm, this experiment was set up to determine the relationship between the team size and the task completion time while the environment given in Figure 5.17 was kept fixed. Each obstacle is 1 meter in diameter and 2 meters in height, and weighs 100KG.

Figure 5.17: First arena used for the scalability test.

Starting with a 3-robot team, the team size was increased at increments of one, and up to a 12-robot team was tested. The teams were subjected to different loads: starting at 500KG, each team was required to carry loads at increments of 500KG until either the team could not carry the allocated load or the load reached 2500KG. The dimensions of the load were 12 x 12 x 0.25 meters (Length x Width x Height).

To ensure repeatability of the experimental results, each run was repeated three times. Figure 5.18 presents the task completion times for the load size of 500KG. Appendix C tabulates the complete data.

Figure 5.18: Task completion time for a load of 500KG as the number of robots in the team is increased. The line of best fit is shown in the graph. The error bars represent one standard deviation of the completion time.

All teams in all runs transported the object to the goal area without collisions, resulting in a success rate of 100%. The mean and standard deviation of the completion time are 31.8 and 0.59 seconds respectively. It is noted that 80% of the completion times fall within one standard deviation of the mean while 100% of the completion times fall within two standard deviations. The reason for the slight increase in the task completion times for larger teams is the effect of push-and-pull forces from other robots; these forces naturally increase as the number of robots in the team increases. Furthermore, the Mean Absolute Heading Error, which is a holistic measure of all errors of the system, is studied. Ideally, a scalable system is one where the variation in the heading errors is negligible. Please see Section for detailed discussion on the computation of the heading error. Figure 5.19 plots the average heading error for each team size. Results indicate that there is no major variation in the heading error with increasing team size. The mean

and standard deviation of the heading error are 5.53 and 0.26 degrees respectively. It is noted that 90% of the heading errors fall within one standard deviation of the mean while 100% fall within two standard deviations.

Figure 5.19: The heading error as the number of robots in the team is increased. The line of best fit is shown in the graph. The error bars represent one standard deviation of the heading error.

The experiments were repeated using the environment in Figure 5.20. This reasonably complicated arena combines the features of the arenas used to measure the success rate, such as configurations of obstacles that require a team to follow walls and make sharp turns. Starting with a 3-robot team, the team size was increased at increments of three, and up to a 15-robot team was required to carry a 500KG load. Each obstacle is 1 meter in diameter and 2 meters in height, and weighs 100KG. The dimensions of the load were 12 x 12 x 0.25 meters (Length x Width x Height). All runs were completed without collisions, resulting in a success rate of 100%. Figure 5.21 plots the task completion times and heading errors. It is noted that 100% of the task completion times and the heading errors were within one standard deviation of the mean. The mean and standard deviation of the completion time are 40.3 seconds and 1 second respectively, while the mean and standard deviation of the heading error are 6.41 and 1.34 degrees respectively.
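The "fraction of samples within k standard deviations of the mean" figures quoted throughout this section can be computed as in the following sketch. The sample values here are hypothetical stand-ins, not the Appendix C data:

```python
import math

# Hypothetical per-team-size mean completion times (seconds), standing in
# for the data summarised in Figure 5.18.
times = [31.2, 31.5, 31.6, 31.7, 31.8, 31.9, 32.0, 32.1, 32.3, 32.9]

def fraction_within(samples, k):
    """Fraction of samples within k population standard deviations of the mean."""
    n = len(samples)
    mean = sum(samples) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in samples) / n)
    return sum(1 for x in samples if abs(x - mean) <= k * std) / n
```

For this hypothetical sample, 80% of the values fall within one standard deviation and all of them within three, mirroring the shape of the reported scalability statistics.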

Figure 5.20: Second arena used for the scalability test.

Figure 5.21: (a) The task completion time for different team sizes; (b) the heading error for different team sizes (load size of 500KG). The error bars represent one standard deviation of the completion time and heading error in the respective graphs.

The results of the two experiments indicate that our cooperative control algorithm is reasonably scalable, that is, the overwhelming majority of the completion times and heading errors are within one standard deviation of the respective means. The slight increases in completion times and heading errors for larger teams are not unexpected: as the team size is increased, the effects of the push-and-pull forces naturally increase, causing slightly higher heading errors and completion times for larger teams.

5.4 Performance Indicator 3: Robustness

Robustness of the cooperative control measures its ability to handle robot failures, that is, the control's fault tolerance. Our aim is to design a controller that enables the team to transport the object to the goal location even if some robots fail. With the increasing quality of hardware, the possibility of failure is low. However, there is a possibility of robots failing to either communicate with their neighbours or control their motion. Therefore, the necessary tests were constructed to study the robustness of the proposed cooperative control. The two types of hardware failures tested were:

1. Motion Control Failure This failure hinders the robot from steering itself and applying engine force. It is assumed that the wheels of the robot become free when the robot fails. However, the robot can still communicate with its neighbours.

2. Communication Failure This failure disables inter-robot communication, that is, a robot is not able to communicate with its neighbours. However, the robot is still able to control its motion.

The proposed control was first subjected to Motion Control Failure only, and then to Motion Control Failure combined with Communication Failure. The experiment started with a team of six robots transporting an object from the start to the goal location shown in Figure 5.20, which is one of the arenas that was used for the scalability tests.
Using the 6-robot team, one or both of the hardware failures stated above were induced in one robot and then the experiment was re-run. In this manner, up to three robots were failed. The team could not complete the transportation task if more than three robots failed.
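The failure-injection procedure can be sketched as below. The class and function names are illustrative assumptions, not the simulator's API; the final predicate simply encodes the observation above that the 6-robot team could tolerate at most three failed robots:

```python
from dataclasses import dataclass

@dataclass
class Robot:
    """Minimal robot model for failure-injection tests (illustrative only)."""
    robot_id: int
    can_move: bool = True          # cleared by a Motion Control Failure
    can_communicate: bool = True   # cleared by a Communication Failure

def inject_motion_failure(robot):
    # Wheels become free: the robot can no longer steer or apply engine
    # force, but it can still talk to its neighbours.
    robot.can_move = False

def inject_communication_failure(robot):
    # The robot can still drive, but drops out of the negotiation.
    robot.can_communicate = False

def team_can_complete(team, max_failed=3):
    """Encodes the observed tolerance: the task completes only while the
    number of (partially or fully) failed robots stays at or below three."""
    failed = sum(1 for r in team if not (r.can_move and r.can_communicate))
    return failed <= max_failed

team = [Robot(i) for i in range(6)]
for robot in team[:3]:
    inject_motion_failure(robot)  # fail up to three robots, as in runs 702-704
```

Re-running the transport experiment after each injection, as done here conceptually, yields the trajectories and completion times reported in the following subsections.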

5.4.1 Performance with Motion Control Failure

The first set of experiments tested for motion control failure only. In this failure, the robot could communicate with its neighbours but could not control its motion. The results, which are reported in Figure 5.22, show the paths taken by the 6-robot team as the number of robots with motion control failure increases. Run 701 is the control experiment in which all six robots are fully functional. The graphs show the path taken by the team together with the start and finish positions and orientations, shown using solid rectangles; the intermediate positions and orientations are shown using dashed grey rectangles at intervals of three hundred control steps. The top right-hand corner of each graph shows the number of failed robots in the 6-robot team and the number of collisions experienced by the team, while the top left-hand corner gives each experiment a unique run number.

Figure 5.22: Trajectory taken by the 6-robot team as the number of robots with motion control failure increases. Panels: run 701 (0 failed, 1:32 min, 0 collisions), run 702 (1 failed, 2:51 min, 1 collision), run 703 (2 failed, 3:10 min, 3 collisions) and run 704 (3 failed, 3:38 min, 2 collisions).

Figure 5.23(a) shows the time taken by the 6-robot team to complete the transportation task as the number of robots with motion control failure increased. There is a clear and almost linear increase in the completion time, indicating that as the number of robots with motion control failure increases, it takes the team longer to complete the task. This increase in time is expected: each robot that fails must be dragged by the other robots during the transportation task, which not only slows down the team but also causes it to take a different path from a team with all six robots functioning perfectly.

Figure 5.23: (a) Time taken by the 6-robot team to complete the task as the number of failed robots increases; (b) the number of collisions experienced by the 6-robot team as the number of failed robots increases.

Figure 5.23(b) shows that there were collisions as robots failed. There are two important points that need to be highlighted regarding these collisions:

1. In run 701, the team is already travelling very close to the obstacle towards the end of the arena. This is expected as the arena presents a challenging path for the team to take. With even a single robot failing, the other robots have to drag the failed robot while carrying the object. As such, it is expected that there will be deviation between run 701 and runs 702 to 704; a slight deviation causes minor collisions with the obstacles.

2. Even the slightest contact with the obstacle was counted as a collision. In most cases, the minor collisions are ignorable since they do not stop the transportation task and have no or very little possibility of causing any damage to the robots.

Success Rate with Motion Control Failure

Although collisions were observed, there was a specific emphasis on major collisions, that is, the collisions that stop the team from completing the task.

Success Rate with Minor Collisions

Equation (5-1) was used to compute this success rate:

success rate = (number of successful runs, including runs with only minor collisions / total number of runs) x 100    (5-1)

Each experiment was repeated three times. Therefore, to measure the success rate, twelve runs were completed.

Performance with Motion Control and Communication Failures

This section studies robots experiencing both motion control and communication failures. The experiments were conducted in the same way as the experiments conducted using the 6-robot team for the motion control failure. Figure 5.24 shows the path and time taken by the team as the number of robots experiencing complete failure increased.

Figure 5.25 presents the task completion times and the number of collisions that the team experienced. There is an almost linear increase in the completion time as the number of failed robots increases. The collisions reported in these experiments are also minor.

Success Rate with Minor Collisions

The success rate with minor collisions was again computed using equation (5-1):

success rate = (number of successful runs, including runs with only minor collisions / total number of runs) x 100    (5-1)

Figure 5.24: Trajectory taken by the 6-robot team with an increasing number of robots that are unable to control their motion and communicate with their neighbours. Panels: run 801 (0 failed, 1:32 min, 0 collisions), run 802 (1 failed, 3:29 min, 3 collisions), run 803 (2 failed, 3:15 min, 1 collision) and run 804 (3 failed, 3:51 min, 2 collisions).


More information

Learning and Using Models of Kicking Motions for Legged Robots

Learning and Using Models of Kicking Motions for Legged Robots Learning and Using Models of Kicking Motions for Legged Robots Sonia Chernova and Manuela Veloso Computer Science Department Carnegie Mellon University Pittsburgh, PA 15213 {soniac, mmv}@cs.cmu.edu Abstract

More information

ARDUINO BASED WATER LEVEL MONITOR- ING AND CONTROL VIA CAN BUS TUAN ABU BAKAR BIN TUAN ISMAIL UNIVERSITI MALAYSIA PAHANG

ARDUINO BASED WATER LEVEL MONITOR- ING AND CONTROL VIA CAN BUS TUAN ABU BAKAR BIN TUAN ISMAIL UNIVERSITI MALAYSIA PAHANG ARDUINO BASED WATER LEVEL MONITOR- ING AND CONTROL VIA CAN BUS TUAN ABU BAKAR BIN TUAN ISMAIL UNIVERSITI MALAYSIA PAHANG ARDUINO BASED WATER LEVEL MONITORING AND CONTROL VIA CAN BUS TUAN ABU BAKAR BIN

More information

Fuzzy Logic Based Robot Navigation In Uncertain Environments By Multisensor Integration

Fuzzy Logic Based Robot Navigation In Uncertain Environments By Multisensor Integration Proceedings of the 1994 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MF1 94) Las Vega, NV Oct. 2-5, 1994 Fuzzy Logic Based Robot Navigation In Uncertain

More information

A Taxonomy of Multirobot Systems

A Taxonomy of Multirobot Systems A Taxonomy of Multirobot Systems ---- Gregory Dudek, Michael Jenkin, and Evangelos Milios in Robot Teams: From Diversity to Polymorphism edited by Tucher Balch and Lynne E. Parker published by A K Peters,

More information

Imperfect Monitoring in Multi-agent Opportunistic Channel Access

Imperfect Monitoring in Multi-agent Opportunistic Channel Access Imperfect Monitoring in Multi-agent Opportunistic Channel Access Ji Wang Thesis submitted to the Faculty of the Virginia Polytechnic Institute and State University in partial fulfillment of the requirements

More information

Multi-Agent Planning

Multi-Agent Planning 25 PRICAI 2000 Workshop on Teams with Adjustable Autonomy PRICAI 2000 Workshop on Teams with Adjustable Autonomy Position Paper Designing an architecture for adjustably autonomous robot teams David Kortenkamp

More information

Multi-threat containment with dynamic wireless neighborhoods

Multi-threat containment with dynamic wireless neighborhoods Rochester Institute of Technology RIT Scholar Works Theses Thesis/Dissertation Collections 5-1-2008 Multi-threat containment with dynamic wireless neighborhoods Nathan Ransom Follow this and additional

More information

STRESS DETECTION USING GALVANIC SKIN RESPONSE SHAHNAZ SAKINAH BINTI SHAIFUL BAHRI UNIVERSITI MALAYSIA PAHANG

STRESS DETECTION USING GALVANIC SKIN RESPONSE SHAHNAZ SAKINAH BINTI SHAIFUL BAHRI UNIVERSITI MALAYSIA PAHANG STRESS DETECTION USING GALVANIC SKIN RESPONSE SHAHNAZ SAKINAH BINTI SHAIFUL BAHRI UNIVERSITI MALAYSIA PAHANG STRESS DETECTION USING GALVANIC SKIN RESPONSE SHAHNAZ SAKINAH BINTI SHAIFUL BAHRI This thesis

More information

CARMA: Complete Autonomous Responsible Management Agent (System)

CARMA: Complete Autonomous Responsible Management Agent (System) University of Technology, Sydney Faculty of Engineering and Information Technology CARMA: Complete Autonomous Responsible Management Agent (System) Submitted by: Haydn Mearns BE (Soft.) 2012 Principal

More information

Overview of Challenges in the Development of Autonomous Mobile Robots. August 23, 2011

Overview of Challenges in the Development of Autonomous Mobile Robots. August 23, 2011 Overview of Challenges in the Development of Autonomous Mobile Robots August 23, 2011 What is in a Robot? Sensors Effectors and actuators (i.e., mechanical) Used for locomotion and manipulation Controllers

More information

Path Planning and Obstacle Avoidance for Boe Bot Mobile Robot

Path Planning and Obstacle Avoidance for Boe Bot Mobile Robot Path Planning and Obstacle Avoidance for Boe Bot Mobile Robot Mohamed Ghorbel 1, Lobna Amouri 1, Christian Akortia Hie 1 Institute of Electronics and Communication of Sfax (ISECS) ATMS-ENIS,University

More information

Formation and Cooperation for SWARMed Intelligent Robots

Formation and Cooperation for SWARMed Intelligent Robots Formation and Cooperation for SWARMed Intelligent Robots Wei Cao 1 Yanqing Gao 2 Jason Robert Mace 3 (West Virginia University 1 University of Arizona 2 Energy Corp. of America 3 ) Abstract This article

More information

TIME- OPTIMAL CONVERGECAST IN SENSOR NETWORKS WITH MULTIPLE CHANNELS

TIME- OPTIMAL CONVERGECAST IN SENSOR NETWORKS WITH MULTIPLE CHANNELS TIME- OPTIMAL CONVERGECAST IN SENSOR NETWORKS WITH MULTIPLE CHANNELS A Thesis by Masaaki Takahashi Bachelor of Science, Wichita State University, 28 Submitted to the Department of Electrical Engineering

More information

Introduction. Introduction ROBUST SENSOR POSITIONING IN WIRELESS AD HOC SENSOR NETWORKS. Smart Wireless Sensor Systems 1

Introduction. Introduction ROBUST SENSOR POSITIONING IN WIRELESS AD HOC SENSOR NETWORKS. Smart Wireless Sensor Systems 1 ROBUST SENSOR POSITIONING IN WIRELESS AD HOC SENSOR NETWORKS Xiang Ji and Hongyuan Zha Material taken from Sensor Network Operations by Shashi Phoa, Thomas La Porta and Christopher Griffin, John Wiley,

More information

Evolving Robot Empathy through the Generation of Artificial Pain in an Adaptive Self-Awareness Framework for Human-Robot Collaborative Tasks

Evolving Robot Empathy through the Generation of Artificial Pain in an Adaptive Self-Awareness Framework for Human-Robot Collaborative Tasks Evolving Robot Empathy through the Generation of Artificial Pain in an Adaptive Self-Awareness Framework for Human-Robot Collaborative Tasks Muh Anshar Faculty of Engineering and Information Technology

More information

Distributed Collaborative Path Planning in Sensor Networks with Multiple Mobile Sensor Nodes

Distributed Collaborative Path Planning in Sensor Networks with Multiple Mobile Sensor Nodes 7th Mediterranean Conference on Control & Automation Makedonia Palace, Thessaloniki, Greece June 4-6, 009 Distributed Collaborative Path Planning in Sensor Networks with Multiple Mobile Sensor Nodes Theofanis

More information

Collective Locomotion

Collective Locomotion Pierre Arnaud Laboratoire de Micro-Informatique, LAMI DI EPFL Swiss Federal Institute of Technology, CH-1015 Lausanne Pierre.Arnaud@di.epfl.ch http://diwww.epfl.ch/lami/team/arnaud/ 0. Abstract In this

More information

Chapter 2 Distributed Consensus Estimation of Wireless Sensor Networks

Chapter 2 Distributed Consensus Estimation of Wireless Sensor Networks Chapter 2 Distributed Consensus Estimation of Wireless Sensor Networks Recently, consensus based distributed estimation has attracted considerable attention from various fields to estimate deterministic

More information

MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT

MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT F. TIECHE, C. FACCHINETTI and H. HUGLI Institute of Microtechnology, University of Neuchâtel, Rue de Tivoli 28, CH-2003

More information

Hybrid Neuro-Fuzzy System for Mobile Robot Reactive Navigation

Hybrid Neuro-Fuzzy System for Mobile Robot Reactive Navigation Hybrid Neuro-Fuzzy ystem for Mobile Robot Reactive Navigation Ayman A. AbuBaker Assistance Prof. at Faculty of Information Technology, Applied cience University, Amman- Jordan, a_abubaker@asu.edu.jo. ABTRACT

More information

Intro to Intelligent Robotics EXAM Spring 2008, Page 1 of 9

Intro to Intelligent Robotics EXAM Spring 2008, Page 1 of 9 Intro to Intelligent Robotics EXAM Spring 2008, Page 1 of 9 Student Name: Student ID # UOSA Statement of Academic Integrity On my honor I affirm that I have neither given nor received inappropriate aid

More information

Task Allocation: Role Assignment. Dr. Daisy Tang

Task Allocation: Role Assignment. Dr. Daisy Tang Task Allocation: Role Assignment Dr. Daisy Tang Outline Multi-robot dynamic role assignment Task Allocation Based On Roles Usually, a task is decomposed into roleseither by a general autonomous planner,

More information

Introduction to Robotics

Introduction to Robotics - Lecture 13 Jianwei Zhang, Lasse Einig [zhang, einig]@informatik.uni-hamburg.de University of Hamburg Faculty of Mathematics, Informatics and Natural Sciences Technical Aspects of Multimodal Systems July

More information

Sorting in Swarm Robots Using Communication-Based Cluster Size Estimation

Sorting in Swarm Robots Using Communication-Based Cluster Size Estimation Sorting in Swarm Robots Using Communication-Based Cluster Size Estimation Hongli Ding and Heiko Hamann Department of Computer Science, University of Paderborn, Paderborn, Germany hongli.ding@uni-paderborn.de,

More information

Gregory Bock, Brittany Dhall, Ryan Hendrickson, & Jared Lamkin Project Advisors: Dr. Jing Wang & Dr. In Soo Ahn Department of Electrical and Computer

Gregory Bock, Brittany Dhall, Ryan Hendrickson, & Jared Lamkin Project Advisors: Dr. Jing Wang & Dr. In Soo Ahn Department of Electrical and Computer Gregory Bock, Brittany Dhall, Ryan Hendrickson, & Jared Lamkin Project Advisors: Dr. Jing Wang & Dr. In Soo Ahn Department of Electrical and Computer Engineering March 1 st, 2016 Outline 2 I. Introduction

More information

Navigation of an Autonomous Underwater Vehicle in a Mobile Network

Navigation of an Autonomous Underwater Vehicle in a Mobile Network Navigation of an Autonomous Underwater Vehicle in a Mobile Network Nuno Santos, Aníbal Matos and Nuno Cruz Faculdade de Engenharia da Universidade do Porto Instituto de Sistemas e Robótica - Porto Rua

More information

1) Complexity, Emergence & CA (sb) 2) Fractals and L-systems (sb) 3) Multi-agent systems (vg) 4) Swarm intelligence (vg) 5) Artificial evolution (vg)

1) Complexity, Emergence & CA (sb) 2) Fractals and L-systems (sb) 3) Multi-agent systems (vg) 4) Swarm intelligence (vg) 5) Artificial evolution (vg) 1) Complexity, Emergence & CA (sb) 2) Fractals and L-systems (sb) 3) Multi-agent systems (vg) 4) Swarm intelligence (vg) 5) Artificial evolution (vg) 6) Virtual Ecosystems & Perspectives (sb) Inspired

More information

New task allocation methods for robotic swarms

New task allocation methods for robotic swarms New task allocation methods for robotic swarms F. Ducatelle, A. Förster, G.A. Di Caro and L.M. Gambardella Abstract We study a situation where a swarm of robots is deployed to solve multiple concurrent

More information

Biological Inspirations for Distributed Robotics. Dr. Daisy Tang

Biological Inspirations for Distributed Robotics. Dr. Daisy Tang Biological Inspirations for Distributed Robotics Dr. Daisy Tang Outline Biological inspirations Understand two types of biological parallels Understand key ideas for distributed robotics obtained from

More information

Robot Autonomy Project Final Report Multi-Robot Motion Planning In Tight Spaces

Robot Autonomy Project Final Report Multi-Robot Motion Planning In Tight Spaces 16-662 Robot Autonomy Project Final Report Multi-Robot Motion Planning In Tight Spaces Aum Jadhav The Robotics Institute Carnegie Mellon University Pittsburgh, PA 15213 ajadhav@andrew.cmu.edu Kazu Otani

More information

An Intuitional Method for Mobile Robot Path-planning in a Dynamic Environment

An Intuitional Method for Mobile Robot Path-planning in a Dynamic Environment An Intuitional Method for Mobile Robot Path-planning in a Dynamic Environment Ching-Chang Wong, Hung-Ren Lai, and Hui-Chieh Hou Department of Electrical Engineering, Tamkang University Tamshui, Taipei

More information

Robot Team Formation Control using Communication "Throughput Approach"

Robot Team Formation Control using Communication Throughput Approach University of Denver Digital Commons @ DU Electronic Theses and Dissertations Graduate Studies 1-1-2013 Robot Team Formation Control using Communication "Throughput Approach" FatmaZahra Ahmed BenHalim

More information

Moving Path Planning Forward

Moving Path Planning Forward Moving Path Planning Forward Nathan R. Sturtevant Department of Computer Science University of Denver Denver, CO, USA sturtevant@cs.du.edu Abstract. Path planning technologies have rapidly improved over

More information

E190Q Lecture 15 Autonomous Robot Navigation

E190Q Lecture 15 Autonomous Robot Navigation E190Q Lecture 15 Autonomous Robot Navigation Instructor: Chris Clark Semester: Spring 2014 1 Figures courtesy of Probabilistic Robotics (Thrun et. Al.) Control Structures Planning Based Control Prior Knowledge

More information

Path Following and Obstacle Avoidance Fuzzy Controller for Mobile Indoor Robots

Path Following and Obstacle Avoidance Fuzzy Controller for Mobile Indoor Robots Path Following and Obstacle Avoidance Fuzzy Controller for Mobile Indoor Robots Mousa AL-Akhras, Maha Saadeh, Emad AL Mashakbeh Computer Information Systems Department King Abdullah II School for Information

More information

AGENTS AND AGREEMENT TECHNOLOGIES: THE NEXT GENERATION OF DISTRIBUTED SYSTEMS

AGENTS AND AGREEMENT TECHNOLOGIES: THE NEXT GENERATION OF DISTRIBUTED SYSTEMS AGENTS AND AGREEMENT TECHNOLOGIES: THE NEXT GENERATION OF DISTRIBUTED SYSTEMS Vicent J. Botti Navarro Grupo de Tecnología Informática- Inteligencia Artificial Departamento de Sistemas Informáticos y Computación

More information

ROBOTICS ENG YOUSEF A. SHATNAWI INTRODUCTION

ROBOTICS ENG YOUSEF A. SHATNAWI INTRODUCTION ROBOTICS INTRODUCTION THIS COURSE IS TWO PARTS Mobile Robotics. Locomotion (analogous to manipulation) (Legged and wheeled robots). Navigation and obstacle avoidance algorithms. Robot Vision Sensors and

More information

Université Libre de Bruxelles

Université Libre de Bruxelles Université Libre de Bruxelles Institut de Recherches Interdisciplinaires et de Développements en Intelligence Artificielle Look out! : Socially-Mediated Obstacle Avoidance in Collective Transport Eliseo

More information

Pedestrian Navigation System Using. Shoe-mounted INS. By Yan Li. A thesis submitted for the degree of Master of Engineering (Research)

Pedestrian Navigation System Using. Shoe-mounted INS. By Yan Li. A thesis submitted for the degree of Master of Engineering (Research) Pedestrian Navigation System Using Shoe-mounted INS By Yan Li A thesis submitted for the degree of Master of Engineering (Research) Faculty of Engineering and Information Technology University of Technology,

More information

Distributed Vision System: A Perceptual Information Infrastructure for Robot Navigation

Distributed Vision System: A Perceptual Information Infrastructure for Robot Navigation Distributed Vision System: A Perceptual Information Infrastructure for Robot Navigation Hiroshi Ishiguro Department of Information Science, Kyoto University Sakyo-ku, Kyoto 606-01, Japan E-mail: ishiguro@kuis.kyoto-u.ac.jp

More information

A MARINE FAULTS TOLERANT CONTROL SYSTEM BASED ON INTELLIGENT MULTI-AGENTS

A MARINE FAULTS TOLERANT CONTROL SYSTEM BASED ON INTELLIGENT MULTI-AGENTS A MARINE FAULTS TOLERANT CONTROL SYSTEM BASED ON INTELLIGENT MULTI-AGENTS Tianhao Tang and Gang Yao Department of Electrical & Control Engineering, Shanghai Maritime University 1550 Pudong Road, Shanghai,

More information

Correcting Odometry Errors for Mobile Robots Using Image Processing

Correcting Odometry Errors for Mobile Robots Using Image Processing Correcting Odometry Errors for Mobile Robots Using Image Processing Adrian Korodi, Toma L. Dragomir Abstract - The mobile robots that are moving in partially known environments have a low availability,

More information

Traffic Control for a Swarm of Robots: Avoiding Target Congestion

Traffic Control for a Swarm of Robots: Avoiding Target Congestion Traffic Control for a Swarm of Robots: Avoiding Target Congestion Leandro Soriano Marcolino and Luiz Chaimowicz Abstract One of the main problems in the navigation of robotic swarms is when several robots

More information

A Reactive Collision Avoidance Approach for Mobile Robot in Dynamic Environments

A Reactive Collision Avoidance Approach for Mobile Robot in Dynamic Environments A Reactive Collision Avoidance Approach for Mobile Robot in Dynamic Environments Tang S. H. and C. K. Ang Universiti Putra Malaysia (UPM), Malaysia Email: saihong@eng.upm.edu.my, ack_kit@hotmail.com D.

More information

Holland, Jane; Griffith, Josephine; O'Riordan, Colm.

Holland, Jane; Griffith, Josephine; O'Riordan, Colm. Provided by the author(s) and NUI Galway in accordance with publisher policies. Please cite the published version when available. Title An evolutionary approach to formation control with mobile robots

More information

APPLICATION OF FUZZY BEHAVIOR COORDINATION AND Q LEARNING IN ROBOT NAVIGATION

APPLICATION OF FUZZY BEHAVIOR COORDINATION AND Q LEARNING IN ROBOT NAVIGATION APPLICATION OF FUZZY BEHAVIOR COORDINATION AND Q LEARNING IN ROBOT NAVIGATION Handy Wicaksono 1, Prihastono 2, Khairul Anam 3, Rusdhianto Effendi 4, Indra Adji Sulistijono 5, Son Kuswadi 6, Achmad Jazidie

More information

First steps towards a mereo-operandi theory for a system feature-based architecting of cyber-physical systems

First steps towards a mereo-operandi theory for a system feature-based architecting of cyber-physical systems First steps towards a mereo-operandi theory for a system feature-based architecting of cyber-physical systems Shahab Pourtalebi, Imre Horváth, Eliab Z. Opiyo Faculty of Industrial Design Engineering Delft

More information

Review of Soft Computing Techniques used in Robotics Application

Review of Soft Computing Techniques used in Robotics Application International Journal of Information and Computation Technology. ISSN 0974-2239 Volume 3, Number 3 (2013), pp. 101-106 International Research Publications House http://www. irphouse.com /ijict.htm Review

More information

Learning and Interacting in Human Robot Domains

Learning and Interacting in Human Robot Domains IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS PART A: SYSTEMS AND HUMANS, VOL. 31, NO. 5, SEPTEMBER 2001 419 Learning and Interacting in Human Robot Domains Monica N. Nicolescu and Maja J. Matarić

More information

STRATEGO EXPERT SYSTEM SHELL

STRATEGO EXPERT SYSTEM SHELL STRATEGO EXPERT SYSTEM SHELL Casper Treijtel and Leon Rothkrantz Faculty of Information Technology and Systems Delft University of Technology Mekelweg 4 2628 CD Delft University of Technology E-mail: L.J.M.Rothkrantz@cs.tudelft.nl

More information

Wireless Robust Robots for Application in Hostile Agricultural. environment.

Wireless Robust Robots for Application in Hostile Agricultural. environment. Wireless Robust Robots for Application in Hostile Agricultural Environment A.R. Hirakawa, A.M. Saraiva, C.E. Cugnasca Agricultural Automation Laboratory, Computer Engineering Department Polytechnic School,

More information

AN HYBRID LOCOMOTION SERVICE ROBOT FOR INDOOR SCENARIOS 1

AN HYBRID LOCOMOTION SERVICE ROBOT FOR INDOOR SCENARIOS 1 AN HYBRID LOCOMOTION SERVICE ROBOT FOR INDOOR SCENARIOS 1 Jorge Paiva Luís Tavares João Silva Sequeira Institute for Systems and Robotics Institute for Systems and Robotics Instituto Superior Técnico,

More information

Distributed Task Allocation in Swarms. of Robots

Distributed Task Allocation in Swarms. of Robots Distributed Task Allocation in Swarms Aleksandar Jevtić Robosoft Technopole d'izarbel, F-64210 Bidart, France of Robots Diego Andina Group for Automation in Signals and Communications E.T.S.I.T.-Universidad

More information

A User Friendly Software Framework for Mobile Robot Control

A User Friendly Software Framework for Mobile Robot Control A User Friendly Software Framework for Mobile Robot Control Jesse Riddle, Ryan Hughes, Nathaniel Biefeld, and Suranga Hettiarachchi Computer Science Department, Indiana University Southeast New Albany,

More information

Biologically-inspired Autonomic Wireless Sensor Networks. Haoliang Wang 12/07/2015

Biologically-inspired Autonomic Wireless Sensor Networks. Haoliang Wang 12/07/2015 Biologically-inspired Autonomic Wireless Sensor Networks Haoliang Wang 12/07/2015 Wireless Sensor Networks A collection of tiny and relatively cheap sensor nodes Low cost for large scale deployment Limited

More information