Robots in the Loop: Supporting an Incremental Simulation-based Design Process

Xiaolin Hu, Computer Science Department, Georgia State University, Atlanta, GA, USA
Narayanaswami Ganapathy, Bernard P. Zeigler, Arizona Center for Integrative Modeling and Simulation, University of Arizona, Tucson, AZ, USA. {narayang, zeigler}@ece.arizona.edu

Abstract: This paper presents the results of applying an incremental simulation-based design process to study a robotic convoy system. Robot-in-the-loop simulation, as a major step in this process, allows the system to be measured with combined robot models and real robots. This capability effectively bridges the gap between conventional simulation, where only models are used, and real system experiment, where real robots are used. For each step in this incremental process, the simulation/experiment setup is described. The measurement data are then presented and compared. These experiments and results demonstrate the capabilities of robot-in-the-loop simulation and justify the effectiveness of the incremental simulation-based design process.

Keywords: Robot-in-the-loop simulation, Incremental Design Process, Robotic Convoy, DEVS

1 Introduction

Distributed robotic systems usually include a large number of robots that communicate with each other to achieve coordination. Due to the complexity of these systems, verification and evaluation are important in the design process to check whether the system under development exhibits correct behaviors and achieves the desired performance. Traditionally, simulation plays an important role from this perspective. However, this role is typically confined to the model world on computers. When real system components, such as real robots, are brought into the development, the simulation models quickly become outdated and are hardly reused. Instead, real system experiments are carried out in which real robots are tested in a real field. This transition from simulation models to real system components is a necessary step. Unfortunately, it is rarely smooth because discrepancies exist between simulation models and real system components. For example, a robot model used in simulation may not capture well (or it may be very hard to capture) the mechanical dynamics of a real robot. This difference between simulation models and real system components results in a gap between simulation-based study and real system experiment. Such a gap is significant for large-scale multi-robot systems that operate in challenging environments.

To smooth the transition from conventional simulation to real system experiment and to bridge the gap between them, we developed a simulation-based virtual environment that allows combined real robots and virtual robot models to work together [1]. We call this capability of including real robots in simulation robot-in-the-loop (RIL) simulation. RIL simulation brings simulation-based study one step closer to reality and allows system-wide tests to be carried out using combined models and real robots. This is especially useful for large-scale cooperative robotic systems whose complexity and scalability severely limit experimentation in a physical environment using all real robots. For large-scale cooperative robotic systems that include hundreds of robots, RIL simulation makes it possible to conduct system-wide tests and measurements without waiting for all real robots to be available, because the rest of the robots can be provided by the simulation-based virtual environment.
RIL simulation complements conventional simulation and real system experiment to form an incremental measurement process that includes three steps: conventional simulation, RIL simulation, and real system experiment. As the process proceeds, the system under development evolves from all models, to combined models and real robots, to all real robots. In this paper, we show how this process is applied to a cooperative robotic convoy system. The setup of each step is described, and simulation/experimental results are presented to demonstrate the feasibility and effectiveness of this process. This research is an extension of our previous work on model continuity [2] and the simulation-based virtual environment [1]. The models and simulation environment that were developed are based on the Discrete Event System Specification (DEVS) modeling and simulation framework [3].

The rest of this paper is organized as follows. Section 2 presents the incremental measurement process. Section 3 describes the robotic convoy example. Section 4 describes the simulation/experiment setups and presents the measurement results. Section 5 discusses this research and provides some future research directions.

2 Incremental Measurement Process

An incremental measurement process is formed by integrating RIL simulation, which allows combined robot models and real robots to work together. This process includes three steps: conventional simulation, RIL simulation, and real robot experiment. Figure 1 illustrates this process for a system with two robots.

Figure 1: An incremental measurement process — (a) conventional simulation, (b) RIL simulation, (c) real robot experiment

The first step is conventional simulation, where all components are models that are simulated on computers. As shown in Figure 1(a), in this step both robot models are equipped with virtual sensors/actuators (sensor/actuator models, implemented as abstractActivities in DEVS) to interact with an environment model. Couplings between the two robots can be added so they can communicate with each other. This conventional way of simulation has the most flexibility because all components are models, so different configurations can easily be applied to study the system under development.

The second step is RIL simulation, where one or more real robots are included together with other robot models that are simulated on computers. By replacing robot models with real robots, this step brings simulation-based study closer to reality and increases the fidelity of simulation results (this may not hold if the robot models already capture the real robots in sufficient detail). As shown in Figure 1(b), in this step the robot models still use virtual sensors/actuators. However, depending on the study objectives, the real robots may use a combination of virtual sensors/actuators and HIL (hardware-in-the-loop) sensors/actuators. A HIL sensor/actuator, implemented as a HILActivity, acts like a real sensor/actuator, but is also coupled to the environment model to synchronize with it. For example, a HIL motor drives a real robot to move in the real world; meanwhile, it sends messages to the environment model to update the robot's position in the virtual world. More information about HIL sensors/actuators can be found in [1]. In Figure 1(b), the real robot uses a virtual sensor and a HIL actuator. Through the virtual sensor, it gets sensory input from the environment model. Using the HIL actuator, it interacts with a real environment (not shown in the figure) and also synchronizes with the environment model. In RIL simulation, the robot models and the environment model are simulated on computers; the model that controls a real robot runs on that robot itself. Couplings between the two robots are maintained the same as in conventional simulation, so the real and virtual robots interact with each other in the same way as they do in the first step, although here the communication actually happens across a network.

The final step is the real system experiment, where all real robots run in a real physical environment. These robots use sensor/actuator interfaces (implemented as RTActivities) to drive real sensors and actuators. They communicate with each other in the same way as they do in the previous steps because the couplings between them are not changed throughout the process. Since all measurement results of this step come directly from the real system, they have the highest fidelity. However, they are also the most costly and time-consuming to collect.
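
To make these sensor/actuator interfaces concrete, the following Python sketch (our own illustration; the actual system is built with DEVS activities, and these class and method names are assumptions, not the paper's API) shows how a virtual actuator and a HIL actuator could expose the same interface to the decision-making model, with the HIL variant both driving hardware and echoing its motion to the environment model.

    # Illustrative sketch only: the real system uses DEVS activities
    # (abstractActivity, RTActivity, HILActivity); the names below are
    # assumed for exposition and are not the actual API.

    class MotorActivity:
        """Common actuator interface seen by the decision-making model."""
        def move(self, distance_cm, angle_deg):
            raise NotImplementedError

    class VirtualMotor(MotorActivity):
        """Conventional simulation: only updates the environment model."""
        def __init__(self, env_model, robot_id):
            self.env, self.robot_id = env_model, robot_id

        def move(self, distance_cm, angle_deg):
            # The environment model tracks pose so virtual sensors work.
            self.env.update_pose(self.robot_id, distance_cm, angle_deg)

    class HILMotor(MotorActivity):
        """RIL simulation: drives real hardware and synchronizes the
        environment model so virtual robots still 'see' this robot."""
        def __init__(self, env_model, robot_id, hardware):
            self.env, self.robot_id, self.hw = env_model, robot_id, hardware

        def move(self, distance_cm, angle_deg):
            self.hw.drive(distance_cm, angle_deg)                         # real motion
            self.env.update_pose(self.robot_id, distance_cm, angle_deg)  # sync virtual world

Because the decision-making model calls only the shared interface (here, move), swapping the virtual motor for the HIL motor, or later for a purely real-time driver, does not require changing the control logic, which is exactly the model continuity property exploited by the incremental process.
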
As described above, three types of DEVS Activities have been developed to act as sensor/actuator interfaces between a robot's decision-making model and the environment model: the abstract Activity (abstractActivity), the real-time Activity (RTActivity), and the hardware-in-the-loop Activity (HILActivity). These activities play different roles in different situations. An abstractActivity serves as a virtual sensor or actuator that the decision-making model uses to interact with the environment model in simulation. An RTActivity is used in real execution to drive a real sensor or actuator. A HILActivity is employed in RIL simulation to drive a real sensor or actuator while also synchronizing with the environment model. It is important to note that, in order to keep the decision-making model unchanged, the corresponding abstractActivity, RTActivity, and HILActivity must maintain the same set of interface functions used by the decision-making model.

This incremental simulation-based measurement process establishes an operational framework to measure and evaluate cooperative robotic systems. As the process proceeds, the flexibility (ease of setting up different experiments) and productivity (time and cost savings) of the measurement decrease, while the fidelity (faithfulness to reality) of the measurement increases.

3 A Robotic Convoy System

A robotic convoy system has been developed as a case study to illustrate how the incremental measurement process works. This convoy system consists of an indefinite number of robots, say N robots (N > 1). The robots are in a line formation where each robot (except the leader and the last robot) has a front neighbor and a back neighbor. The robots used in this system are car-type mobile robots with wireless communication capability. They can move forward/backward and rotate around their centers, and they have whisker sensors and infrared (IR) sensors. The main goals of this convoy system are to maintain the coherence of the line formation and to synchronize the robots' movements. Synchronization means a robot cannot move forward if its front robot does not move, and it has to wait if its back robot does not catch up. To serve this purpose, synchronization messages are passed between a robot and its neighbors.

To achieve coherence of the line formation, the moving parameters of a front robot are passed backward to its immediate back robot. This allows the back robot to plan its movement based on its front robot's movement. Figure 2 shows the model of this system. Each block represents a robot model. These robot models have input and output ports, which are used to receive/send synchronization messages as well as moving parameters. The model couplings are shown in Figure 2. Figure 3 shows a snapshot of a simulation of a convoy system with 3 robots within a field surrounded by walls (no obstacles inside). The simulations show that the robots do not follow the exact path of the leader robot, but they are able to go after their immediate front robots, thus forming a coherent team from the entire-system point of view.

Figure 2: System model of the robotic convoy system (robot models coupled through their FReadyIn/FReadyOut and BReadyIn/BReadyOut ports)

Figure 3: Snapshot of robots in simulation

During the convoy, the leader robot (robot 1 in Figure 2) decides the path of the convoy. In this example, it moves straight forward if there is no obstacle ahead; otherwise it turns right. All other robots move based on their IR sensory inputs and the moving parameters passed from their front robots. Specifically, a robot first predicts where its front robot is and turns in that direction. It then moves forward (or backward) to catch its front robot. After that it may go through an adjust process to make sure that it follows its front robot. This adjust process is necessary because noise and variance exist during a movement, so a robot may not reach the desired position/direction after a movement. During adjustment, a robot scans around until it finds its front robot. Then it sends out a synchronization message to inform its neighbors. Thus each robot executes a basic predict-and-turn, move, adjust, inform routine in every cycle.

The motivation for designing the above control logic, especially the adjust process, is that there is significant uncertainty and inaccuracy in the movement of the mobile robots we use. To model the robots' motion uncertainty, two noise factors, the distance noise factor (DNF) and the angle noise factor (ANF), were developed and implemented using random numbers. The DNF is the ratio of the maximum distance variance to the robot's desired moving distance. The ANF is the ratio of the maximum angle variance to the robot's desired moving distance. The sensor models capture the limited sensing range of the IR sensors, and all IR sensory data are rounded to the closest multiple of a fixed increment, since this is how the real IR sensors work.

The environment model is responsible for keeping track of the robots' movements and providing sensory data when the robots need it. It includes TimeManager models and a SpaceManager model. A TimeManager models the time for a robot to complete a movement. The SpaceManager models the moving space, including the dimensions, shape, and obstacles inside the field. It also keeps track of the robots' positions and moving directions; such tracking is needed to supply the robots with correct sensory data. A robot's position is updated when the environment model receives moving-command messages from the robot.
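
As a concrete illustration of how the DNF, ANF, and IR quantization could be realized, the Python sketch below perturbs a commanded move with noise proportional to the commanded distance and rounds an IR reading to a discrete step; the uniform noise distribution and the numeric defaults are our own assumptions, not values taken from the paper.

    import random

    def noisy_move(distance_cm, turn_deg, dnf, anf):
        """Perturb a commanded move with the distance/angle noise factors.
        DNF and ANF bound the variance relative to the commanded distance;
        the uniform distribution is an assumption for illustration."""
        actual_distance = distance_cm + random.uniform(-dnf, dnf) * distance_cm
        actual_turn = turn_deg + random.uniform(-anf, anf) * distance_cm
        return actual_distance, actual_turn

    def quantize_ir(reading_cm, ir_step=2.0, max_range_cm=80.0):
        """Clamp an IR reading to the sensing range and round it to the
        sensor's discrete resolution. ir_step and max_range_cm are
        placeholder values, not the paper's actual parameters."""
        clamped = min(reading_cm, max_range_cm)
        return round(clamped / ir_step) * ir_step

Applying such perturbations at every movement step is what makes the adjust phase of the predict-and-turn, move, adjust, inform cycle necessary.
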
4 Simulation/Experiment & Results

Following the incremental study process, simulations and experiments were carried out to measure the robotic convoy system described in Section 3.

In particular, a movie [4] was recorded for a RIL simulation in which two real robots are used. This movie demonstrates the coordination between real robots and robot models. Figure 4 gives a snapshot from this movie, which shows how two real robots work together with robot models. In this movie, four robots (denoted R0, R1, R2, and R3) are used, among which the second and third ones (R1 and R2) are real robots. R1 uses a virtual IR sensor to get sensory input from the environment model. R2 uses a real IR sensor to sense its front robot (R1) and the real environment. As shown in Figure 4, the movie has two windows. The right window shows how the two real robots move in the real world. The left window is the simulation window. It displays the movements of the entire convoy system, in which the second and third robots are the counterparts of the two real robots (R1 and R2). This means the second and third robots' movements in the simulation window are synchronized with the two real robots' movements in the real world. Thus, when a real robot moves backward because it is too close to its front robot, its counterpart in the simulation window moves backward too. Notice that in RIL simulation a counterpart's position (direction) is updated based on the real robot's moving parameters.

Figure 4: Snapshot of RIL simulation

Due to noise, the actual distances that a real robot moves may be different from those specified by the moving parameters. However, these errors are tolerable, since the real robots use wheel encoders to determine whether a desired distance has been reached.

Quantitative results were also collected using the incremental measurement process that includes conventional simulation, RIL simulation, and real system experiment. Below we present the results from conventional simulation and RIL simulation using a convoy system with six robots. We also show the results from a real system experiment using two real robots.

4.1 Simulation and RIL Simulation Setup

Figure 5 shows the convoy system that was studied using conventional simulation. The system includes six robot models, denoted R0 through R5, with R0 being the leader robot. All robot models were equipped with virtual IR sensors and virtual motors to interact with an environment model. The environment model defines a rectangular open space surrounded by walls. The simulation stops when R0 completes one circle.

Figure 5: Conventional simulation with six robots

Figure 6 shows the setup of RIL simulation for the same system shown in Figure 5. The only difference is that the third and fourth robots, R2 and R3 respectively, are real robots. R2 uses a virtual IR sensor to get its sensory input, i.e., the distance to R1, from the environment model. R3 uses a real IR sensor to get its sensory input, i.e., the distance to R2, from the real environment. Both R2 and R3 use motor HILActivities to move on a physical floor (without any obstacles). All other robots are models that use virtual IR sensors and virtual motors.

Figure 6: RIL simulation with six robots

In this system, because each robot's decision making (except R0's) is affected by its immediate front robot, this setup of RIL simulation divides the six robots into three categories.

Category 1 (R0, R1): These two robot models exist in the virtual world. Their decision making is not affected by the fact that two real robots are included in the simulation (R1 needs to wait for the ready message from the real robot, but that is only for synchronization purposes). Because of this, it is expected that the results collected in RIL simulation for these two models will be the same as those collected in conventional simulation.

Category 2 (R2, R3): These two real robots represent two different situations. 1) R2 moves in the real world; however, its immediate front robot is R1, which is a model and is not affected by the two real robots. This means the sensory data, and hence the movement patterns, of R2 in RIL simulation should be the same as (or very similar to) those in conventional simulation. However, due to noise and variance, the actual moving distances of R2 in RIL simulation will differ from those in conventional simulation. 2) R3 makes its decisions solely based on information from the real world, i.e., it receives information from R2 and uses its IR sensors to check whether it follows R2. Because R3 uses its real sensors and follows a real robot, it is expected that the sensory data and the movement patterns of R3 in RIL simulation will be different from those in conventional simulation.

Category 3 (R4, R5): These two robot models exist in the virtual world. However, because R4's immediate front robot is R3, whose behavior changes in RIL simulation, it is expected that the results collected in RIL simulation for R4 will be different from those collected in conventional simulation. Similarly, R5's results change in RIL simulation. In fact, if there were more robots behind R5, their results would change too.
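
The division of robots between the virtual and real worlds can be thought of as a per-robot configuration choice. The sketch below is a hypothetical Python illustration of such a configuration for the six-robot RIL setup; the dictionary layout and field names are our own and are not taken from the paper.

    # Hypothetical RIL configuration for the six-robot convoy described above.
    # "virtual" activities are simulated; "hil" activities drive real hardware
    # while synchronizing with the environment model.
    RIL_SETUP = {
        "R0": {"kind": "model", "ir_sensor": "virtual", "motor": "virtual"},  # leader
        "R1": {"kind": "model", "ir_sensor": "virtual", "motor": "virtual"},
        "R2": {"kind": "real",  "ir_sensor": "virtual", "motor": "hil"},      # senses model R1
        "R3": {"kind": "real",  "ir_sensor": "real",    "motor": "hil"},      # senses real R2
        "R4": {"kind": "model", "ir_sensor": "virtual", "motor": "virtual"},
        "R5": {"kind": "model", "ir_sensor": "virtual", "motor": "virtual"},
    }

    def robots_needing_hardware(setup):
        """List the robots that must be physically present for this RIL run."""
        return [name for name, cfg in setup.items() if cfg["kind"] == "real"]

In this setup only R2 and R3 need physical hardware; the other four convoy members are supplied by the simulation-based virtual environment.
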
4.2 Measurement Metrics and Results

Several measurement metrics were defined, and results were collected in one trial of conventional simulation and two trials of RIL simulation. To analyze these data, for each robot category described above we pick one robot and compare its conventional simulation results (referred to as simulation data) with its RIL simulation results (referred to as RIL data). These results show that the simulation data and RIL data of R0, R1, and R2 are the same (or similar), while R3's simulation data and RIL data are different, and R4's simulation data and RIL data are different too. Note that it is important to differentiate R2 and R3 in the two simulations: in conventional simulation they are models, while in RIL simulation they are real robots.

Number of adjustments per step. In order to follow its front robot, a robot may go through an adjust process after each movement step. The number of adjustments of a robot thus indicates how smoothly this robot convoys. Figure 7 shows the number of adjustments for R1, R3, and R4. In the figure, the horizontal axis represents the movement steps and the vertical axis represents the number of adjustments. As expected, these results show that R1 had the same simulation data and RIL data, but R3's and R4's simulation data and RIL data are different.
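
As an illustration of how this metric could be logged, the sketch below counts scan iterations during the adjust phase of a movement step; the scanning loop, method names, and the max_scans guard are our own simplification of the behavior described in Section 3, not the paper's code.

    def adjust_and_count(robot, max_scans=36):
        """Rotate in small increments until the front robot is detected by the
        IR sensor, returning how many adjustment scans were needed this step.
        A simplified stand-in for the adjust phase described in Section 3."""
        scans = 0
        while not robot.front_robot_in_view() and scans < max_scans:
            robot.rotate_small_step()   # scan around for the front robot
            scans += 1
        return scans

    # Per-step counts such as those plotted in Figure 7 could then be gathered as:
    # adjustments_per_step = [adjust_and_count(robot) for _ in range(num_steps)]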

Figure 7: Number of adjustments

The results presented in Figure 7 show that for R3, the number of adjustments in RIL simulation at some steps is much larger than in conventional simulation. For instance, the number of adjustments at step 17 in one RIL trial is 11, far above the value at the same step in conventional simulation. This comparison of simulation data with RIL data provides useful feedback to the designers: it indicates that the robot models may not model the real robots' movement very well. On the other hand, from RIL simulation we can see that even though R3's number of adjustments becomes large at some steps, its IR distance data (presented in Figure 8) are still stable; for example, even at step 17 of that RIL trial, R3's IR distance remains stable. This information, collected in RIL simulation using real robots, increases the designers' confidence about how the final real system is going to work. Notice that in both of these cases, RIL simulation allows the designers to use only several, instead of all, real robots to gain this knowledge.

IR distance data of each step (after adjustment). We define two metrics to study how coherently this robotic system convoys. One of them is the front IR distance, which is the value returned from a robot's front IR sensor. Figure 8 shows the front IR distance of R2, R3, and R4. As can be seen, R2's IR distance data in simulation and in RIL are the same, but R3's and R4's IR distance data are different. More importantly, Figure 8 shows that even though R3's IR distance changes, the change stays within a bound and does not accumulate as time proceeds (the desired IR distance is set to a fixed value). This information, together with the number of adjustments presented above, indicates that the control model of this robot convoy system is robust.

Figure 8: Front IR distance (cm)

Coherence data. The coherence data is another metric defined to study how coherently the robots convoy. It calculates the difference between a robot's actual position and its desired position (based on its front robot); the formula used for the calculation can be found in [1]. Figure 9 shows the coherence data for R2, R3, and R4. For the reason explained above, R2's coherence data do not change from simulation to RIL, while R3's and R4's coherence data do change.

Figure 9: Coherence data of robots

Figure 10 illustrates the average coherence of the five robots (excluding R0, whose coherence is 0).
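
The exact coherence formula is given in [1] and is not reproduced here; as a hedged illustration, the Python sketch below computes a coherence-style error as the Euclidean distance between a robot's actual position and a desired position located a fixed following distance behind its front robot. The desired-position construction is an assumption for exposition only.

    import math

    def coherence_error(robot_pos, front_pos, front_heading_deg, follow_dist_cm):
        """Distance between the robot's actual position and an assumed desired
        position: a point follow_dist_cm behind its front robot along the front
        robot's heading. Illustrative only; the paper's formula is in [1]."""
        heading = math.radians(front_heading_deg)
        desired = (front_pos[0] - follow_dist_cm * math.cos(heading),
                   front_pos[1] - follow_dist_cm * math.sin(heading))
        return math.hypot(robot_pos[0] - desired[0], robot_pos[1] - desired[1])

    # Example: average coherence over the convoy (excluding the leader),
    # as plotted per step in Figure 10.
    # avg = sum(coherence_error(p, f, h, d) for p, f, h, d in followers) / len(followers)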

It clearly shows that although the average coherence data differ between simulation and RIL, they are still consistent. Similar consistency between simulation data and RIL data can also be seen in Figure 9, for example in the coherence data of R3. This type of consistency among simulation data, RIL data, and real experiment data (presented next) conveys two important messages. First, it provides some level of validation for the simulation models used in conventional simulation. Second, it justifies the model continuity methodology and the incremental simulation-based measurement process that we applied to develop this robotic convoy system.

Figure 10: Average coherence data

4.3 Real System Experiment and Results

Real system experiments were also carried out by applying the same decision-making models to two real robots. In these experiments, the two real robots were placed in a rectangular open field surrounded by walls (boxes). Both robots use real IR sensors and motors to sense and move within the real environment. A movie of one of these experiments can be found in [5]. Measurement data were also collected. Figure 11 shows the number of adjustments and the front IR distance for the second real robot in three trials of experiments. As expected, these results demonstrate similarities to the results from conventional simulation and RIL simulation. For example, the average IR distance data for R3 in conventional simulation, RIL simulation, and real execution are close to one another.

Figure 11: Real system experiment results (three trials)

5 Discussion and Future Work

Both the measurement data, such as the front IR distances, and the recorded movies show the continuity of evolving this robotic convoy system from conventional simulation, to RIL simulation, and then to real system execution. This is because a model continuity methodology is applied in which the same control models are maintained through the different stages of system development. The quantitative results from the simulations/experiments demonstrate the feasibility of carrying out an incremental simulation-based design process by gradually bringing real system components into the design until the system evolves into its final form. This capability is especially useful for large-scale complex systems. It provides an operational framework that supports simulation models and real system components working together for system-wide test and measurement. RIL simulation, as a major step in this process, effectively bridges the gap between conventional simulation and real system experiment. As a prelude to actual implementation, RIL simulation brings simulation-based study closer to reality and increases the confidence that the final system will work as designed. To summarize, this research affords several kinds of flexibility for testing and measuring distributed robotic systems: a) flexibility to study real robots in a virtual environment; b) flexibility to study models based on inputs/interactions from real robots; and c) flexibility to study a large-scale multi-robot system using combined real and virtual robots. We note that the incremental design process presented in this paper is not limited to robotic convoy applications. As one future research task, we plan to apply this process to other application areas.
In addition, we plan to develop experimental frames for each step and to add more complexity to the robotic system to check how effectively this process can handle more complex situations.

References

[1] X. Hu, B. P. Zeigler, "Measuring Cooperative Robotic Systems Using Simulation-Based Virtual Environment," Performance Metrics for Intelligent Systems Workshop, August.
[2] X. Hu, "A Simulation-Based Software Development Methodology for Distributed Real-Time Systems," Ph.D. Dissertation, University of Arizona.
[3] B. P. Zeigler, H. Praehofer, T. G. Kim, Theory of Modeling and Simulation, 2nd Edition, Academic Press, New York, NY, 2000.
[4] Movie of the RIL simulation, SWF movie file, playable using Internet Explorer.
[5] Movie of the real system experiment, SWF movie file, playable using Internet Explorer.


More information

Relationship to theory: This activity involves the motion of bodies under constant velocity.

Relationship to theory: This activity involves the motion of bodies under constant velocity. UNIFORM MOTION Lab format: this lab is a remote lab activity Relationship to theory: This activity involves the motion of bodies under constant velocity. LEARNING OBJECTIVES Read and understand these instructions

More information

Design Lab Fall 2011 Controlling Robots

Design Lab Fall 2011 Controlling Robots Design Lab 2 6.01 Fall 2011 Controlling Robots Goals: Experiment with state machines controlling real machines Investigate real-world distance sensors on 6.01 robots: sonars Build and demonstrate a state

More information

Realistic Robot Simulator Nicolas Ward '05 Advisor: Prof. Maxwell

Realistic Robot Simulator Nicolas Ward '05 Advisor: Prof. Maxwell Realistic Robot Simulator Nicolas Ward '05 Advisor: Prof. Maxwell 2004.12.01 Abstract I propose to develop a comprehensive and physically realistic virtual world simulator for use with the Swarthmore Robotics

More information

The Real-Time Control System for Servomechanisms

The Real-Time Control System for Servomechanisms The Real-Time Control System for Servomechanisms PETR STODOLA, JAN MAZAL, IVANA MOKRÁ, MILAN PODHOREC Department of Military Management and Tactics University of Defence Kounicova str. 65, Brno CZECH REPUBLIC

More information

Path Following and Obstacle Avoidance Fuzzy Controller for Mobile Indoor Robots

Path Following and Obstacle Avoidance Fuzzy Controller for Mobile Indoor Robots Path Following and Obstacle Avoidance Fuzzy Controller for Mobile Indoor Robots Mousa AL-Akhras, Maha Saadeh, Emad AL Mashakbeh Computer Information Systems Department King Abdullah II School for Information

More information

Localization (Position Estimation) Problem in WSN

Localization (Position Estimation) Problem in WSN Localization (Position Estimation) Problem in WSN [1] Convex Position Estimation in Wireless Sensor Networks by L. Doherty, K.S.J. Pister, and L.E. Ghaoui [2] Semidefinite Programming for Ad Hoc Wireless

More information

Blending Human and Robot Inputs for Sliding Scale Autonomy *

Blending Human and Robot Inputs for Sliding Scale Autonomy * Blending Human and Robot Inputs for Sliding Scale Autonomy * Munjal Desai Computer Science Dept. University of Massachusetts Lowell Lowell, MA 01854, USA mdesai@cs.uml.edu Holly A. Yanco Computer Science

More information

Predicting Content Virality in Social Cascade

Predicting Content Virality in Social Cascade Predicting Content Virality in Social Cascade Ming Cheung, James She, Lei Cao HKUST-NIE Social Media Lab Department of Electronic and Computer Engineering Hong Kong University of Science and Technology,

More information

A Day in the Life CTE Enrichment Grades 3-5 mblock Programs Using the Sensors

A Day in the Life CTE Enrichment Grades 3-5 mblock Programs Using the Sensors Activity 1 - Reading Sensors A Day in the Life CTE Enrichment Grades 3-5 mblock Programs Using the Sensors Computer Science Unit This tutorial teaches how to read values from sensors in the mblock IDE.

More information

A NEW SIMULATION FRAMEWORK OF OPERATIONAL EFFECTIVENESS ANALYSIS FOR UNMANNED GROUND VEHICLE

A NEW SIMULATION FRAMEWORK OF OPERATIONAL EFFECTIVENESS ANALYSIS FOR UNMANNED GROUND VEHICLE A NEW SIMULATION FRAMEWORK OF OPERATIONAL EFFECTIVENESS ANALYSIS FOR UNMANNED GROUND VEHICLE 1 LEE JAEYEONG, 2 SHIN SUNWOO, 3 KIM CHONGMAN 1 Senior Research Fellow, Myongji University, 116, Myongji-ro,

More information