SENIOR PROJECT FINAL REPORT TEAM MARRS
Prepared by: Farah Abdel Meguid, Mai Khater, Mennat Allah Ali, Merihan El Hefnawi
Table of Contents

1. Introduction
2. System Overview
   2.1 System Structure
   2.2 Usage Constraints
3. Requirements Revisited
   3.1 Functional Requirements Modifications (Base Station, Guide Robot, Ground Robots)
   3.2 Non-functional Requirement Modifications (Performance Requirements, Reliability, Fault Tolerance, Expandability)
4. Design Modifications
   4.1 Hardware Design Modifications (The Lifting Mechanism, The Wheels)
   4.2 Software Design Modifications (Human Detection, Mission Control)
5. Testing and Evaluation
6. Challenges (Lack of Robot Component Availability, Lack of Workshop Facilities, Human Detection, Different Kinds of Floors)
7. Conclusion and Future Work (More Mobile Guide Robot, More Ground Robots, Using 3D Cameras/Depth Sensors, Stronger Motors)
1. Introduction

Out of the many fields of computer science, we chose to explore embedded systems in our thesis project. This field continues to see massive advancement and research, and it is applied across disciplines such as medical surgery, space exploration, the arts, and robotics. The robotics field in particular has advanced greatly over the past decade. Robots now assist human beings in daily tasks such as cleaning and navigation; they are used in industry, for example in car manufacturing and process control; and other robotics projects have been developed to entertain people, especially children and seniors, equipped with the ability to listen and talk. A new robot is even being developed that will be called the ethical robot: it will be taught how to reason and make decisions concerning ethics and values, something we once thought only human beings could achieve.

Despite the importance of all these fields, we chose to explore robotic systems for rescue missions. Catastrophes and accidents kill thousands of people every year; some are victims of the accidents themselves, and others die while rescuing them. Interest has therefore risen in developing robotic systems that rescue in place of human beings. This area has been successfully implemented and expanded to cover many different environments: there are robots that rescue in case of fire, others that work underwater, in demining, after explosions, and in case of gas leakage. Of these environments, we chose to develop our project for a site where gas has leaked and human rescuers cannot enter.

The spill or leakage of hazardous materials renders sites inaccessible to human rescuers. If, for example, a toxic gas has leaked in a building, certain measures must be taken before rescuers can enter: they must try to identify the type of gas to know how to handle it, make sure there is no potential threat of explosion, and ensure that they are perfectly insulated, all of which causes delay. This lack of timely intervention leads to an increased number of casualties, a loss that can be prevented or reduced if the rescue mission is carried out faster. Many human rescuers also suffocate or get poisoned while working in such environments, and even when no immediate harm is detectable, frequent exposure negatively affects their health in the long run. This is another loss that can be prevented by using rescue robots.

For these reasons, we believe that robots should carry out rescue missions in hazardous environments. This would allow for more accurate
and faster detection and retrieval of survivors and would protect the lives and health of rescuers.

In our thesis project we have therefore built a Multi-Agent Rescue Robotic System (MARRS) that performs rescue in case of a gas leakage. Our target was to build an inexpensive system that performs both discovery and rescue: even if the system finds no survivors inside, it reports back information about the environment, such as the kind of gas and the temperature inside, using sensors added to it.

2. System Overview

2.1 System Structure

Below is a list of our system's main members and the functionality of each.

a. Base Station:
- Launches the mission.
- Terminates the mission.
- Makes sure the system is performing correctly.
- Keeps track of the number of robots that are connected.
- Runs complex neural-network algorithms.
- Locates survivors.
- Receives data about the environment from the Guide Robot.

b. Guide Robot:
- Identifies survivors.
- Identifies which gas has leaked among the family of liquefied petroleum gases.
- Sends survivor locations to the Ground Robots.
- Sends environment data to the Base Station.

c. Ground Robots:
- Rescue the survivors.
- Communicate all their findings to the Guide Robot.

2.2 Usage Constraints

The system operates in an environment where a harmful liquefied petroleum gas (LPG) has leaked. Potential survivors are expected to be found lying on the ground. The environment will also have obstacles that the Ground Robots are expected to overcome in
order to rescue the survivors. Some LPG gases pose an explosion risk, which is why the Guide Robot carries a gas-detecting sensor and the Ground Robots measure the environment temperature.

3. Requirements Revisited

3.1 Functional Requirements Modifications

3.1.1 Base Station

The Base Station is represented in our system by an ordinary PC, which is sufficient to meet the Base Station's requirements and responsibilities: it has enough memory to store data temporarily and permanently, a WiFi module to communicate with the ground robots, and a processor fast enough to run classifiers on incoming frames.

3.1.2 Guide Robot

Given the tasks described in the system structure, the Guide Robot must be able to capture frames as fast as possible and process them. Sufficient battery life is crucial, because if the Guide fails the mission must be aborted. The Guide must also support stable and reliable communication with the rest of the system, as it is the main coordinator between the different components.

3.1.3 Ground Robots

The Ground Robots' requirements are critical, as they work in the field. They:
1. Are able to lift a human body through a lifting mechanism.
2. Can communicate with each other and with the Guide Robot.
3. Can move in all four directions.

3.2 Non-functional Requirement Modifications

3.2.1 Performance Requirements

Performance is a very important aspect of our system; since it is about saving lives, we need very high performance. The classifiers therefore run on the Base Station, which continuously makes sure the system is performing well and is also responsible for detecting humans.
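Since the Guide Robot streams captured frames to the Base Station for classification, some framing of the byte stream is needed so the receiver can split it back into individual images. The report does not specify the wire format; the sketch below is our own assumption of a minimal length-prefixed scheme, with hypothetical helper names.

```python
import struct

def encode_frame(jpeg_bytes: bytes) -> bytes:
    """Prefix a captured frame with its 4-byte big-endian length
    so the receiver knows where one frame ends and the next begins."""
    return struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes

def decode_frames(stream: bytes):
    """Split a received byte stream back into the individual frames."""
    frames, offset = [], 0
    while offset + 4 <= len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        frames.append(stream[offset:offset + length])
        offset += length
    return frames

# Example: two fake "frames" survive a round trip over one stream.
stream = encode_frame(b"frame-1") + encode_frame(b"frame-02")
print(decode_frames(stream))  # [b'frame-1', b'frame-02']
```

Any transport (WiFi sockets between the UDOO board and the PC, in our case) can carry such a stream, since the length prefix removes the need for delimiters inside the image data.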
3.2.2 Reliability

The system must be reliable while executing the search-and-rescue mission. We have therefore installed several layers of human detection, so that the system does not mistake something else for a human and rescue it instead. This is one of the reasons we chose a neural network for human detection: to make the results as accurate as possible.

3.2.3 Fault Tolerance

The system needs to tolerate faults when they happen. That is why it is designed around many inexpensive Ground Robots: if one is destroyed or lost during search or rescue, another can replace it. Backup robots should be kept ready for that purpose.

3.2.4 Expandability

We want our system to be applicable in environments other than gas leaks. This can be achieved by changing some of the sensors; for example, the LPG sensor in the product can be replaced by one that detects nuclear radiation leaks, while the search-and-rescue mechanism stays the same.

4. Design Modifications

4.1 Hardware Design Modifications

Since the beginning of the project, we have made several modifications to the hardware. In this section we go through these decisions, starting with the initial design. The ideal system would consist of 6 robots cooperating to lift a human; our prototype, however, consists of 2 Ground Robots that lift a mannequin once exploration is done and the Guide Robot has detected a human. Before building the robot, we simulated its movement using SolidWorks, a mechanical simulation tool; figure 1 below shows the result. The robot body (the rectangular box shown in figure 1) is made of aluminum bars joined by 90-degree joining plates, with dimensions 40 cm x 30 cm x 20 cm.
The forklift is made of 3 mm aluminum sheet with dimensions 30 cm x 30 cm x 7 cm. It is mounted on linear bearings that slide up and down two aluminum shafts, which are fixed to the aluminum bars via shaft end supports.

Figure 1 Ground Robot Initial Design

The Lifting Mechanism

The initial design of the lifting mechanism was a pulley system. As can be seen in figure 1 above, the pulley system would have moved the forklift up and down the aluminum shafts. It would have required the following components:

- 2 metal chains: these would connect the motors to the forklift to move it up and down the shafts.
- 2 worm-gear motors: the pulleys would have been fixed onto these motors, so that when the motors turn, the pulley system moves up or down depending on the direction of rotation.
- 2 idler pulley kits: placed on top of the robot, as shown in figure 1, with the metal chains fixed to them to facilitate the vertical movement of the forklift.
However, all of these components would have been quite expensive. Also, if they were not strong enough to handle the weight carried by the forklift, the chain might have disconnected from the idler pulleys or the motors, dropping the forklift to the ground. We solved this problem by replacing the pulley system with a linear actuator motor, shown in figure 2 below.

Figure 2 Linear Actuator Motor

This type of motor produces vertical motion instead of the circular motion of conventional motors: when the motor rotates, it drives a rod up and down. All we had to do was connect the linear actuator to the forklift, which we did by attaching a rectangular aluminum sheet between them, as can be seen in figure 3 below.

Figure 3 Forklift and Linear Actuator Motor Connection

The Wheels

The wheels we used were omnidirectional wheels. We chose them because we want the system to move in all directions without changing the body's orientation, as in figure 4 below, keeping it flexible enough to move through narrow spaces such as doors.
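The report does not state the wheel-speed equations for this motion, but for four omni wheels mounted at the middle of each side of the body (as in figure 5), a standard kinematic sketch maps a desired body velocity (vx, vy) and rotation rate omega to the four wheel surface speeds. The layout convention and function name below are our own assumptions.

```python
import math

def omni_wheel_speeds(vx, vy, omega, radius=0.2):
    """Wheel surface speeds for four omni wheels at position angles
    0, 90, 180, 270 degrees around the body (front, left, back, right),
    each rolling along the counterclockwise tangent of the body circle:
        speed_i = -sin(theta_i)*vx + cos(theta_i)*vy + radius*omega
    """
    angles = [0.0, 90.0, 180.0, 270.0]
    return [
        -math.sin(math.radians(a)) * vx
        + math.cos(math.radians(a)) * vy
        + radius * omega
        for a in angles
    ]

# Pure sideways translation along +x: only the left/right wheels turn
# (in opposite directions), so the body slides without rotating.
speeds = omni_wheel_speeds(vx=1.0, vy=0.0, omega=0.0)
```

Feeding a pure rotation (vx = vy = 0) makes all four wheels turn at the same speed, which is exactly the orientation-preserving flexibility described above.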
Figure 4 System Movements

In the simulation, the wheels we intended to use were a specific type of omnidirectional wheel called Mecanum wheels. They are mounted the same way normal car wheels are (one at each corner). The problem with these wheels is that they are not available in Egypt for direct sale; they would have had to be custom-made for us, which makes them very expensive. We solved this problem by altering the robot design to use ordinary omnidirectional wheels, fixed at the middle of each side of the robot as shown in figure 5 below.

Figure 5 Omnidirectional Wheels on the Ground Robot

4.2 Software Design Modifications

The Base Station launches the mission by sending a signal to the Guide Robot. After that, three processes run in parallel. First, the Base Station starts to monitor the robots and their status. Second, the temperature sensor mounted on one of the Ground Robots becomes active and sends readings back to the Base Station. Finally, the Guide Robot starts capturing frames of the leakage site and sends those images back to the Base Station, which determines whether human beings are present in them. Human identification in this project relies on a pre-trained classifier built with Caffe. As a result, classification could not be done
on a board, because it would require processing power that only a regular desktop computer can provide. Hence, the captured frames have to be sent to the Base Station for processing. The Base Station then responds with a filtered image that segments the human body, if any exists. The Guide Robot uses this image to guide the Ground Robots towards the human body; when they reach it, the rescue routine is launched and they start lifting the body.

Human Detection

Human detection is a main function of the Base Station. After the Guide Robot is launched into the site, it captures frames and sends them to the Base Station, which detects whether there is a human in each frame. This is done through Caffe, described below.

Caffe is a neural-network framework developed at the Berkeley Vision and Learning Center (BVLC). Its Model Zoo contains many deep-learning architectures that have been tested for different purposes; we decided to use the fully convolutional semantic segmentation model. The reason behind choosing a fully convolutional network is its fast inference time and the reasonable accuracy obtainable from running images through it.

A convolutional network contains several layers of convolution. A convolutional layer contains neurons, each with a kernel of a fixed window size, and this kernel is convolved with the corresponding pixels of the image. In the first layer, convolution extracts features from the image; subsequent layers extract features from features, and so on. There are also pooling layers that reduce the dimensionality of the image from one layer to the next. Figure 6 below illustrates the network.
Figure 6 Neural Network Used

The model we used was trained to segment 60 different classes in any given frame; class number 56 is the class Human. The FCN comes in three resolutions, and we chose the highest (8s) for its better accuracy. Resolution here refers to the stride, the step between two consecutive kernels in a convolutional layer. Figure 7 below lists all the layers used.

Figure 7 Neural Network Layers

We obtain the results by passing captured frames to the function Caffe_function(), a C++ wrapper around Python code that feeds the image to the pre-trained FCN. A sample result is shown in figure 8 below.
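The filtered image described earlier can be obtained from the network's output by keeping only the pixels whose top-scoring class is 56. A minimal numpy sketch of this step (our own illustration, not the project's actual code; the per-pixel score layout is assumed to be the usual FCN shape of classes x height x width):

```python
import numpy as np

PERSON_CLASS = 56  # index of class "Human" in the 60-class model

def person_mask(scores: np.ndarray) -> np.ndarray:
    """Given per-pixel class scores of shape (num_classes, H, W),
    return a boolean mask of pixels whose most likely class is Human."""
    labels = scores.argmax(axis=0)  # (H, W) map of winning class indices
    return labels == PERSON_CLASS

# Tiny fake score volume: 60 classes over a 2x2 image, where one pixel
# scores highest for class 56 and a background class wins elsewhere.
scores = np.zeros((60, 2, 2))
scores[56, 0, 1] = 5.0   # this pixel looks like a person
scores[3, :, :] += 1.0   # background wins everywhere else
mask = person_mask(scores)  # True only at pixel (0, 1)
```

Such a mask is what gets rendered as the red blob in the frames the Base Station sends back.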
Figure 8 Caffe Results

Mission Control

The control approach used in the system is vision-based: the system frequently captures images of the site and processes them, and based on the result it makes new decisions about the robots' next steps. The red blob in the frame received from the Base Station represents the human body. The pixels of the red blob are extracted and merged into another frame in which the robots have been detected; the blue and white blobs represent our two Ground Robots. Figure 9 below shows the robot detection before merging in the human frame (left) and after merging (right).

Figure 9 The Final Detection Results

Movement instructions are sent through the communication channels between the Guide and the Ground Robots. After each movement, new frames are taken and processed so the Guide Robot can make an informed decision about the next movement of each robot. When each Ground Robot is positioned beside the human body, one on each side, the robots receive a signal from the Guide Robot to start the rescue routine, during which they move slightly forward and begin lifting their forklifts.
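The per-frame control loop above, find the blobs, compare positions, command a move, can be sketched as follows. This is a simplified illustration of ours, not the report's actual controller: the pixel-coordinate convention, the command names, and the arrival tolerance are all assumptions.

```python
import numpy as np

def centroid(mask: np.ndarray):
    """Centroid (row, col) of a boolean blob mask, or None if empty."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    return rows.mean(), cols.mean()

def next_command(robot_mask, human_mask, tol=5.0):
    """One vision-based control step: steer the robot's blob centroid
    toward the human's blob centroid, one axis at a time."""
    r = centroid(robot_mask)
    h = centroid(human_mask)
    if r is None or h is None:
        return "wait"                 # a blob was not detected this frame
    drow, dcol = h[0] - r[0], h[1] - r[1]
    if abs(drow) <= tol and abs(dcol) <= tol:
        return "start_rescue"         # robot has reached the body
    if abs(drow) > abs(dcol):
        return "move_down" if drow > 0 else "move_up"
    return "move_right" if dcol > 0 else "move_left"

# Robot blob near the top-left, human blob near the bottom-right.
robot = np.zeros((100, 100), bool); robot[10:14, 10:14] = True
human = np.zeros((100, 100), bool); human[80:90, 60:70] = True
print(next_command(robot, human))  # "move_down"
```

Re-running this on every new frame, as the system does, naturally corrects for drift or imprecise movement, since each command is based on the latest observed positions.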
5. Testing and Evaluation

5.1 Testing Methodology

We tested the components of our system individually before integrating them and running our final test. The following components were tested on their own:

Network Modules: After implementing our network modules on a desktop computer, we tested them on two computers and made sure text was being sent correctly. Once that succeeded, we implemented sending images. We then ported the module to the UDOO board (on the Guide Robot) and verified that it sends and receives correctly to and from a desktop computer. Lastly, we ported the module to the two Raspberry Pis (on the two Ground Robots) and made sure every component was working well.

Human Detection: We tested the neural-network code obtained from Caffe on our own data set, consisting of different pictures of humans that we took in the lab. We verified that it works correctly, and we also wanted to learn the capabilities of the system: how it performs on different body sizes, skin colors, and shapes.

Robot Detection: We implemented computer vision code to extract the locations of our two Ground Robots from pictures taken by the Guide Robot; based on these, the Guide Robot tells the Ground Robots how to move. We tested the computer vision code thoroughly before integrating it with the robot movement code, which leads us to the next point.

Robot Movement: The robot movement code was first implemented on an Arduino board and tested on the wheels separately, to minimize the cost of losing a Raspberry Pi in case the movement caused problems. After the code proved successful, we mounted it on the robot and on the Raspberry Pi. We made a separate executable to test all the robot movements before integrating them into the whole system.

Integrating and Testing: After making sure all the components worked correctly, we integrated them to test our finalized system.

5.2 Testing Results
Testing the entire system was not as hectic as we expected, since every component had been tested on its own. The only part that needed fine-tuning after integration was the movement of each robot relative to its location: we needed to make sure it moved neither more nor less than required. Other than that, the problems that appeared were minor and were fixed on the spot.

5.3 Discussion

MARRS was definitely not the easiest project to implement; it took a great deal of hard work and determination to build it and get it functioning well. With much research, work, and willpower, however, we were able to deliver the system we promised, and as shown above, we tested it intensively to reach this point.

6. Challenges

We met many challenges while building and programming the robots. This section lists and describes them in detail.

6.1 Lack of Robot Component Availability

a) Omnidirectional wheels. Normal wheels allow the system to move in only two directions, which puts our system at a disadvantage: it is supposed to operate in narrow spaces while rescuing humans, so we wanted to give the Ground Robots maximum flexibility of motion by using omnidirectional wheels. The challenge we faced, however, was their availability. To acquire them, we had them specially manufactured for us, and the many delays and issues in their design and procurement were a major setback. The alternative, importing the wheels, was not feasible due to time constraints and high cost.

b) Motors. While testing the system, one of the motors broke down, and obtaining another motor with the exact dimensions and specifications was impossible.
This was the toughest challenge of all, because it happened only a week before our final presentation. As a result, we had to resort to another motor and fix it onto the robot body using a different approach so that all four wheels could move again.

6.2 Lack of Workshop Facilities

Our prototype was built from scratch, so we needed heavy support in terms of mechanical parts and workshop facilities. Unfortunately, these facilities were not available at our university's mechanical workshops, and we had to outsource all such operations to workshops outside the university, which consumed a lot of time and added complexity to the development process.

6.3 Human Detection

Before deciding to use neural networks for detecting humans, we faced many confusing choices: we considered implementations based on pattern analysis algorithms, computer vision algorithms, or neural networks. We explored the computer vision and pattern analysis options, but found that neural-network approaches, such as the model we are using, give the highest accuracy.

6.4 Different Kinds of Floors

The material the wheels are made of is very sensitive to the kind of floor, as we realized when testing in multiple locations. In our laboratory, where most experiments took place, conditions were very convenient; during rehearsals, however, we noticed that the floor of the presentation hall was very slippery, causing the robots to drift and move at a different speed than intended. This created overhead, because we had to adjust the robots' motion parameters whenever we changed testing locations.

7. Conclusion and Future Work

To turn our prototype into an actual product, we need to make a few adjustments. Our product works in an environment where a poisonous gas has leaked and the site has been rendered inaccessible to human rescuers. We want it to explore a new environment successfully, locate and identify survivors, and return useful information about the site even when it finds no victims. In other words, we are employing rescue robotic systems to reduce casualties on both the victims' and the rescuers' side and to provide faster intervention. Some of the upgrades our prototype needs are:

7.1 More Mobile Guide Robot

We propose replacing our static guide robot with a drone.
A drone was part of our initial design, but because the import of drones has been banned in Egypt and we could not find a suitable drone locally, we used a static guide robot instead. A more mobile guide robot would let us collect more information about the accident site, locate more victims, and cover a wider search area than a static guide.

7.2 More Ground Robots
Our prototype works only for a mannequin weighing at most 20 kg. As shown in our initial design scheme, rescuing an actual human being requires the weight to be distributed among at least six ground robots so that they can carry the person to safety.

7.3 Using 3D Cameras/Depth Sensors

As per our initial design, we wanted the ground robots to detect and avoid obstacles while moving about the scene. We therefore propose mounting 3D cameras or depth sensors on the ground robots. These sensors would yield more accurate results than our current approach, which only looks for obstacles from the guide robot's point of view.

7.4 Stronger Motors

Since we will build more robots to support heavier weights, we will also need stronger motors with better specifications and more power, so that the robots can lift heavier weights and still move at an appropriate speed while carrying them.

In conclusion, we have successfully built and programmed from scratch a prototype of a Multi-Agent Rescue Robotic System that saves human survivors from an area where gas has leaked. We hope this system can save the lives of victims as well as the lives of the rescuers who enter hazardous environments and risk their lives to save others.
More informationPerformance Analysis of Ultrasonic Mapping Device and Radar
Volume 118 No. 17 2018, 987-997 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu ijpam.eu Performance Analysis of Ultrasonic Mapping Device and Radar Abhishek
More informationPick and Place Robotic Arm Using Arduino
Pick and Place Robotic Arm Using Arduino Harish K 1, Megha D 2, Shuklambari M 3, Amit K 4, Chaitanya K Jambotkar 5 1,2,3,4 5 th SEM Students in Department of Electrical and Electronics Engineering, KLE.I.T,
More informationAGENT PLATFORM FOR ROBOT CONTROL IN REAL-TIME DYNAMIC ENVIRONMENTS. Nuno Sousa Eugénio Oliveira
AGENT PLATFORM FOR ROBOT CONTROL IN REAL-TIME DYNAMIC ENVIRONMENTS Nuno Sousa Eugénio Oliveira Faculdade de Egenharia da Universidade do Porto, Portugal Abstract: This paper describes a platform that enables
More informationSupervisors: Rachel Cardell-Oliver Adrian Keating. Program: Bachelor of Computer Science (Honours) Program Dates: Semester 2, 2014 Semester 1, 2015
Supervisors: Rachel Cardell-Oliver Adrian Keating Program: Bachelor of Computer Science (Honours) Program Dates: Semester 2, 2014 Semester 1, 2015 Background Aging population [ABS2012, CCE09] Need to
More informationMachine Vision for the Life Sciences
Machine Vision for the Life Sciences Presented by: Niels Wartenberg June 12, 2012 Track, Trace & Control Solutions Niels Wartenberg Microscan Sr. Applications Engineer, Clinical Senior Applications Engineer
More informationRobotics II DESCRIPTION. EXAM INFORMATION Items
EXAM INFORMATION Items 37 Points 49 Prerequisites NONE Grade Level 10-12 Course Length ONE SEMESTER Career Cluster MANUFACTURING SCIENCE, TECHNOLOGY, ENGINEERING, AND MATHEMATICS Performance Standards
More informationImplementation of Conventional and Neural Controllers Using Position and Velocity Feedback
Implementation of Conventional and Neural Controllers Using Position and Velocity Feedback Expo Paper Department of Electrical and Computer Engineering By: Christopher Spevacek and Manfred Meissner Advisor:
More informationNUST FALCONS. Team Description for RoboCup Small Size League, 2011
1. Introduction: NUST FALCONS Team Description for RoboCup Small Size League, 2011 Arsalan Akhter, Muhammad Jibran Mehfooz Awan, Ali Imran, Salman Shafqat, M. Aneeq-uz-Zaman, Imtiaz Noor, Kanwar Faraz,
More informationAutonomous Stair Climbing Algorithm for a Small Four-Tracked Robot
Autonomous Stair Climbing Algorithm for a Small Four-Tracked Robot Quy-Hung Vu, Byeong-Sang Kim, Jae-Bok Song Korea University 1 Anam-dong, Seongbuk-gu, Seoul, Korea vuquyhungbk@yahoo.com, lovidia@korea.ac.kr,
More informationCorrecting Odometry Errors for Mobile Robots Using Image Processing
Correcting Odometry Errors for Mobile Robots Using Image Processing Adrian Korodi, Toma L. Dragomir Abstract - The mobile robots that are moving in partially known environments have a low availability,
More informationSTOx s 2014 Extended Team Description Paper
STOx s 2014 Extended Team Description Paper Saith Rodríguez, Eyberth Rojas, Katherín Pérez, Jorge López, Carlos Quintero, and Juan Manuel Calderón Faculty of Electronics Engineering Universidad Santo Tomás
More informationVisual Perception Based Behaviors for a Small Autonomous Mobile Robot
Visual Perception Based Behaviors for a Small Autonomous Mobile Robot Scott Jantz and Keith L Doty Machine Intelligence Laboratory Mekatronix, Inc. Department of Electrical and Computer Engineering Gainesville,
More informationOverview of Challenges in the Development of Autonomous Mobile Robots. August 23, 2011
Overview of Challenges in the Development of Autonomous Mobile Robots August 23, 2011 What is in a Robot? Sensors Effectors and actuators (i.e., mechanical) Used for locomotion and manipulation Controllers
More informationDeblurring. Basics, Problem definition and variants
Deblurring Basics, Problem definition and variants Kinds of blur Hand-shake Defocus Credit: Kenneth Josephson Motion Credit: Kenneth Josephson Kinds of blur Spatially invariant vs. Spatially varying
More informationA New Approach for Transformer Bushing Monitoring. Emilio Morales Technical Application Specialist Qualitrol
A New Approach for Transformer Bushing Monitoring Emilio Morales Technical Application Specialist Qualitrol Abstract Transformer bushings are one of the most critical components of a transformer. Up to
More informationDeveloping a Computer Vision System for Autonomous Rover Navigation
University of Hawaii at Hilo Fall 2016 Developing a Computer Vision System for Autonomous Rover Navigation ASTR 432 FINAL REPORT FALL 2016 DARYL ALBANO Page 1 of 6 Table of Contents Abstract... 2 Introduction...
More informationTEST PROJECT MOBILE ROBOTICS FOR JUNIOR
TEST PROJECT MOBILE ROBOTICS FOR JUNIOR CONTENTS This Test Project proposal consists of the following documentation/files: 1. DESCRIPTION OF PROJECT AND TASKS DOCUMENTATION The JUNIOR challenge of Mobile
More informationBuilding a comprehensive lab sequence for an undergraduate mechatronics program
Building a comprehensive lab sequence for an undergraduate mechatronics program Tom Lee Ph.D., Chief Education Officer, Quanser MECHATRONICS Motivation The global engineering academic community is witnessing
More informationMASTER SHIFU. STUDENT NAME: Vikramadityan. M ROBOT NAME: Master Shifu COURSE NAME: Intelligent Machine Design Lab
MASTER SHIFU STUDENT NAME: Vikramadityan. M ROBOT NAME: Master Shifu COURSE NAME: Intelligent Machine Design Lab COURSE NUMBER: EEL 5666C TA: Andy Gray, Nick Cox INSTRUCTORS: Dr. A. Antonio Arroyo, Dr.
More informationAn Introduction to Convolutional Neural Networks. Alessandro Giusti Dalle Molle Institute for Artificial Intelligence Lugano, Switzerland
An Introduction to Convolutional Neural Networks Alessandro Giusti Dalle Molle Institute for Artificial Intelligence Lugano, Switzerland Sources & Resources - Andrej Karpathy, CS231n http://cs231n.github.io/convolutional-networks/
More informationCLASSIFICATION CONTROL WIDTH LENGTH
Sumobot Competition Robots per Event: Length of Event: Robot Weight Range: Robot Dimensions: Arena Specifications: Robot Control: Event Summary: Two each match 1 minute per match (max) Two robots compete
More informationOPEN CV BASED AUTONOMOUS RC-CAR
OPEN CV BASED AUTONOMOUS RC-CAR B. Sabitha 1, K. Akila 2, S.Krishna Kumar 3, D.Mohan 4, P.Nisanth 5 1,2 Faculty, Department of Mechatronics Engineering, Kumaraguru College of Technology, Coimbatore, India
More informationSurvivor Identification and Retrieval Robot Project Proposal
Survivor Identification and Retrieval Robot Project Proposal Karun Koppula Zachary Wasserman Zhijie Jin February 8, 2018 1 Introduction 1.1 Objective After the Fukushima Daiichi didaster in after a 2011
More informationNCCT IEEE PROJECTS ADVANCED ROBOTICS SOLUTIONS. Latest Projects, in various Domains. Promise for the Best Projects
NCCT Promise for the Best Projects IEEE PROJECTS in various Domains Latest Projects, 2009-2010 ADVANCED ROBOTICS SOLUTIONS EMBEDDED SYSTEM PROJECTS Microcontrollers VLSI DSP Matlab Robotics ADVANCED ROBOTICS
More informationThe Disappearing Computer. Information Document, IST Call for proposals, February 2000.
The Disappearing Computer Information Document, IST Call for proposals, February 2000. Mission Statement To see how information technology can be diffused into everyday objects and settings, and to see
More informationEnsuring the Safety of an Autonomous Robot in Interaction with Children
Machine Learning in Robot Assisted Therapy Ensuring the Safety of an Autonomous Robot in Interaction with Children Challenges and Considerations Stefan Walke stefan.walke@tum.de SS 2018 Overview Physical
More informationIsrael Railways No Fault Liability Renewal The Implementation of New Technological Safety Devices at Level Crossings. Amos Gellert, Nataly Kats
Mr. Amos Gellert Technological aspects of level crossing facilities Israel Railways No Fault Liability Renewal The Implementation of New Technological Safety Devices at Level Crossings Deputy General Manager
More informationSeparation Connector. Prototyping Progress Update March 1, 2013
Separation Connector By Koll Christianson, Luis Herrera, and Zheng Lian Team 19 Prototyping Progress Update March 1, 2013 Submitted towards partial fulfillment of the requirements for Mechanical Engineering
More informationEssential Understandings with Guiding Questions Robotics Engineering
Essential Understandings with Guiding Questions Robotics Engineering 1 st Quarter Theme: Orientation to a Successful Laboratory Experience Student Expectations Safety Emergency MSDS Organizational Systems
More informationTeam Autono-Mo. Jacobia. Department of Computer Science and Engineering The University of Texas at Arlington
Department of Computer Science and Engineering The University of Texas at Arlington Team Autono-Mo Jacobia Architecture Design Specification Team Members: Bill Butts Darius Salemizadeh Lance Storey Yunesh
More informationSpace Research expeditions and open space work. Education & Research Teaching and laboratory facilities. Medical Assistance for people
Space Research expeditions and open space work Education & Research Teaching and laboratory facilities. Medical Assistance for people Safety Life saving activity, guarding Military Use to execute missions
More informationLane Detection in Automotive
Lane Detection in Automotive Contents Introduction... 2 Image Processing... 2 Reading an image... 3 RGB to Gray... 3 Mean and Gaussian filtering... 5 Defining our Region of Interest... 6 BirdsEyeView Transformation...
More information* Intelli Robotic Wheel Chair for Specialty Operations & Physically Challenged
ADVANCED ROBOTICS SOLUTIONS * Intelli Mobile Robot for Multi Specialty Operations * Advanced Robotic Pick and Place Arm and Hand System * Automatic Color Sensing Robot using PC * AI Based Image Capturing
More informationApplication Areas of AI Artificial intelligence is divided into different branches which are mentioned below:
Week 2 - o Expert Systems o Natural Language Processing (NLP) o Computer Vision o Speech Recognition And Generation o Robotics o Neural Network o Virtual Reality APPLICATION AREAS OF ARTIFICIAL INTELLIGENCE
More informationSemi-Autonomous Parking for Enhanced Safety and Efficiency
Technical Report 105 Semi-Autonomous Parking for Enhanced Safety and Efficiency Sriram Vishwanath WNCG June 2017 Data-Supported Transportation Operations & Planning Center (D-STOP) A Tier 1 USDOT University
More informationMulti-robot Formation Control Based on Leader-follower Method
Journal of Computers Vol. 29 No. 2, 2018, pp. 233-240 doi:10.3966/199115992018042902022 Multi-robot Formation Control Based on Leader-follower Method Xibao Wu 1*, Wenbai Chen 1, Fangfang Ji 1, Jixing Ye
More informationWorkshop on Intelligent System and Applications (ISA 17)
Telemetry Mining for Space System Sara Abdelghafar Ahmed PhD student, Al-Azhar University Member of SRGE Workshop on Intelligent System and Applications (ISA 17) 13 May 2017 Workshop on Intelligent System
More informationL E C T U R E R, E L E C T R I C A L A N D M I C R O E L E C T R O N I C E N G I N E E R I N G
P R O F. S L A C K L E C T U R E R, E L E C T R I C A L A N D M I C R O E L E C T R O N I C E N G I N E E R I N G G B S E E E @ R I T. E D U B L D I N G 9, O F F I C E 0 9-3 1 8 9 ( 5 8 5 ) 4 7 5-5 1 0
More informationDEEP LEARNING ON RF DATA. Adam Thompson Senior Solutions Architect March 29, 2018
DEEP LEARNING ON RF DATA Adam Thompson Senior Solutions Architect March 29, 2018 Background Information Signal Processing and Deep Learning Radio Frequency Data Nuances AGENDA Complex Domain Representations
More informationArtificial Neural Network based Mobile Robot Navigation
Artificial Neural Network based Mobile Robot Navigation István Engedy Budapest University of Technology and Economics, Department of Measurement and Information Systems, Magyar tudósok körútja 2. H-1117,
More informationPerception. Read: AIMA Chapter 24 & Chapter HW#8 due today. Vision
11-25-2013 Perception Vision Read: AIMA Chapter 24 & Chapter 25.3 HW#8 due today visual aural haptic & tactile vestibular (balance: equilibrium, acceleration, and orientation wrt gravity) olfactory taste
More informationHigh Performance Imaging Using Large Camera Arrays
High Performance Imaging Using Large Camera Arrays Presentation of the original paper by Bennett Wilburn, Neel Joshi, Vaibhav Vaish, Eino-Ville Talvala, Emilio Antunez, Adam Barth, Andrew Adams, Mark Horowitz,
More informationTeam S.S. Minnow RoboBoat 2015
1 Team RoboBoat 2015 Abigail Butka Daytona Beach Homeschoolers Palm Coast Florida USA butkaabby872@gmail.com Nick Serle Daytona Beach Homeschoolers Flagler Beach, Florida USA Abstract This document describes
More informationValidation Document. ELEC 491 Capstone Proposal - Dynamic Projector Mount Project. Andy Kwan Smaran Karimbil Siamak Rahmanian Dante Ye
Validation Document ELEC 491 Capstone Proposal - Dynamic Projector Mount Project Andy Kwan Smaran Karimbil Siamak Rahmanian Dante Ye Executive Summary: The purpose of this document is to describe the tests
More informationElectrical Machines Diagnosis
Monitoring and diagnosing faults in electrical machines is a scientific and economic issue which is motivated by objectives for reliability and serviceability in electrical drives. This concern for continuity
More informationRobust Hand Gesture Recognition for Robotic Hand Control
Robust Hand Gesture Recognition for Robotic Hand Control Ankit Chaudhary Robust Hand Gesture Recognition for Robotic Hand Control 123 Ankit Chaudhary Department of Computer Science Northwest Missouri State
More informationELECTRIC SLIP ROLL MACHINE. Model: ESR-1300X2.5/ESR-1300X4.5 ESR-1550X3.5/ESR-1580X2.0
ELECTRIC SLIP ROLL MACHINE Model: ESR-1300X2.5/ESR-1300X4.5 ESR-1550X3.5/ESR-1580X2.0 Operation Manual Table of contents I MAIN SPECIFICATION...2 II SAFETY INSTRUCTIONS.. 2 III OPERATION INSTRUCTIONS..4
More informationMilind R. Shinde #1, V. N. Bhaiswar *2, B. G. Achmare #3 1 Student of MTECH CAD/CAM, Department of Mechanical Engineering, GHRCE Nagpur, MH, India
Design and simulation of robotic arm for loading and unloading of work piece on lathe machine by using workspace simulation software: A Review Milind R. Shinde #1, V. N. Bhaiswar *2, B. G. Achmare #3 1
More informationSimulation of a mobile robot navigation system
Edith Cowan University Research Online ECU Publications 2011 2011 Simulation of a mobile robot navigation system Ahmed Khusheef Edith Cowan University Ganesh Kothapalli Edith Cowan University Majid Tolouei
More informationHaptic presentation of 3D objects in virtual reality for the visually disabled
Haptic presentation of 3D objects in virtual reality for the visually disabled M Moranski, A Materka Institute of Electronics, Technical University of Lodz, Wolczanska 211/215, Lodz, POLAND marcin.moranski@p.lodz.pl,
More informationINTRODUCTION OF SOME APPROACHES FOR EDUCATIONS OF ROBOT DESIGN AND MANUFACTURING
INTRODUCTION OF SOME APPROACHES FOR EDUCATIONS OF ROBOT DESIGN AND MANUFACTURING T. Matsuo *,a, M. Tatsuguchi a, T. Higaki a, S. Kuchii a, M. Shimazu a and H. Terai a a Department of Creative Engineering,
More informationHybrid LQG-Neural Controller for Inverted Pendulum System
Hybrid LQG-Neural Controller for Inverted Pendulum System E.S. Sazonov Department of Electrical and Computer Engineering Clarkson University Potsdam, NY 13699-570 USA P. Klinkhachorn and R. L. Klein Lane
More informationCMDragons 2009 Team Description
CMDragons 2009 Team Description Stefan Zickler, Michael Licitra, Joydeep Biswas, and Manuela Veloso Carnegie Mellon University {szickler,mmv}@cs.cmu.edu {mlicitra,joydeep}@andrew.cmu.edu Abstract. In this
More informationMobile Cognitive Indoor Assistive Navigation for the Visually Impaired
1 Mobile Cognitive Indoor Assistive Navigation for the Visually Impaired Bing Li 1, Manjekar Budhai 2, Bowen Xiao 3, Liang Yang 1, Jizhong Xiao 1 1 Department of Electrical Engineering, The City College,
More informationThe Art of Neural Nets
The Art of Neural Nets Marco Tavora marcotav65@gmail.com Preamble The challenge of recognizing artists given their paintings has been, for a long time, far beyond the capability of algorithms. Recent advances
More informationTechniques for Suppressing Adverse Lighting to Improve Vision System Success. Nelson Bridwell Senior Vision Engineer Machine Vision Engineering LLC
Techniques for Suppressing Adverse Lighting to Improve Vision System Success Nelson Bridwell Senior Vision Engineer Machine Vision Engineering LLC Nelson Bridwell President of Machine Vision Engineering
More informationResponsible Data Use Assessment for Public Realm Sensing Pilot with Numina. Overview of the Pilot:
Responsible Data Use Assessment for Public Realm Sensing Pilot with Numina Overview of the Pilot: Sidewalk Labs vision for people-centred mobility - safer and more efficient public spaces - requires a
More informationCognitive robots and emotional intelligence Cloud robotics Ethical, legal and social issues of robotic Construction robots Human activities in many
Preface The jubilee 25th International Conference on Robotics in Alpe-Adria-Danube Region, RAAD 2016 was held in the conference centre of the Best Western Hotel M, Belgrade, Serbia, from 30 June to 2 July
More informationIntroduction to NeuroScript MovAlyzeR Handwriting Movement Software (Draft 14 August 2015)
Introduction to NeuroScript MovAlyzeR Page 1 of 20 Introduction to NeuroScript MovAlyzeR Handwriting Movement Software (Draft 14 August 2015) Our mission: Facilitate discoveries and applications with handwriting
More informationAutomated hand recognition as a human-computer interface
Automated hand recognition as a human-computer interface Sergii Shelpuk SoftServe, Inc. sergii.shelpuk@gmail.com Abstract This paper investigates applying Machine Learning to the problem of turning a regular
More informationCURIE Academy, Summer 2014 Lab 2: Computer Engineering Software Perspective Sign-Off Sheet
Lab : Computer Engineering Software Perspective Sign-Off Sheet NAME: NAME: DATE: Sign-Off Milestone TA Initials Part 1.A Part 1.B Part.A Part.B Part.C Part 3.A Part 3.B Part 3.C Test Simple Addition Program
More informationDistributed Vision System: A Perceptual Information Infrastructure for Robot Navigation
Distributed Vision System: A Perceptual Information Infrastructure for Robot Navigation Hiroshi Ishiguro Department of Information Science, Kyoto University Sakyo-ku, Kyoto 606-01, Japan E-mail: ishiguro@kuis.kyoto-u.ac.jp
More informationIntelligent driving TH« TNO I Innovation for live
Intelligent driving TNO I Innovation for live TH«Intelligent Transport Systems have become an integral part of the world. In addition to the current ITS systems, intelligent vehicles can make a significant
More informationFormation and Cooperation for SWARMed Intelligent Robots
Formation and Cooperation for SWARMed Intelligent Robots Wei Cao 1 Yanqing Gao 2 Jason Robert Mace 3 (West Virginia University 1 University of Arizona 2 Energy Corp. of America 3 ) Abstract This article
More informationLearning serious knowledge while "playing"with robots
6 th International Conference on Applied Informatics Eger, Hungary, January 27 31, 2004. Learning serious knowledge while "playing"with robots Zoltán Istenes Department of Software Technology and Methodology,
More informationDevelopment of Explosion-proof Autonomous Plant Operation Robot for Petrochemical Plants
1 Development of Explosion-proof Autonomous Plant Operation Robot for Petrochemical Plants KOJI SHUKUTANI *1 KEN ONISHI *2 NORIKO ONISHI *1 HIROYOSHI OKAZAKI *3 HIROYOSHI KOJIMA *3 SYUHEI KOBORI *3 For
More informationDESIGN STYLE FOR BUILDING INTERIOR 3D OBJECTS USING MARKER BASED AUGMENTED REALITY
DESIGN STYLE FOR BUILDING INTERIOR 3D OBJECTS USING MARKER BASED AUGMENTED REALITY 1 RAJU RATHOD, 2 GEORGE PHILIP.C, 3 VIJAY KUMAR B.P 1,2,3 MSRIT Bangalore Abstract- To ensure the best place, position,
More information