Vishnu Nath. Usage of computer vision and humanoid robotics to create autonomous robots. (Ximea Currera RL04C Camera Kit)


Acknowledgements

Firstly, I would like to thank Ivan Klimkovic of Ximea Corporation, who supported me throughout the entire duration of my research. It was Ivan who sent me the camera that made this work possible, and he has been my point of contact with Ximea throughout; he also gave me pointers from time to time that improved my technical writing. I would also like to thank my advisors, Dr. Stephen Levinson and Dr. Paris Smaragdis, for the support and technical advice that made this project a reality. I would like to thank Luke Wendt, a Ph.D. student and a good friend, who has been supportive of me since the very first days of my research. He recommended excellent books that deepened my understanding of the harder concepts in machine learning and statistics, and he was also my partner in the computer vision course; much of the progress made in this research experiment is directly related to that course. This project required many tools, especially for modeling and simulation, and it was Aaron Silver, a Ph.D. student and another good friend, who taught me how to use them. Aaron patiently taught me how to begin programming a humanoid robot and spent hours on end walking through his own code so that I could learn.

Project Overview

The field of robotics has improved in leaps and bounds over the past half century. Robots have gone from mere characters in science fiction novels to indispensable servants in several fields. Assembly lines, previously staffed by humans, are now giving way to industrial robots that perform the same tasks hundreds of times faster and with even better accuracy. As a result, it is more imperative than ever to design robots that can perceive their surroundings and work intelligently within their environment, rather than being programmed for every possible case by a human programmer. The celebrated science fiction author Isaac Asimov wrote several works touching on artificial intelligence. In his book I, Robot, Asimov describes how robots might play a part in our daily lives in the near future, and he goes on to predict that one day robots might be capable of emotion and could love or hate their masters, in effect mimicking the behavior of any human being.

The goal of this project is to teach a humanoid robot, the iCub, to solve a maze puzzle: a ball of a given color is placed at the start position of the maze, and the robot must navigate the ball past obstacles to the finish position. The robot moves the ball through the maze by physically tilting the base of the puzzle with its hand, and in doing so it should take the most efficient path possible. If no feasible path exists, the robot should not attempt to solve the maze.

The importance of this experiment is profound. It marks the beginning of letting a robot understand that its actions have an impact on its immediate surroundings. When a child is born, it initially cannot comprehend objects or shapes and cannot focus on anything outside its direct gaze. Over time, however, the child learns that it can move and play with objects, realizing that its hand movements have changed the environment; the child learns this from sensory feedback such as the rattling of toys. In this experiment, we hope to determine whether a robot can develop the same awareness that its actions mean something, and ideally make the leap that, in order to reach state C, it needs to perform action A on state B. We believe this is the foundation of truly autonomous robotics, and this project is a first step in that direction.

Project Components

For the successful completion of the project, the most important components were the Currera Starter Kit (which included the Currera RL04C camera) and the iCub robot itself, along with its required hardware. The starter kit also came with demo applications for a variety of computer vision libraries. After testing them all, I decided to proceed with the OpenCV vision library, since I had prior experience with it and it has the largest support community.

The project involves the design and execution of complex vision, control, and artificial intelligence algorithms so that the robot can learn from experience and modify its behavior in future iterations. As a result, the storage and processing of data were divided among the different components. All vision-related processing is performed by the Currera camera. This includes the determination of all the internal and external edges of the maze and the initial position of the ball. This information is then transmitted to the iCub to store in its RAM, freeing the Currera camera to deal exclusively with processing the live camera feed. The Currera unit is responsible for detecting the corner markers, the centroid of the ball, and the ball's position relative to the markers at all times. The unit is also responsible for filtering out other candidates for the ball (objects with similar HSV values) and for making sure that the entire board is visible at all times. This is done by means of inverse homography, and the entire computation is carried out within the Currera unit itself; only the output is sent to the iCub for further processing. We believe the Currera unit handled such loads seamlessly because of its Atom Z530 processor and 1 GB of RAM, making it a capable computer in its own right.

The iCub robot comes with its own PC104 controller and memory, and communicates with its actuators and sensors using the CAN protocol. The initial version of the artificial intelligence algorithms is loaded into the iCub's memory. There is constant interaction between the controller and the memory, so any change in the algorithm parameters that helps fine-tune the algorithm results in an automatic update of those parameters. All changes are logged as well, so that the project can be replicated any number of times. The flow chart for the interaction of the various components of this project is given below:

Figure 1 - Components Overview
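The report does not specify the exact format of the data the Currera unit sends to the iCub over this link. As a purely illustrative sketch, assuming a Python-side representation with invented field names, the per-frame result pushed from the camera to the robot could be summarized in a small structure like the following:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VisionResult:
    """Hypothetical per-frame output of the Currera unit (field names are invented)."""
    corner_markers: List[Tuple[float, float]]  # pixel coordinates of the four red corner markers
    ball_centroid: Tuple[float, float]         # pixel coordinates of the ball's centroid
    ball_cell: Tuple[int, int]                 # (row, col) of the ball in the discretized maze grid
    board_visible: bool                        # False if any marker has left the field of view
```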

Part Replacement

The iCub comes with two cameras of its own, DragonFly2 cameras from Point Grey. However, they are CCD based, while the Currera is CMOS based, and the DragonFly2 proved prone to noise and comparatively slow. This is what prompted me to research alternatives, and Ximea's Currera starter kit was the perfect fit for carrying on my research.

Figure 2 - Currera camera right after mounting it on the stand
Figure 3 - Lens

Figure 4 - Breakout box BOB144
Figure 5 - The camera right before fixing it inside the iCub

Figure 6 - Testing the camera before insertion

For these performance reasons, the right camera of the iCub was removed and replaced with the Currera camera. The base of the iCub's skull contains the PC104, which covers the entire interior of the skull. As a result, the camera is not visible from the back of the robot, but it can be seen as the iris of an eye from the front. All the wires from the BOB144 were passed through the neck into the body and connected to the iCub's control systems so that the control loop is maintained.

Figure 7 - View of the iCub from the back
Figure 8 - Notice the BOB144 towards the far end of the top side of the skull

Algorithm Approaches and Views

Offline Analysis of the Maze

First, the maze must be studied to obtain a control policy. This is done by analyzing a top-down view of the maze. Since it is very difficult to provide a perfect orthographic view, an inverse homography must be applied at this step as well. The first thing that must be done, therefore, is the identification of features with a known geometric relation to each other; at least four must be present to determine the inverse homography. The easiest way to do this is to place high-contrast color markers on each corner of the maze. Red was selected because it was not present in the maze or the surrounding background, and the ball was also chosen to be red. In the initial top-down analysis no ball is present. Color thresholding provides a binary image indicating where high concentrations of red are present. Thresholding the original RGB values proved overly sensitive; this sensitivity can be removed by first converting to HSV coordinates. A raster-scan segmentation algorithm can then be used to label contiguous regions and sort them by size. The four largest regions are expected to be the four corner markers. Since the geometry of the corners is known, a bit of logic suffices to label the markers by compass direction. The markers are assumed to form a square when viewed from the top down. Using this information, an inverse homography can be computed to obtain the correct top-down perspective of the maze, and all surrounding border content can be cropped off. The maze walls and the open path are the only things that remain after this step. Choosing colors that are easily distinguished, e.g. yellow and green, allows thresholding of the open path, and another raster scan provides the open contiguous path of the maze. Once the path is obtained, it can be discretized into a grid. Important features in the image, such as the start and goal, need to be tagged; a high-contrast color, a fiducial marker, or a manual user interface would all work.

Once the start and finish locations are specified, reinforcement learning can begin. The reinforcement learning simulates trial-and-error runs in a simulated maze environment. The control actions involve coarse wrist motions with long pauses; after each action, the ball comes to rest in one of the corners of the maze. After a sufficiently long run time, value iteration converges and an optimal policy is obtained. Filtering the optimal policy provides more general control domains, and the final filtered control policy is saved for online control. As long as the centers of the markers and the ball in the projected view are roughly in the same plane as the centers of the ball and markers in the top-down view, their transformations should be isomorphic.
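The value-iteration code itself is not reproduced in this report. The sketch below shows one way an optimal tilt policy could be computed for the discretized maze, under assumptions made here purely for illustration: the four coarse tilt directions as actions, a deterministic transition that rolls the ball until it meets a wall, and an invented reward of a small step cost plus a large bonus for reaching the goal.

```python
import numpy as np

def roll(grid, state, action):
    """Deterministic transition model: roll the ball from `state` in `action` direction
    until it meets a wall or the board edge. grid: 2D array, 1 = open cell, 0 = wall."""
    dr, dc = action
    r, c = state
    rows, cols = grid.shape
    while 0 <= r + dr < rows and 0 <= c + dc < cols and grid[r + dr, c + dc] == 1:
        r, c = r + dr, c + dc
    return (r, c)

def value_iteration(grid, goal, gamma=0.95, step_cost=-1.0, goal_reward=100.0, tol=1e-6):
    """Compute state values and a greedy tilt policy for the discretized maze."""
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # coarse tilts: up, down, left, right
    V = np.zeros(grid.shape)

    def q(state, action):
        nxt = roll(grid, state, action)
        reward = goal_reward if nxt == goal else step_cost
        return reward + gamma * V[nxt]             # V[goal] stays 0 (terminal state)

    while True:
        delta = 0.0
        for r in range(grid.shape[0]):
            for c in range(grid.shape[1]):
                if grid[r, c] == 0 or (r, c) == goal:
                    continue                       # skip walls and the terminal goal cell
                best = max(q((r, c), a) for a in actions)
                delta = max(delta, abs(best - V[r, c]))
                V[r, c] = best
        if delta < tol:                            # stop once the value function has converged
            break

    policy = {(r, c): max(actions, key=lambda a: q((r, c), a))
              for r in range(grid.shape[0]) for c in range(grid.shape[1])
              if grid[r, c] == 1 and (r, c) != goal}
    return V, policy
```

The resulting policy maps each open cell to a tilt direction; smoothing or filtering it, as described above, would then group neighbouring cells into more general control domains.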

Online Analysis of the Maze

Once the optimal control policy is obtained, any maze of this kind can be solved online with Bert (the iCub). The maze is now viewed from a projected perspective. HSV (hue, saturation, value) color thresholding provides a binary image indicating where high concentrations of red are present, which in turn gives the locations of the corner markers and the ball. A raster-scan segmentation approach returns the five largest objects in the robot's field of vision, and with the known geometry of the board on which the maze is built, the markers and the ball can be labeled. From the markers, an inverse homography returns the unprojected coordinates of the maze; the same inverse homography is then applied to the ball's location. After cropping and discretizing the resulting image, the ball's location with respect to the grid and maze can be accurately determined. Applying the optimal control policy obtained in the offline analysis then moves the ball: the robot tilts the board along one of the three possible axes and waits for a period of time before leveling the board again. During this period, the ball progresses to another corner of the maze that is closer to the goal.
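The vision code is likewise not listed in the report. A minimal OpenCV (Python) sketch of the pipeline just described — red HSV thresholding, taking the largest contiguous regions as corner markers, and computing the inverse homography to a top-down view — might look as follows; the HSV bounds, output size, and marker-ordering trick are assumptions, not values taken from the project.

```python
import cv2
import numpy as np

def topdown_view(frame_bgr, out_size=400):
    """Find the four red corner markers and warp the board to an unprojected top-down view."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0 in HSV, so two bands are combined (bounds are illustrative).
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x signature
    # Take the four largest red blobs as the corner markers and compute their centroids.
    centers = []
    for cnt in sorted(contours, key=cv2.contourArea, reverse=True)[:4]:
        m = cv2.moments(cnt)
        centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    centers = np.float32(centers)
    # Order as top-left, top-right, bottom-right, bottom-left using coordinate sums/differences.
    s = centers.sum(axis=1)
    d = np.diff(centers, axis=1).ravel()
    src = np.float32([centers[np.argmin(s)], centers[np.argmin(d)],
                      centers[np.argmax(s)], centers[np.argmax(d)]])
    dst = np.float32([[0, 0], [out_size, 0], [out_size, out_size], [0, out_size]])
    H = cv2.getPerspectiveTransform(src, dst)       # the inverse homography
    return cv2.warpPerspective(frame_bgr, H, (out_size, out_size)), H
```

The same matrix H can be reused to map the ball's centroid into the top-down frame, as sketched further below.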

Currera Camera Identifying the Ball

The first thing needed for the entire project to run is to determine which object(s) can be classified as the ball. We decided in advance that the ball would be red, so it seemed natural to program the Currera camera to look for red objects, i.e. objects whose R channel is high in RGB space. The following figure gives an indication of the range of values seen by the Ximea Currera camera.

Figure 9 - RGB values in the maze

Because of the small variation in the RGB values, I programmed the Currera camera to look at the HSV values instead, and the results were much more promising. The Currera camera now accurately detects the four corners of the maze and can detect the ball easily. The corner detection and the resulting inverse homography are shown in the next two figures.

Figure 10 - Corner detection
Figure 11 - After inverse homography

Figure 12 - Detecting the ball correctly

This represents nearly all of the information the Currera camera outputs to the iCub for further processing. By this point, the Currera camera has already performed a significant amount of computer vision work: it has detected the four edges of the board, the internal paths of the board, and the relative position of the ball, and it has performed an inverse homography in real time. But the Currera can do even more, which made it perfect for my project. It is also able to segment the image and the open path (based on the HSV values), so that the iCub can generate the optimum path right away. Furthermore, the Currera can detect the start and end points of the maze, based on the tags attached to it. The following figures show the true computing capability of the Currera camera.
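As a hedged illustration of the ball-detection step, the sketch below locates the ball with the same red HSV threshold, maps its centroid through the homography H from the previous sketch, and converts it to a cell in the discretized grid. The assumption that the ball is the fifth-largest red blob (after the four corner markers), as well as all numeric parameters, is illustrative only.

```python
import cv2
import numpy as np

def locate_ball_cell(frame_bgr, H, out_size=400, n_cells=16):
    """Find the red ball in the raw frame and return its (row, col) cell in the top-down grid."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = sorted(contours, key=cv2.contourArea, reverse=True)
    # Assumption: the four corner markers are the largest red blobs, so the ball is the fifth.
    m = cv2.moments(blobs[4])
    ball = np.float32([[[m["m10"] / m["m00"], m["m01"] / m["m00"]]]])
    # Apply the same inverse homography used for the board to the ball's centroid.
    bx, by = cv2.perspectiveTransform(ball, H)[0, 0]
    cell_size = out_size / n_cells
    return int(by // cell_size), int(bx // cell_size)
```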

Figure 13 - HSV image of the maze
Figure 14 - Segmenting the image into little squares for rule generation
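How the warped image is carved into the little squares shown in Figure 14 is not detailed in the report; a simple approach, assuming the open path has already been isolated as a binary mask in the top-down view, is sketched below with an illustrative cell count and occupancy threshold.

```python
import numpy as np

def discretize(path_mask, n_cells=16, fill_ratio=0.5):
    """Convert a binary top-down path mask (nonzero = open path) into an n_cells x n_cells grid.
    A cell is marked open (1) if at least `fill_ratio` of its pixels belong to the path."""
    h, w = path_mask.shape
    grid = np.zeros((n_cells, n_cells), dtype=np.uint8)
    ch, cw = h // n_cells, w // n_cells
    for r in range(n_cells):
        for c in range(n_cells):
            cell = path_mask[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
            grid[r, c] = 1 if (cell > 0).mean() >= fill_ratio else 0
    return grid
```

The start and goal cells can be tagged the same way, by thresholding the marker colors attached to the maze and noting which grid cell each tag falls in.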

Figure 15 - Currera labelling the start and end points on its own

The information conveyed in Figures 10, 12, 14, and 15 is sent to the iCub processor for the next part of the project: applying the artificial intelligence algorithms (Q-learning, SARSA) and generating the rules for the maze. Here too, constant feedback from the Currera camera is what guides the iCub to solve the maze successfully.
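The report names Q-learning and SARSA but does not show the update rule it used. A minimal tabular Q-learning sketch for the simulated maze, reusing the `roll` transition model and the invented reward scheme from the value-iteration sketch earlier, could look like this (learning rate, exploration rate, and episode counts are illustrative):

```python
import random

def q_learning(grid, start, goal, episodes=5000, max_steps=200,
               alpha=0.1, gamma=0.95, epsilon=0.1):
    """Learn a tabular action-value function for the simulated maze by trial and error."""
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # coarse tilt directions
    Q = {}                                         # Q[(state, action)] -> value, default 0.0
    for _ in range(episodes):
        s = start
        for _ in range(max_steps):
            # Epsilon-greedy action selection over the current estimates.
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: Q.get((s, act), 0.0))
            nxt = roll(grid, s, a)                 # simulated transition (see earlier sketch)
            r = 100.0 if nxt == goal else -1.0     # invented reward scheme
            best_next = 0.0 if nxt == goal else max(Q.get((nxt, act), 0.0) for act in actions)
            old = Q.get((s, a), 0.0)
            Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            s = nxt
            if s == goal:
                break
    # Greedy policy: pick the highest-valued tilt for each visited state.
    return {st: max(actions, key=lambda act: Q.get((st, act), 0.0))
            for (st, _) in Q}
```

A SARSA variant would differ only in using the action actually taken in the next state, rather than the maximum, when forming the update target.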

Figure 16 - Rule generation by the iCub
Figure 17 - Control policy generated based on Currera input

Figure 18 - Smoothed-out control policy

Thus, with the rules generated (Figure 18) and all the information stored in memory, the iCub relies on the live feed streaming from the Currera camera to ensure that the ball stays on path, and it makes decisions that guide the ball accurately from the start point to the finish point.
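The closed loop itself — grab a frame, locate the ball's cell, apply the stored policy action, level the board, repeat — is described only in prose. The outline below strings the earlier sketches together; `get_frame`, `tilt_board`, and `level_board` are hypothetical stand-ins for the Currera capture call and the iCub motor interface, and the settle time is an invented parameter.

```python
import time

def solve_maze_online(get_frame, policy, goal, tilt_board, level_board, settle_time=1.5):
    """Closed-loop maze solving driven by the live camera feed and the saved control policy."""
    while True:
        frame = get_frame()                     # hypothetical: one frame from the Currera feed
        _, H = topdown_view(frame)              # corner markers + inverse homography (earlier sketch)
        cell = locate_ball_cell(frame, H)       # ball centroid mapped into the grid (earlier sketch)
        if cell == goal:
            level_board()
            return True                         # the ball has reached the finish position
        action = policy.get(cell)
        if action is None:
            level_board()
            return False                        # no stored action: the ball is off-path or stuck
        tilt_board(action)                      # hypothetical: coarse wrist tilt along one axis
        time.sleep(settle_time)                 # wait for the ball to roll and come to rest
        level_board()                           # hypothetical: return the board to level
```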

Figure 19 - The iCub in action, solving the maze
