Design Concept of State-Chart Method Application through Robot Motion Equipped With Webcam Features as E-Learning Media for Children


Rossi Passarella, Astri Agustina, Sutarno, Kemahyanto Exaudi, and Junkani

Abstract — Digital image processing is a rapidly growing technology with many applications in science and engineering. One common application is the control of a mobile robot's navigation, whether to detect obstacles, to control the robot's motion, or to control a movement that has been drawn with the help of a computer. In this final project, a digital image system was implemented to capture the pixels of an object from the camera and compare them with point distances in a computer program written in Visual C# 2010. The parameter used in the detection process is the object's colour, while the object's position and height serve as camera inputs for the linear and angular velocities, which determine how large the captured pixel region is. Using OpenCV and webcam-based image processing, the program detects objects; the data obtained from this process are the midpoint and the extent of each detected object, and these data are used to track the object's trajectory. The success of the Open Source Computer Vision (OpenCV) pipeline is determined by the accuracy of the image processing and the given knowledge base. The test results show that the camera tracks the navigating robot and sees the objects in the arena.

Keywords — E-learning, Image processing, OpenCV, Robot motion, State-chart

I. INTRODUCTION

UTILIZATION of the internet is not only for distance-learning education but also for the development of the conventional education system.
E-learning is a learning model delivered in digital format through electronic devices; it aims to expand public access to education. The essence of e-learning is virtual learning via internet access and electronic media, namely the computer [1, 2]. Distance learning that uses computer technology enables learners to study in their own places without having to be physically present in a class or lecture.

Rossi Passarella is head of the Industrial Automation Lab and a lecturer in the Department of Computer Engineering, Faculty of Computer Science, University of Sriwijaya, Indonesia (e-mail: passarella.rossi@gmail.com). Astri Agustina and Junkani are students in the Department of Computer Engineering, Faculty of Computer Science, University of Sriwijaya, Indonesia. Sutarno is with the Industrial Automation Lab and is a lecturer in the Department of Computer Engineering, Faculty of Computer Science, University of Sriwijaya, Indonesia (e-mail: sutarno@unsri.ac.id). Kemahyanto Exaudi is a staff member of the Faculty of Computer Science, University of Sriwijaya, Indonesia.

In this system, e-learning is brought to children by setting up a desired line and assisting the lesson with robots. There are three main aspects in the development of a robot: the mechanism of motion, the electrical circuits (i.e. sensors and actuators), and the program that controls the working robot. In special cases, the robot's work is an activity with a specific algorithm that must be mapped into a program. One such case is robot motion equipped with webcam features as an e-learning medium. The robot acts as a device whose motion is based on the input colours captured by the webcam. Many studies have determined the motion of a robot with various methods [3-9]; one of them is the state-chart method.
This method shows the various momentary states traversed by the object, the events that cause a transition from one state to another, and the activity that results from a change of state [10]. In this study, the authors used image processing to determine the pixel size of the arena, combined with the state-chart method. The method uses a webcam for image capture, which makes determining the object point more efficient.

II. METHOD

The methods are divided into two categories:

A. Thinking

In this category, when the robot is outside the line, indicated by both the left and right variables reading 0, the robot must receive input from the camera so that it moves in the correct direction. The robot has to be aware of its own position, i.e. whether or not it is on the correct track. Once the robot knows its position, control can be done easily. An example of correct robot logic: when the robot is in the middle of the line, it moves straight forward; when the robot is on the left side of the line, it moves to the right side. But when the robot is on the left side of the line and the line suddenly disappears, the robot turns sharply to the right, and so on.
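The line-following logic in this category can be sketched as a small state machine. The state names, sensor encoding, and command strings below are illustrative assumptions rather than the authors' implementation:

```python
# Minimal sketch of the line-following state machine described above.
# Inputs: left/right line readings (1 = line seen, 0 = line not seen).

def next_action(left, right, last_state="middle"):
    """Return (motion command, new state) for the current line readings."""
    if left and right:            # line under both sensors: robot is centred
        return "forward", "middle"
    if right and not left:        # line only on the right: robot is left of the line
        return "move_right", "left_of_line"
    if left and not right:        # line only on the left: robot is right of the line
        return "move_left", "right_of_line"
    # both read 0: the line has disappeared; turn sharply back toward it,
    # e.g. a sharp right turn when the robot was on the left side of the line
    if last_state == "left_of_line":
        return "sharp_right", "line_lost"
    if last_state == "right_of_line":
        return "sharp_left", "line_lost"
    return "search", "line_lost"
```

Each call maps one pair of readings to a command, and the returned state is fed back in on the next cycle, which is exactly the state/transition structure that the state-chart in the next subsection draws graphically.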

B. Drawing (and Thinking)

The possible positions of the robot (the states) can be drawn as boxes with slightly rounded ends (or as circles). A transition from one state to another is drawn as an arrow, with the condition that causes the transition written above the arrow, while the robot's control activity, for example, is written inside the state (circle). Fig. 1 shows the complete representation of the control logic as the state-chart of the robot used in this study. Even for readers not previously familiar with state-charts, the diagram in Fig. 1 is intuitive and easy to understand, so it is easy to see what the picture intends.

Fig. 1 State-chart diagram

III. RESULTS AND DISCUSSION

A. Physical robot model

Designing the physical model of a mobile robot is part of engineering art [11-13], since building the prototype requires the creation of interactive, kinetic, and behaviour-based robot art. To help the engineer in the design process, a tree diagram of material selection is applied to determine the needs of the mobile robot (Fig. 2). The tree diagram breaks the system down into small parts, which helps in deciding on the material, sensors, actuators, energy source, program software, and method. Building the robot mechanics from its base components is a difficult job. One of the greatest difficulties is designing the propulsion system, which for indoor robots nearly always consists of electric motors. Designing an electric drive system requires choosing a motor suited to the size and weight of the robot; the motor can contain a built-in gearbox. The robot was made in two storeys: the lower level, 150 mm wide, holds the motor driver, batteries, and DC motors with gearboxes, while the upper level holds the microcontroller and radio transmitter. The arrangement of the mobile robot model is shown in Fig. 3.

Fig. 2 Tree diagram of material selection

Fig. 3 The arrangement of the mobile robot

Fig. 4 Block diagram of the workflow engine (Webcam → Capture area and image → Pixel value of area and image → PC/Laptop → Binary conversion → Count area pixels → RF transmitter → RF receiver → USB-to-serial → Microcontroller → L293D → Action (motor))

Description of Figure 4:
1. The webcam captures the existing conditions around the arena and sends them to the PC.
2. The pixel values of the arena image are determined.
3. The binary-converted values are calculated.
4. The arena pixels are determined.
5. The PC sends serial data via a USB-to-serial converter.
6. The data go to the serial RF transmitter.
7. The RF transmitter sends the data.
8. The robot's RF receiver captures them.
9. The data are sent to the microcontroller.
10. The robot moves.

B. Design System

At this stage, the authors designed how the camera will be used. Fig. 5 shows the block diagram inside the webcam (image sensor → RGB filter → ADC).

Fig. 5 Block diagram of the webcam

Fig. 6 shows the initial layout of the OpenCV program, which presents two frames. The first frame is the drawing frame, the place to draw the line to be formed, while the second frame shows the actual position where the robot starts to move by following the direction of the line drawn in the drawing frame.

Fig. 6 First layout of the OpenCV program

The drawing process from the starting point to the end can be seen in Fig. 7. The images clearly show that the drawing frame forms lines to produce the desired image, which is then established and followed by the movement of the robot in accordance with what has been drawn.

Fig. 7 (A) Step 1: the robot is detected in OpenCV. (B) Step 2: the user starts drawing in OpenCV. (C) Step 3: the user moves the cursor to form a line. (D) Step 4: in the left frame, the robot starts to move, following the line in the right frame. (E) Step 5: the robot has moved into position.

With the camera sensor, capturing the environment around the arena makes it easier to determine the pixel points. Fig. 8 shows the area covered by the camera sensor when reading the size of the determined area.
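Steps 5-10 of the Figure 4 workflow amount to the PC serialising a motion command over the USB-to-serial/RF link and the microcontroller decoding it into L293D motor-driver states. The paper does not specify the frame format, so the one-byte protocol below is an entirely hypothetical sketch of how such a command could be packed and unpacked:

```python
# Hypothetical one-byte command protocol for the PC -> RF -> microcontroller link.
# Bits 0-1 encode the left motor, bits 2-3 the right motor
# (00 = stop, 01 = forward, 10 = reverse), matching an L293D's two input pins per motor.
MOTOR = {"stop": 0b00, "forward": 0b01, "reverse": 0b10}

def encode_command(left, right):
    """PC side: pack left/right motor directions into one byte for the serial link."""
    return bytes([MOTOR[left] | (MOTOR[right] << 2)])

def decode_command(frame):
    """Microcontroller side: unpack the byte back into motor directions."""
    inv = {v: k for k, v in MOTOR.items()}
    b = frame[0]
    return inv[b & 0b11], inv[(b >> 2) & 0b11]
```

For example, `encode_command("forward", "forward")` yields `b'\x05'`, and decoding that frame recovers the original pair of directions; a real link would add framing and checksums on top of this.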
Furthermore, after the capture process, image processing is carried out. The process starts with thresholding, which converts the grey image into a binary image (black or white pixels); the pixels obtained from the camera capture are then measured. After the threshold process and pixel measurement, the PC sends the result over RF to the robot so that it moves according to the previous step; the robot receives this input from the PC as a command while the image of the track is processed. From the figure describing the robot's motion, the directional movement of the robot forming the drawn line is shown in OpenCV. This process is the reverse of the camera's image processing, where the middle point provides the validated data during the pixel measurement. Fig. 9 shows the general flowchart of image processing in the camera sensor.

Fig. 8 Robot area

The design stage of the research process was described earlier in Fig. 4; Fig. 9 gives the general block diagram of capturing the arena as the device begins to move, and a more detailed, specific block diagram is given in Fig. 10.

Fig. 10 Specific block diagram

C. Data Validation

To validate the previously designed system with stronger experimental evidence, a program was made based on the system design. The program was built with the Open Source Computer Vision (OpenCV) library [14]. The validation is carried out by showing that the algorithm is correct: the calculation is judged successful when the manual calculation matches the program's calculation and agrees with the reference.

D. Computer Vision

Computer vision tries to imitate human vision. Human vision is very complex: a human sees an object with the eyes, and the image is then transferred to the brain to be interpreted; the interpretation can be used to make a decision [15, 16].
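The threshold-and-measure step described in this section (grey image → binary image, then the midpoint and extent of the detected object) can be sketched in a few lines. This is an illustrative stand-in using plain Python lists, not the authors' Visual C#/OpenCV code:

```python
# Sketch of the thresholding and pixel-measurement step: binarise a greyscale
# image, then report the detected object's area (extent) and midpoint.

def threshold(gray, t=128):
    """Binarise a greyscale image: 1 for pixels >= t (object), else 0."""
    return [[1 if px >= t else 0 for px in row] for row in gray]

def measure(binary):
    """Return (area, (mid_x, mid_y)) of the white pixels, or (0, None) if empty."""
    points = [(x, y) for y, row in enumerate(binary)
                     for x, px in enumerate(row) if px]
    if not points:
        return 0, None
    area = len(points)
    mid_x = sum(x for x, _ in points) / area
    mid_y = sum(y for _, y in points) / area
    return area, (mid_x, mid_y)
```

For a 3x3 image with a bright 2x2 block in the top-left corner, `measure(threshold(...))` reports an area of 4 pixels centred at (0.5, 0.5); in OpenCV itself the equivalent operations are the threshold and image-moment routines.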
Computer vision is an automated process that integrates a large number of processes for visual perception, such as image acquisition, image processing, classification, recognition, and decision making. Computer vision consists of techniques to estimate the characteristics of objects in an image, measure the characteristics associated with the geometry of the objects, and interpret the geometric information. This can be summarised in the following equation:

Vision = G + M + I (1)

where G = geometry, M = measurement, and I = interpretation.

Fig. 9 Basic block diagram

A process in computer vision is divided into three activities:
1. Obtaining or acquiring a digital image.
2. Applying computational techniques to process or modify the image data (image processing operations).
3. Analysing and interpreting the image, and using the processed results for a specific purpose, such as guiding a robot, controlling equipment, or monitoring manufacturing.

Image processing is actually the preliminary stage of computer vision (preprocessing), whereas pattern recognition is the process that interprets the image. Pattern-recognition techniques play an important role in computer vision for recognising objects.

Fig. 11 Simulation image

Fig. 12 Flowchart of image pre-processing (Start → Capture → Change the image colour to a greyscale image → Change the greyscale image into a black-and-white image (thresholding) → Diminution (quantisation matrix) → Drawing track → End)

IV. CONCLUSION

In conclusion, the state-chart method can be used to understand how to build an avoider-robot motion system as a learning medium for children, while at the same time simplifying the computational system. The environment for this robot's implementation has been designed (Fig. 11). An ATmega16 microcontroller will be used as the robot's brain. To control the robot's motion, two left-right DC motors will be used, together with two free wheels at the front and rear. Furthermore, Wi-Fi technology is used to show the result monitored by the webcam on the PC. The targets of this study are:
1. To implement the robot system using the state-chart method.
2. To support the application of e-learning using robot navigation.
3. To obtain an efficient algorithm for the robot navigation system.
4. To implement image processing in determining the measurement of conversion points.

In general, the analysis of the whole process is shown in Fig. 12: after the webcam captures the image, the image colour is changed to greyscale and then converted into a black-and-white image (thresholding). Afterwards, the quantisation matrix is downsized, and the measurement of the robot's distance to the edge of the pixel line is carried out on a PC using the OpenCV program. Finally, the drawing-track process is completed.

ACKNOWLEDGMENT

This work was supported by the Department of Computer Engineering, Faculty of Computer Science, University of Sriwijaya.

REFERENCES

[1] Engelbrecht, E. A look at e-learning models: investigating their value for developing an e-learning strategy. Progressio, 2003, 25(2), 38-47.
[2] Garrison, R. and Anderson, T. E-Learning in the 21st Century: A Framework for Research and Practice. London: RoutledgeFalmer, 2003.
[3] Barraquand, J. and Latombe, J. C. Robot motion planning: a distributed representation approach. The International Journal of Robotics Research, 1991, 10(6), 628-649.
[4] Choset, H. M. (Ed.). Principles of Robot Motion: Theory, Algorithms, and Implementations. MIT Press, 2005.
[5] Belta, C., Bicchi, A., Egerstedt, M., Frazzoli, E., Klavins, E., and Pappas, G. J. Symbolic planning and control of robot motion [grand challenges of robotics]. IEEE Robotics & Automation Magazine, 2007, 14(1), 61-70.
[6] Lumelsky, V. J. Algorithmic and complexity issues of robot motion in an uncertain environment. Journal of Complexity, 1987, 3(2), 146-182.
[7] Hemes, B., Fehr, D., and Papanikolopoulos, N. Motion primitives for a tumbling robot. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2008), pp. 1471-1476. IEEE, 2008.
[8] Dalley, S. A., Varol, H. A., and Goldfarb, M. A method for the control of multigrasp myoelectric prosthetic hands. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2012, 20(1), 58-67.
[9] Merz, T., Rudol, P., and Wzorek, M. Control system framework for autonomous robots based on extended state machines. In International Conference on Autonomic and Autonomous Systems (ICAS '06), p. 14, 2006.
[10] Dressler, F. and Fuchs, G. Energy-aware operation and task allocation of autonomous robots. In Proceedings of the Fifth International Workshop on Robot Motion and Control (RoMoCo '05), pp. 163-168. IEEE, 2005.

[11] Kac, E. The origin and development of robotic art. The Journal of Research into New Media Technologies, 2001, 7(1), 76-86.
[12] Kac, E. Towards a chronology of robotic art. The Journal of Research into New Media Technologies, 2001, 7(1), 87-111.
[13] Smith, C. W. Material design for a robotic arts studio. Master of Science thesis, Massachusetts Institute of Technology, 2002.
[14] Bradski, G. and Kaehler, A. Learning OpenCV: Computer Vision with the OpenCV Library. O'Reilly, 2008.
[15] Schalkoff, R. J. Digital Image Processing and Computer Vision. Vol. 286. New York: Wiley, 1989.
[16] Umbaugh, S. E. Computer Vision and Image Processing: A Practical Approach Using CVIPtools. Prentice Hall PTR, 1997.