HCI for Real World Applications


IOSR Journal of Computer Engineering (IOSR-JCE), e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 11, Issue 1 (May-Jun. 2013), PP 70-74

HCI for Real World Applications

Sreeji C, Vineetha G R, Amina Beevi A, Nasseena N, Neethu S S
Department of Computer Science and Engineering, Kerala University, Sree Buddha College of Engineering, Pattoor, Alappuzha

Abstract: Human-computer interaction (HCI) is necessary for many real-world applications, and swarm robotics is an emerging technology in this space. This paper presents navigation of a robot swarm through hand gestures. Human-robot interaction is needed to control the swarm, and the robots communicate with one another by sending messages over Bluetooth. Our goal is to make the swarm work effectively based on the gesture given. Background noise and dynamic environments are the key issues in the gesture recognition process: background objects degrade system performance and accuracy. Hand tracking along with feature extraction is used in this paper to deal with these issues. The robot used is the foot-bot robot developed in the Swarmanoid project, and a webcam serves as the visual interface. Collaboration of robots with humans is of great importance in the field of service robotics.

Keywords: Swarm robots, hand detection, Gabor filter, gesture identification, navigation of swarm robots, Bluetooth, human-computer interaction.

I. INTRODUCTION
Nowadays, robots are used in various environments for different applications, from industrial fields and manufacturing to autonomous exploration of remote planets. Among these, mobile robots have received great attention from the scientific, industrial and military communities. Robotics deals with the design, construction, manufacture and application of robots: automated machines that can work like humans and can operate in dangerous places where humans cannot.
The main characteristics of robots are sensing, movement, energy and intelligence. Swarm robotics is an approach to coordinating large numbers of relatively small robots; the field focuses on controlling large-scale homogeneous multi-robot systems. Swarm robots have characteristics such as robustness, flexibility, scalability, effectiveness, fault tolerance and adaptability, and the individual robots used are simple and cheap. This paper presents how to coordinate a number of robots to perform a particular task. A powerful processor is required to control a swarm, since the controller has additional tasks compared to an individual mobile robot controller: in addition to obstacle avoidance and navigation, swarm robots communicate by sending messages [4]. A swarm can perform tasks that one expensive robot cannot, and even if some robots in the swarm fail, the swarm can still achieve the task. For many swarm applications, real robots are not used because of their high economic cost and the large area needed for execution. In this paper, the swarm robots move according to commands given through hand gestures, which are a powerful way of communicating and require no additional devices. A hand gesture is a movement we make with the hand to give a command instead of speaking. Recognizing gestures is a complex task that involves motion modelling, motion analysis, pattern recognition, etc. Vision-based interfaces are one of the key research areas in which human-robot interaction has gained interest [2]. In this paper we use a tracking algorithm to detect the hand in the image and a Gabor filter for feature extraction and identification. The presence of noise is a serious issue: unwanted objects and backgrounds affect the system, so we use a tracking algorithm to detect the hand and extract information only from that region. We select skin colour as the parameter for tracking a hand because of its computational simplicity.
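The skin-colour tracking idea introduced here (and detailed in Section III) can be sketched in plain Python. This is an illustrative sketch only: the rule-based RGB thresholds below follow a common rule of thumb, whereas the paper learns a classifier from skin patches, and all function names are our own.

```python
def is_skin(r, g, b):
    """Label one RGB pixel as skin (True) or non-skin (False).

    Illustrative fixed thresholds; the paper instead trains a skin
    classifier from a database of skin patches.
    """
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)


def skin_mask(image):
    """Binary mask separating skin and non-skin regions of an RGB image."""
    return [[1 if is_skin(*px) else 0 for px in row] for row in image]


def weighted_centroid(mask, intensity):
    """Intensity-weighted centroid of the skin region, used for tracking."""
    total = sx = sy = 0.0
    for y, row in enumerate(mask):
        for x, m in enumerate(row):
            if m:
                w = intensity[y][x]
                total += w
                sx += w * x
                sy += w * y
    if total == 0:
        return None  # no skin pixels found in this frame
    return (sx / total, sy / total)
```

For example, a 2x2 frame whose left column is skin-toned yields the mask `[[1, 0], [1, 0]]`, and the centroid of that column (weighted by a grayscale intensity image) gives the point around which the hand region would be cropped.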
Hand gestures are very rich in shape variation, motion and texture, so static features such as fingertips, finger direction and hand contours are selected for recognition; however, poor lighting conditions will not yield correct features. The rest of this paper is organized as follows. Section II gives a brief overview of location sensing systems and mobile robot navigation. Section III presents hand detection. Section IV presents feature extraction and gesture recognition. Section V presents goal-directed navigation of swarm robots. Section VI presents the hardware implementation, and Section VII concludes.

II. RELATED WORK
A. Location Sensing Systems [7]
Location sensing systems for mobile robots can be classified into absolute and relative sensors. Absolute location sensors include GPS (global positioning system), which uses at least three satellites, and external cameras. Robots with GPS receivers [11] can obtain their own 3-D location plus time information, and the receiver is small in size. GPS, however, is difficult to use under signal blockage, so it will not work indoors.

The relative sensors include cameras, RSSI (received signal strength indicator) measurement systems and proximity sensors. A camera placed on the robot takes snapshots of the surroundings; through a series of computations, information is extracted from each snapshot, and from this continuous process the robot can recognize the presence of objects. RSSI measurement systems are employed in wireless environments based on the IEEE 802.11 protocol family or RFID (radio frequency identification). RSSI measurement systems are not affected by obstacles, but they need additional hardware such as an antenna and a reader, and because of their low accuracy it is difficult to deploy large-scale robot swarms using them. Proximity sensors are classified into LRF (laser range finder), ultrasonic sensors and IR (infrared) sensors. Compared with LRF, sonar and IR are smaller and lower in cost; these merits connect directly with swarm organization of large numbers of mobile robots. IR-based parallax distance measurement can be cheaper and smaller than sonar, but it is affected by the colour of objects. Multi-sensor fusion techniques, such as camera plus laser photo-detector and RSSI based on sonar, have been introduced to combine their mutual strengths.

B. Mobile Robot Navigation [13]
Navigation of mobile robots in an indoor environment requires several tasks. One approach assumes the environment to be static and all objects rigid, but an indoor environment has intrinsic uncertainty and complexity that cannot be overcome this way. An alternative, reactive approach is therefore used; an example is the potential field method (PFM) for local planning such as obstacle avoidance. Combining global path planning with local path planning, the robot calculates the shortest path between two points and, as it navigates, adjusts the path plan according to the environment. Obstacle avoidance results naturally from this incremental path planning. III.
HAND DETECTION
This section deals with detection of the hand, taking skin colour as the parameter.

A. Detecting Skin and Non-Skin
Colour is used as the parameter for detection: the hand is detected in a dynamic environment by identifying colour, so the choice of colour space is crucial. Geometric variations and orientations are issues that must be handled while tracking. The aim is to differentiate between skin and non-skin pixels. Skin detection means detecting image pixels and regions that contain skin-tone colour; the background should be controlled or kept free of skin-coloured objects, and the appearance of skin in an image depends on the illumination conditions. Skin detection takes place in two phases: a training phase and a detection phase. Training a skin detector involves three basic steps: collect a database of skin patches from different images, choose an appropriate colour space, and learn the parameters of a skin classifier. The detection phase involves two steps: converting the image into the chosen colour space, and classifying each pixel as skin or non-skin using the skin classifier. In the context of skin classification, true positives are skin pixels that the classifier labels as skin, and true negatives are non-skin pixels that it labels as non-skin. The most commonly used colour space is RGB because of its simplicity. A variety of classification techniques can be used; any pixel that falls inside the skin colour class boundary is labelled as skin.

B. Detection Based on Skin
After detecting the skin, a threshold is defined to separate skin from non-skin, and a binary image is generated that marks the skin and non-skin regions. The intensity-weighted centroid method is used to keep track of the skin regions. The image frame is then cropped so that the resulting image contains only the hand region.

IV. FEATURE EXTRACTION AND GESTURE RECOGNITION A.
Gabor Filter
Each image is passed through a bank of Gabor filters in 8 equally spaced directions θ and at 16 frequencies (wavelengths λ) to obtain its dense filter responses:

G(x, y) = exp(-(X² + γ²Y²) / (2σ²)) · cos(2πX / λ)

where X = x cos θ + y sin θ and Y = -x sin θ + y cos θ are the rotations of the Gabor filter by the angle θ, which varies between 0 and π. Each image is tessellated by squares at 8 different scales, where each scale corresponds to two adjacent Gabor filter banks of a compatible frequency. For each scale, a maximum operation is performed pixel-wise on the two Gabor filter responses whose frequencies are nearest to the scale. This scheme allows some robustness to slight misalignment during the pre-processing stage. Finally, at each scale, the

variance is computed from the resulting responses within each square. These variances are concatenated to form a feature vector, and the features are stored in the database.

B. Gesture Recognition
Gesture recognition enables humans to interface with a machine and interact naturally without any mechanical devices. First the hand is tracked, and then features are extracted from the image using the Gabor filter. The Gabor filter operation is performed on two images: the model image (an image from the database) and the test image. The database image contains a gesture taken under predefined conditions; the test image is captured in the usual environment. Many matching techniques, such as the Manhattan distance and the Euclidean distance, can be used to find the match [12]. For a feature F_i^1 in the model image, the corresponding feature F_j^2 must be checked in the test image; if a pair of features is matched, the Euclidean distance d(F_i^1, F_j^2) is calculated, and a threshold determines whether the match is positive or negative.

Fig. 1: Feature extraction

V. GOAL DIRECTED NAVIGATION OF SWARM ROBOTS
The task of the robots is to detect and understand the command and collectively reach a distributed consensus about it in order to actuate its execution. The problem is particularly challenging since the robots in the swarm can be spread over different positions in the environment and may be engaged in tasks of their own when the command is issued. We use hand gestures as the means for human-swarm communication: a hand gesture encodes a command that the swarm will execute, and being a powerful way to communicate it needs no additional devices. We investigated how to exploit robot mobility, the swarm's spatial distribution and Bluetooth communication to let the robots in the swarm (i) implement distributed and cooperative sensing of hand gestures, and (ii) robustly reach a consensus about a gesture. Motorized track-based wheels allow a robot to move at a specified speed.
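A minimal sketch of the feature pipeline above in plain Python: the real-valued Gabor response G(x, y), a variance-based feature vector, and Euclidean-distance matching against a threshold. The parameter values (σ, γ, wavelengths) and function names are illustrative assumptions; for brevity the variance here is taken over the orientations of one filter bank, rather than spatially within the tessellated squares as the paper describes.

```python
import math

def gabor(x, y, theta, lam, sigma=2.0, gamma=0.5):
    """Real Gabor response G(x, y) for angle theta and wavelength lam."""
    X = x * math.cos(theta) + y * math.sin(theta)
    Y = -x * math.sin(theta) + y * math.cos(theta)
    return (math.exp(-(X * X + gamma * gamma * Y * Y) / (2 * sigma * sigma))
            * math.cos(2 * math.pi * X / lam))


def filter_response(patch, theta, lam):
    """Response of one Gabor filter centred on a square grayscale patch."""
    n = len(patch)
    c = n // 2
    return sum(patch[y][x] * gabor(x - c, y - c, theta, lam)
               for y in range(n) for x in range(n))


def feature_vector(patch, thetas, lams):
    """Variance of the bank's responses, concatenated per wavelength."""
    feats = []
    for lam in lams:
        rs = [filter_response(patch, t, lam) for t in thetas]
        mean = sum(rs) / len(rs)
        feats.append(sum((r - mean) ** 2 for r in rs) / len(rs))
    return feats


def matches(f1, f2, threshold):
    """Positive match if the Euclidean distance between features is small."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))
    return d <= threshold
```

A test gesture would be accepted when `matches(model_features, test_features, threshold)` holds for a threshold tuned on the gesture database.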
The system gets its input from the webcam and performs gesture recognition; messages propagate through the swarm over the Bluetooth communication system.

VI. HARDWARE IMPLEMENTATION
The real-time implementation of hand gesture recognition uses the foot-bot robot developed in the Swarmanoid project, a platform built mainly for swarm applications; we use a subset of the sensors and actuators available on it. The frontal camera is used for recognizing gestures. Motorized track-based wheels allow the robot to move at a speed of 5 cm per second. The infrared-based range-and-bearing sensor and actuator allow a robot to detect its line-of-sight neighbours up to a range of a few metres and to recover their distance and bearing. The system gets its input from the webcam. After acquiring a gesture, we use Gabor-filter feature extraction for individual gesture recognition and generate an opinion vector assigning a probability to each known gesture class. The resulting opinions are spread to all robots in the swarm by sending messages. Each robot records its own opinions and the opinions received from other robots, and generates a decision vector D as the component-wise sum of all classification vectors (opinions) it has locally generated and/or received from other robots. The component of D with the highest value, i, indicates the gesture class in favour of which the most evidence is currently available to the robot. The robot also calculates a measure of confidence that the true class is i as λ = D_i - D_j, where j is the index of the second-highest component of D. When a robot has gathered enough evidence and λ exceeds a predefined threshold, it sends its decision to the swarm, where it propagates over Bluetooth. Robots receiving a decision immediately adopt it. If different

decisions are generated in the swarm, the one assessed with the highest confidence overrides the propagation of the others.

Fig. 2: Hardware implementation of the hand gesture recognition system.

The robots start acquiring hand images at a rate of roughly one per second. Immediately after each acquisition, the image is processed as described above, and the resulting decisions are spread to the robots in the swarm.

Fig. 4: Different hand gestures used to create the database.
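The decision-vector fusion and confidence measure described above can be sketched as follows. Function names and the threshold value are illustrative assumptions, and the Bluetooth exchange itself is abstracted away.

```python
def decision(opinions):
    """Decision vector D: component-wise sum of all opinion vectors held."""
    d = [0.0] * len(opinions[0])
    for op in opinions:
        for k, v in enumerate(op):
            d[k] += v
    return d


def confidence(d):
    """Best class i and confidence lambda = D_i - D_j over the runner-up j."""
    ranked = sorted(range(len(d)), key=lambda k: d[k], reverse=True)
    i, j = ranked[0], ranked[1]
    return i, d[i] - d[j]


def ready_to_broadcast(opinions, threshold):
    """Send (class, lambda) to the swarm once lambda exceeds the threshold."""
    i, lam = confidence(decision(opinions))
    return (i, lam) if lam > threshold else None
```

For example, three opinion vectors that each favour class 0 sum to a decision vector whose margin over the runner-up exceeds a threshold of 0.8, so the robot would broadcast class 0; a single ambiguous opinion yields a small margin and no broadcast.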

VII. CONCLUSION
In this paper we provide mobile robots with gesture recognition for swarm applications. Communication between robots is achieved by sending messages. The resulting system is low-cost, scalable, robust, adaptable, effective and flexible. We use a robust scheme for hand gesture recognition based on hand detection and feature extraction with a Gabor filter; this method removes noise from the image, and the swarm robots can be controlled through these gestures. The system's performance and accuracy increased with the use of hand detection and feature extraction.

References
[1] C. C. Wang and K. C. Wang, "Hand Posture Recognition Using Adaboost with SIFT for Human Robot Interaction," in Robotics: Viable Robotic Service to Human, Springer, 2009.
[2] M. Kolsch and M. Turk, "Robust hand detection," in IEEE International Conference on Automatic Face and Gesture Recognition, 2004.
[3] Cristina Manresa, Javier Varona, Ramon Mas and Francisco J. Perales, "Hand Tracking and Gesture Recognition for Human-Computer Interaction," Electronic Letters on Computer Vision and Image Analysis, 5(3):96-104, 2005.
[4] Alessandro Giusti, Jawad Nagi, Luca M. Gambardella and Gianni A. Di Caro, "Distributed Consensus for Interaction between Humans and Mobile Robot Swarms (Demonstration)."
[5] Ihab Zaqout, Roziati Zainuddin and Sapian Baba, "Pixel-Based Skin Color Detection Technique," in Machine Graphics and Vision, 2005.
[6] Qiu-yu Zhang, Mo-yi Zhang and Jian-qiang Hu, "Hand Gesture Contour Tracking Based on Skin Color Probability and State Estimation Model," Journal of Multimedia, vol. 4, no. 6, December 2009.
[7] Geunho Lee and Nak Young Chong, "Low-Cost Dual Rotating Infrared Sensor for Mobile Robot Swarm Applications."
[8] G. Lee and N. Y. Chong, "Decentralized formation control for small-scale robot teams with anonymity," Mechatronics, vol. 19, no. 1, pp. 85-105, 2009.
[9] G. Lee and N. Y. Chong, "A geometric approach to deploying robot swarms," Ann. Math. Artif. Intell., vol. 52, no. 2-4, pp. 257-280, 2009.
[10] E. Sahin, "Swarm robotics: From sources of inspiration to domains of application," in Proc. 8th Int. Conf. Simulation of Adaptive Behavior (LNCS), 2005, vol. 3342, pp. 10-20.
[11] H. Niwa, K. Kodaka, Y. Sakamoto, M. Otake, S. Kawaguchi, K. Fujii, Y. Kanemori and S. Sugano, "GPS-based indoor positioning system with multi-channel pseudolite," in Proc. IEEE Int. Conf. Robot. Autom., 2008, pp. 905-910.
[12] Faraj Alhwarin, Chao Wang, Danijela Ristić-Durrant and Axel Graser, "Improved SIFT-Features Matching for Object Recognition," BCS International Academic Conference: Visions of Computer Science, 2008.
[13] P. Benavidez, "Mobile robot navigation and target tracking system," in 2011 6th International Conference on System of Systems Engineering.
[14] http://en.wikipedia.org/wiki/scale-invariant_feature_transform