Designing a Lightweight Gesture Recognizer Based on the Kinect Version 2


Leonidas Deligiannidis
Wentworth Institute of Technology, Dept. of Computer Science and Networking, 550 Huntington Av., Boston, MA 02115, USA
deligiannidisl@wit.edu

Hamid R. Arabnia
University of Georgia, Dept. of Computer Science, 415 GSRC, Athens, GA 30602, USA
hra@cs.uga.edu

Abstract - We present a lightweight gesture recognizer that utilizes Microsoft's Kinect version 2 sensor. Our recognizer seems to be robust enough for many applications and does not require any training. Because this version of the Kinect sensor is equipped with a higher-resolution depth camera than its predecessor, it can track some of the user's fingers, and the Kinect SDK can provide state information for the user's hands. Armed with this capability, we were able to design our gesture recognizer. New gestures can currently only be specified programmatically, but we are also working on a graphical user interface (GUI) that would allow a user to define new gestures. We show how we built the recognizer and demonstrate its usage via two applications we designed. The first application is a simple picture manipulation application. For the second application, we designed a 3-DOF robotic arm that can be controlled using gestures.

Keywords: Kinect 2, Gesture Recognition.

1. Introduction

In late 2010, Microsoft Corporation introduced the first version of a gaming device, the Kinect, which could be used along with its Xbox gaming console. Recently, Microsoft released the second version of the Kinect [1], for their new Xbox One console, which is faster and provides higher-resolution video and depth feeds. The Kinect is a motion-sensing input device and connects to a PC via a Universal Serial Bus (USB) adapter. The sensor consists of a video camera and a depth camera. The depth camera provides depth information for each pixel using an infrared (IR) projector and an IR camera. The sensor also has a multi-array microphone that can detect the direction from which spoken commands are issued.

The primary purpose of the Kinect sensor is to enable game players to interact with and play games without holding a physical game controller. This innovation changed the way we play and interact with games. Players can now use natural commands such as tilting to the left or right, raising their hands, jumping, etc. to issue commands. The Kinect enables this by continuously tracking the players' body movements and gestures, as well as listening for verbal commands. These capabilities, along with its affordable price, made the Kinect sensor an attractive device to researchers. Using the freely available SDK [1] for the Kinect, we can design programs that incorporate the functionality of the sensor in our research. The fact that the device does not need to be trained or calibrated makes it easy to use in many environments other than the ones it was originally designed for. For example, in [2] the functionality of the sensor has been extended to detect and recognize objects and obstacles so that visually impaired people can avoid them. Because of its contactless nature of interaction [3], the Kinect found its way into operating rooms where non-sterilizable devices cannot be used [4]. Its depth camera can also be used to scan and construct 3D maps and objects [5].
An effective technique to control applications such as Google Earth and Bing Maps 3D using a small, yet easy to remember, set of hand gestures is illustrated in [6]. The robotics community [7] adopted the Kinect sensor so that users can interact with robots in a more natural way. Some applications require finger tracking [8], which was not supported by the original sensor [9][10][11] but is now supported, in a limited way, by the second generation of the sensor. There are several methods for detecting and tracking fingers, but this is a hard problem mainly because of the resolution of the depth sensor; this is also true for the second generation of the camera. Some methods work well, such as [12], as long as the orientation of the hands does not vary. Other techniques require specialized instruments and arrangements such as an infrared camera [13], a stereo camera [14], a fixed background [15], or trackable markers on the hands and fingers [16]. Other systems need a training phase [17] to recognize gestures such as clapping, waving, shaking the head, etc.

2. Gestures

Interacting with an application that utilizes the Kinect sensor requires the sensor to actively track the motion of the user. Even though playing a game may require only large movements of the user's body, limbs, or hands, interpreted as commands such as jumping, leaning, waving, ducking, etc., other applications require more precise input. Specifically, an application should be able to classify both postures and gestures as events, and should be able to differentiate between the two. A posture is a static positioning of the user and her arms, legs, etc., whereas a gesture is dynamic by nature and normally has a beginning and an end. A user should be able to indicate the beginning, and possibly the ending, of a gesture. A gesture such as waving is a dynamic motion of one's hand(s) but does not have to be precise. Other gestures need to be more precise, for example when the user's arm is tracked to control the movement of a remote robotic arm; there, the initiation and termination of the gesture are of even greater importance. A system that actively tracks the movements of a user and misinterprets the user's intent as actual commands will soon be abandoned, as it confuses her. Signaling the termination of a gesture is very valuable too, as it can be used to cancel or stop the current tracking state. As reported in [18], a gesture recognizer must address "temporal segmentation ambiguity", which concerns detecting the beginning and ending of a gesture, and "spatial-temporal variability", which concerns tolerating variation in how gestures are initiated and terminated, since each person performs the same gestures differently.

The Kinect version 1 depth sensor had a resolution low enough that finger detection was difficult, and impossible from a distance. The Kinect 2 has a higher-resolution depth camera, and finger detection is provided by the SDK. Finger detection is still limited, but at least the SDK reports postures of the hand based on the finger arrangement. For example, the Kinect 2 can distinguish, in any orientation, whether the hand is Open, Closed, or Lasso. Lasso is defined by closing the hand and extending the index finger (like pointing at an object). However, because of the still-low resolution of the depth sensor, it is recommended that the user extend both the index and the middle fingers, touching each other, to indicate the Lasso posture. If the user is close to the camera, extending the index finger alone is enough. With two hands, where each hand can perform 3 different postures, we have 9 different combinations we can use to indicate the beginning and ending of a gesture. Additional postures can be defined by, for example, hiding one hand behind the back while performing a posture with the other hand; the Kinect SDK provides tracking-state information for each joint, in addition to its location and orientation in space. Depending on the posture and the gesture, the user must be aware of the position of the Kinect sensor. For example, if the user performs the Lasso posture and points at the Kinect, it is possible that the Kinect will report a false posture, as the Lasso gesture seen from the front looks very similar to the Closed hand posture.
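To make this bracketing of gestures concrete, the following minimal Java sketch encodes a gesture's start and end as two of the nine possible two-hand posture combinations. The types and names are our own illustration and are not part of the Kinect SDK.

// Minimal sketch (our own types, not the Kinect SDK): a gesture is bracketed
// by two of the nine two-hand posture combinations, one marking where it
// begins and one marking where it ends (or is cancelled).
public final class GestureMarkers {

    /** The three hand states the Kinect 2 SDK reports for each hand. */
    public enum HandState { OPEN, CLOSED, LASSO }

    public final HandState startLeft, startRight;  // combination that starts the gesture
    public final HandState endLeft, endRight;      // combination that ends (or cancels) it

    public GestureMarkers(HandState startLeft, HandState startRight,
                          HandState endLeft, HandState endRight) {
        this.startLeft = startLeft;
        this.startRight = startRight;
        this.endLeft = endLeft;
        this.endRight = endRight;
    }

    /** True when the observed pair of hand states starts this gesture. */
    public boolean starts(HandState left, HandState right) {
        return left == startLeft && right == startRight;
    }

    /** True when the observed pair of hand states ends this gesture. */
    public boolean ends(HandState left, HandState right) {
        return left == endLeft && right == endRight;
    }
}

For example, a marker pair of (CLOSED, CLOSED) for the start and (OPEN, OPEN) for the end would bracket a grab-and-release interaction such as the picture resizing described in Section 4.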
3. Gesture Recognizer

The Kinect SDK provides an API through which the skeleton information of up to 6 people can be reported. It tracks a human body, even partially when some joints are hidden, and reports the position and orientation of each of the 25 joints. It also reports whether each joint's data is accurate: whether the joint is being tracked, or is not visible in the current frame and its value is inferred.

Figure 1. The four joints needed by the gesture recognizer to define the gestures involving the right hand.

In Kinect 2, which has a higher-resolution depth sensor, the hand states are also reported; the main states are Open, Closed, and Lasso (open palm, fist, and fist with the index finger extended, respectively). Based on the upper-body joint positions and the state of the hands, we designed a gesture recognizer engine. The engine takes the joint information from the Kinect as input and determines which gesture or posture is being performed. The main advantages of this recognizer are that it is lightweight, rotation invariant, requires no training, and can be configured for many different gestures. The configuration is currently done programmatically, but we are developing a graphical user interface tool so that no programming will be needed to define new gestures.
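The shape of the per-frame input our engine consumes can be sketched as follows. The interface and type names are our own illustration of the design described above, not the Kinect SDK's API.

// Minimal sketch (hypothetical names): the per-frame input the recognizer
// engine consumes -- upper-body joint positions plus the state of each hand.
import java.util.Map;

public interface GestureRecognizer {

    /** The three hand states reported by the Kinect 2 SDK. */
    enum HandState { OPEN, CLOSED, LASSO }

    /** One skeleton frame: joint positions (joint name -> {x, y, z}) and hand states. */
    final class Frame {
        public final Map<String, float[]> joints;
        public final HandState leftHand;
        public final HandState rightHand;

        public Frame(Map<String, float[]> joints, HandState leftHand, HandState rightHand) {
            this.joints = joints;
            this.leftHand = leftHand;
            this.rightHand = rightHand;
        }
    }

    /** Feed one frame; returns the name of a recognized gesture, or null if none. */
    String update(Frame frame);
}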

Based on only a few joints, we divide the user space into five areas. Figure 1 shows the four joints needed to recognize gestures for the right hand: the left shoulder, the right shoulder, the right elbow, and the right hand. The left elbow and the left hand are needed for the gestures involving the left hand, but for simplicity we show only the joints involved in recognizing right-hand gestures.

Figure 2. Calculating the 3 vectors needed by the recognizer: a vector defining the shoulder line, a vector perpendicular to the shoulder-line vector, and a vector defining the elbow-hand direction. The H-SS vector is only used in the robotic arm application discussed later.

If we treat the positions of these joints as vectors, we can define a vector RS-LS, as shown in figure 2. We then define a vector perpendicular to RS-LS, shown as perp(RS-LS). We can also calculate the vector RH-RE, defined by the right elbow and the right hand. The Head and Spine_Shoulder joints are only used to control the roll in the robotic arm application discussed later. Using the dot product, we can calculate in which area the right hand is located, as shown in figure 3. The hand can be in 3 different areas: a) above the shoulder line and to the left, b) above the shoulder line and to the right, and c) below the shoulder line and to the right.

Figure 3. Using the dot product of vectors, we can calculate in which area the right hand is.

Two vector subtractions are needed to calculate the shoulder-line and elbow-hand vectors. From the shoulder-line vector we can easily construct a perpendicular vector as well. Then, two dot product operations determine in which area the hand is. Figure 4 shows the five areas we define for both hands. Even though a user could move his right hand into the area defined for the left hand, we do not consider such motions valid, as they obstruct the user's view and are anatomically awkward to perform. Using this technique, one can define other areas for tracking the hands, such as areas based on the waist line or the spine line, depending on the application's needs.

Figure 4. The five areas defined by the shoulder line and the two hands.
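The area test just described can be sketched as follows: two vector subtractions, a perpendicular construction, and two dot products. The class and method names are ours, and the sign conventions depend on the camera's coordinate system, so they would need to be verified against the real sensor.

// Minimal sketch (our own names): classifying which area the right hand is
// in, using the two subtractions and two dot products described in the text.
public final class AreaClassifier {

    public enum Area { ABOVE_LEFT, ABOVE_RIGHT, BELOW_RIGHT }

    private static float[] sub(float[] a, float[] b) {
        return new float[] { a[0] - b[0], a[1] - b[1] };
    }

    private static float dot(float[] a, float[] b) {
        return a[0] * b[0] + a[1] * b[1];
    }

    /** Joints are {x, y} pairs projected onto the camera's image plane. */
    public static Area classify(float[] leftShoulder, float[] rightShoulder,
                                float[] rightElbow, float[] rightHand) {
        float[] shoulderLine = sub(rightShoulder, leftShoulder);       // RS-LS
        float[] perpShoulder = { -shoulderLine[1], shoulderLine[0] };  // perp(RS-LS)
        float[] elbowHand    = sub(rightHand, rightElbow);             // RH-RE

        boolean above = dot(elbowHand, perpShoulder) > 0;  // forearm points above the shoulder line?
        boolean right = dot(elbowHand, shoulderLine) > 0;  // forearm points toward the user's right?

        if (above) {
            return right ? Area.ABOVE_RIGHT : Area.ABOVE_LEFT;
        }
        return Area.BELOW_RIGHT;  // motions below and to the left are not treated as valid
    }
}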

4. Picture Control Application

The first application we designed based on our gesture recognizer was a picture manipulation application. The application is written in Java. Using the Java Native Interface (JNI), we call our compiled C++ functions, which communicate with the Kinect and deliver the joint information and the hand states to our Java gesture recognizer.

Figure 5. The first gesture is used to increase the size of the picture, the second gesture is used to decrease the size of the picture, and the last gesture is used to rotate the selected picture.

As shown in figure 5, the user moves his hands near his head and closes them to grab a picture, then pulls his hands apart to increase the size of the picture. Opening his hands stops the current operation. Similarly, if the user grabs the picture with his hands apart (by closing his hands) and moves them toward each other, the size of the picture decreases; this operation is similar to what most users are familiar with on mobile devices, except that it is performed with two hands instead of two fingers. The last gesture is used to rotate an image. To activate and control the rotation of the picture, the user's left hand moves close to the body in the open posture, and the right hand performs the lasso posture. The orientation of the picture is then controlled by continuous rotation of the right hand.
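As a concrete illustration of the resize gesture, the following minimal Java sketch scales the selected picture according to the change in distance between the two hands while both remain closed, and stops the operation when either hand opens. The class and field names are our own, not the paper's implementation.

// Minimal sketch (hypothetical names): picture resizing driven by the
// distance between the two closed hands, as described in the text.
public final class PictureZoomController {
    private boolean grabbing = false;
    private float startDistance = 1.0f;  // hand distance when the grab began
    private float startScale = 1.0f;     // picture scale when the grab began
    private float scale = 1.0f;          // current picture scale factor

    /** Called once per frame; the booleans come from the Kinect hand states. */
    public void update(boolean leftClosed, boolean rightClosed,
                       float[] leftHand, float[] rightHand) {
        float dx = rightHand[0] - leftHand[0];
        float dy = rightHand[1] - leftHand[1];
        float distance = (float) Math.sqrt(dx * dx + dy * dy);

        boolean bothClosed = leftClosed && rightClosed;
        if (bothClosed && !grabbing) {        // both hands just closed: grab the picture
            grabbing = true;
            startDistance = Math.max(distance, 1e-3f);
            startScale = scale;
        } else if (bothClosed) {              // still grabbing: scale follows hand distance
            scale = startScale * (distance / startDistance);
        } else {                              // a hand opened: stop the current operation
            grabbing = false;
        }
    }

    public float getScale() { return scale; }
}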
5. Robotic Arm Control

We designed a second application that uses our gesture recognizer to control a robotic arm. Figure 6 shows a top view of the arm, which consists of three heavy-duty servo motors, a USB servo controller from Phidgets.com, and a 5V / 5A power supply that powers the servo motors. The servo motors are physically connected to each other to give the arm three degrees of freedom, as shown in figure 7. Figure 7 also shows how the robotic arm is connected to the PC and the Kinect sensor. The sensor is connected to the PC via a proprietary Kinect-to-USB adapter. Via another USB port, the PC is connected to the servo controller of the robotic arm. The application receives joint and hand-state information from the Kinect; the gesture recognizer component interprets these as commands and instructs the servo controller to rotate the appropriate servo motors by a specified amount.

Figure 6. The robotic arm (top view), showing the three servo motors attached to each other to provide 3 degrees of freedom. Next to the servo assembly is the Phidgets servo controller, which receives its commands via its USB port. At the other end is the 5V / 5A power supply that powers the three servo motors.

Figure 7. The robotic arm (side view), and how it is connected to the controlling PC and the Kinect camera. The Kinect camera is connected to the PC via a proprietary adapter. There is also a USB connection between the PC and the robotic arm's servo controller.

Figure 8 shows the gestures implemented to control the robotic arm. The top two gestures in figure 8 are used to disengage and engage the servo motors, respectively. The second set of gestures is used to control the bottom servos to rotate the arm right and left, respectively. The third set of gestures is used to control the top servo motor and move the arm up and down. The last gesture instructs the arm to follow the user's right hand: as the user moves his arm left-right and up-down, the robotic arm mimics these movements by controlling the three servo motors simultaneously and in real time. By leaning the head left and right, the user changes the roll of the arm by ±10 degrees. As shown in figure 2, we construct the H-SS vector. By taking the dot product of the H-SS and the RS-LS vectors, we can determine the direction and the amount of the roll (which is implemented by rotating the middle servo motor).
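The roll control can be sketched as follows: with the head upright, the H-SS vector is roughly perpendicular to the shoulder line, so their dot product is near zero; leaning the head left or right makes the product signed, giving the direction and amount of roll, which we clamp to ±10 degrees. The helper names and the gain factor below are our own assumptions, not values from the paper.

// Minimal sketch (helper names and gain are assumptions): deriving the arm
// roll from the head lean via the dot product of H-SS and RS-LS.
public final class RollController {

    private static final float GAIN_DEGREES = 30.0f;  // illustrative scaling only
    private static final float MAX_ROLL = 10.0f;      // clamp to +/-10 degrees

    private static float[] normalize(float[] v) {     // assumes distinct joint positions
        float len = (float) Math.sqrt(v[0] * v[0] + v[1] * v[1]);
        return new float[] { v[0] / len, v[1] / len };
    }

    private static float dot(float[] a, float[] b) {
        return a[0] * b[0] + a[1] * b[1];
    }

    /** head, spineShoulder, leftShoulder, rightShoulder are {x, y} joint positions. */
    public static float rollDegrees(float[] head, float[] spineShoulder,
                                    float[] leftShoulder, float[] rightShoulder) {
        float[] hss = normalize(new float[] {          // H-SS: spine-shoulder to head
                head[0] - spineShoulder[0], head[1] - spineShoulder[1] });
        float[] rsls = normalize(new float[] {         // RS-LS: shoulder line
                rightShoulder[0] - leftShoulder[0], rightShoulder[1] - leftShoulder[1] });

        float lean = dot(hss, rsls);                   // signed amount of head lean
        float roll = GAIN_DEGREES * lean;
        return Math.max(-MAX_ROLL, Math.min(MAX_ROLL, roll));
    }
}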

Figure 8. The gestures used to control the robotic arm. The top two gestures are used to disengage and engage the servo motors, respectively. The next set of gestures is used to rotate the arm left-right. The next set of gestures is used to move the tip of the arm up-down. The last gesture, at the bottom, is used to make the robotic arm follow the user's right hand. As the user moves his hand, the robotic arm mimics these movements in real time.

6. Conclusion

Skeleton tracking with joint position and hand-state information from the Kinect version 2 sensor can be a very useful input to a gesture recognizer. With such a recognizer, we can interact with software applications and other hardware devices without using a tangible controller. We illustrated our gesture recognizer in this paper by presenting a couple of applications that utilize it. Because this new version of the Kinect reports hand-state information, we can design many different applications that require gestures. We wish to develop a graphical interface where one would be able to define gestures and associated actions via a GUI instead of doing the same programmatically.

7. References

[1] Microsoft Corporation's Kinect version 2 home page. http://www.microsoft.com/en-us/kinectforwindows/ Retrieved March 2015.

[2] Atif Khan, Febin Moideen, Juan Lopez, Wai L. Khoo and Zhigang Zhu. KinDectect: Kinect Detecting Objects. In K. Miesenberger et al. (Eds.): Computers Helping People with Special Needs, Lecture Notes in Computer Science (LNCS) Volume 7383, pp. 588-595. Springer-Verlag Berlin Heidelberg, 2012.

[3] K. Montgomery, M. Stephanides, S. Schendel, and M. Ross. User interface paradigms for patient-specific surgical planning: lessons learned over a decade of research. Computerized Medical Imaging and Graphics, 29(5):203-222, 2005.

[4] Luigi Gallo, Alessio Pierluigi Placitelli, Mario Ciampi. Controller-free exploration of medical image data: experiencing the Kinect. 24th International Symposium on Computer-Based Medical Systems (CBMS), June 2011, pp. 1-6.

[5] Peter Henry, Michael Krainin, Evan Herbst, Xiaofeng Ren, Dieter Fox. RGB-D mapping: Using Kinect-style depth cameras for dense 3D modeling of indoor environments. The International Journal of Robotics Research, 0(0):1-17, March 2012.

[6] Maged N Kamel Boulos, Bryan J Blanchard, Cory Walker, Julio Montero, Aalap Tripathy, Ricardo Gutierrez-Osuna. Web GIS in practice X: a Microsoft Kinect natural user interface for Google Earth navigation. International Journal of Health Geographics, 10:45, 2011.

[7] Wei-Chen Chiu, Ulf Blanke, Mario Fritz. Improving the Kinect by Cross-Modal Stereo. In Jesse Hoey, Stephen McKenna and Emanuele Trucco (Eds.), Proceedings of the British Machine Vision Conference, pages 116.1-116.10. BMVA Press, September 2011.

[8] Guanglong Du, Ping Zhang, Jianhua Mai and Zeling Li. Markerless Kinect-Based Hand Tracking for Robot Teleoperation. International Journal of Advanced Robotic Systems, Vol. 9(36), May 2012.

[9] Zhou Ren, Junsong Yuan, Jingjing Meng, Zhengyou Zhang. Robust Part-Based Hand Gesture Recognition Using Kinect Sensor. IEEE Transactions on Multimedia, Vol. 15, No. 5, pp. 1-11, Aug. 2013.

[10] Jagdish L. Raheja, Ankit Chaudhary, Kunal Singal. Tracking of Fingertips and Centre of Palm using KINECT. In Proceedings of the 3rd IEEE International Conference on Computational Intelligence, Modelling and Simulation, Malaysia, 20-22 Sep. 2011, pp. 248-252.
[11] Valentino Frati, Domenico Prattichizzo. Using Kinect for hand tracking and rendering in wearable haptics. IEEE World Haptics Conference 2011, 21-24 June 2011, Istanbul, Turkey, pp. 317-321.

[12] Yang, D., Jin, L.W., Yin, J., et al. An effective robust fingertip detection method for finger writing character recognition system. Proceedings of the Fourth International Conference on Machine Learning and Cybernetics, Guangzhou, China, 2005, pp. 4191-4196.

[13] Oka, K., Sato, Y., Koike, H. Real-time Tracking of Multiple Fingertips and Gesture Recognition for Augmented Desk Interface Systems. Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition (FGR'02), Washington, D.C., USA, May 2002, pp. 411-416.

[14] Ying, H., Song, J., Ren, X., Wang, W. Fingertip Detection and Tracking Using 2D and 3D Information. Proceedings of the Seventh World Congress on Intelligent Control and Automation, Chongqing, China, 2008, pp. 1149-1152.

[15] Crowley, J. L., Berard, F., Coutaz, J. Finger Tracking As an Input Device for Augmented Reality. Proceedings of the International Workshop on Automatic Face and Gesture Recognition, Zurich, Switzerland, 1995, pp. 195-200.

[16] Raheja, J. L., Das, K., Chaudhary, A. An Efficient Real Time Method of Fingertip Detection. Proceedings of the 7th International Conference on Trends in Industrial Measurements and Automation (TIMA 2011), CSIR Complex, Chennai, India, 6-8 Jan. 2011, pp. 447-450.

[17] K. K. Biswas, Saurav Kumar Basu. Gesture Recognition using Microsoft Kinect. Proceedings of the 5th International Conference on Automation, Robotics and Applications, Dec. 6-8, 2011, Wellington, New Zealand, pp. 100-103.

[18] Caifeng Shan. Gesture Control for Consumer Electronics. In Ling Shao et al., editors, Multimedia Interaction and Intelligent User Interfaces, Advances in Pattern Recognition, pages 107-128. Springer London, 2010.