
Gesture Control in a Virtual Environment

Zishuo CHENG <u4815763@anu.edu.au>
29 May 2015

A report submitted for the degree of Master of Computing of Australian National University

Supervisor: Prof. Tom Gedeon, Martin Henschke
COMP8715: Computing Project
Australian National University
Semester 1, 2015

Acknowledgement

I would like to sincerely thank my supervisors, Professor Tom Gedeon and PhD student Martin Henschke, for their constant guidance and kind assistance throughout the research process. Their expertise, patience, enthusiasm and friendship greatly encouraged me.

Abstract

In recent years, gesture recognition has gained increasing popularity in the field of human-machine interaction. Vision-based gesture recognition and myoelectric recognition are the two main solutions in this area. Myoelectric controllers collect electromyography (EMG) signals from the user's skin as input. The MYO armband is a wearable device launched by Thalmic Lab in 2014 which accomplishes gesture control by detecting motion and muscle activity. Vision-based devices, by contrast, aim to achieve gesture recognition through computer vision. Kinect is a line of motion sensing input devices released by Microsoft in 2010 which recognises the user's motion through cameras. Since both methods have their own advantages and drawbacks, this project aims to assess the performance of the MYO armband and the Kinect in the aspect of virtual control. The analytic result is given for the purpose of refining the user experience.

Keywords: MYO armband, Kinect, electromyography signal, vision-based, gesture recognition, Human-computer Interaction

List of Abbreviations
EMG  Electromyography
HCI  Human-computer Interaction
IMU  Inertial Measurement Unit
GUI  Graphical User Interface
NUI  Natural User Interface
SDK  Software Development Kit

Table of Contents

Acknowledgement
Abstract
List of Abbreviations
List of Figures
List of Tables
1. Introduction
   1.1 Overview
   1.2 Motivation
   1.3 Objectives
   1.4 Contributions
   1.5 Report Outline
2. Background
   2.1 MYO armband
   2.2 Kinect sensor
3. Methodology
   3.1 Assessment on User-friendliness
       3.1.1 Training Subjects
       3.1.2 Evaluating Degree of Proficiency
       3.1.3 Evaluating User-friendliness
   3.2 Assessment on Navigation
       3.2.1 Setting up the Virtual Environment
             3.2.1.1 Virtual Environment Description
             3.2.1.2 Settings of the Tested Devices
       3.2.2 Experimental Data Collection
             3.2.2.1 Navigation Data and Time
             3.2.2.2 Error Rate
             3.2.2.3 Subjective Evaluation
   3.3 Assessment on Precise Manipulation
       3.3.1 Setting up the Virtual Environment
             3.3.1.1 Virtual Environment Description
             3.3.1.2 Settings of the Tested Devices
       3.3.2 Experimental Data Collection
             3.3.2.1 Moving Range of Arm
             3.3.2.2 Interaction Events and Time
             3.3.2.3 Error Rate
             3.3.2.4 Subjective Evaluation
   3.4 Assessment on Other General Aspects
   3.5 Devices Specification and Experimental Regulations
4. Result Analysis
   4.1 Result Analysis of Experiment 1
       4.1.1 Evaluation of Proficiency Test
       4.1.2 Evaluation of Training Time
       4.1.3 Evaluation of User-friendliness
   4.2 Result Analysis of Experiment 2
       4.2.1 Evaluation of the Number of Gestures
       4.2.2 Evaluation of Error Rate
       4.2.3 Evaluation of Completion Time
       4.2.4 Self-Evaluation
   4.3 Result Analysis of Experiment 3
       4.3.1 Evaluation of Moving Range
       4.3.2 Evaluation of Error Rate
       4.3.3 Evaluation of Completion Time
       4.3.4 Self-Evaluation
   4.4 Analysis of Other Relevant Data
5. Conclusion and Future Improvement
   5.1 Conclusion
   5.2 Future Improvement
Reference
Appendix A
Appendix B

List of Figures

Figure 1: MYO armband with 8 EMG sensors [credit: Thalmic Lab]
Figure 2: MYO Keyboard Mapper [credit: MYO Application Manager]
Figure 3: A Kinect Sensor [credit: Microsoft]
Figure 4: Skeleton Position and Tracking State of Kinect Sensor [credit: Microsoft Developer Network]
Figure 5: Graph for the Test of Degree of Proficiency of Cursor Control
Figure 6: Flow Chart of Experiment 1
Figure 7: 3D Demo of the Virtual Maze in Experiment 2
Figure 8: One of the Shortest Paths in Experiment 2
Figure 9: 3D Scene for Experiment 3
Figure 10: Euler Angles in 3D Euclidean Space [credit: Wikipedia, Euler Angles]

List of Tables

Table 1: Interaction Event Mapper of MYO in Experiment 2
Table 2: Interaction Event Mapper of Kinect in Experiment 2
Table 3: Interaction Event Mapper of MYO in Experiment 3
Table 4: Error Rate & Incorrect Gesture for Proficiency Test of MYO armband
Table 5: Completion Time in Cursor Control Test
Table 6: Total Training Time for MYO armband and Kinect sensor
Table 7: Subject's Rate for the User-friendliness of MYO and Kinect
Table 8: The Number of Gestures Performed in Experiment 2
Table 9: Error Rate in Experiment 2
Table 10: Completion Time in Experiment 2
Table 11: Subject's Self-Evaluation of the Performance in Experiment 2
Table 12: Range of Pitch Angle in Experiment 3
Table 13: Range of Yaw Angle in Experiment 3
Table 14: Time Spent in Experiment 3
Table 15: Subject's Self-Evaluation of the Performance in Experiment 3

Chapter 1 Introduction

1.1 Overview

In recent years, traditional input devices such as keyboards and mice have been losing popularity due to their lack of flexibility and freedom. Compared to the traditional graphical user interface (GUI), a natural user interface (NUI) enables human-machine interaction via people's common behaviours such as gesture, voice, facial expression and eye movement. The concept of the NUI was developed by Steve Mann in the 1990s [1]. Over the last two decades, developers have made a variety of attempts to improve the user experience by applying NUIs. Nowadays, NUIs as discussed in [2] are increasingly becoming an important part of contemporary human-machine interaction.

Electromyography (EMG) signal recognition plays an important role in NUIs. EMG is a technique for monitoring the electrical activity produced by skeletal muscles [3]. In recent years, a variety of wearable EMG devices have been released by numerous developers, such as the MYO armband, Jawbone and some types of smartwatch. When muscle cells are electrically or neurologically activated, these devices monitor the electric potential generated by the muscle cells in order to analyse the biomechanics of human movement.

Vision-based pattern recognition is another significant part of NUI study, which has been researched since the end of the 20th century [4]. By using cameras to capture specific motions and patterns, vision-based devices are able to recognise the messages that human beings attempt to convey. There are many innovations in this area, such as Kinect and Leap Motion. Generally speaking, most vision-based devices perform gesture recognition by monitoring and analysing motion, depth, colour, shape and appearance [5].

1.2 Motivation

Even though EMG signal recognition and vision-based pattern recognition have been studied for many years, they are still far from breaking the dominance of the traditional GUI based on keyboard and mouse [6]. Moreover, both of them have their own problems which remain bottlenecks in their development. Because of these defects, this project chooses the MYO armband and the Kinect as typical examples of EMG signal recognition and vision-based recognition, and attempts to assess their performance in a virtual environment in order to identify the specific aspects that need to be improved by the

developers in the future. Moreover, the project also aims to summarise some valuable lessons for human-machine interaction.

1.3 Objectives

The objectives of this project are to evaluate the performance of the MYO armband and the Kinect in the aspect of gesture control, to investigate the user experience of these two devices, and to identify whether any improvement could be made in the development of EMG-based and vision-based HCI.

1.4 Contributions

Firstly, the project set up a 3D maze as the virtual environment to support the evaluation of gesture control. Secondly, the project used the Software Development Kits (SDKs) of the MYO armband and the Kinect sensor to build the connection with the virtual environment. Thirdly, three HCI experiments were held in the project. Lastly, the project evaluated the experimental data and summarised some lessons for EMG signal recognition and vision-based pattern recognition.

1.5 Report Outline

This project report is divided into five chapters. After this introduction, Chapter 2 introduces the background of the MYO armband and the Kinect sensor, including their features and limitations. In Chapter 3, the research methodology is explained in detail; it introduces the three experiments held in this project. Chapter 4 analyses and discusses the experimental data from various dimensions. Lastly, the final conclusion and future improvements are discussed in Chapter 5.

Chapter 2 Background

The project selects the MYO armband and the Kinect as typical devices in the areas of EMG signal recognition and vision-based pattern recognition respectively. By evaluating the performance of these two devices, the researcher is able to identify the advantages and defects of these two approaches to gesture control. Thus, in this chapter, the features, specifications and limitations of the MYO armband and the Kinect are explained in more detail.

2.1 MYO armband

The MYO armband is a wearable electromyography forearm band developed by Thalmic Lab in 2013 [4]. The original aim of this equipment is to provide touch-free control of technology with gestures and motion. In 2014, Thalmic Lab released the first shipment of the first-generation product [4]. The armband allows users to wirelessly control technology by detecting the electrical activity in the user's muscles and the motion of the user's arm.

One of the main features of the MYO armband is that the band reads electromyography signals from skeletal muscles and uses them as the input commands of the corresponding gesture control events. As Figure 1 shows, the armband has 8 medical-grade sensors which are used to monitor the EMG activity on the surface of the user's skin. To monitor the spatial data about the movement and orientation of the user's arm, the armband adopts a 9-axis Inertial Measurement Unit (IMU) which includes a 3-axis gyroscope, a 3-axis accelerometer and a 3-axis magnetometer [12]. Through the sensors and the IMU, the armband is able to recognise the user's gestures and track the motion of the user's arm. Moreover, the armband uses Bluetooth 4.0 as the information channel to transmit the recognised signals to the paired devices.

Figure 1: MYO armband with 8 EMG sensors [credit: Thalmic Lab]

Another feature of the MYO armband is its open application program interfaces (APIs) and free SDK. Based on this feature, more people can be involved in building solutions for various uses such as home automation, drones, computer games and virtual reality. Thalmic Lab has released more than 10 versions of the SDK since the initial version Alpha 1 was released in 2013. According to the log in [10], numerous new features were added to the SDK in each update to make the development environment more powerful. In Beta release 2, gesture data collection was added, so developers are able to collect and analyse gesture data in order to help improve the accuracy of gesture recognition. In the latest version 0.8.1, a new function called mediakey() was added to the SDK, which allows media key events to be sent to the system. So far, the MYO SDK has become a mature development environment with plenty of well-constructed functions.

Nevertheless, there are a few drawbacks in the current generation of the MYO armband. First of all, the poses that can be recognised by the band are limited. In the developer blog in [10], Thalmic Lab announced that the MYO armband can recognise 5 pre-set gestures: fist, wave left, wave right, fingers spread and double tap. By setting up the connection through Bluetooth 4.0, users are able to map each gesture to a particular input event in order to interact with the paired device. On the one hand, the developers of the armband tend to simplify human-machine interaction, so using only 5 gestures to interact with the environment is a user-friendly design which largely reduces the operation complexity. On the other hand, this design places some restrictions on application development. Secondly, the accuracy of gesture recognition is not satisfactory, especially in a complex interaction. When a user aims to implement a complicated task with a combination of several gestures, the armband is not sensitive enough to detect the quick change of the user's gestures.
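To illustrate the kind of gesture-to-event mapping pictured in Figure 2, the following C# sketch binds each of the five pre-set poses to an application command. It is a minimal sketch only: the Pose enum, the command names and the OnPoseChanged callback are placeholders for illustration, not the official MYO SDK types.

    using System;
    using System.Collections.Generic;

    // Illustrative pose-to-command mapping in the spirit of the MYO Keyboard Mapper.
    // Pose and OnPoseChanged are assumptions, not the official SDK API.
    enum Pose { Rest, Fist, FingersSpread, WaveLeft, WaveRight, DoubleTap }

    class PoseMapper
    {
        // Each recognised pose is bound to a single application command (placeholder names).
        private static readonly Dictionary<Pose, string> Commands = new Dictionary<Pose, string>
        {
            { Pose.Fist,          "PlayPause" },
            { Pose.WaveLeft,      "PreviousTrack" },
            { Pose.WaveRight,     "NextTrack" },
            { Pose.FingersSpread, "VolumeUp" },
            { Pose.DoubleTap,     "Unlock" }
        };

        // Called whenever the armband reports a new pose.
        public void OnPoseChanged(Pose pose)
        {
            if (Commands.TryGetValue(pose, out var command))
                Console.WriteLine($"Pose {pose} -> command {command}");
        }
    }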

Figure 2: MYO Keyboard Mapper [credit: MYO Application Manager]

2.2 Kinect sensor

Kinect is a line of motion sensing input devices released by Microsoft in 2010. The first-generation Kinect was designed for HCI in the video games listed in the Xbox 360 store. Since its release, the Kinect sensor has attracted the attention of numerous researchers because of its ability to perform vision-based gesture recognition [4]. Nowadays, Kinect is not only used for entertainment, but also for other purposes such as model building and HCI research. In the later chapters of this report, numerous parts of the HCI experiments are designed based on the product characteristics discussed in the following paragraphs.

One of the key characteristics of the Kinect sensor is that it uses 3 cameras to implement pattern recognition. As Figure 3 shows, a Kinect sensor consists of an RGB camera, two 3D depth sensors, a built-in motor and a multi-array microphone. The RGB camera is a traditional RGB camera which generates high-resolution colour images in real time. As mentioned in [13], the depth sensor is composed of an infra-red (IR) projector and a monochrome complementary metal oxide semiconductor (CMOS) sensor. By measuring the reflection time of the IR rays, a depth map can be produced. The video streams of both the RGB camera and the depth sensor use the same video graphics array (VGA) resolution (640 × 480 pixels). Each pixel in the RGB viewer corresponds to a particular pixel in the depth viewer. Based on this working principle, the Kinect sensor is able to display the depth, colour and 3D information of the objects it captures.

Another characteristic of the Kinect sensor is its unique skeletal tracking system. As Figure 4 illustrates, Kinect predicts the 3-dimensional positions of 20 joints of the human body from a single depth image [7]. Through this system, Kinect is able to estimate the body parts invariant to pose, body shape, appearance, etc. This system allows developers to use the corresponding built-in functions in the Kinect SDK to retrieve real-time motion and poses. Thus, it not only provides a powerful development environment for application developers, but also enhances the user experience of Kinect applications.
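As an illustration of how an application can consume this skeletal tracking, the hedged C# sketch below reads the right-hand joint from the skeletal stream of the Kinect for Windows SDK 1.x. It is a sketch under that assumption only; the exact member names should be checked against the documentation of the SDK version in use.

    using System;
    using System.Linq;
    using Microsoft.Kinect;   // Kinect for Windows SDK 1.x (assumed version)

    // Hedged sketch: print the tracked right-hand joint from the 20-joint skeletal stream.
    class SkeletonReader
    {
        public void Run()
        {
            KinectSensor sensor = KinectSensor.KinectSensors
                .FirstOrDefault(s => s.Status == KinectStatus.Connected);
            if (sensor == null) return;

            sensor.SkeletonStream.Enable();                 // enable skeletal tracking
            sensor.SkeletonFrameReady += OnSkeletonFrame;   // raised for every new frame
            sensor.Start();
        }

        private void OnSkeletonFrame(object sender, SkeletonFrameReadyEventArgs e)
        {
            using (SkeletonFrame frame = e.OpenSkeletonFrame())
            {
                if (frame == null) return;

                var skeletons = new Skeleton[frame.SkeletonArrayLength];
                frame.CopySkeletonDataTo(skeletons);

                foreach (var s in skeletons)
                {
                    if (s.TrackingState != SkeletonTrackingState.Tracked) continue;
                    Joint hand = s.Joints[JointType.HandRight];
                    // Joint positions are reported in metres relative to the sensor.
                    Console.WriteLine($"Right hand: {hand.Position.X:F2}, {hand.Position.Y:F2}, {hand.Position.Z:F2}");
                }
            }
        }
    }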

The SDK is the third characteristic which has enabled Kinect to gain popularity. Similar to the MYO armband, Kinect also has a non-commercial SDK, released by Microsoft in 2011. In each updated version, Microsoft has attempted to add more useful functions and features and keeps optimising the development environment. For example, the latest version, SDK 2.0, released in October 2014, supports a wider horizontal and vertical field of view for depth and colour. For the skeletal tracking system, Microsoft increased the number of joints that can be recognised from 20 to 25. Moreover, some new gestures such as open and closed hand gestures were also added to the SDK.

However, the Kinect sensor also has its own defects. Firstly, although Microsoft keeps improving the SDK, the depth sensor still has a limited sensing range. The sensing range of the depth sensor is from 0.4 meters to 4 meters, and the calibration function performs differently depending on the distance between the objects and the Kinect sensor. According to the research in [7], to achieve the best performance, the Kinect sensor is suggested to be located within a 30 cm × 30 cm square at a distance of between 1.45 and 1.75 meters from the user. Secondly, the depth image data measured by the Kinect sensor is not reliable enough; the depth images can be affected by noise such as lighting and background.

Figure 3: A Kinect Sensor [credit: Microsoft]

Figure 4: Skeleton Position and Tracking State of Kinect Sensor [credit: Microsoft Developer Network]

Chapter 3 Methodology

This chapter introduces the details of the three HCI experiments held in this project. The main purpose of this phase is to design the experimental methodology in order to investigate the performance and user experience of the MYO armband and the Kinect sensor in the area of gesture control. The chapter contains five sections. Section 3.1 describes the first experiment in detail; this experiment aims to help volunteers get familiar with the use of the MYO armband and the Kinect sensor and to evaluate their user-friendliness. Section 3.2 introduces the second experiment, in which a virtual environment is implemented in order to investigate the navigation performance of the devices. Section 3.3 explains the third experiment, which also sets up a virtual environment to assess the precise-manipulation performance of each device. Section 3.4 illustrates other general points investigated in the experimental questionnaire. Lastly, Section 3.5 introduces the specifications of the experimental devices and the rules.

3.1 Assessment on User-friendliness

This section introduces Experiment 1. There are two purposes for holding this experiment. Firstly, both the MYO armband and the Kinect sensor require specific gestures to interact with the virtual environment; therefore, before holding the experiments that evaluate their performance in virtual control, it is important to train the subjects to be familiar with the use of these two devices. Secondly, if subjects are novice users of the MYO armband and the Kinect sensor, it is a good chance to investigate the user-friendliness of the devices. The process of this experiment is shown in Figure 6.

3.1.1 Training Subjects

There are two phases in this experiment. Each subject is firstly required to learn the use of the MYO armband. At the beginning of this phase, a demo video about using the MYO armband is shown to each subject. The contents of the demo video include wearing the armband, performing the sync gesture, using the IMU to track arm motion and performing the five pre-set gestures, which are fist, fingers spread, wave left, wave right and double tap. After the demo video, subjects are asked to use the armband by themselves. Each subject therefore needs to wear and sync the armband with the paired experimental computer. After syncing successfully, they need to perform the five gestures and use their arms to control the cursor on the screen of the paired computer.

The second phase of this experiment trains the subject in the use of the Kinect sensor. A demo video is also shown to each subject, covering activating the sensor, calibrating the pattern recognition and tracking arm motion. Similar to the first phase, subjects are asked to activate the Kinect sensor and do the calibration task by themselves. After this, they are also required to use their arm to control the cursor on the screen of the paired computer.

3.1.2 Evaluating Degree of Proficiency

Since one of the purposes of this experiment is to train users in the tested devices, evaluating each subject's degree of proficiency is meaningful and important. In this experiment, only if a subject's degree of proficiency is acceptable is he/she allowed to do Experiments 2 and 3. A program is implemented to assess each subject's degree of proficiency while they are using the devices in the test.

To evaluate a subject's degree of proficiency with the MYO armband, two aspects are monitored and assessed by the program. Firstly, the program selects one of the five gestures at random and shows the text version of the chosen gesture on the screen. The program repeats this task ten times, and each gesture is selected twice. Subjects should perform the same gesture as the one shown on the screen; an error is counted if the subject performs a different gesture. Secondly, the program generates a graph (1240 × 660 pixels) as shown in Figure 5. There are five red points located at (15, 330), (1225, 330), (620, 15), (620, 645) and (620, 330) respectively. When the graph is displayed on the screen, the cursor is reset to the point (0, 0) on the graph. Subjects are asked to use the MYO armband to control the cursor to reach all five points within one minute; a failure is counted if the time runs out. A subject is assessed as qualified only if they complete the first test with an error rate of no more than 20% and complete the second test within 1 minute. If the subject is not qualified, he/she is required to redo the failed part until it is passed.

Similar to the evaluation of proficiency with the MYO armband, the program uses the same graph to assess the subject's degree of proficiency with the Kinect sensor. Since manipulation with the Kinect sensor does not require any specific gestures, there is no need to ask subjects to perform gestures in this evaluation. Therefore, a subject is considered qualified if he/she can complete the cursor control test within 1 minute. If the subject fails, he/she needs to redo it until it is passed.
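A minimal C# sketch of the gesture part of this proficiency test is shown below: ten prompts (each gesture twice, in random order), an error counted whenever the performed gesture differs from the prompt, and a pass threshold of at most 20% errors. The getPerformedGesture delegate is a placeholder for the real recognition callback, so this is an illustration of the procedure rather than the experimental program itself.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Sketch of the MYO gesture proficiency test described above.
    class ProficiencyTest
    {
        static readonly string[] Gestures =
            { "Fist", "Fingers Spread", "Wave Left", "Wave Right", "Double Tap" };

        public static bool Run(Func<string, string> getPerformedGesture)
        {
            var rng = new Random();
            // Build a prompt list containing each gesture exactly twice, in random order.
            List<string> prompts = Gestures.Concat(Gestures).OrderBy(_ => rng.Next()).ToList();

            int errors = 0;
            foreach (string prompt in prompts)
            {
                string performed = getPerformedGesture(prompt);   // show prompt, wait for a gesture
                if (performed != prompt) errors++;                // mismatch counts as one error
            }

            double errorRate = errors / (double)prompts.Count;
            Console.WriteLine($"Error rate: {errorRate:P0}");
            return errorRate <= 0.20;   // qualified only if at most 2 of the 10 prompts were wrong
        }
    }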

Figure 5: Graph for the Test of Degree of Proficiency of Cursor Control

3.1.3 Evaluating User-friendliness

To evaluate user-friendliness, four aspects are taken into account. Firstly, for each experimental device, a timer is started after the demo video is shown and stopped once the subject is proficient at manipulating the device. This time record (named "TotalTime") illustrates how long a novice user spends on getting familiar with the operation of each device. Secondly, the time each subject used in the cursor control test is also recorded (named "CursorControlTime"). Thirdly, for the MYO armband, the error rate of its first test is recorded as "ErrorRate". Lastly, when a subject passes all the training tests, they are asked to give a subjective evaluation of the user-friendliness of the MYO armband and the Kinect sensor. The question for this aspect is "Do you think the MYO armband/Kinect sensor is user-friendly?". There are five degrees for them to choose from: strongly agree, agree, uncertain, disagree and strongly disagree.
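For completeness, these measures could be collected per subject and per device in a simple record such as the following sketch. The field names mirror the labels used in the text; the type itself is an illustration, not code taken from the experimental program.

    // Illustrative per-subject, per-device record of the user-friendliness measures.
    class FriendlinessRecord
    {
        public string Device;              // "MYO" or "Kinect"
        public double TotalTime;           // seconds from end of demo video until proficient
        public double CursorControlTime;   // seconds taken in the cursor control test
        public double ErrorRate;           // gesture-test error rate (MYO only)
        public int    LikertRating;        // 1 = strongly disagree ... 5 = strongly agree
    }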

Figure 6: Flow Chart of Experiment 1

3.2 Assessment on Navigation

This section introduces Experiment 2. The purpose of this experiment is to test the performance of the MYO armband and the Kinect sensor in navigation, and to compare them with a traditional input device. A virtual maze is set up in this experiment to support the evaluation. Moreover, to make the data analysis more convenient, the interaction events of each tested device (i.e. MYO armband, Kinect sensor and keyboard) have been pre-set rather than being customised. Therefore, all subjects need to use the same input commands to interact with the virtual environment, and are not allowed to set the interaction events according to their personal preferences.

3.2.1 Setting up the Virtual Environment

This sub-section introduces the details of the virtual environment used in Experiment 2 and the settings of the three tested devices. The virtual environment is a 3D maze. Subjects are required to use the keyboard, the MYO armband and the Kinect sensor to move from the starting point to the specified destination.

3.2.1.1 Virtual Environment Description

The virtual environment used throughout this project is a 3-dimensional maze written in C#. The virtual maze consists of 209 objects, each of which is mapped to a corresponding 2-dimensional texture image. To enhance the sense of virtual reality, the player in the maze is shown in a first-person perspective. As Figure 7 shows, the structure of the maze is not complicated: it contains 4 rooms, 5 straight halls, 3 square halls and 2 stair halls. Each part of the maze is used for a different testing purpose. In this experiment, the starting position is set at a corner of Room 1. To save time in this experiment, the camera can be switched to this starting point by pressing key "1" on the keyboard, so the researcher presses key "1" when the subject is about to take this test. One of the shortest paths is shown in Figure 8 and is considered the expected value in this experiment. Each subject is asked to try their best to follow this shortest path.

Figure 7: 3D Demo of the Virtual Maze in Experiment 2

Figure 8: One of the Shortest Paths in Experiment 2

There are four interaction events set in this navigation task: moving forward, moving backward, turning left and turning right. It is important to note that when turning left/right happens, the camera is rotated to the left/right rather than being horizontally shifted to the left/right. Therefore, if users want to move to the left/right, they need to turn the camera to the left/right first, and then move forward in the new direction.

3.2.1.2 Settings of the Tested Devices

The three tested devices in this experiment are the MYO armband, the Kinect sensor and the keyboard. For each device, the interaction events mentioned in the previous sub-section are mapped to the corresponding gestures or keys. Moreover, since the MYO armband and the Kinect sensor cannot be directly connected with the virtual environment, it was necessary to build a connector in the code of the maze. In the process of building the connectors, MYO SDK 0.8.1 and Kinect SDK 1.9 were used. The settings of the three devices are explained below.

Firstly, the settings of the MYO armband are shown in Table 1. An Unlock event is set in MYO mode in order to reduce misuse: unless the subject performs Double Tap to unlock the armband, the other four gestures will not be detected. It is important to note that the experiment does not use a Finite State Machine (FSM) as the mathematical model, so users need to hold a gesture in order to keep the corresponding event going. A minimal sketch of this mapping is given after Table 1.

Gesture         | Interaction Event
Fist            | Move Forward
Fingers Spread  | Move Backward
Wave Left       | Turn Left
Wave Right      | Turn Right
Double Tap      | Unlock
Table 1: Interaction Event Mapper of MYO in Experiment 2
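The sketch below shows one way the Table 1 mapping and the unlock rule could be expressed in C#. The MyoPose enum and the polling-style Translate method are illustrative assumptions, not the connector code used in the experiment.

    // Sketch of the Experiment 2 mapping: the armband stays locked until Double Tap
    // is performed, and a held gesture keeps its movement event active.
    enum MyoPose { Rest, Fist, FingersSpread, WaveLeft, WaveRight, DoubleTap }
    enum NavEvent { None, MoveForward, MoveBackward, TurnLeft, TurnRight }

    class MyoNavigation
    {
        private bool unlocked = false;

        // Called once per frame with the currently held pose.
        public NavEvent Translate(MyoPose current)
        {
            if (current == MyoPose.DoubleTap) { unlocked = true; return NavEvent.None; }
            if (!unlocked) return NavEvent.None;   // ignore the other gestures while locked

            switch (current)
            {
                case MyoPose.Fist:          return NavEvent.MoveForward;
                case MyoPose.FingersSpread: return NavEvent.MoveBackward;
                case MyoPose.WaveLeft:      return NavEvent.TurnLeft;
                case MyoPose.WaveRight:     return NavEvent.TurnRight;
                default:                    return NavEvent.None;   // Rest: no movement
            }
        }
    }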

Secondly, because the version of the Kinect SDK used in this experiment does not support hand gesture recognition, the Kinect sensor still needs to be used together with a mouse. When subjects are standing in front of the cameras of the Kinect sensor, they are required to hold a mouse in their right hand. After Kinect mode is launched, the vision-based sensor tracks the subject's right shoulder, elbow and hand, so the subject is able to control the cursor on the screen by moving his/her right hand. The interaction event mapper is shown in Table 2. The cursor is constrained within the frame, so the cursor is forced to stay at a border if the user tries to move it out of the frame. If the cursor is located on a border of the frame, the corresponding arrow is displayed. If the user then holds both the left and right buttons of the mouse in his/her right hand, the user moves toward or turns to the corresponding direction. A sketch of this border mapping follows Table 2.

Cursor Position    | Interaction Event
Cursor.X = 0       | Turn Right
Cursor.X = Width   | Turn Left
Cursor.Y = 0       | Move Forward
Cursor.Y = Height  | Move Backward
Table 2: Interaction Event Mapper of Kinect in Experiment 2

Thirdly, the setting of the keyboard follows the convention of most 3D games: key "W" maps to moving forward, key "S" to moving backward, key "A" to turning left, and key "D" to turning right.
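A minimal C# sketch of the Table 2 logic is given below: a navigation event fires only while the cursor is pinned to a border of the frame and both mouse buttons are held. The method and parameter names are illustrative, not taken from the maze code.

    // Sketch of the Kinect border mapping in Experiment 2, following Table 2 as given.
    class KinectNavigation
    {
        public enum NavEvent { None, MoveForward, MoveBackward, TurnLeft, TurnRight }

        public NavEvent Translate(int x, int y, int width, int height, bool bothButtonsHeld)
        {
            if (!bothButtonsHeld) return NavEvent.None;   // movement requires both mouse buttons

            if (x <= 0)      return NavEvent.TurnRight;     // cursor pinned to the left border
            if (x >= width)  return NavEvent.TurnLeft;      // cursor pinned to the right border
            if (y <= 0)      return NavEvent.MoveForward;   // cursor pinned to the top border
            if (y >= height) return NavEvent.MoveBackward;  // cursor pinned to the bottom border
            return NavEvent.None;
        }
    }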

in, and Wave Right will be recorded as wave out. However, if the subject uses left hand to do this task, the Wave Left gesture will be recorded as wave out, and Wave Right will be recorded as wave in. Pseudo Code of Collecting Navigation Data InputMode = {KEYBOARD, MYO, KINECT} MYO = (Status, Hand, Gesture) Status = {unlock, lock} Hand = {L,R} Gesture = {rest, fist, fingers spread, wave in, wave out, double tap} Event = (Movement, Direction) Movement = {MOVE, TURN} Direction = {FORWARD, BACKWARD, LEFT, RIGHT} while virtual maze is launched Clock clock = new Clock case InputMode.KEYBOARD: StreamWriter file = new StreamWriter("Keyboard_Ex2_NavigationData.txt") if Event is triggered if triggertime = 0.02 sec file.write(clock.elaspedtime() + Event.Movement + Event.Direction) triggertime.clear() EndIf EndIf Break case InputMode.KINECT: StreamWriter file = new StreamWriter("Kinect_Ex2_NavigationData.txt") if Event is triggered if triggertime = 0.02 sec file.write(clock.elaspedtime() + Event.Movement + Event.Direction) triggertime.clear() EndIf EndIf Break case InputMode.MYO: StreamWriter file = new StreamWriter("MYO_Ex2_NavigationData.txt") if Event is triggered if triggertime = 0.02 sec file.write(clock.elaspedtime() + Event.Movement + Event.Direction + MYO.Status + MYO.Hand + MYO.Gesture) triggertime.clear() EndIf EndIf Break EndWhile *Note: The settings of the interaction events are contained in the virtual maze which is not listed in this pseudo code. Page 20

3.2.2.2 Error Rate

The error rate can also be considered as the recognition error. For example, if a subject performs the Fist gesture in MYO mode, but the armband recognises it as Double Tap, a recognition error is counted. Due to the limits of the devices, they cannot detect and correct errors by themselves. Therefore, a camera is needed to record a video while a subject performs this task, and the researcher reviews the video to detect recognition errors. If the researcher finds that a gesture performed by the subject received the wrong feedback in the virtual environment, a recognition error is counted.

3.2.2.3 Subjective Evaluation

After completing this test, subjects are asked to give a subjective evaluation of their performance with each tested device they used in this experiment. There are five degrees for them to choose from: Excellent, Good, Average, Poor and Very Poor. Moreover, they are asked to choose their favourite device for this task and to list the reasons for their choice.

3.3 Assessment on Precise Manipulation

This section introduces Experiment 3. The purpose of this experiment is to test the performance of the MYO armband and the Kinect sensor in precise manipulation, and to make a comparison with a traditional input device. Similar to Experiment 2, subjects are asked to perform the precise manipulation in a virtual environment, and the interaction events of each tested device (i.e. MYO armband, Kinect sensor and mouse) have been pre-set. The task for subjects in this experiment is to use the tested device to pick up the keys generated on the screen, and to use the keys to open the corresponding doors.

3.3.1 Setting up the Virtual Environment

This sub-section introduces the details of the virtual environment used in Experiment 3 and the settings of the three tested devices. The virtual environment is a 3D scene. Subjects are required to use the mouse, the MYO armband and the Kinect sensor to select and drag the keys to the corresponding doors. Compared to Experiment 2, even though fewer interaction events are set in this experiment, it requires subjects to control the cursor precisely and to perform the gestures more proficiently.

3.3.1.1 Virtual Environment Description

The virtual environment used in this experiment is a square hall located in the 3-dimensional maze introduced in Experiment 2. It can be considered a scene because users are not allowed to move around. As in Experiment 2, the scene also uses a first-person perspective. As Figure 9 shows, two keys are generated one by one. Subjects are asked to drag each key to the corresponding lock in order to open the door. After the first door is opened, the first key disappears automatically and the second

key is displayed on the screen. To save time in this experiment, after launching the virtual maze the researcher presses key "2" to switch the camera to this scene.

There are three interaction events set in this precise manipulation task: controlling the cursor, selecting a key, and grabbing a key. As in Experiment 2, the experiment does not use a Finite State Machine (FSM) as the mathematical model. Therefore, users need to hold the input command if they want the corresponding interaction event to continue.

3.3.1.2 Settings of the Tested Devices

The three tested devices in this experiment are the MYO armband, the Kinect sensor and the mouse. For each device, the interaction events mentioned in the previous sub-section are mapped to the corresponding gestures or buttons.

Firstly, the settings of the MYO armband are shown in Table 3. As in Experiment 2, an Unlock event is also set in MYO mode to reduce misuse. However, to simplify the manipulation, the Unlock event shares the same input gesture with the Toggle Mouse event. Therefore, if the user performs the Fingers Spread gesture, the MYO armband is unlocked and allows the user to control the cursor with their arm. Moreover, to keep an event going, users should hold a gesture until they want to stop that event. While the Grab event continues, users are able to drag the key following the movement of the cursor.

Gesture         | Interaction Event
Fist            | Grab
Fingers Spread  | Unlock and Toggle Mouse
Double Tap      | Select
Table 3: Interaction Event Mapper of MYO in Experiment 3

Secondly, similar to the setting in Experiment 2, Kinect mode also needs the mouse to trigger the interaction events; however, the cursor is tracked by the vision-based sensor of the Kinect instead of the mouse. Thus, when users use their right hand to put the cursor on the handle of the key, they are able to press the left mouse button to trigger the Select event. They are then able to hold both the left and right mouse buttons to trigger the Grab event in order to drag the key to its corresponding lock.

Thirdly, the mouse is set up in the conventional sense: when the left button is pressed and the cursor is on the handle of the key, the key is selected; then, if both the left and right buttons are held, the key is dragged as the cursor moves. A sketch of the select-and-grab logic is given below.
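The following C# sketch illustrates the Experiment 3 interaction in Table 3 for MYO mode: Fingers Spread unlocks the armband and toggles cursor control, Double Tap selects the key under the cursor, and a held Fist drags the selected key with the cursor. The type, method and threshold values are illustrative assumptions, not the experimental code.

    using System;

    // Sketch of the select-and-grab interaction used for the key-dragging task.
    enum ManipulationPose { Rest, Fist, FingersSpread, DoubleTap }

    class KeyDragController
    {
        private bool unlocked = false;
        private bool keySelected = false;

        // Called once per frame with the current pose and cursor position.
        public void Update(ManipulationPose pose, float cursorX, float cursorY,
                           ref float keyX, ref float keyY)
        {
            if (pose == ManipulationPose.FingersSpread) unlocked = true;   // unlock + toggle cursor control
            if (!unlocked) return;

            if (pose == ManipulationPose.DoubleTap && CursorOverKeyHandle(cursorX, cursorY, keyX, keyY))
                keySelected = true;                                        // Select event

            if (pose == ManipulationPose.Fist && keySelected)
            {
                keyX = cursorX;                                            // Grab: key follows the cursor
                keyY = cursorY;
            }
        }

        private bool CursorOverKeyHandle(float cx, float cy, float kx, float ky)
            => Math.Abs(cx - kx) < 20 && Math.Abs(cy - ky) < 20;           // arbitrary hit radius
    }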

Figure 9: 3D Scene for Experiment 3

3.3.2 Experimental Data Collection

This sub-section introduces the types of data collected in Experiment 3 and the method used for data collection. The following types of data are considered meaningful for evaluating the performance of the tested devices in precise manipulation.

3.3.2.1 Moving Range of Arm

Since subjects need to use the motion of their arms to control the cursor on the screen when they are using the MYO armband and the Kinect sensor in this experiment, monitoring the moving range of the subjects' arms is meaningful for the evaluation. To calculate the moving range, Euler angles in 3-dimensional Euclidean space are used. Euler angles use 3 angles to describe the orientation of a rigid body in 3-dimensional Euclidean space [8]. The angles α, β and γ shown in Figure 10 correspond to the parameters yaw, roll and pitch used in the experimental code. Since subjects do not need to roll their wrists in this test, the roll parameter is not taken into account in the evaluation; however, to ensure data integrity, the roll angle is still collected in the experimental data.

In [8], the researchers built a device to emulate upper body motion in a virtual 3D environment and used tri-axial accelerometers to detect human motions, which is similar to the idea of Experiment 3. The measurement method used in [8] is also reasonable to apply to this experiment. That is, since each user has a different

height and arm length, it is hard to compare the Euler angles among numerous subjects. Therefore, the Euler angles in radians need to be converted to a scale in order to make the evaluation more reasonable and convincing. By using the formulas provided by Thalmic Lab in [10], the angles in this experiment can be converted to a scale from 0 to 18:

Scale_roll  = (Angle_roll  + π) / (2π) × 18
Scale_pitch = (Angle_pitch + π/2) / π × 18
Scale_yaw   = (Angle_yaw   + π) / (2π) × 18

In MYO SDK 0.8.1, the developers use a quaternion to calculate the angles of roll, pitch and yaw. The parameters of the quaternion are x, y, z, w. The component w represents the scalar part of the quaternion, and the components x, y, z represent the vector part [9]. To calculate the angles of roll, pitch and yaw, the formulas provided by the developers in [10] are applied; the formulas above then convert each angle in radians to the specific scale.

Angle_roll  = atan2(2(w·x + y·z), 1 − 2(x² + y²))
Angle_pitch = asin(max(−1, min(1, 2(w·y − z·x))))
Angle_yaw   = atan2(2(w·z + x·y), 1 − 2(y² + z²))

For the Kinect SDK in [11], the Euler angle functions in the library can unfortunately only be used to track the pose of the head rather than the hand. However, since wearing the MYO armband does not influence the pattern recognition of the Kinect sensor, the subjects are asked to wear the MYO armband so that the Euler angles of their arms can be calculated while the virtual maze is in Kinect mode.

Figure 10: Euler Angles in 3D Euclidean Space [credit: Wikipedia, Euler Angles]
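Putting the two steps together, the C# sketch below converts an orientation quaternion (x, y, z, w) into the three scaled angles used in the logs. It is a minimal sketch of the conversion above, not code taken from the experimental program.

    using System;

    // Quaternion -> roll/pitch/yaw -> 0-18 scale, following the formulas given above.
    static class EulerScale
    {
        public static (double RollScale, double PitchScale, double YawScale) ToScale(
            double x, double y, double z, double w)
        {
            // Quaternion to Euler angles in radians.
            double roll  = Math.Atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y));
            double pitch = Math.Asin(Math.Max(-1.0, Math.Min(1.0, 2 * (w * y - z * x))));
            double yaw   = Math.Atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z));

            // Map each angle from its radian range onto the 0-18 scale.
            double rollScale  = (roll  + Math.PI)     / (2 * Math.PI) * 18;
            double pitchScale = (pitch + Math.PI / 2) / Math.PI       * 18;
            double yawScale   = (yaw   + Math.PI)     / (2 * Math.PI) * 18;
            return (rollScale, pitchScale, yawScale);
        }
    }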

3.3.2.2 Interaction Events and Time

Firstly, as in sub-section 3.2.2.1, when the subject triggers an interaction event, a clock function is activated and keeps counting time until the program of the virtual maze is closed. The clock function is re-activated every 0.02 seconds. Each time the clock function is activated, the experimental program also records the interaction data, which includes the status of the key (held/not held), the type of event (select/grab) and the corresponding time. In addition, in MYO and Kinect mode, the three scaled Euler angles are recorded. Lastly, in MYO mode, the interaction data also includes the status of the armband (locked/unlocked), the hand the armband is worn on (R/L) and the gesture currently performed (rest/fist/fingers spread/double tap). Since the Wave Left and Wave Right gestures are not mapped to any interaction event in this experiment, the MYO armband gives no feedback for these two gestures.

Pseudo Code of Collecting Euler Angle and Interaction Data

InputMode = {MOUSE, MYO, KINECT}
MOUSE = CursorPosition
MYO = (Status, Hand, Gesture, EulerAngle, CursorPosition)
KINECT = (EulerAngle, CursorPosition)
Status = {unlocked, locked}
Hand = {L, R}
Gesture = {rest, fist, fingers spread, double tap}
EulerAngle = (rollScale, pitchScale, yawScale)
CursorPosition = (X, Y)
Event = {SELECT, GRAB}
Key = {held, not held}

// One log file per input mode, opened once when the virtual maze is launched.
StreamWriter file = new StreamWriter(InputMode + "_Ex3_InteractionData.txt")
Clock clock = null

while virtual maze is running
    if Event is triggered
        if clock is null
            clock = new Clock()                    // start timing at the first interaction event
        EndIf
        if triggerTime >= 0.02 sec                 // record a sample every 0.02 seconds
            record = clock.ElapsedTime() + Key + Event + CursorPosition.X + CursorPosition.Y
            if InputMode is MYO or InputMode is KINECT
                record = record + EulerAngle.rollScale + EulerAngle.pitchScale + EulerAngle.yawScale
            EndIf
            if InputMode is MYO
                record = record + MYO.Status + MYO.Hand + MYO.Gesture
            EndIf
            file.WriteLine(record)
            triggerTime.Clear()
        EndIf
    EndIf
EndWhile

*Note: The settings of the interaction events are contained in the virtual maze and are not listed in this pseudo code.

3.3.2.3 Error Rate

As in Experiment 2, the error rate in this experiment also indicates the recognition errors of the tested devices. The errors are again identified by recording a video of each subject with a camera and reviewing the video.

3.3.2.4 Subjective Evaluation

After completing this test, subjects are asked to give a subjective evaluation of their performance with each device they used in Experiment 3. As in Experiment 2, the degrees for them to choose from are Excellent, Good, Average, Poor and Very Poor. Moreover, they are asked to choose their favourite device for precise manipulation and to list the reasons for their choice.

3.4 Assessment on Other General Aspects

There are some other questions listed in the questionnaire. Before doing the experiments, subjects need to fill out their name, gender, date of birth and contact number, and answer some pre-experiment questions, including "How many years have you used a computer with keyboard and mouse?", "Did you use any other NUI input device before?" and "Did you use the MYO armband/Kinect sensor before?". These two parts aim to investigate the subject's background and provide more dimensions for the data evaluation in the next chapter.

Apart from that, after completing all three experiments, the subjects are asked to give a subjective assessment of the overall performance of the MYO armband and the Kinect sensor. They are also asked to answer the question "Do you have the willingness to use the MYO armband/Kinect sensor to replace the mouse and keyboard in the future?". The post-experiment questions aim to investigate the user experience from the perspective of the subjects. It may provide a different view from the evaluation based on the data collected by the experimental program.

3.5 Devices Specification and Experimental Regulations

The computer used in these three experiments is an Asus F550CC; the product specification is shown in Appendix A. When subjects are using the MYO armband or the Kinect sensor to perform a task, they are required to stand at a distance of approximately 1.5 meters from the computer screen. Moreover, no barrier is allowed to block the subject's view, arm or the lens of the Kinect sensor. Lastly, the experiments follow the National Statement on Ethical Conduct in Research Involving Humans.

Chapter 4 Result Analysis

This chapter discusses the experimental data collected in the three HCI experiments introduced in the previous chapter. The main purpose of this phase is to assess the performance and user experience of the MYO armband and the Kinect based on the experimental data. Due to the constraints on time, little time was left after setting up the virtual environment. Moreover, because it takes on average more than 1 hour for each subject to do the three experiments, only five subjects have taken part in the experiments so far. The data analysis in this chapter is based on the data set of these five subjects. However, as the environment and the connections have been built, later research can be continued based on the results of this project.

The subjects who took part in the experiments consist of 1 female and 4 males. Their ages range from 22 to 26. All of the subjects are novice users of the MYO armband, whereas one of them had tried the Kinect sensor for 1 hour for entertainment. Moreover, all of them have used keyboard and mouse for more than 10 years, so they can be considered expert users of traditional input devices. Lastly, during the three experiments, the four male subjects are right-handed and used their right hand to hold the mouse and to wear the MYO armband; the female subject wore the MYO armband on her left hand but used her right hand to hold the mouse.

4.1 Result Analysis of Experiment 1

This section explains the results of Experiment 1. The result analysis is based on three aspects: the result of the proficiency test, the total training time for each subject and their first impression of the MYO armband and the Kinect sensor.

4.1.1 Evaluation of Proficiency Test

As mentioned in the previous chapter, two types of data were collected in this test. For the proficiency test of the MYO armband, the error rate (i.e. "ErrorRate") of performing the five pre-set gestures and the completion time (i.e. "CursorControlTime") of the cursor control test were collected. For the proficiency test of the Kinect sensor, only "CursorControlTime" was collected. Tables 4 and 5 show the results of the proficiency test. Table 4 illustrates that no subject failed the test of performing the five pre-set gestures. However, the error rate is not satisfactory, because two of the subjects completed the task with a 20% error rate, which is the maximum acceptable value.

Moreover, 3 subjects made a mistake when performing the Wave In gesture. This does not necessarily mean that they are unfamiliar with this gesture: the data from the later experiments show that the recognition accuracy of Wave In is much lower than that of the other four gestures.

Table 5 shows that all of the subjects spent less time on this task when they were using the MYO armband. This could mean that the MYO armband performs better in cursor control, a conjecture supported by the data collected in Experiment 3. It is also important to note that Subject 5 spent 58.88 seconds on the Kinect cursor control test, which is much more than the time the other subjects spent. Even though it is still within the tolerated range, it strengthens the conclusion that Kinect performs worse in cursor control.

Subject No | Attempts | Error Rate | Incorrect Gesture
1          | 1        | 20%        | Fingers Spread, Wave Out
2          | 1        | 20%        | Double Tap, Wave In
3          | 1        | 0%         | N/A
4          | 1        | 10%        | Wave In
5          | 1        | 10%        | Wave In
Table 4: Error Rate & Incorrect Gesture for Proficiency Test of MYO armband

Subject No   | CursorTimeMYO | CursorTimeKinect
1            | 12.39 sec     | 21.35 sec
2            | 17.61 sec     | 19.78 sec
3            | 14.41 sec     | 21.43 sec
4            | 7.88 sec      | 26.18 sec
5            | 15.11 sec     | 58.88 sec
Average Time | 13.48 sec     | 33.72 sec
Table 5: Completion Time in Cursor Control Test

4.1.2 Evaluation of Training Time

The total training time of each subject is shown in Table 6. It shows that subjects apparently spent less time on the training for the Kinect sensor. This is simple to explain: because the training for MYO consists of two tests whereas the training for Kinect contains only one test, the average training time for Kinect is much less than for MYO. This data suggests that MYO has lower user-friendliness due to the longer training time, which matches the subjects' subjective evaluation of the user-friendliness of the MYO armband and the Kinect sensor.

Subject No   | TotalTimeMYO | TotalTimeKinect
1            | 124.85 sec   | 38.70 sec
2            | 119.95 sec   | 63.42 sec
3            | 97.23 sec    | 82.74 sec
4            | 92.45 sec    | 81.51 sec
5            | 118.95 sec   | 79.58 sec
Average Time | 110.69 sec   | 69.19 sec
Table 6: Total Training Time for MYO armband and Kinect sensor

4.1.3 Evaluation of User-friendliness

The subjects' evaluation of user-friendliness is shown in Table 7. The mode value for MYO is 3 while that for Kinect is 4. Even though other factors could have influenced their choice, such as personal interest, it can still be concluded that Kinect is more user-friendly because it requires fewer training tasks. Moreover, the result in 4.1.1 may also strengthen this point of view: four of the subjects made mistakes in the gesture performing test and three of them failed to perform the Wave In gesture, and this failure experience could have caused a negative impression of the MYO armband.

Subject No | MYO           | Kinect
1          | 3 (Uncertain) | 2 (Disagree)
2          | 3 (Uncertain) | 4 (Agree)
3          | 3 (Uncertain) | 5 (Strongly Agree)
4          | 3 (Uncertain) | 4 (Agree)
5          | 2 (Disagree)  | 3 (Uncertain)
Mode       | 3 (Uncertain) | 4 (Agree)
Table 7: Subject's Rate for the User-friendliness of MYO and Kinect

4.2 Result Analysis of Experiment 2

This section explains the results of Experiment 2. The result analysis is based on four aspects: the number of gestures used in the task, the error rate, the time spent with each device and the subjects' self-evaluation of their performance in Experiment 2.

4.2.1 Evaluation of the Number of Gestures

The total number of gestures that each subject performed with each device is shown in Table 8. According to the shortest path shown in Figure 8 in Chapter 3, the expected value for this task is 8. From this table, it can be concluded that subjects performed fewer input actions with the keyboard than with MYO and Kinect. Moreover, the subjects performed the largest number of gestures when they were using the MYO armband. The reason for this is explained in the next sub-section.

Subject No    | MYO | Kinect | Keyboard
1             | 16  | 13     | 11
2             | 14  | 11     | 10
3             | 10  | 10     | 15
4             | 16  | 12     | 11
5             | 13  | 17     | 11
Average Value | 14  | 13     | 12
Table 8: The Number of Gestures Performed in Experiment 2