Gesture Control in a Virtual Environment


Zishuo CHENG <u @anu.edu.au>
29 May 2015

A report submitted for the degree of Master of Computing of the Australian National University
Supervisors: Prof. Tom Gedeon, Martin Henschke
COMP8715: Computing Project
Australian National University, Semester 1, 2015

Acknowledgement

I would like to sincerely thank my supervisors, Professor Tom Gedeon and PhD student Martin Henschke, for their constant guidance and kind assistance throughout the research process. Their expertise, patience, enthusiasm and friendship greatly encouraged me.

Abstract

In recent years, gesture recognition has gained increasing popularity in the field of human-machine interaction. Vision-based gesture recognition and myoelectric recognition are the two main solutions in this area. Myoelectric controllers collect electromyography (EMG) signals from the user's skin as input. The MYO armband is a wearable device launched by Thalmic Lab in 2013; it accomplishes gesture control by detecting motion and muscle activity. Vision-based devices, in contrast, achieve gesture recognition through computer vision. Kinect is a line of motion sensing input devices released by Microsoft in 2010 which recognises the user's motion through cameras. Since both of these methods have their own advantages and drawbacks, this project assesses the performance of the MYO armband and the Kinect sensor in the aspect of virtual control. The analysis is presented with the aim of refining the user experience.

Keywords: MYO armband, Kinect, electromyography signal, vision-based, gesture recognition, Human-computer Interaction

List of Abbreviations

EMG  Electromyography
HCI  Human-computer Interaction
IMU  Inertial Measurement Unit
GUI  Graphical User Interface
NUI  Natural User Interface
SDK  Software Development Kit

Table of Contents

Acknowledgement
Abstract
List of Abbreviations
List of Figures
List of Tables
1 Introduction
  1.1 Overview
  1.2 Motivation
  1.3 Objectives
  1.4 Contributions
  1.5 Report Outline
2 Background
  2.1 MYO armband
  2.2 Kinect sensor
3 Methodology
  3.1 Assessment on User-friendliness
    3.1.1 Training Subjects
    3.1.2 Evaluating Degree of Proficiency
    3.1.3 Evaluating User-friendliness
  3.2 Assessment on Navigation
    3.2.1 Setting up the Virtual Environment
      Virtual Environment Description
      Settings of the Tested Devices
    3.2.2 Experimental Data Collection
      Navigation Data and Time
      Error Rate
      Subjective Evaluation
  3.3 Assessment on Precise Manipulation
    3.3.1 Setting up the Virtual Environment
      Virtual Environment Description
      Settings of the Tested Devices
    3.3.2 Experimental Data Collection
      Moving Range of Arm
      Interaction Events and Time
      Error Rate
      Subjective Evaluation
  3.4 Assessment on Other General Aspects
  3.5 Devices Specification and Experimental Regulations
4 Result Analysis
  4.1 Result Analysis of Experiment 1
    4.1.1 Evaluation of Proficiency Test
    4.1.2 Evaluation of Training Time
    4.1.3 Evaluation of User-friendliness
  4.2 Result Analysis of Experiment 2
    Evaluation of the Number of Gestures
    Evaluation of Error Rate
    Evaluation of Completion Time
    Self-Evaluation
  4.3 Result Analysis of Experiment 3
    Evaluation of Moving Range
    Evaluation of Error Rate
    Evaluation of Completion Time
    Self-Evaluation
  4.4 Analysis of Other Relevant Data
5 Conclusion and Future Improvement
  Conclusion
  Future Improvement
Reference
Appendix A
Appendix B

List of Figures

Figure 1: MYO armband with 8 EMG sensors [credit: Thalmic Lab]
Figure 2: MYO Keyboard Mapper [credit: MYO Application Manager]
Figure 3: A Kinect Sensor [credit: Microsoft]
Figure 4: Skeleton Position and Tracking State of Kinect Sensor [credit: Microsoft Developer Network]
Figure 5: Graph for the Test of Degree of Proficiency of Cursor Control
Figure 6: Flow Chart of Experiment 1
Figure 7: 3D Demo of the Virtual Maze in Experiment 2
Figure 8: One of the Shortest Paths in Experiment 2
Figure 9: 3D Scene for Experiment 3
Figure 10: Euler Angles in 3D Euclidean Space [credit: Wikipedia, Euler Angles]

List of Tables

Table 1: Interaction Event Mapper of MYO in Experiment 2
Table 2: Interaction Event Mapper of Kinect in Experiment 2
Table 3: Interaction Event Mapper of MYO in Experiment 3
Table 4: Error Rate & Incorrect Gesture for Proficiency Test of MYO armband
Table 5: Completion Time in Cursor Control Test
Table 6: Total Training Time for MYO armband and Kinect sensor
Table 7: Subject's Rate for the User-friendliness of MYO and Kinect
Table 8: The Number of Gestures Performed in Experiment 2
Table 9: Error Rate in Experiment 2
Table 10: Completion Time in Experiment 2
Table 11: Subject's Self-Evaluation of the Performance in Experiment 2
Table 12: Range of Pitch Angle in Experiment 3
Table 13: Range of Yaw Angle in Experiment 3
Table 14: Time Spent in Experiment 3
Table 15: Subject's Self-Evaluation of the Performance in Experiment 3

Chapter 1 Introduction

1.1 Overview

In recent years, traditional input devices such as the keyboard and mouse have been losing popularity due to their limited flexibility and freedom. Compared to a traditional graphical user interface (GUI), a natural user interface (NUI) enables human-machine interaction through people's common behaviours such as gesture, voice, facial expression and eye movement. The concept of the NUI was developed by Steve Mann in the 1990s [1]. Over the last two decades, developers have made a variety of attempts to improve the user experience by applying NUIs. Nowadays, NUIs as discussed in [2] are increasingly becoming an important part of contemporary human-machine interaction.

Electromyography (EMG) signal recognition plays an important role in NUIs. EMG is a technique for monitoring the electrical activity produced by skeletal muscles [3]. In recent years, a variety of wearable EMG devices have been released by numerous developers, such as the MYO armband, Jawbone and some types of smartwatch. When muscle cells are electrically or neurologically activated, these devices monitor the electric potential generated by the muscle cells in order to analyse the biomechanics of human movement.

Vision-based pattern recognition is another significant part of NUI study, which has been investigated since the end of the 20th century [4]. By using cameras to capture specific motions and patterns, vision-based devices are able to recognise the message that a human being attempts to convey. There are many innovations in this area, such as Kinect and Leap Motion. Generally speaking, most vision-based devices perform gesture recognition by monitoring and analysing motion, depth, colour, shape and appearance [5].

1.2 Motivation

Even though EMG signal recognition and vision-based pattern recognition have been studied for many years, they are still far from breaking the dominance of the traditional GUI based on keyboard and mouse [6]. Moreover, both approaches have their own problems, which form the bottleneck of their development. For this reason, this project chooses the MYO armband and the Kinect sensor as typical examples of EMG signal recognition and vision-based recognition respectively, and attempts to assess their performance in a virtual environment in order to identify the specific aspects that need to be improved by developers in the future. Moreover, the project also aims to summarise some valuable lessons for human-machine interaction.

1.3 Objectives

The objectives of this project are to evaluate the performance of the MYO armband and the Kinect sensor in the aspect of gesture control, to investigate the user experience of these two devices, and to identify whether any improvement could be made to the development of EMG-based and vision-based HCI.

1.4 Contributions

Firstly, the project set up a 3D maze as the virtual environment to support the evaluation of gesture control. Secondly, the project used the Software Development Kits (SDKs) of the MYO armband and the Kinect sensor to build the connection with the virtual environment. Thirdly, three HCI experiments were held in the project. Last but not least, the project evaluated the experimental data and summarised some lessons for EMG signal recognition and vision-based pattern recognition.

1.5 Report Outline

This project report is divided into five chapters. After this introduction, Chapter 2 introduces the background of the MYO armband and the Kinect sensor, including their features and limitations. Chapter 3 explains the research methodology in detail and introduces the three experiments held in this project. Chapter 4 analyses and discusses the experimental data from various dimensions. Lastly, the conclusion and future improvements are discussed in Chapter 5.

Chapter 2 Background

The project selects the MYO armband and the Kinect sensor as the representative devices in the areas of EMG signal recognition and vision-based pattern recognition respectively. By evaluating the performance of these two devices, the researcher is able to identify the advantages and defects of these two approaches to gesture control. Thus, in this chapter, the features, specifications and limitations of the MYO armband and Kinect are explained in more detail.

2.1 MYO armband

The MYO armband is a wearable electromyography forearm band which was developed by Thalmic Lab in 2013 [4]. The original aim of this equipment is to provide touch-free control of technology with gestures and motion. In 2014, the developer Thalmic Lab released the first shipment of the first generation product [4]. The armband allows the user to wirelessly control technology by detecting the electrical activity in the user's muscles and the motion of the user's arm.

One of the main features of the MYO armband is that the band reads electromyography signals from skeletal muscles and uses them as the input commands for the corresponding gesture control events. As Figure 1 shows, the armband has 8 medical grade sensors which are used to monitor EMG activity from the surface of the user's skin. To monitor the spatial data about the movement and orientation of the user's arm, the armband adopts a 9-axis Inertial Measurement Unit (IMU) which includes a 3-axis gyroscope, a 3-axis accelerometer and a 3-axis magnetometer [12]. Through the EMG sensors and the IMU, the armband is able to recognise the user's gestures and track the motion of the user's arm. Moreover, the armband uses Bluetooth 4.0 as the information channel to transmit the recognised signals to the paired device.

Figure 1: MYO armband with 8 EMG sensors [credit: Thalmic Lab]

Another feature of the MYO armband is its open application program interfaces (APIs) and free SDK. Based on this feature, more people can be involved in building solutions for various uses such as home automation, drones, computer games and virtual reality. Thalmic Lab has released more than 10 versions of the SDK since the initial version, Alpha 1. According to the change log in [10], numerous new features were added to the SDK in each update to make the development environment more powerful. In Beta release 2, gesture data collection was added, so developers are able to collect and analyse gesture data in order to help improve the accuracy of gesture recognition. In the latest version, 0.8.1, a new function called mediakey() was added to the SDK, which allows media key events to be sent to the system. So far, the MYO SDK has become a mature development environment with plenty of well-constructed functions.

Nevertheless, there are a few drawbacks in the current generation of the MYO armband. First of all, the number of poses that can be recognised by the band is limited. In the developer blog in [10], Thalmic Lab announced that the MYO armband can recognise 5 pre-set gestures: fist, wave left, wave right, fingers spread and double tap. By setting up the connection through Bluetooth 4.0, users are able to map each gesture to a particular input event in order to interact with the paired device. On the one hand, the developers of the armband tend to simplify human-machine interaction, so using only 5 gestures to interact with the environment is a user-friendly design which largely reduces the operation complexity. On the other hand, this design places some restrictions on application development. Secondly, the accuracy of gesture recognition is not satisfactory, especially in complex interactions. When a user aims to implement a complicated task with a combination of several gestures, the armband is not sensitive enough to detect quick changes between the user's gestures.
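As a concrete illustration of how the five pre-set gestures can be mapped to input events, in the spirit of the MYO Keyboard Mapper shown in Figure 2, the following C# sketch pairs each pose with a keyboard key. The pose names and key bindings are assumptions made for this sketch, not types or defaults taken from the MYO SDK.

    using System;
    using System.Collections.Generic;

    // Illustrative only: hypothetical pose names and key bindings.
    enum MyoPose { Rest, Fist, FingersSpread, WaveLeft, WaveRight, DoubleTap }

    static class KeyboardMapperSketch
    {
        // One possible binding of the five pre-set gestures to keyboard keys.
        static readonly Dictionary<MyoPose, string> Bindings = new Dictionary<MyoPose, string>
        {
            { MyoPose.Fist,          "Enter"       },
            { MyoPose.FingersSpread, "Space"       },
            { MyoPose.WaveLeft,      "Left arrow"  },
            { MyoPose.WaveRight,     "Right arrow" },
            { MyoPose.DoubleTap,     "Escape"      },
        };

        // Resolve a recognised pose to the key it is bound to; Rest (or any
        // unbound pose) produces no key event.
        public static string Resolve(MyoPose pose) =>
            Bindings.TryGetValue(pose, out var key) ? key : null;

        static void Main()
        {
            Console.WriteLine(Resolve(MyoPose.WaveRight));   // prints "Right arrow"
        }
    }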

Figure 2: MYO Keyboard Mapper [credit: MYO Application Manager]

2.2 Kinect sensor

Kinect is a line of motion sensing input devices released by Microsoft in 2010. The first generation Kinect was designed for HCI in the video games listed in the Xbox 360 store. Since its release, the Kinect sensor has attracted the attention of numerous researchers because of its ability to perform vision-based gesture recognition [4]. Nowadays, Kinect is used not only for entertainment but also for other purposes such as model building and HCI research. In the later chapters of this report, numerous parts of the HCI experiments are designed based on the product characteristics discussed in the following paragraphs.

One of the key characteristics of the Kinect sensor is that it uses 3 cameras to implement pattern recognition. As Figure 3 shows, a Kinect sensor consists of an RGB camera, two 3D depth sensors, a built-in motor and a multi-array microphone. The RGB camera is a traditional RGB camera which generates high-resolution colour images in real time. As mentioned in [13], the depth sensor is composed of an infra-red (IR) projector and a monochrome complementary metal oxide semiconductor (CMOS) sensor. By measuring the reflection time of the IR rays, the depth map can be produced. The video streams of both the RGB camera and the depth sensor use the same video graphics array (VGA) resolution (640 × 480 pixels), and each pixel in the RGB viewer corresponds to a particular pixel in the depth viewer. Based on this working principle, the Kinect sensor is able to display the depth, colour and 3D information of the objects it captures.

Another characteristic of the Kinect sensor is its unique skeletal tracking system. As Figure 4 illustrates, Kinect predicts the 3-dimensional positions of 20 joints of the human body from a single depth image [7]. Through this system, Kinect is able to estimate body parts invariant to pose, body shape, appearance, etc. This system allows developers to use the corresponding built-in functions in the Kinect SDK to retrieve real-time motion and poses. Thus, it not only provides a powerful development environment for application developers, but also enhances the user experience of Kinect applications.

The SDK is the third characteristic which enables Kinect to gain popularity. Similar to the MYO armband, Kinect has a non-commercial SDK released by Microsoft. In each updated version, Microsoft attempts to add more useful functions and features and keeps optimising the development environment. For example, the latest version, SDK 2.0 released in October 2014, supports a wider horizontal and vertical field of view for depth and colour. For the skeletal tracking system, Microsoft increased the number of joints that can be recognised from 20 to 25. Moreover, some new gestures such as open and closed hand gestures were also added to the SDK.

However, the Kinect sensor also has its own defects. Firstly, although Microsoft keeps improving the SDK, the depth sensor still has a limited sensing range, from 0.4 meters to 4 meters, and the calibration function performs differently depending on the distance between objects and the Kinect sensor. According to the research in [7], to achieve the best performance, the Kinect sensor should be located within a 30cm × 30cm square at a distance of between 1.45 and 1.75 meters from the user. Secondly, the depth image data measured by the Kinect sensor is not reliable enough; the depth images can be disturbed by noise sources such as lighting and background.

Figure 3: A Kinect Sensor [credit: Microsoft]
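A small C# sketch of how the distance constraints quoted above could be checked for a tracked user follows; the method name and the idea of receiving the user's distance in metres are assumptions made for illustration, not Kinect SDK calls.

    using System;

    static class KinectRangeCheck
    {
        // Classifies a user's distance (in metres) against the depth sensor's
        // sensing range (0.4 m to 4 m) and the recommended band for best
        // performance reported in [7] (1.45 m to 1.75 m).
        public static string Classify(double distanceMetres)
        {
            if (distanceMetres < 0.4 || distanceMetres > 4.0)
                return "outside sensing range";
            if (distanceMetres >= 1.45 && distanceMetres <= 1.75)
                return "within recommended band";
            return "sensed, but outside recommended band";
        }

        static void Main()
        {
            Console.WriteLine(Classify(1.5));   // prints "within recommended band"
        }
    }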

Figure 4: Skeleton Position and Tracking State of Kinect Sensor [credit: Microsoft Developer Network]

Chapter 3 Methodology

This chapter introduces the details of the three HCI experiments held in this project. The main purpose of this phase is to design the experimental methodology in order to investigate the performance and user experience of the MYO armband and the Kinect sensor in the area of gesture control. The chapter contains five sections. Section 3.1 describes the first experiment in detail; this experiment aims to help volunteers get familiar with the use of the MYO armband and the Kinect sensor and to evaluate their user-friendliness. Section 3.2 introduces the second experiment, in which a virtual environment is implemented in order to investigate the navigation performance of the devices. Section 3.3 explains the third experiment, which also sets up a virtual environment to assess the precise-manipulation performance of each device. Section 3.4 illustrates other general points investigated in the experimental questionnaire. Lastly, Section 3.5 introduces the specification of the experimental devices and the experimental rules.

3.1 Assessment on User-friendliness

This section introduces Experiment 1. There are two purposes for holding this experiment. Firstly, since both the MYO armband and the Kinect sensor require special gestures to interact with the virtual environment, it is important to train subjects to be familiar with the use of these two devices before holding the experiments that evaluate their performance in virtual control. Secondly, if subjects are novice users of the MYO armband and the Kinect sensor, it is a good chance to investigate the user-friendliness of the devices. The process of this experiment is shown in Figure 6.

3.1.1 Training Subjects

There are two phases in this experiment. Each subject is first required to learn the use of the MYO armband. At the beginning of this phase, a demo video about using the MYO armband is shown to each subject. The contents of the demo video include wearing the armband, performing the sync gesture, using the IMU to track arm motion and performing the five pre-set gestures, which are fist, fingers spread, wave left, wave right and double tap. After the demo video, subjects are asked to use the armband by themselves. Each subject needs to put on the armband and sync it with the paired experimental computer. After syncing successfully, they need to perform the five gestures and use their arm to control the cursor on the screen of the paired computer.

The second phase of this experiment is training the subject in the use of the Kinect sensor. A demo video is also shown to each subject, which covers activating the sensor, calibrating pattern recognition and tracking arm motion. Similar to the first phase, subjects are asked to activate the Kinect sensor and do the calibration task by themselves. After this, they are also required to use their arm to control the cursor on the screen of the paired computer.

3.1.2 Evaluating Degree of Proficiency

Since one of the purposes of this experiment is to train users to use the tested devices, evaluating the subject's degree of proficiency is meaningful and important. In this experiment, a subject is allowed to do Experiments 2 and 3 only if his/her degree of proficiency is acceptable. A program is implemented to assess each subject's degree of proficiency during the test.

To evaluate a subject's proficiency with the MYO armband, two aspects are monitored and assessed by the program. Firstly, the program randomly selects one of the five gestures and displays the text version of the chosen gesture on the screen. The program repeats this task ten times, and each gesture is selected twice. Subjects should perform the same gesture as shown on the screen; an error is counted if the subject performs a gesture different from the one shown. Secondly, the program generates a graph, shown in Figure 5, containing five red points at preset positions. As the graph is displayed on the screen, the cursor is reset to the point (0, 0) on the graph. Subjects are asked to use the MYO armband to control the cursor to reach all five points within one minute; a failure is counted if the time runs out. Across these two tests, the subject is assessed as qualified only if he/she completes the first test with an acceptable error rate (at most 20%) and completes the second test within 1 minute. If the subject is not qualified, he/she is required to redo the failed part until it is passed.

Similar to the proficiency evaluation for the MYO armband, the program uses the same graph to assess the subject's proficiency with the Kinect sensor. Since manipulation with the Kinect sensor does not require any specific gestures, subjects are not asked to perform gestures in this evaluation. A subject is therefore considered qualified if he/she can complete the cursor control test within 1 minute; if the subject fails, he/she needs to redo it until it is passed.
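A minimal C# sketch of the gesture-prompt part of this check is given below: ten prompts (each of the five gestures exactly twice, in random order), a mismatch counter and the 20% threshold, treated here as the maximum acceptable rate as stated in Chapter 4. The readPerformedGesture delegate is a stand-in for however the experimental program reads the gesture the armband actually recognised; it is an assumption, not the project's code.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class ProficiencyTestSketch
    {
        static readonly string[] Gestures =
            { "fist", "fingers spread", "wave left", "wave right", "double tap" };

        // Each gesture appears twice; the combined list is shuffled.
        public static List<string> BuildPrompts(Random rng) =>
            Gestures.Concat(Gestures).OrderBy(_ => rng.Next()).ToList();

        public static bool IsQualified(List<string> prompts, Func<string> readPerformedGesture)
        {
            int errors = 0;
            foreach (string prompt in prompts)
            {
                if (readPerformedGesture() != prompt)
                    errors++;                          // a recognition mismatch counts as one error
            }
            double errorRate = (double)errors / prompts.Count;
            return errorRate <= 0.20;                  // 20% is treated as the maximum acceptable rate
        }
    }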

Figure 5: Graph for the Test of Degree of Proficiency of Cursor Control

3.1.3 Evaluating User-friendliness

To evaluate user-friendliness, four aspects are taken into account. Firstly, for each experimental device, a timer is started after the demo video is shown and stopped once the subject is proficient at manipulating the device; this time record (named "TotalTime") shows how long a novice user spends getting familiar with the operation of each device. Secondly, the time that each subject takes in the cursor control test is also recorded (named "CursorControlTime"). Thirdly, for the MYO armband, the error rate of its first test is recorded as "ErrorRate". Lastly, when a subject passes all the training tests, they are asked to give a subjective evaluation of the user-friendliness of the MYO armband and the Kinect sensor. The question for this aspect is "Do you think the MYO armband/Kinect sensor is user-friendly?", with five possible responses: strongly agree, agree, uncertain, disagree and strongly disagree.
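The four measures above could be stored per subject and per device in a simple record such as the following C# sketch; the field for the questionnaire answer is an assumption about how the five-point response might be encoded, not the project's actual data format.

    using System;

    // One row of user-friendliness data for one subject and one device.
    sealed class UserFriendlinessRecord
    {
        public string Device { get; set; }              // "MYO" or "Kinect"
        public TimeSpan TotalTime { get; set; }         // end of demo video until proficiency is reached
        public TimeSpan CursorControlTime { get; set; } // time taken in the cursor control test
        public double ErrorRate { get; set; }           // gesture-test error rate (MYO only)
        public int Rating { get; set; }                 // 1 = strongly disagree ... 5 = strongly agree
    }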

Figure 6: Flow Chart of Experiment 1

3.2 Assessment on Navigation

This section introduces Experiment 2. The purpose of this experiment is to test the performance of the MYO armband and the Kinect sensor in the aspect of navigation, and to compare them with a traditional input device. A virtual maze is set up in this experiment to support the evaluation. Moreover, to make the data analysis more convenient, the interaction events of each tested device (i.e. MYO armband, Kinect sensor and keyboard) have been pre-set rather than customised. Therefore, all subjects need to use the same input commands to interact with the virtual environment, and are not allowed to set the interaction events according to their personal preferences.

3.2.1 Setting up the Virtual Environment

This sub-section introduces the details of the virtual environment used in Experiment 2 and the settings of the three tested devices. The virtual environment is a 3D maze. Subjects are required to use the keyboard, the MYO armband and the Kinect sensor to move from the starting point to the specified destination.

Virtual Environment Description

The virtual environment used throughout this project is a 3-dimensional maze written in C#. The virtual maze consists of 209 objects, each mapped to a corresponding 2-dimensional texture image. To enhance the sense of virtual reality, the player in the maze is shown from a first-person perspective. As Figure 7 shows, the structure of the maze is not complicated: it contains 4 rooms, 5 straight halls, 3 square halls and 2 stair halls, and each part of the maze is used for a different testing purpose. In this experiment, the starting position is set at a corner of Room 1. To save time, the camera can be switched to this starting point by pressing key 1 on the keyboard, so the researcher presses key 1 when the subject is about to take this test. One of the shortest paths is shown in Figure 8 and is considered the expected value in this experiment. Each subject is asked to do their best to trace this shortest path.

Figure 7: 3D Demo of the Virtual Maze in Experiment 2

Figure 8: One of the Shortest Paths in Experiment 2

There are four interaction events set in this navigation task: moving forward, moving backward, turning left and turning right. It is important to note that when turning left/right happens, the camera is rotated to the left/right rather than shifted horizontally. Therefore, if users want to move to the left/right, they need to turn the camera to the left/right first, and then move forward in the new direction.

Settings of the Tested Devices

The three tested devices in this experiment are the MYO armband, the Kinect sensor and the keyboard. For each device, the interaction events mentioned in the previous sub-section are mapped to the corresponding gestures or keys. Moreover, since the MYO armband and the Kinect sensor cannot be directly connected to the virtual environment, it is necessary to build a connector in the code of the maze. In the process of building the connectors, the MYO SDK and Kinect SDK 1.9 were used. The settings of the three devices are explained below.

Firstly, the settings of the MYO armband are shown in Table 1. An Unlock event is set in MYO mode in order to reduce misuse: unless the subject performs a double tap to unlock the armband, the other four gestures are not acted upon. It is important to note that the experiment does not adopt a Finite State Machine (FSM) as the mathematical model, so users need to hold a gesture in order to keep the corresponding event going.

Gesture          Interaction Event
Fist             Move Forward
Fingers Spread   Move Backward
Wave Left        Turn Left
Wave Right       Turn Right
Double Tap       Unlock

Table 1: Interaction Event Mapper of MYO in Experiment 2
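A minimal sketch, under assumed type names, of the unlock gate and the hold-to-continue behaviour described above is given below; it is illustrative only and is not the connector code used in the project.

    using System;

    // Hypothetical names; not taken from the MYO SDK or from the maze code.
    enum HeldPose { Rest, Fist, FingersSpread, WaveLeft, WaveRight, DoubleTap }
    enum MazeEvent { None, MoveForward, MoveBackward, TurnLeft, TurnRight }

    sealed class MyoNavigationMapper
    {
        bool unlocked = false;

        // Called once per frame with the pose currently being held.  Because no
        // finite state machine is used, an event is produced only while its
        // gesture is held; returning to Rest stops the movement.
        public MazeEvent OnFrame(HeldPose pose)
        {
            if (pose == HeldPose.DoubleTap) { unlocked = true; return MazeEvent.None; }  // Unlock
            if (!unlocked) return MazeEvent.None;   // ignore gestures until unlocked

            switch (pose)
            {
                case HeldPose.Fist:          return MazeEvent.MoveForward;   // Table 1
                case HeldPose.FingersSpread: return MazeEvent.MoveBackward;
                case HeldPose.WaveLeft:      return MazeEvent.TurnLeft;
                case HeldPose.WaveRight:     return MazeEvent.TurnRight;
                default:                     return MazeEvent.None;          // Rest
            }
        }
    }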

Secondly, because the version of the Kinect SDK used in this experiment does not support hand gesture recognition, the Kinect sensor still needs to be used together with a mouse. When subjects are standing in front of the cameras of the Kinect sensor, they are required to hold a mouse in their right hand. After Kinect mode is launched, the vision-based sensor tracks the subject's right shoulder, elbow and hand, so the subject is able to control the cursor on the screen by moving his/her right hand. The interaction event mapper is shown in Table 2. The cursor is constrained within the frame, so it is forced to stay at a border if the user tries to move it out of the frame. If the cursor is located on a border of the frame, the corresponding arrow is displayed. If the user then holds both the left and right buttons of the mouse held in his/her right hand, the user moves toward or turns to the corresponding direction.

Cursor Position     Interaction Event
Cursor.X = 0        Turn Right
Cursor.X = Width    Turn Left
Cursor.Y = 0        Move Forward
Cursor.Y = Height   Move Backward

Table 2: Interaction Event Mapper of Kinect in Experiment 2

Thirdly, the setting of the keyboard is based on the conventions of most 3D games: key W maps to moving forward, key S to moving backward, key A to turning left and key D to turning right.
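The border-based mapping in Table 2 could be expressed as the following C# sketch; the method signature and the idea of receiving the cursor position and button states each frame are assumptions made for illustration.

    using System;

    // Illustrative sketch of the Kinect-mode navigation mapping (Table 2):
    // the hand-tracked cursor is clamped to the frame, and an event fires only
    // while the cursor sits on a border and both mouse buttons are held.
    enum BorderEvent { None, MoveForward, MoveBackward, TurnLeft, TurnRight }

    static class KinectNavigationSketch
    {
        public static BorderEvent OnFrame(int cursorX, int cursorY, int width, int height,
                                          bool leftButton, bool rightButton)
        {
            // Keep the cursor inside the frame.
            cursorX = Math.Max(0, Math.Min(width, cursorX));
            cursorY = Math.Max(0, Math.Min(height, cursorY));

            if (!(leftButton && rightButton))
                return BorderEvent.None;                         // both buttons must be held

            if (cursorX == 0)      return BorderEvent.TurnRight;    // Cursor.X = 0      (Table 2)
            if (cursorX == width)  return BorderEvent.TurnLeft;     // Cursor.X = Width
            if (cursorY == 0)      return BorderEvent.MoveForward;  // Cursor.Y = 0
            if (cursorY == height) return BorderEvent.MoveBackward; // Cursor.Y = Height
            return BorderEvent.None;
        }
    }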

3.2.2 Experimental Data Collection

This sub-section introduces the types of data collected in Experiment 2 and the method used for data collection. The following types of data are considered meaningful for evaluating the performance of the tested devices in the navigation task.

Navigation Data and Time

Firstly, if a moving or turning event is triggered, a clock function is activated and keeps counting time until the program of the virtual maze is closed; from this, the subject's completion time in the task can be calculated. Moreover, the clock function is reactivated every 0.02 seconds, and each time it is activated the experimental program also records the navigation data. Each piece of navigation data therefore includes the type of movement (move/turn), the direction (forward/backward/left/right) and the corresponding time. Lastly, in MYO mode, the navigation data also includes the status of the armband (locked/unlocked), the hand the armband is worn on (R/L) and the gesture currently performed (rest/fist/fingers spread/wave in/wave out/double tap). It is important to note that subjects are allowed to use either their right or left arm to perform this task, so two gestures have different names from the gestures shown in Table 1. The MYO armband is able to recognise which hand the user is using. Therefore, if the subject uses the right hand, the Wave Left gesture in Table 1 is recorded as "wave in" and Wave Right as "wave out"; if the subject uses the left hand, Wave Left is recorded as "wave out" and Wave Right as "wave in".

Pseudo Code of Collecting Navigation Data

InputMode = {KEYBOARD, MYO, KINECT}
MYO = (Status, Hand, Gesture)
Status = {unlock, lock}
Hand = {L, R}
Gesture = {rest, fist, fingers spread, wave in, wave out, double tap}
Event = (Movement, Direction)
Movement = {MOVE, TURN}
Direction = {FORWARD, BACKWARD, LEFT, RIGHT}

// The clock and the output file are created once when the virtual maze is launched.
Clock clock = new Clock()
case InputMode of
    KEYBOARD: StreamWriter file = new StreamWriter("Keyboard_Ex2_NavigationData.txt")
    KINECT:   StreamWriter file = new StreamWriter("Kinect_Ex2_NavigationData.txt")
    MYO:      StreamWriter file = new StreamWriter("MYO_Ex2_NavigationData.txt")
EndCase

while virtual maze is launched
    if Event is triggered
        // One record is written every 0.02 seconds while the event lasts.
        if triggerTime >= 0.02 sec
            if InputMode = MYO
                file.Write(clock.ElapsedTime() + Event.Movement + Event.Direction
                           + MYO.Status + MYO.Hand + MYO.Gesture)
            else
                file.Write(clock.ElapsedTime() + Event.Movement + Event.Direction)
            EndIf
            triggerTime.Clear()
        EndIf
    EndIf
EndWhile

*Note: The settings of the interaction events are contained in the virtual maze and are not listed in this pseudo code.
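A compact, runnable counterpart of the sampling idea in the pseudo code above, writing one record roughly every 0.02 seconds, might look like the following C# sketch. The simulated two-second session, the reuse of the keyboard file name and the record format are illustrative assumptions only.

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Threading;

    static class NavigationLoggerSketch
    {
        static void Main()
        {
            using (var file = new StreamWriter("Keyboard_Ex2_NavigationData.txt"))
            {
                var clock = Stopwatch.StartNew();
                double nextSample = 0.0;                  // next sampling instant in seconds

                // Simulated session: log for two seconds as if "MOVE FORWARD" were held.
                while (clock.Elapsed.TotalSeconds < 2.0)
                {
                    if (clock.Elapsed.TotalSeconds >= nextSample)
                    {
                        file.WriteLine($"{clock.Elapsed.TotalSeconds:F2} MOVE FORWARD");
                        nextSample += 0.02;               // one record per 0.02 s, as in the experiment
                    }
                    Thread.Sleep(1);                      // avoid a busy loop
                }
            }
        }
    }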

Error Rate

The error rate can also be considered as recognition error. For example, if a subject performs the Fist gesture in MYO mode but the armband recognises it as Double Tap, a recognition error is counted. Due to the limits of the devices, they cannot detect and calibrate such errors by themselves. Therefore, a camera records a video while a subject performs this task, and the researcher reviews the video to detect recognition errors. If the researcher finds that a gesture the subject performed produced wrong feedback in the virtual environment, a recognition error is counted.

Subjective Evaluation

After completing this test, subjects are asked to give a subjective evaluation of their performance with each tested device they used in this experiment. There are five ratings for them to choose from: Excellent, Good, Average, Poor and Very Poor. Moreover, they are asked to choose their favourite device for this task and to list the reasons for their choice.

3.3 Assessment on Precise Manipulation

This section introduces Experiment 3. The purpose of this experiment is to test the performance of the MYO armband and the Kinect sensor in the aspect of precise manipulation, and to make a comparison with a traditional input device. Similar to Experiment 2, subjects are asked to perform the precise manipulation in a virtual environment, and the interaction events of each tested device (i.e. MYO armband, Kinect sensor and mouse) have been pre-set. The task for subjects in this experiment is to use the tested device to pick up the keys generated on the screen and use the keys to open the corresponding doors.

3.3.1 Setting up the Virtual Environment

This sub-section introduces the details of the virtual environment used in Experiment 3 and the settings of the three tested devices. The virtual environment is a 3D scene. Subjects are required to use the mouse, the MYO armband and the Kinect sensor to select and drag the keys to the corresponding doors. Compared to Experiment 2, even though there are fewer interaction events set in this experiment, it requires subjects to control the cursor precisely and to perform the gestures more proficiently.

Virtual Environment Description

The virtual environment used in this experiment is a square hall located in the 3-dimensional maze introduced in Experiment 2. It can be considered a scene because users are not allowed to move around. As in Experiment 2, the scene uses a first-person perspective. As Figure 9 shows, two keys are generated one by one. Subjects are asked to drag each key to the corresponding lock in order to open the door. After the first door is opened, the first key disappears automatically and the second key is displayed on the screen.

To save time in this experiment, after launching the virtual maze, the researcher presses key 2 to switch the camera to this scene. There are three interaction events set in this precise manipulation task: controlling the cursor, selecting a key and grabbing a key. As in Experiment 2, the experiment does not use a Finite State Machine (FSM) as the mathematical model, so users need to hold the input command if they want the corresponding interaction event to continue.

Settings of the Tested Devices

The three tested devices in this experiment are the MYO armband, the Kinect sensor and the mouse. For each device, the interaction events mentioned in the previous sub-section are mapped to the corresponding gestures or keys.

Firstly, the settings of the MYO armband are shown in Table 3. As in Experiment 2, an Unlock event is set in MYO mode to reduce misuse. However, to simplify the manipulation, the Unlock event shares the same input gesture with the Toggle Mouse event: if the user performs the Fingers Spread gesture, the MYO armband is unlocked and the user is allowed to use the arm to control the cursor. Moreover, to keep an event going, users should hold the gesture until they want to stop that event. While the Grab event continues, users are able to drag the key according to the movement of the cursor.

Gesture          Interaction Event
Fist             Grab
Fingers Spread   Unlock and Toggle Mouse
Double Tap       Select

Table 3: Interaction Event Mapper of MYO in Experiment 3

Secondly, similar to the setting in Experiment 2, Kinect mode also needs the mouse to trigger the interaction events, but the cursor is tracked by the vision-based sensor of Kinect instead of the mouse. Thus, when users use their right hand to put the cursor on the handle of the key, they can press the left mouse button to trigger the Select event. They can then hold both the left and right mouse buttons to trigger the Grab event in order to drag the key to its corresponding lock.

Thirdly, the mouse is set up in the usual sense. When the left button is pressed and the cursor is on the handle of the key, the key is selected. Then, if both the left and right buttons are held, the key is dragged as the cursor moves.
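A minimal sketch of the select-and-drag behaviour in MYO mode (Table 3) follows; the pose names, the Vector2 type and the assumption that selection only happens when the cursor is over the key's handle are illustrative, not the project's code.

    using System;

    enum Exp3Pose { Rest, Fist, FingersSpread, DoubleTap }   // hypothetical names

    struct Vector2
    {
        public float X, Y;
        public Vector2(float x, float y) { X = x; Y = y; }
    }

    sealed class KeyDragSketch
    {
        bool unlocked = false;
        bool selected = false;
        public Vector2 KeyPosition { get; private set; } = new Vector2(0, 0);

        // Called once per frame with the held pose, the cursor position and
        // whether the cursor is currently over the key's handle.
        public void OnFrame(Exp3Pose pose, Vector2 cursor, bool cursorOverHandle)
        {
            if (pose == Exp3Pose.FingersSpread) unlocked = true;                  // Unlock and Toggle Mouse
            if (!unlocked) return;

            if (pose == Exp3Pose.DoubleTap && cursorOverHandle) selected = true;  // Select
            if (pose == Exp3Pose.Fist && selected)
                KeyPosition = cursor;                                             // Grab: key follows cursor while held
        }
    }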

Figure 9: 3D Scene for Experiment 3

3.3.2 Experimental Data Collection

This sub-section introduces the types of data collected in Experiment 3 and the method used for data collection. The following types of data are considered meaningful for evaluating the performance of the tested devices in precise manipulation.

Moving Range of Arm

Since subjects need to use the motion of their arms to control the cursor on the screen when they are using the MYO armband and the Kinect sensor, monitoring the moving range of the subjects' arms is meaningful to the evaluation. To calculate the moving range, Euler angles in 3-dimensional Euclidean space are used. Euler angles use 3 angles to describe the orientation of a rigid body in 3-dimensional Euclidean space [8]. The angles α, β and γ shown in Figure 10 correspond to the parameters yaw, roll and pitch used in the experimental code. Since subjects do not need to roll their wrists in this test, the roll parameter is not taken into account in the evaluation; however, to ensure data integrity, the roll angle is still collected in the experimental data.

In [8], the researchers built a device to emulate upper body motion in a virtual 3D environment and used tri-axial accelerometers to detect human motions, which is similar to the idea of Experiment 3. The measurement method used in [8] can also reasonably be applied to this experiment. That is, since each user has a different height and arm length, it is hard to compare the Euler angles among numerous subjects. Therefore, the Euler angles in radians need to be converted to a scale in order to make the evaluation more reasonable and convincing. By using the formula provided by Thalmic Lab in [10], the angles in this experiment can be converted to a scale from 0 to 18:

rollScale  = (Angle_roll + π) / (2π) * 18
pitchScale = (Angle_pitch + π/2) / π * 18
yawScale   = (Angle_yaw + π) / (2π) * 18

In MYO SDK 0.8.1, the developers use a quaternion to calculate the angles of roll, pitch and yaw. The parameters of the quaternion are x, y, z and w, where w is the scalar component and x, y, z are the vector components [9]. To calculate the angles of roll, pitch and yaw, the formula provided by the developers in [10] is applied; after that, the formulas above convert each angle in radians to the specific scale.

Angle_roll  = atan2(2 * (w*x + y*z), 1 - 2 * (x*x + y*y))
Angle_pitch = asin(max(-1, min(1, 2 * (w*y - z*x))))
Angle_yaw   = atan2(2 * (w*z + x*y), 1 - 2 * (y*y + z*z))

For the Kinect SDK in [11], unfortunately the Euler angle functions in the library can only be used to track the pose of the head rather than the hand. However, since wearing the MYO armband does not influence the pattern recognition of the Kinect sensor, the subjects are asked to wear the MYO armband so that the Euler angles of their arms can still be calculated when the virtual maze is in Kinect mode.

Figure 10: Euler Angles in 3D Euclidean Space [credit: Wikipedia, Euler Angles]
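The following C# sketch is a direct transcription of the two sets of formulas above (the quaternion-to-Euler conversion and the conversion of each angle to the 0 to 18 scale) so that the arm's moving range can be compared across subjects. It is illustrative only and is not the project's logging code; the scale factor of 18 follows the Thalmic Lab formula quoted above.

    using System;

    static class EulerAngleSketch
    {
        // Quaternion (x, y, z, w) to roll/pitch/yaw in radians, following the
        // formula provided by the MYO SDK developers in [10].
        public static (double roll, double pitch, double yaw) ToEuler(double x, double y, double z, double w)
        {
            double roll  = Math.Atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y));
            double pitch = Math.Asin(Math.Max(-1.0, Math.Min(1.0, 2.0 * (w * y - z * x))));
            double yaw   = Math.Atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z));
            return (roll, pitch, yaw);
        }

        // Convert each angle to the 0 to 18 scale used in the experiment, so that
        // ranges can be compared across subjects with different arm lengths.
        public static (int roll, int pitch, int yaw) ToScale(double roll, double pitch, double yaw)
        {
            int rollScale  = (int)((roll  + Math.PI)       / (2.0 * Math.PI) * 18.0);
            int pitchScale = (int)((pitch + Math.PI / 2.0) / Math.PI         * 18.0);
            int yawScale   = (int)((yaw   + Math.PI)       / (2.0 * Math.PI) * 18.0);
            return (rollScale, pitchScale, yawScale);
        }
    }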

Interaction Events and Time

Firstly, as in the "Navigation Data and Time" sub-section of Experiment 2, a clock function is activated when the subject triggers an interaction event and keeps counting time until the program of the virtual maze is closed. The clock function is reactivated every 0.02 seconds, and each time it is activated the experimental program also records the interaction data, which includes the status of the key (held/not held), the type of event (select/grab) and the corresponding time. In addition, in MYO and Kinect mode, the three scaled Euler angles are recorded. Lastly, in MYO mode, the interaction data also includes the status of the armband (locked/unlocked), the hand the armband is worn on (R/L) and the gesture currently performed (rest/fist/fingers spread/double tap). Since the Wave Left and Wave Right gestures are not mapped to any interaction event in this experiment, the MYO armband gives no feedback for these two gestures.

Pseudo Code of Collecting Euler Angle and Interaction Data

InputMode = {MOUSE, MYO, KINECT}
MOUSE = (CursorPosition)
MYO = (Status, Hand, Gesture, EulerAngle, CursorPosition)
KINECT = (EulerAngle, CursorPosition)
Status = {unlock, lock}
Hand = {L, R}
Gesture = {rest, fist, fingers spread, double tap}
EulerAngle = (rollScale, pitchScale, yawScale)
CursorPosition = (X, Y)
Event = {SELECT, GRAB}
Key = {held, not held}

// The clock and the output file are created once when the virtual maze is launched.
Clock clock = new Clock()
case InputMode of
    MOUSE:  StreamWriter file = new StreamWriter("MOUSE_Ex3_InteractionData.txt")
    KINECT: StreamWriter file = new StreamWriter("Kinect_Ex3_InteractionData.txt")
    MYO:    StreamWriter file = new StreamWriter("MYO_Ex3_InteractionData.txt")
EndCase

while virtual maze is launched
    if Event is triggered
        // One record is written every 0.02 seconds while the event lasts.
        if triggerTime >= 0.02 sec
            file.Write(clock.ElapsedTime() + Key + Event + CursorPosition.X + CursorPosition.Y)
            if InputMode = KINECT or InputMode = MYO
                file.Write(EulerAngle.rollScale + EulerAngle.pitchScale + EulerAngle.yawScale)
            EndIf
            if InputMode = MYO
                file.Write(MYO.Status + MYO.Hand + MYO.Gesture)
            EndIf
            triggerTime.Clear()
        EndIf
    EndIf
EndWhile

*Note: The settings of the interaction events are contained in the virtual maze and are not listed in this pseudo code.

Error Rate

As in Experiment 2, the error rate in this experiment indicates the recognition error of the tested devices. The way to identify errors is also to film each subject with a camera and review the video to identify the errors.

Subjective Evaluation

After completing this test, subjects are asked to give a subjective evaluation of their performance with each device they used in Experiment 3. As in Experiment 2, the ratings to choose from are Excellent, Good, Average, Poor and Very Poor. Moreover, they are asked to choose their favourite device for precise manipulation and to list the reasons for their choice.

3.4 Assessment on Other General Aspects

There are some other questions listed in the questionnaire. Before doing the experiments, subjects need to fill out their name, gender, date of birth and contact number, and answer some pre-experiment questions, including "How many years have you used a computer with keyboard and mouse?", "Did you use any other NUI input device before?" and "Did you use the MYO armband/Kinect sensor before?". These two parts aim to investigate the subject's background and provide more dimensions for the data evaluation in the next chapter.

Apart from that, after completing all three experiments, the subjects are asked to give a subjective assessment of the overall performance of the MYO armband and the Kinect sensor. Moreover, they are also asked to answer the question "Do you have the willingness to use the MYO armband/Kinect sensor to replace mouse and keyboard in the future?". The post-experiment questions aim to investigate the user experience from the perspective of the subjects, and may provide a different view from the evaluation based on the data collected by the experimental program.

3.5 Devices Specification and Experimental Regulations

The computer used in these three experiments is an Asus F550CC; the product specification is shown in Appendix A. When subjects are using the MYO armband or the Kinect sensor to perform a task, they are required to stand at a distance of approximately 1.5 meters from the computer screen. Moreover, no barrier is allowed to block the subject's view, arm or the lens of the Kinect sensor. Lastly, the experiments follow the National Statement on Ethical Conduct in Research Involving Humans.

Chapter 4 Result Analysis

This chapter discusses the experimental data collected in the three HCI experiments introduced in the previous chapter. The main purpose of this phase is to assess the performance and user experience of the MYO armband and the Kinect sensor based on the experimental data. Due to time constraints, little time was left after setting up the virtual environment. Moreover, because it takes each subject more than 1 hour on average to do the three experiments, only five subjects have taken part in the experiments so far. The data analysis in this chapter is based on the data set of these five subjects. However, as the environment and the connections have been built, later research can continue from the results of this project.

The subjects who took part in the experiments consist of 1 female and 4 males, aged from 22 to 26. All of the subjects are novice users of the MYO armband, whereas one of them had used a Kinect sensor for 1 hour for entertainment. Moreover, all of them have used keyboard and mouse for more than 10 years, so they can be considered expert users of traditional input devices. Lastly, during the three experiments, the four male subjects are right-handed and used their right hand to hold the mouse and to wear the MYO armband; the female subject wore the MYO armband on her left hand but used her right hand to hold the mouse.

4.1 Result Analysis of Experiment 1

This section explains the results of Experiment 1. The result analysis is based on three aspects: the result of the proficiency test, the total training time for each subject and their first impression of the MYO armband and the Kinect sensor.

4.1.1 Evaluation of Proficiency Test

As mentioned in the previous chapter, there are two types of data collected in this test. For the proficiency test of the MYO armband, the error rate (i.e. "ErrorRate") of performing the five pre-set gestures and the completion time (i.e. "CursorControlTime") of the cursor control test were collected. For the proficiency test of the Kinect sensor, only "CursorControlTime" was collected. Tables 4 and 5 show the results of the proficiency test. Table 4 shows that no subject failed the test of performing the five pre-set gestures. However, the error rate is not satisfactory, because two of the subjects completed the task with a 20% error rate, which is the maximum acceptable value.

Moreover, 3 subjects made a mistake in performing the Wave In gesture. This does not mean that they are unfamiliar with this gesture: the data from the later experiments shows that the recognition accuracy of Wave In is much lower than that of the other four gestures.

Table 5 shows that all of the subjects spent less time on this task when they were using the MYO armband, which could mean that the MYO armband performs better in cursor control. This conjecture is supported by the data collected in Experiment 3. It is also important to note that Subject 5 spent much more time on the Kinect cursor control test than the other subjects did. Even though it is still within the tolerance range, it strengthens the conclusion that Kinect has worse performance in toggling the cursor.

Subject No   Error Rate   Incorrect Gesture
1            -            Fingers Spread, Wave Out
2            -            Double Tap, Wave In
3            0%           N/A
4            -            Wave In
5            -            Wave In

Table 4: Error Rate & Incorrect Gesture for Proficiency Test of MYO armband

Subject No     CursorTimeMYO   CursorTimeKinect
1              - sec           - sec
2              - sec           - sec
3              - sec           - sec
4              - sec           - sec
5              - sec           - sec
Average Time   - sec           - sec

Table 5: Completion Time in Cursor Control Test

4.1.2 Evaluation of Training Time

The total training time of each subject is shown in Table 6. It shows that subjects apparently spent less time on the training for the Kinect sensor. This is simple to explain: because the training for MYO consists of two tests whereas the training for Kinect contains only one test, the average training time for Kinect is much less than for MYO. This data suggests that MYO would have lower user-friendliness due to the longer training time, which matches the subjects' subjective evaluation of the user-friendliness of the MYO armband and the Kinect sensor.

Subject No     TrainingTimeMYO   TrainingTimeKinect
1              - sec             - sec
2              - sec             - sec
3              - sec             - sec
4              - sec             - sec
5              - sec             - sec
Average Time   - sec             - sec

Table 6: Total Training Time for MYO armband and Kinect sensor

4.1.3 Evaluation of User-friendliness

Table 7 shows the subjects' evaluation of user-friendliness. The mode value for MYO is 3 while that for Kinect is 4. Even though other factors, such as personal interest, could affect their choices, it can still be concluded that Kinect is more user-friendly because it requires a smaller amount of training. Moreover, the result in 4.1.1 may also help to strengthen this point of view: four of the subjects made mistakes in the gesture performing test, and three of them failed in performing the Wave In gesture, so this failure experience could have caused a negative impression of the MYO armband.

Subject No   MYO             Kinect
1            3 (Uncertain)   2 (Disagree)
2            3 (Uncertain)   4 (Agree)
3            3 (Uncertain)   5 (Strongly Agree)
4            3 (Uncertain)   4 (Agree)
5            2 (Disagree)    3 (Uncertain)
Mode         3 (Uncertain)   4 (Agree)

Table 7: Subject's Rate for the User-friendliness of MYO and Kinect

4.2 Result Analysis of Experiment 2

This section explains the results of Experiment 2. The result analysis is based on four aspects: the number of gestures used in the task, the error rate, the time spent with each device and the subjects' self-evaluation of their performance in Experiment 2.

4.2.1 Evaluation of the Number of Gestures

The total number of gestures that a subject performed using each device is shown in Table 8. According to the shortest path listed in Figure 8 in Chapter 3, the expected value for this task is 8. From this table, it can be concluded that using the keyboard requires fewer gestures than using MYO or Kinect. Moreover, when the subjects were using the MYO armband, they performed the largest number of gestures. The reason for this is explained in the next sub-section.

Subject No      MYO   Kinect   Keyboard
1               -     -        -
2               -     -        -
3               -     -        -
4               -     -        -
5               -     -        -
Average Value   -     -        -

Table 8: The Number of Gestures Performed in Experiment 2


More information

STRUCTURE SENSOR QUICK START GUIDE

STRUCTURE SENSOR QUICK START GUIDE STRUCTURE SENSOR 1 TABLE OF CONTENTS WELCOME TO YOUR NEW STRUCTURE SENSOR 2 WHAT S INCLUDED IN THE BOX 2 CHARGING YOUR STRUCTURE SENSOR 3 CONNECTING YOUR STRUCTURE SENSOR TO YOUR IPAD 4 Attaching Structure

More information

Development of a Finger Mounted Type Haptic Device Using a Plane Approximated to Tangent Plane

Development of a Finger Mounted Type Haptic Device Using a Plane Approximated to Tangent Plane Journal of Communication and Computer 13 (2016) 329-337 doi:10.17265/1548-7709/2016.07.002 D DAVID PUBLISHING Development of a Finger Mounted Type Haptic Device Using a Plane Approximated to Tangent Plane

More information

Fabrication of the kinect remote-controlled cars and planning of the motion interaction courses

Fabrication of the kinect remote-controlled cars and planning of the motion interaction courses Available online at www.sciencedirect.com ScienceDirect Procedia - Social and Behavioral Sciences 174 ( 2015 ) 3102 3107 INTE 2014 Fabrication of the kinect remote-controlled cars and planning of the motion

More information

Direct gaze based environmental controls

Direct gaze based environmental controls Loughborough University Institutional Repository Direct gaze based environmental controls This item was submitted to Loughborough University's Institutional Repository by the/an author. Citation: SHI,

More information

Kinect Interface for UC-win/Road: Application to Tele-operation of Small Robots

Kinect Interface for UC-win/Road: Application to Tele-operation of Small Robots Kinect Interface for UC-win/Road: Application to Tele-operation of Small Robots Hafid NINISS Forum8 - Robot Development Team Abstract: The purpose of this work is to develop a man-machine interface for

More information

Shopping Together: A Remote Co-shopping System Utilizing Spatial Gesture Interaction

Shopping Together: A Remote Co-shopping System Utilizing Spatial Gesture Interaction Shopping Together: A Remote Co-shopping System Utilizing Spatial Gesture Interaction Minghao Cai 1(B), Soh Masuko 2, and Jiro Tanaka 1 1 Waseda University, Kitakyushu, Japan mhcai@toki.waseda.jp, jiro@aoni.waseda.jp

More information

WHITE PAPER Need for Gesture Recognition. April 2014

WHITE PAPER Need for Gesture Recognition. April 2014 WHITE PAPER Need for Gesture Recognition April 2014 TABLE OF CONTENTS Abstract... 3 What is Gesture Recognition?... 4 Market Trends... 6 Factors driving the need for a Solution... 8 The Solution... 10

More information

Optimization of user interaction with DICOM in the Operation Room of a hospital

Optimization of user interaction with DICOM in the Operation Room of a hospital Optimization of user interaction with DICOM in the Operation Room of a hospital By Sander Wegter GRADUATION REPORT Submitted to Hanze University of Applied Science Groningen in partial fulfilment of the

More information

Enabling Cursor Control Using on Pinch Gesture Recognition

Enabling Cursor Control Using on Pinch Gesture Recognition Enabling Cursor Control Using on Pinch Gesture Recognition Benjamin Baldus Debra Lauterbach Juan Lizarraga October 5, 2007 Abstract In this project we expect to develop a machine-user interface based on

More information

3D Interaction using Hand Motion Tracking. Srinath Sridhar Antti Oulasvirta

3D Interaction using Hand Motion Tracking. Srinath Sridhar Antti Oulasvirta 3D Interaction using Hand Motion Tracking Srinath Sridhar Antti Oulasvirta EIT ICT Labs Smart Spaces Summer School 05-June-2013 Speaker Srinath Sridhar PhD Student Supervised by Prof. Dr. Christian Theobalt

More information

The Making of a Kinect-based Control Car and Its Application in Engineering Education

The Making of a Kinect-based Control Car and Its Application in Engineering Education The Making of a Kinect-based Control Car and Its Application in Engineering Education Ke-Yu Lee Department of Computer Science and Information Engineering, Cheng-Shiu University, Taiwan Chun-Chung Lee

More information

Available online at ScienceDirect. Procedia Computer Science 50 (2015 )

Available online at   ScienceDirect. Procedia Computer Science 50 (2015 ) Available online at www.sciencedirect.com ScienceDirect Procedia Computer Science 50 (2015 ) 503 510 2nd International Symposium on Big Data and Cloud Computing (ISBCC 15) Virtualizing Electrical Appliances

More information

Recent Progress on Wearable Augmented Interaction at AIST

Recent Progress on Wearable Augmented Interaction at AIST Recent Progress on Wearable Augmented Interaction at AIST Takeshi Kurata 12 1 Human Interface Technology Lab University of Washington 2 AIST, Japan kurata@ieee.org Weavy The goal of the Weavy project team

More information

Programming Project 2

Programming Project 2 Programming Project 2 Design Due: 30 April, in class Program Due: 9 May, 4pm (late days cannot be used on either part) Handout 13 CSCI 134: Spring, 2008 23 April Space Invaders Space Invaders has a long

More information

Stabilize humanoid robot teleoperated by a RGB-D sensor

Stabilize humanoid robot teleoperated by a RGB-D sensor Stabilize humanoid robot teleoperated by a RGB-D sensor Andrea Bisson, Andrea Busatto, Stefano Michieletto, and Emanuele Menegatti Intelligent Autonomous Systems Lab (IAS-Lab) Department of Information

More information

Virtual Touch Human Computer Interaction at a Distance

Virtual Touch Human Computer Interaction at a Distance International Journal of Computer Science and Telecommunications [Volume 4, Issue 5, May 2013] 18 ISSN 2047-3338 Virtual Touch Human Computer Interaction at a Distance Prasanna Dhisale, Puja Firodiya,

More information

Lab Design of FANUC Robot Operation for Engineering Technology Major Students

Lab Design of FANUC Robot Operation for Engineering Technology Major Students Paper ID #21185 Lab Design of FANUC Robot Operation for Engineering Technology Major Students Dr. Maged Mikhail, Purdue University Northwest Dr. Maged B.Mikhail, Assistant Professor, Mechatronics Engineering

More information

Xdigit: An Arithmetic Kinect Game to Enhance Math Learning Experiences

Xdigit: An Arithmetic Kinect Game to Enhance Math Learning Experiences Xdigit: An Arithmetic Kinect Game to Enhance Math Learning Experiences Elwin Lee, Xiyuan Liu, Xun Zhang Entertainment Technology Center Carnegie Mellon University Pittsburgh, PA 15219 {elwinl, xiyuanl,

More information

HUMAN MACHINE INTERFACE

HUMAN MACHINE INTERFACE Journal homepage: www.mjret.in ISSN:2348-6953 HUMAN MACHINE INTERFACE Priyesh P. Khairnar, Amin G. Wanjara, Rajan Bhosale, S.B. Kamble Dept. of Electronics Engineering,PDEA s COEM Pune, India priyeshk07@gmail.com,

More information

Markerless 3D Gesture-based Interaction for Handheld Augmented Reality Interfaces

Markerless 3D Gesture-based Interaction for Handheld Augmented Reality Interfaces Markerless 3D Gesture-based Interaction for Handheld Augmented Reality Interfaces Huidong Bai The HIT Lab NZ, University of Canterbury, Christchurch, 8041 New Zealand huidong.bai@pg.canterbury.ac.nz Lei

More information

Humera Syed 1, M. S. Khatib 2 1,2

Humera Syed 1, M. S. Khatib 2 1,2 A Hand Gesture Recognition Approach towards Shoulder Wearable Computing Humera Syed 1, M. S. Khatib 2 1,2 CSE, A.C.E.T/ R.T.M.N.U, India ABSTRACT: Human Computer Interaction needs computer systems and

More information

Interior Design using Augmented Reality Environment

Interior Design using Augmented Reality Environment Interior Design using Augmented Reality Environment Kalyani Pampattiwar 2, Akshay Adiyodi 1, Manasvini Agrahara 1, Pankaj Gamnani 1 Assistant Professor, Department of Computer Engineering, SIES Graduate

More information

University of Toronto. Companion Robot Security. ECE1778 Winter Wei Hao Chang Apper Alexander Hong Programmer

University of Toronto. Companion Robot Security. ECE1778 Winter Wei Hao Chang Apper Alexander Hong Programmer University of Toronto Companion ECE1778 Winter 2015 Creative Applications for Mobile Devices Wei Hao Chang Apper Alexander Hong Programmer April 9, 2015 Contents 1 Introduction 3 1.1 Problem......................................

More information

Omni-Directional Catadioptric Acquisition System

Omni-Directional Catadioptric Acquisition System Technical Disclosure Commons Defensive Publications Series December 18, 2017 Omni-Directional Catadioptric Acquisition System Andreas Nowatzyk Andrew I. Russell Follow this and additional works at: http://www.tdcommons.org/dpubs_series

More information

Augmented and Virtual Reality

Augmented and Virtual Reality CS-3120 Human-Computer Interaction Augmented and Virtual Reality Mikko Kytö 7.11.2017 From Real to Virtual [1] Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE TRANSACTIONS

More information

Drawing Bode Plots (The Last Bode Plot You Will Ever Make) Charles Nippert

Drawing Bode Plots (The Last Bode Plot You Will Ever Make) Charles Nippert Drawing Bode Plots (The Last Bode Plot You Will Ever Make) Charles Nippert This set of notes describes how to prepare a Bode plot using Mathcad. Follow these instructions to draw Bode plot for any transfer

More information

Design of an Interactive Smart Board Using Kinect Sensor

Design of an Interactive Smart Board Using Kinect Sensor Design of an Interactive Smart Board Using Kinect Sensor Supervisor: Dr. Jia Uddin Nasrul Karim Sarker - 13201025 Muhammad Touhidul Islam - 13201021 Md. Shahidul Islam Majumder - 13201022 Department of

More information

Laboratory 2: Graphing

Laboratory 2: Graphing Purpose It is often said that a picture is worth 1,000 words, or for scientists we might rephrase it to say that a graph is worth 1,000 words. Graphs are most often used to express data in a clear, concise

More information

PWM LED Color Control

PWM LED Color Control 1 PWM LED Color Control Through the use temperature sensors, accelerometers, and switches to finely control colors. Daniyah Alaswad, Joshua Creech, Gurashish Grewal, & Yang Lu Electrical and Computer Engineering

More information

Stereo-based Hand Gesture Tracking and Recognition in Immersive Stereoscopic Displays. Habib Abi-Rached Thursday 17 February 2005.

Stereo-based Hand Gesture Tracking and Recognition in Immersive Stereoscopic Displays. Habib Abi-Rached Thursday 17 February 2005. Stereo-based Hand Gesture Tracking and Recognition in Immersive Stereoscopic Displays Habib Abi-Rached Thursday 17 February 2005. Objective Mission: Facilitate communication: Bandwidth. Intuitiveness.

More information

Active Stereo Vision. COMP 4102A Winter 2014 Gerhard Roth Version 1

Active Stereo Vision. COMP 4102A Winter 2014 Gerhard Roth Version 1 Active Stereo Vision COMP 4102A Winter 2014 Gerhard Roth Version 1 Why active sensors? Project our own texture using light (usually laser) This simplifies correspondence problem (much easier) Pluses Can

More information

Requirements Specification. An MMORPG Game Using Oculus Rift

Requirements Specification. An MMORPG Game Using Oculus Rift 1 System Description CN1 An MMORPG Game Using Oculus Rift The project Game using Oculus Rift is the game application based on Microsoft Windows that allows user to play the game with the virtual reality

More information

Reference Guide. Store Optimization. Created: May 2017 Last updated: November 2017 Rev: Final

Reference Guide. Store Optimization. Created: May 2017 Last updated: November 2017 Rev: Final Reference Guide Store Optimization Reference Guide Created: May 2017 Last updated: November 2017 Rev: Final Table of contents INTRODUCTION 3 2 AXIS PEOPLE COUNTER AND AXIS 3D PEOPLE COUNTER 3 2.1 Examples

More information

MEASURING AND ANALYZING FINE MOTOR SKILLS

MEASURING AND ANALYZING FINE MOTOR SKILLS MEASURING AND ANALYZING FINE MOTOR SKILLS PART 1: MOTION TRACKING AND EMG OF FINE MOVEMENTS PART 2: HIGH-FIDELITY CAPTURE OF HAND AND FINGER BIOMECHANICS Abstract This white paper discusses an example

More information

Physics 131 Lab 1: ONE-DIMENSIONAL MOTION

Physics 131 Lab 1: ONE-DIMENSIONAL MOTION 1 Name Date Partner(s) Physics 131 Lab 1: ONE-DIMENSIONAL MOTION OBJECTIVES To familiarize yourself with motion detector hardware. To explore how simple motions are represented on a displacement-time graph.

More information

University of California, Santa Barbara. CS189 Fall 17 Capstone. VR Telemedicine. Product Requirement Documentation

University of California, Santa Barbara. CS189 Fall 17 Capstone. VR Telemedicine. Product Requirement Documentation University of California, Santa Barbara CS189 Fall 17 Capstone VR Telemedicine Product Requirement Documentation Jinfa Zhu Kenneth Chan Shouzhi Wan Xiaohe He Yuanqi Li Supervised by Ole Eichhorn Helen

More information

Haptic presentation of 3D objects in virtual reality for the visually disabled

Haptic presentation of 3D objects in virtual reality for the visually disabled Haptic presentation of 3D objects in virtual reality for the visually disabled M Moranski, A Materka Institute of Electronics, Technical University of Lodz, Wolczanska 211/215, Lodz, POLAND marcin.moranski@p.lodz.pl,

More information

Apple ARKit Overview. 1. Purpose. 2. Apple ARKit. 2.1 Overview. 2.2 Functions

Apple ARKit Overview. 1. Purpose. 2. Apple ARKit. 2.1 Overview. 2.2 Functions Apple ARKit Overview 1. Purpose In the 2017 Apple Worldwide Developers Conference, Apple announced a tool called ARKit, which provides advanced augmented reality capabilities on ios. Augmented reality

More information

Image Manipulation Interface using Depth-based Hand Gesture

Image Manipulation Interface using Depth-based Hand Gesture Image Manipulation Interface using Depth-based Hand Gesture UNSEOK LEE JIRO TANAKA Vision-based tracking is popular way to track hands. However, most vision-based tracking methods can t do a clearly tracking

More information

Wands are Magic: a comparison of devices used in 3D pointing interfaces

Wands are Magic: a comparison of devices used in 3D pointing interfaces Wands are Magic: a comparison of devices used in 3D pointing interfaces Martin Henschke, Tom Gedeon, Richard Jones, Sabrina Caldwell and Dingyun Zhu College of Engineering and Computer Science, Australian

More information

Classification for Motion Game Based on EEG Sensing

Classification for Motion Game Based on EEG Sensing Classification for Motion Game Based on EEG Sensing Ran WEI 1,3,4, Xing-Hua ZHANG 1,4, Xin DANG 2,3,4,a and Guo-Hui LI 3 1 School of Electronics and Information Engineering, Tianjin Polytechnic University,

More information

MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL REALITY TECHNOLOGIES

MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL REALITY TECHNOLOGIES INTERNATIONAL CONFERENCE ON ENGINEERING AND PRODUCT DESIGN EDUCATION 4 & 5 SEPTEMBER 2008, UNIVERSITAT POLITECNICA DE CATALUNYA, BARCELONA, SPAIN MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL

More information

SPIDERMAN VR. Adam Elgressy and Dmitry Vlasenko

SPIDERMAN VR. Adam Elgressy and Dmitry Vlasenko SPIDERMAN VR Adam Elgressy and Dmitry Vlasenko Supervisors: Boaz Sternfeld and Yaron Honen Submission Date: 09/01/2019 Contents Who We Are:... 2 Abstract:... 2 Previous Work:... 3 Tangent Systems & Development

More information

KINECT HANDS-FREE. Rituj Beniwal. Department of Electrical Engineering Indian Institute of Technology, Kanpur. Pranjal Giri

KINECT HANDS-FREE. Rituj Beniwal. Department of Electrical Engineering Indian Institute of Technology, Kanpur. Pranjal Giri KINECT HANDS-FREE Rituj Beniwal Pranjal Giri Agrim Bari Raman Pratap Singh Akash Jain Department of Aerospace Engineering Indian Institute of Technology, Kanpur Atharva Mulmuley Department of Chemical

More information

Development of A Finger Mounted Type Haptic Device Using A Plane Approximated to Tangent Plane

Development of A Finger Mounted Type Haptic Device Using A Plane Approximated to Tangent Plane Development of A Finger Mounted Type Haptic Device Using A Plane Approximated to Tangent Plane Makoto Yoda Department of Information System Science Graduate School of Engineering Soka University, Soka

More information

Development of excavator training simulator using leap motion controller

Development of excavator training simulator using leap motion controller Journal of Physics: Conference Series PAPER OPEN ACCESS Development of excavator training simulator using leap motion controller To cite this article: F Fahmi et al 2018 J. Phys.: Conf. Ser. 978 012034

More information

WIRELESS CONTROL OF A ROBOTIC ARM USING 3D MOTION TRACKING SENSORS AND ARTIFICIAL NEURAL NETWORKS 13

WIRELESS CONTROL OF A ROBOTIC ARM USING 3D MOTION TRACKING SENSORS AND ARTIFICIAL NEURAL NETWORKS 13 WIRELESS CONTROL OF A ROBOTIC ARM USING 3D MOTION TRACKING SENSORS AND ARTIFICIAL NEURAL NETWORKS Fernando Ríos, Georgia Southern University; Rocío Alba-Flores, Georgia Southern University; Imani Augusma,

More information

Table of Contents. Creating Your First Project 4. Enhancing Your Slides 8. Adding Interactivity 12. Recording a Software Simulation 19

Table of Contents. Creating Your First Project 4. Enhancing Your Slides 8. Adding Interactivity 12. Recording a Software Simulation 19 Table of Contents Creating Your First Project 4 Enhancing Your Slides 8 Adding Interactivity 12 Recording a Software Simulation 19 Inserting a Quiz 24 Publishing Your Course 32 More Great Features to Learn

More information

The development of a virtual laboratory based on Unreal Engine 4

The development of a virtual laboratory based on Unreal Engine 4 The development of a virtual laboratory based on Unreal Engine 4 D A Sheverev 1 and I N Kozlova 1 1 Samara National Research University, Moskovskoye shosse 34А, Samara, Russia, 443086 Abstract. In our

More information

Technical Specifications: tog VR

Technical Specifications: tog VR s: BILLBOARDING ENCODED HEADS FULL FREEDOM AUGMENTED REALITY : Real-time 3d virtual reality sets from RT Software Virtual reality sets are increasingly being used to enhance the audience experience and

More information

CHAPTER 1. INTRODUCTION 16

CHAPTER 1. INTRODUCTION 16 1 Introduction The author s original intention, a couple of years ago, was to develop a kind of an intuitive, dataglove-based interface for Computer-Aided Design (CAD) applications. The idea was to interact

More information

Video Games and Interfaces: Past, Present and Future Class #2: Intro to Video Game User Interfaces

Video Games and Interfaces: Past, Present and Future Class #2: Intro to Video Game User Interfaces Video Games and Interfaces: Past, Present and Future Class #2: Intro to Video Game User Interfaces Content based on Dr.LaViola s class: 3D User Interfaces for Games and VR What is a User Interface? Where

More information

BRUSHES AND LAYERS We will learn how to use brushes and illustration tools to make a simple composition. Introduction to using layers.

BRUSHES AND LAYERS We will learn how to use brushes and illustration tools to make a simple composition. Introduction to using layers. Brushes BRUSHES AND LAYERS We will learn how to use brushes and illustration tools to make a simple composition. Introduction to using layers. WHAT IS A BRUSH? A brush is a type of tool in Photoshop used

More information

Design of Head Movement Controller System (HEMOCS) for Control Mobile Application through Head Pose Movement Detection

Design of Head Movement Controller System (HEMOCS) for Control Mobile Application through Head Pose Movement Detection Design of Head Movement Controller System (HEMOCS) for Control Mobile Application through Head Pose Movement Detection http://dx.doi.org/10.3991/ijim.v10i3.5552 Herman Tolle 1 and Kohei Arai 2 1 Brawijaya

More information

Chlorophyll Fluorescence Imaging System

Chlorophyll Fluorescence Imaging System Quick Start Guide Chlorophyll Fluorescence Imaging System Quick Start Guide for Technologica FluorImager software for use with Technlogica CFImager hardware Copyright 2006 2015 TECHNOLOGICA LIMITED. All

More information

OBSTACLE DETECTION AND COLLISION AVOIDANCE USING ULTRASONIC DISTANCE SENSORS FOR AN AUTONOMOUS QUADROCOPTER

OBSTACLE DETECTION AND COLLISION AVOIDANCE USING ULTRASONIC DISTANCE SENSORS FOR AN AUTONOMOUS QUADROCOPTER OBSTACLE DETECTION AND COLLISION AVOIDANCE USING ULTRASONIC DISTANCE SENSORS FOR AN AUTONOMOUS QUADROCOPTER Nils Gageik, Thilo Müller, Sergio Montenegro University of Würzburg, Aerospace Information Technology

More information

CS 315 Intro to Human Computer Interaction (HCI)

CS 315 Intro to Human Computer Interaction (HCI) CS 315 Intro to Human Computer Interaction (HCI) Direct Manipulation Examples Drive a car If you want to turn left, what do you do? What type of feedback do you get? How does this help? Think about turning

More information

Introduction to Embedded Systems

Introduction to Embedded Systems Introduction to Embedded Systems Edward A. Lee & Sanjit Seshia UC Berkeley EECS 124 Spring 2008 Copyright 2008, Edward A. Lee & Sanjit Seshia, All rights reserved Lecture 3: Sensors and Actuators Sensors

More information

Robotics Laboratory. Report Nao. 7 th of July Authors: Arnaud van Pottelsberghe Brieuc della Faille Laurent Parez Pierre-Yves Morelle

Robotics Laboratory. Report Nao. 7 th of July Authors: Arnaud van Pottelsberghe Brieuc della Faille Laurent Parez Pierre-Yves Morelle Robotics Laboratory Report Nao 7 th of July 2014 Authors: Arnaud van Pottelsberghe Brieuc della Faille Laurent Parez Pierre-Yves Morelle Professor: Prof. Dr. Jens Lüssem Faculty: Informatics and Electrotechnics

More information

Aerospace Sensor Suite

Aerospace Sensor Suite Aerospace Sensor Suite ECE 1778 Creative Applications for Mobile Devices Final Report prepared for Dr. Jonathon Rose April 12 th 2011 Word count: 2351 + 490 (Apper Context) Jin Hyouk (Paul) Choi: 998495640

More information

Input devices and interaction. Ruth Aylett

Input devices and interaction. Ruth Aylett Input devices and interaction Ruth Aylett Contents Tracking What is available Devices Gloves, 6 DOF mouse, WiiMote Why is it important? Interaction is basic to VEs We defined them as interactive in real-time

More information

Keywords Mobile Phones, Accelerometer, Gestures, Hand Writing, Voice Detection, Air Signature, HCI.

Keywords Mobile Phones, Accelerometer, Gestures, Hand Writing, Voice Detection, Air Signature, HCI. Volume 5, Issue 3, March 2015 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Advanced Techniques

More information

Challenging areas:- Hand gesture recognition is a growing very fast and it is I. INTRODUCTION

Challenging areas:- Hand gesture recognition is a growing very fast and it is I. INTRODUCTION Hand gesture recognition for vehicle control Bhagyashri B.Jakhade, Neha A. Kulkarni, Sadanand. Patil Abstract: - The rapid evolution in technology has made electronic gadgets inseparable part of our life.

More information

Pinch-the-Sky Dome: Freehand Multi-Point Interactions with Immersive Omni-Directional Data

Pinch-the-Sky Dome: Freehand Multi-Point Interactions with Immersive Omni-Directional Data Pinch-the-Sky Dome: Freehand Multi-Point Interactions with Immersive Omni-Directional Data Hrvoje Benko Microsoft Research One Microsoft Way Redmond, WA 98052 USA benko@microsoft.com Andrew D. Wilson Microsoft

More information

Community Update and Next Steps

Community Update and Next Steps Community Update and Next Steps Stewart Tansley, PhD Senior Research Program Manager & Product Manager (acting) Special Guest: Anoop Gupta, PhD Distinguished Scientist Project Natal Origins: Project Natal

More information

Training NAO using Kinect

Training NAO using Kinect Training NAO using Kinect Michalis Chartomatsidis, Emmanouil Androulakis, Ergina Kavallieratou University of the Aegean Samos, Dept of Information & Communications Systems, Greece kavallieratou@aegean.gr

More information

3D User Interfaces. Using the Kinect and Beyond. John Murray. John Murray

3D User Interfaces. Using the Kinect and Beyond. John Murray. John Murray Using the Kinect and Beyond // Center for Games and Playable Media // http://games.soe.ucsc.edu John Murray John Murray Expressive Title Here (Arial) Intelligence Studio Introduction to Interfaces User

More information

FILE # 3DS CIRCLE PAD CALIBRATION FAILED

FILE # 3DS CIRCLE PAD CALIBRATION FAILED 02 July, 2018 FILE # 3DS CIRCLE PAD CALIBRATION FAILED Document Filetype: PDF 134.86 KB 0 FILE # 3DS CIRCLE PAD CALIBRATION FAILED I think the sleep mode calibration works pretty good. Published by amazon

More information

Robust Hand Gesture Recognition for Robotic Hand Control

Robust Hand Gesture Recognition for Robotic Hand Control Robust Hand Gesture Recognition for Robotic Hand Control Ankit Chaudhary Robust Hand Gesture Recognition for Robotic Hand Control 123 Ankit Chaudhary Department of Computer Science Northwest Missouri State

More information

Gesture Control FPS Horror/Survivor Game Third Year Project (COMP30040)

Gesture Control FPS Horror/Survivor Game Third Year Project (COMP30040) Gesture Control FPS Horror/Survivor Game Third Year Project (COMP30040) Student: Georgios Hadjitofallis Degree Program: BSc Computer Science Supervisor: Dr. Steve Pettifer The University of Manchester

More information