AirTouch: Mobile Gesture Interaction with Wearable Tactile Displays


AirTouch: Mobile Gesture Interaction with Wearable Tactile Displays

A Thesis
Presented to
The Academic Faculty

by

BoHao Li

In Partial Fulfillment of the Requirements for the Degree
B.S. Computer Science with Research Option in the College of Computing

Georgia Institute of Technology
May 2011

AirTouch: Mobile Gesture Interaction with Wearable Tactile Displays

Approved by:

Dr. Thad Starner, Advisor
School of Interactive Computing
Georgia Institute of Technology

Dr. Gregory Abowd
School of Interactive Computing
Georgia Institute of Technology

Dr. Amy Bruckman
School of Interactive Computing
Georgia Institute of Technology

TABLE OF CONTENTS

LIST OF FIGURES
ACKNOWLEDGEMENTS
Abstract
Introduction
Related Work
Implementation
Task and Procedure
Results
Discussion and Future Work
Conclusion
References

LIST OF FIGURES

Figure 1: AirTouch internal components
Figure 2: Tactile display
Figure 3: New push-to-gesture mechanism
Figure 4: Software interface
Figure 5: Four gestures designed for the study
Figure 6: 2x2 conditions
Figure 7: Half-blocked goggles
Figure 8: In-lab walking track path
Figure 9: Performance accuracy
Figure 10: Four-condition performance time
Figure 11: AirTouch design iterations

ACKNOWLEDGEMENTS

I wish to thank my advisor, Dr. Thad Starner, for his guidance and support. A special thanks to my graduate mentor, Seungyon Claire Lee, for supervising my work on the project. I would also like to thank Scott Gilliland and James Clawson for providing assistance on the project. Thank you for all your help and support.

Abstract

AirTouch is a wrist-based gesture interface that enables mobile gesture interaction under limited visual attention. Since the original Gesture Watch lacked non-visual feedback, visual attention was required. AirTouch addresses this visual distraction issue by adding tactile feedback. The tactile feedback is supported by a new push-to-gesture (PTG) mechanism, which gives the user the ability to cancel a gesture input and causes less fatigue than the previous mechanism. A user study was conducted to observe the effect of the tactile feedback and the new PTG mechanism. The results show that the added tactile feedback enables more accurate and faster interaction. We also found that the tactile feedback successfully compensates for limited vision: the perceived difficulty and performance time with the tactile display under limited vision were similar to those with full visual attention and no tactile display.

Introduction

The growing popularity of mobile devices gives people a new way to use computing devices. Traditional mobile computing interfaces, such as buttons and touchscreens, are designed to be used while the user is stationary. Operating such an interface often distracts, especially visually, from the user's primary task. As technology advances, mobile devices are becoming increasingly portable. This reduction in form factor also causes user interfaces to shrink. A smaller user interface requires precise hand-eye coordination, resulting in more user distraction. Such visual distractions pose serious safety threats when the user is mobile and could result in fatalities.

One approach to this issue is a gesture interface. Gesture interfaces use large gestures, performed by moving the user's hand or fingers through a relatively large interaction space. This approach greatly improves usability by moving the interaction area from small physical buttons and touch surfaces to a large space above the device. The user can interact with the device in a large virtual interaction pane rather than by pinpointing tiny buttons. The original Gesture Watch [3] uses such a gesture interface. It utilizes four proximity sensors to capture in-air hand gestures. Placing the gesture interface on the wrist makes it glanceable and enables faster access [1]. During the original Gesture Watch study [3], participants were able to use the Gesture Watch with very minimal training, and gesture recognition accuracy was above 90% while users were walking outdoors. Even though the Gesture Watch achieved promising results, it still had several issues.

Despite the use of a gesture interface, the Gesture Watch still required visual attention to ensure exact hand placement while performing the gesture. The Gesture Watch trigger mechanism requires tilting the wrist upwards for the entire duration of the gesture, which causes physical fatigue in the user's wrist. The original Gesture Watch also makes it impossible for the user to cancel an incorrect gesture once it has been performed.

In order to address the visual distraction and fatigue issues discovered in the original Gesture Watch, we designed AirTouch. Our solution is to add a tactile display that provides non-visual feedback, thus avoiding potential visual distraction. AirTouch couples each sensor with one shaft-less vibration motor (diameter = 10 mm, height = 3.4 mm). The vibration motor is synchronized with the sensor, providing a preview of the in-air hand gesture without the need for visual attention. We also designed a new trigger mechanism to address the fatigue issue in the original mechanism. Now, tilting the wrist is needed only to confirm the gesture after previewing it via tactile feedback. Users can also cancel a gesture input by not tilting their wrist within the confirmation time period.

Related Work

Quickdraw by Ashbrook et al. [1] explored the impact of mobility and various on-body placements on device access time. The study examined three mobile device placements on the body: in the pocket, on the hip in a holster, and on the wrist. Each placement was tested under two conditions: walking and standing still. During the study, each condition started with an alert generated by the mobile device (a Motorola E680i camera phone). In response, participants needed to unlock the device after retrieving it.

Participants then selected the number previously displayed on the lock screen from a group of four. In the study, pocket placement and hip placement yielded 68% and 98% longer access times, respectively, than wrist placement. (Access time is defined as the time required for the user to begin unlocking the device after the alert has started.) For the two non-wrist placements, 78% of the access time was consumed getting the device out of the pocket or holster. A wrist-based interface requires significantly less access time, which is an important factor in deciding whether or not to use a device.

The predecessor of AirTouch is the Gesture Watch project [3]. The Gesture Watch is a wrist-worn mobile gesture interface that uses non-contact hand gestures to control electronic devices. It uses four proximity sensors to capture the user's in-air hand gestures. It also has one proximity sensor facing towards the user's hand that detects whether the user's wrist is tilted upwards. This sensor acts as a trigger sensor: the user turned on the watch by tilting the wrist to block the proximity sensor and thus inform the system of the start of the gesture. The user then performed the gesture while keeping the wrist tilted, and lowered the wrist to complete the gesture. The system then interpreted and recognized the gesture. The Gesture Watch achieved promising results: recognition accuracy was above 90% in mobile outdoor conditions. However, since the Gesture Watch did not provide any feedback to the user, the user had to rely on visual feedback to confirm correct hand-sensor alignment.

In addition to gesture interfaces, a wristwatch-based touchscreen interface has also been explored [2].

The touchscreen wristwatch uses a round watch face and places buttons around the perimeter of the circular screen. Three different interaction methods were explored: tapping buttons placed on the edge of the screen, sliding in a straight line between buttons, and sliding from target to target around the rim. Fifteen participants were recruited, and each completed 108 tasks. From the study, mathematical models of error rate were developed as a function of interaction method and button size. For instance, 90% of the central area could be reserved for display if an error rate of 5% is desired on a ten-button interface. In the study, targets in the upper-left region (from 9 o'clock to 12 o'clock) were likely to be obscured by users' fingers. Also, targets at the bottom of the watch were difficult to hit due to finger shape.

Various studies have used tactile displays to provide feedback [4, 5]. The Wearable Tactile Display (WTD) created by the BuzzWear project [4] was developed to eliminate the need for visual attention when providing feedback. BuzzWear explores the difficulty of identifying 24 tactile patterns on the wrist by manipulating four factors (intensity of vibration, starting point, temporal pattern, and direction) under different conditions. Two experiments were conducted in the study. The first focused on people's ability to perceive different tactile patterns; the second explored the effect of the WTD under visually distracted conditions. Participants were able to perceive the 24 different tactile patterns easily after 40 minutes of training, and their performance in the visually distracted condition did not decrease. The BuzzWear study concluded that wrist-worn tactile displays might be effective for implementing a mobile interface.

Improving the Form Factor Design of Mobile Gesture Interface on the Wrist by Deen et al. [7] explores different design factors of on-wrist gesture interface placements.

That paper presents the design iteration process of the Gesture Watch with tactile feedback. The design challenges were broken down into wearability, mobility, and tactile perception, and these factors were studied using different sensor housing, strap, and motor housing configurations. In the first design iteration, all components were oriented on the upper side of the arm. In the second design iteration, the tactile display was moved to the lower side of the wrist in order to control tightness. Also, the sensor housing and strap were redesigned to improve overall wearability and mobility. Through several design iterations, various issues in form factor design were improved, and others will continue to be investigated.

Despite the benefits of a wrist-based gesture interface, users of the Gesture Watch still relied on visual feedback to know when the proximity sensors would be triggered. AirTouch addresses this visual distraction issue of the original Gesture Watch by adding a tactile display. A new trigger mechanism is designed to solve the fatigue issues of the old mechanism. Together, these features provide a preview of in-air gestures through tactile feedback to facilitate reversible, error-free, and low-distraction mobile gesture interaction.

Implementation

AirTouch consists of two parts: the watch and the tactile display. Like the original Gesture Watch [3], the AirTouch watch uses four SHARP GP2Y0D340K infrared (IR) proximity sensors to capture the user's in-air hand gestures. These IR sensors have a range of 10 to 60 cm and are designed to operate in areas with other IR light sources, such as sunlight.

Each sensor outputs a digital high when no object is detected within its range and a digital low when an object is detected. The sensors are arranged in an X shape (Figure 1), which is optimized for the tactile display. A fifth sensor (the confirmation sensor) is placed at the front of the watch, facing towards the user's hand. The fifth sensor is tilted approximately 20 degrees away from the wrist to avoid false triggering.

Figure 1. AirTouch internal components: battery, IR sensors, micro-controller, Bluetooth module, and confirmation sensor.

Data from the proximity sensors are passed to an ATmega168 micro-controller (Figure 1). The micro-controller stores sensor data and turns on the tactile display based on the sensor input. The micro-controller also controls the vibration motors through a transistor array.

After the user has finished the hand gesture, the micro-controller transmits the gesture data only if the user confirms the gesture input within the two-second confirmation period. Stored sensor data are discarded after the confirmation period times out, allowing the user to cancel an incorrect gesture input by not triggering the confirmation sensor.

The tactile display consists of four vibration motors (Figure 2) contained in two rubber housings. Each motor is paired with one IR sensor. The micro-controller turns on the vibration motors based on the sensor input. It also synchronizes the gesture input and the tactile display, providing the user with tactile feedback of the in-air gesture. Power is supplied by a 3.7 V lithium-ion battery. A power regulator guarantees a stable 3.3 V supply and ensures that the intensity of vibration remains consistent as the battery discharges.

Figure 2. Tactile display: four vibration motors in rubber housings.

To input gestures to AirTouch, the user uses a push-to-gesture mechanism (Figure 3). Unlike the original push-to-gesture mechanism in the Gesture Watch, the user does not need to tilt his or her wrist to start the process. The user simply starts performing the input gesture in the interaction space. While the user performs the gesture, the tactile display provides feedback of the in-air gesture. Tilting the wrist then triggers the confirmation sensor, which tells the system to send the gesture for recognition and processing.

Figure 3. New push-to-gesture mechanism.
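To make the push-to-gesture flow concrete, the sketch below models this behavior in Python: each IR sensor is echoed to its paired vibration motor as a tactile preview, samples are buffered while the hand is over the watch, and the buffer is either transmitted (wrist tilt within the two-second window) or silently discarded (cancel). This is a minimal behavioral sketch, not the ATmega168 firmware; read_ir_sensors, read_confirmation_sensor, set_motor, and transmit_gesture are hypothetical placeholders for the hardware I/O, and the 100 Hz polling rate is an assumption.

```python
import time

CONFIRM_WINDOW_S = 2.0   # confirmation period after the last gesture sample
SENSOR_COUNT = 4         # four IR sensors, each paired with one vibration motor


def read_ir_sensors():
    """Stub for the four gesture sensors: one boolean per sensor, True when a
    hand is detected (the SHARP sensors pull their output low on detection)."""
    return [False] * SENSOR_COUNT


def read_confirmation_sensor():
    """Stub for the fifth, forward-facing sensor: True when the wrist is tilted."""
    return False


def set_motor(index, on):
    """Stub: drive vibration motor `index` through the transistor array."""


def transmit_gesture(samples):
    """Stub: send the buffered samples to the remote computer over Bluetooth."""


def push_to_gesture_loop():
    buffer = []              # sensor samples for the gesture in progress
    last_sample_time = None  # time of the most recent detection

    while True:
        detected = read_ir_sensors()

        # Tactile preview: each motor simply mirrors its paired sensor.
        for i, hit in enumerate(detected):
            set_motor(i, hit)

        now = time.monotonic()
        if any(detected):
            buffer.append((now, detected))
            last_sample_time = now
        elif buffer:
            if read_confirmation_sensor():
                # Wrist tilted within the window: confirm and send for recognition.
                transmit_gesture(buffer)
                buffer, last_sample_time = [], None
            elif now - last_sample_time > CONFIRM_WINDOW_S:
                # No confirmation in time: discard the buffer, i.e. cancel the gesture.
                buffer, last_sample_time = [], None

        time.sleep(0.01)     # ~100 Hz polling, an assumed rate
```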

A remote computer connects to the watch over Bluetooth and processes the sensor data through the gesture recognition process. The gesture recognition software (Figure 4) is implemented using the Gesture and Activity Recognition Toolkit (GART) [6]. GART uses the Hidden Markov Model toolkit from Cambridge University for training and recognition. Each participant trains the recognition system at the beginning of the study to generate a personalized gesture library, which GART then uses for real-time gesture recognition. The gesture software also logs each set of sensor data with a time stamp for later analysis. At the end of the recognition process, GART matches the sensor data with one of the pre-defined gestures and outputs the recognition result. The result is then used by different modules: the demo module maps each gesture to a specific function (for example, play/pause in a music player), and the user study module logs the result for later data analysis.

Figure 4. Software interface.
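The host-side flow can be pictured as: read sensor frames from the Bluetooth serial link, log each frame with a timestamp, and, when a confirmed gesture arrives, classify it and hand the result to the active module. The sketch below illustrates that flow under stated assumptions: it is not GART's actual API, the nearest-template classifier merely stands in for the HMM recognizer, and the RFCOMM port name and the "CONFIRMED" end-of-gesture marker are invented for illustration.

```python
import time

import serial  # pyserial; the Bluetooth link appears as a serial port

PORT = "/dev/rfcomm0"        # assumed RFCOMM device name for the paired watch
ACTIONS = {                  # demo module: gesture name -> function (assumed names)
    "up": "volume up", "down": "volume down",
    "left": "previous track", "right": "play/pause",
}

# Per-participant gesture library built during training. GART trains HMMs;
# here each gesture name maps to a list of previously recorded sequences.
gesture_library = {}


def classify_gesture(samples):
    """Stand-in for GART's HMM recognition: pick the library gesture whose
    training sequences are closest in length to the observed sequence."""
    best, best_score = None, float("inf")
    for name, templates in gesture_library.items():
        score = min(abs(len(t) - len(samples)) for t in templates)
        if score < best_score:
            best, best_score = name, score
    return best


def dispatch(gesture):
    """Demo module: map the recognized gesture to a specific function."""
    print("gesture:", gesture, "->", ACTIONS.get(gesture, "no action"))


def run():
    link = serial.Serial(PORT, 115200, timeout=1)
    samples = []
    with open("sensor_log.csv", "a") as log:
        while True:
            line = link.readline().decode(errors="ignore").strip()
            if not line:
                continue
            log.write(f"{time.time():.3f},{line}\n")  # timestamped raw sensor data
            if line == "CONFIRMED":                   # assumed end-of-gesture marker
                dispatch(classify_gesture(samples))
                samples = []
            else:
                samples.append(line)


if __name__ == "__main__":
    run()
```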

Task and Procedure

In order to study the efficacy of the tactile feedback, a formal user study was run. Sixteen participants (three female, thirteen male; mean age = 24.25) were recruited from the Georgia Institute of Technology. The average circumference of the participants' wrists was 165.81 mm, and the average wrist width was 56.26 mm. All participants except one were right-handed. Additionally, 37.5% of the participants reported wearing a wristwatch on a daily basis.

User performance accuracy and performance time were measured during the experiment. Performance time was then broken down further into gesture time and confirmation time: gesture time measures the time between the first and last sensor data, and confirmation time measures the time between the last sensor data and the confirmation event (either a confirmation or an abort).
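Concretely, given a timestamped log of one trial, gesture time is the span between the first and last sensor samples, and confirmation time is the span between the last sample and the confirmation (or abort) event. The small sketch below shows that decomposition; the field names describe an assumed log layout, not the actual study logs.

```python
def split_performance_time(trial):
    """trial: {'sensor_times': sorted sample timestamps (ms),
               'confirm_time': timestamp of the confirmation or abort event (ms)}.
    Returns (gesture_time, confirmation_time, performance_time) in ms."""
    first, last = trial["sensor_times"][0], trial["sensor_times"][-1]
    gesture_time = last - first
    confirmation_time = trial["confirm_time"] - last
    return gesture_time, confirmation_time, gesture_time + confirmation_time


# Example: samples spanning 900 ms, confirmed 800 ms after the last sample.
example = {"sensor_times": [0, 250, 600, 900], "confirm_time": 1700}
print(split_performance_time(example))  # -> (900, 800, 1700)
```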

During the study, every participant wore headphones, a backpack, and the AirTouch system on their non-dominant wrist. A computer placed in the backpack was connected to the AirTouch system throughout the study. It ran the gesture recognition process, logged the sensor data, and gave the participant voice commands through the headphones. Subjective feedback from NASA-TLX and surveys was also collected at the end of the study.

The study began with a training session, in which the participants learned the four gestures designed for the study (Figure 5). After learning the gestures, each participant trained the gesture recognition system to generate a calibrated, personalized recognition library.

Figure 5. Four gestures designed for the study.

The primary section of the study started after the training session. It consisted of four conditions, arranged as a 2x2 within-subjects design (Figure 6). A half-blocked pair of goggles was worn to simulate limited vision around the wrist area (Figure 7). For each condition, the participant walked around a track in the laboratory (Figure 8). The track is approximately 26 meters long, and the participant was guided by flags hanging from the ceiling. Each condition consisted of 24 trials arranged in a random order with a random delay between them. Each trial started with a voice command asking the participant to perform one of the four gestures.

Figure 6. 2x2 conditions.

Figure 7. Half-blocked goggles.

In response, the participant performed the gesture and confirmed it. All sensor data were logged and sent to GART for gesture recognition. Accuracy and performance time were measured during the study.

Figure 8. In-lab walking track path.

Results

The average accuracy of the training session was 93.97%. Compared to the training session, accuracy was relatively low in the primary section (60-80%). Accuracy in the conditions with tactile feedback was slightly higher than in the conditions without tactile feedback, regardless of the visual condition (Figure 9).

Figure 9. Performance accuracy across the four conditions (tactile feedback vs. no tactile feedback, full vision vs. limited vision).

However, a paired t-test showed that this difference was not statistically significant. The effect of visual restriction on accuracy was also not statistically significant. A one-way analysis of variance (ANOVA) showed that the type of gesture affected accuracy across all four conditions (p < 0.05).

Tactile feedback had different effects on performance time in the full and limited vision conditions. Tactile feedback enabled faster performance in the full vision condition (Figure 10), although the effect was not statistically significant (p = 0.052). However, tactile feedback enabled significantly faster performance in the limited vision condition (p = 0.011). This result shows that tactile feedback has a larger effect when vision is limited. The effect of restricting visual attention was statistically significant both with tactile feedback (p = 0.046) and without it (p = 0.012). The trend (Figure 9) was observed consistently in both gesture time and confirmation time; however, the trends observed in gesture time and confirmation time were not statistically significant.

Figure 10. Four-condition performance time in milliseconds.
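The comparisons reported above follow a standard pattern for a 2x2 within-subjects design: paired t-tests between matched conditions on per-participant means, and a one-way ANOVA across gesture types. A sketch of that analysis is shown below; the arrays hold made-up per-participant values purely to illustrate the shape of the data, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Illustrative per-participant mean performance times (ms) under limited vision;
# these numbers are invented to show the analysis structure, not real data.
tactile    = np.array([1650, 1702, 1688, 1730, 1675, 1710, 1695, 1683,
                       1660, 1721, 1699, 1705, 1690, 1668, 1712, 1701])
no_tactile = np.array([1712, 1745, 1733, 1760, 1728, 1749, 1740, 1725,
                       1718, 1756, 1738, 1747, 1731, 1720, 1751, 1742])

# Paired t-test: each participant experienced both conditions (within-subjects).
t_stat, p_value = stats.ttest_rel(tactile, no_tactile)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")

# One-way ANOVA across the four gesture types (again with made-up accuracies).
gesture_a = [0.75, 0.80, 0.78, 0.72]
gesture_b = [0.68, 0.70, 0.66, 0.71]
gesture_c = [0.81, 0.79, 0.84, 0.80]
gesture_d = [0.60, 0.63, 0.58, 0.65]
f_stat, p_anova = stats.f_oneway(gesture_a, gesture_b, gesture_c, gesture_d)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
```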

Subjective feedback showed that restricted vision increased the level of difficulty regardless of the presence of tactile feedback. However, participants reported that having tactile feedback made their tasks easier by increasing their confidence in both the full and limited vision conditions. The perceived difficulty of limited vision with tactile feedback was close to the perceived difficulty of full vision without tactile feedback. These results suggest that tactile feedback compensates for the limited vision.

Discussion and Future Work

Comparison of the performance times in the no-tactile, full-vision condition and the tactile, limited-vision condition showed that tactile feedback can somewhat compensate for limited vision. Two limitations, caused by the test setting and the hardware, were observed during the study. The first is that the indoor walking track does not simulate the chaos and unpredictability of a real-world environment. Some participants reported using only vision to navigate around the track in the full vision settings, while others reported shifting their visual attention between walking and the wrist. This indicates that the participants' vision was not controlled as intended. Also, the sensors' range needs to be shortened.

Depending on a participant's body posture, which is less consistent while walking than while standing, the sensors occasionally detected the participant's chest or upper forearm and caused false triggering. The false triggering unintentionally starts the confirmation time-out, and attempts were sometimes aborted before the participant started the gesture. Even though this artifact did not directly affect the accuracy of the study, participants reported frustration over the falsely triggered abort messages. In order to address this hardware limitation, we designed and implemented a new AirTouch prototype. The new AirTouch is physically much smaller (46.5 mm x 17 mm x 45 mm) (Figure 11). Instead of pre-packaged IR proximity sensors, we implemented custom analog IR proximity sensors. The analog sensors also provide hand-to-watch distance, giving us full control over the sensor range and potentially expanding the number of supported gesture types. On the new prototype, the sensor layout was rotated to a cardinal arrangement to enable better gesture recognition. Further investigation is required to iterate on the hardware, the gesture design, and the study design.

Figure 11. AirTouch design iterations.

Conclusion

Tactile feedback enables more accurate and faster interaction under limited visual attention. The perceived difficulty and performance time with the tactile display under limited vision were similar to those with full visual attention and no tactile display. Also, the new push-to-gesture mechanism reduces physical fatigue compared with the old mechanism and gives the user the ability to cancel gesture input. Thus, we conclude that tactile feedback can successfully assist mobile device interaction in limited visual settings.

References

[1] D. Ashbrook, J. Clawson, K. Lyons, N. Patel and T. Starner, Quickdraw: The impact of mobility and on-body placement on device access time, 26th Annual CHI Conference on Human Factors in Computing Systems (CHI), April 5-10, 2008, Association for Computing Machinery, Florence, Italy, 2008, pp. 219-222.

[2] D. Ashbrook, K. Lyons and T. Starner, An investigation into round touchscreen wristwatch interaction, 10th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI), September 2-5, 2008, Association for Computing Machinery, Amsterdam, Netherlands, 2008, pp. 311-314.

[3] J. Kim, J. He, K. Lyons and T. Starner, The Gesture Watch: A wireless contact-free gesture-based wrist interface, 11th IEEE International Symposium on Wearable Computers (ISWC), October 11-13, 2007, IEEE Computer Society, Boston, MA, United States, 2007, pp. 15-22.

[4] S. Lee and T. Starner, BuzzWear: Alert perception in wearable tactile displays on the wrist, 28th Annual CHI Conference on Human Factors in Computing Systems (CHI), April 10-15, 2010, Association for Computing Machinery, Atlanta, GA, United States, 2010, pp. 433-442.

[5] S. C. Lee and T. Starner, Mobile gesture interaction using wearable tactile displays, 27th International Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA), April 4-9, 2009, Association for Computing Machinery, Boston, MA, United States, 2009, pp. 3437-3442.

[6] K. Lyons, H. Brashear, T. Westeyn, J. S. Kim and T. Starner, GART: The Gesture and Activity Recognition Toolkit, 12th International Conference on Human-Computer Interaction (HCI International), July 22-27, 2007, Springer Verlag, Beijing, China, 2007, pp. 718-727.