Effects of Display Sizes on a Scrolling Task using a Cylindrical Smartwatch
Paul Strohmeier
Human Media Lab, Queen's University, Kingston, ON, Canada
paul@cs.queensu.ca

Jesse Burstyn
Human Media Lab, Queen's University, Kingston, ON, Canada
jesse@cs.queensu.ca

Roel Vertegaal
Human Media Lab, Queen's University, Kingston, ON, Canada
roel@cs.queensu.ca

Abstract
With a growing interest in wrist-worn devices, research has typically focused on expanding the available interaction area for smartwatches. In this paper, we instead investigate how different display sizes influence task performance while maintaining a consistent input area. We conducted an experiment in which users completed a scrolling task using a small display, a large display, and a cylindrical display wrapped around the wrist. We found that the large and cylindrical displays resulted in faster task performance than the small display. We also found that the cylindrical display removed constraints on the participants' body pose, suggesting that cylindrical displays have unique benefits for mobile interactions.

Author Keywords
Wearable Computing; Smart Watches; Display Size; Flexible Displays; DisplaySkin; Organic User Interfaces.

ACM Classification Keywords
H.5.2. Information interfaces and presentation.

Introduction
Over the last few years there has been a growing interest in wrist-worn devices, a movement seen in both the research of novel wearable computers [3,6,7,9,12,13] and in the positive reception of smartwatches by Pebble, Samsung, and Apple.

Figure 1: Wrist-worn device prototype. Left: Small display. Center: Large display. Right: Cylindrical display.

Emerging deformable technologies such as flexible displays, batteries, and circuits can enable innovative form factors for wrist-worn devices. Despite these advances, most currently available smartwatches follow the design of conventional watches: a small display attached to the wrist by a flexible strap. This design has largely gone unquestioned in the past hundred years [1]. A different sort of question remains: if the restrictions that led to these solutions could be lifted, could a larger display improve interaction even further? In response, we created a wrist-worn device with a large, touch-enabled cylindrical display [2] (Figure 1), which allowed us to investigate the effects of different display sizes. We asked: if the interaction space is kept constant, does a larger display support more efficient or new styles of interaction?

The physical limitations of the traditional wristwatch layout also limit the range of potential interactions. For example, a small display has a reduced area for touch input and is especially susceptible to occlusion. There is a large body of research investigating this issue. One approach decouples the interaction space from the display. Some of these explorations extend the interaction area by embedding a touch sensor directly into the wristband [9], while others do this by detecting a finger's position in the space above and around the watch face [3]. These types of systems facilitate precise and expressive input without increasing the size of the display.

In this paper, we report on an experiment in which participants performed a scrolling task on DisplaySkin [2], a prototype interactive wristband. To understand the effects of display size, we varied the active display area while keeping the input method constant.
We also present our observations of user behavior and strategies as these were affected by display size.

Related Work
Prior work has explored alternative display sizes and configurations. With Augmented Forearm, Olberding et al. [8] built a prototype wearable consisting of a series of small displays placed along the arm. They diverged from traditional wristwatch conventions, investigating a design space where displays are an extension of the body, an area also studied in projects like Armura [4]. Lyons et al. [6] demonstrated a cylindrical wrist-worn device made of segmented displays, each with separate functionality. Tarun et al. [11] presented Snaplet, a shape-changing wristband. When a user removes Snaplet from their wrist, its flexible E-Paper display can be shaped into a tablet or a phone, depending on its context.

Other work has looked at expanding the interaction space of wrist-worn devices, in part to overcome the constraints of interacting with small displays. With Abracadabra, Harrison et al. [3] added gestural input in the space above and around the watch. Oakley and Lee [7] and Perrault et al. [9] took similar approaches, using the edges of a smartwatch and the wristband as touch surfaces, respectively. In regard to novel display techniques, Xu and Lyons [13] explored different styles of glance-based interactions by integrating LED indicators into a watch face. We previously presented DisplaySkin [2], a cylindrical E-Paper device, to introduce the concept of a pose-aware display: one that orients content towards a user's face based on their body pose.

Apparatus
Our experimental device is DisplaySkin [2], a 7″ Plastic Logic flexible E-Paper display wrapped around the user's wrist, forming a cylindrical shape (Figure 1). The display has a resolution of 354 by 944 pixels and is controlled by Flexkit [5] to run at 12.5 fps. The device is augmented with an infrared touch sensor that can detect both swipes and discrete taps along the circumference of the cylindrical display [10]. The touch sensor detects touches with a precision of 3 mm, sufficient for our target size.

Experiment
Task
Our task is similar to the experiment performed by Perrault et al. [9]. Participants were presented with a scrollable list of 184 countries, listed alphabetically.
In each trial, an external display prompted the participants with the name of a country and asked them to find it within the list on the wristband. In all conditions, participants used relative touch scrolling with inertia to navigate the list. Once the target item was visible, they tapped it to complete the trial. The task is reminiscent of scrolling through a list of applications on a Pebble or Android Wear device.

Display Size
We simulated three display sizes using different viewports on the E-Paper screen (Figure 2). The small display was a 1.5″ rectangle on the top of the wrist, similar to standard smartwatches and the display used by Perrault et al. [9]. The large display was a 3.5″ rectangle that started at the top of the wrist and covered the visible area of the display, as viewed from above. The cylindrical display condition spanned the entire surface of the prototype.

Input Area
Although the viewport size varied between conditions, participants were free to navigate using the entire touch surface of the display in all conditions. In other words, the available input area remained constant throughout. This setup ensured that any measurable effect is a consequence of display size and is not confounded by different input methods.
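As an illustration of this decoupling, the sketch below (ours, not the study software; the item height, pixel viewport sizes, and all names are assumptions) renders a 184-item list through viewports of different heights while a touch drag anywhere on the surface scrolls the same underlying list:

```python
# Minimal sketch of decoupling display size from input area: the full
# touch surface drives scrolling, but only a viewport-sized window of
# items is drawn. Dimensions below are illustrative assumptions.

ITEM_HEIGHT_PX = 100          # assumed pixel height of one list item

def visible_items(items, scroll_px, viewport_px):
    """Return the slice of the list that fits in the active viewport."""
    first = max(0, scroll_px // ITEM_HEIGHT_PX)
    count = max(1, viewport_px // ITEM_HEIGHT_PX)
    return items[first:first + count]

def apply_touch_delta(scroll_px, delta_px, items):
    """Relative scrolling: a drag anywhere on the surface moves the list
    by the same amount, regardless of which viewport is active."""
    max_scroll = (len(items) - 1) * ITEM_HEIGHT_PX
    return min(max(0, scroll_px + delta_px), max_scroll)

countries = [f"Country {i}" for i in range(184)]

# Assumed viewport heights for the three simulated display sizes.
SMALL_PX, LARGE_PX, CYLINDER_PX = 200, 450, 944

scroll = apply_touch_delta(0, 500, countries)        # one and the same drag...
print(visible_items(countries, scroll, SMALL_PX))    # ...shows 2 items
print(visible_items(countries, scroll, CYLINDER_PX)) # ...shows 9 items
```

The point the experiment isolates is visible here: the scroll position after a gesture is identical in all three conditions; only how much of the list is revealed differs.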
Target Distance
We used 4 target distances, a subset of those evaluated by Perrault et al. [9]: 5, 20, 80, and 160 items. Each item had a height of ~1 cm. In the small display condition, the 5th item is not visible at the start of the trial; it is, however, immediately visible in the large and cylindrical conditions.

Measures
Our dependent measure was navigation time, measured from the onset of the prompt to when the participant tapped on the correct target.

Experiment Design
We used a 3×4 factorial within-subject design with repeated measures. Our factors were display size (small, large, and cylindrical) and target distance (5, 20, 80, and 160 items). Each participant performed 6 trials per combination of factors, for a total of 72 trials. Condition order was counterbalanced between participants. Participants practiced with each display size until they achieved less than 10% improvement between trials. The experiment lasted approximately 45 minutes, including practice.

Questionnaires
We asked participants to rate each display size on three statements: that it was efficient for searching, that it allowed an overview of the data, and that it was useful for bimanual interactions. Each statement used a 5-point Likert scale of agreement (1: Strongly Disagree, 5: Strongly Agree).

Participants
The experiment was conducted with 12 participants (9 male, 3 female). Most participants were right-handed (9/12) and only 3 wore a wristwatch. All participants had some familiarity with touch gestures, e.g., on a smartphone or tablet. They were paid $10 for their participation.

Hypotheses
We hypothesized that larger display sizes would yield faster navigation times (H1). As a control, we also hypothesized that larger target distances would result in longer navigation times (H2).

Results
Experiment Results
We analyzed the collected measures by performing a repeated measures ANOVA of display size (3) × target distance (4) on navigation time.
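The paper does not state which counterbalancing scheme was used; one common choice with three conditions is to rotate through all six possible orderings across participants. A hypothetical trial generator under that assumption:

```python
from itertools import permutations

# Hypothetical counterbalancing sketch (the paper does not specify its
# scheme): cycle through all 3! = 6 orderings of the display-size
# conditions so each order is used equally often across 12 participants.

DISPLAYS = ["small", "large", "cylindrical"]
DISTANCES = [5, 20, 80, 160]
REPETITIONS = 6                        # trials per display x distance cell

ORDERS = list(permutations(DISPLAYS))  # the 6 possible condition orders

def trials_for(participant_id):
    """Build the 3 x 4 x 6 = 72 trials for one participant."""
    order = ORDERS[participant_id % len(ORDERS)]
    return [(display, distance)
            for display in order
            for distance in DISTANCES
            for _ in range(REPETITIONS)]

print(len(trials_for(0)))   # 72 trials per participant
```

With 12 participants, each of the six condition orders is assigned to exactly two people, which is one way to satisfy "counter-balanced between participants".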
Table 1 outlines the means and standard errors for list navigation time. We found a significant main effect of display size (F(2,22) = 24.13, p < 0.001) on list navigation time. Pairwise post-hoc tests with Bonferroni-corrected comparisons reveal that the small display was significantly slower than both the large and cylindrical display sizes. The analysis also showed that target distance was a significant factor (F(3,33) = 303.11, p < 0.001). Pairwise post-hoc comparisons, Bonferroni corrected, confirm that navigation times differed significantly between all target distances.

Questionnaire Results
Table 2 summarizes the median scores of the questionnaire responses. We analyzed the participants' ratings using a Friedman's one-way ANOVA by ranks, with Bonferroni-corrected Wilcoxon signed-rank post-hoc tests (evaluated by dividing the standard alpha of 0.05 by the number of comparisons, α = 0.017).

Table 2: Questionnaire results (median responses) for the Small, Large, and Cylinder displays on three statements: "The display enabled bimanual interaction", "The display supported task efficiency", and "The display provided an overview of the data". All different values are also significantly different; the cylindrical display trended towards a higher result than the large display for all questions. (1 = Strongly Disagree, 5 = Strongly Agree.)

Results showed a significant effect of display size on participants' ratings of their ability to use bimanual interactions (Friedman's χ²(2) = 15.62, p < 0.001). Post-hoc comparisons reveal that the cylindrical display was rated higher than both the large display (Z = -2.714, p < 0.007) and the small display (Z = -2.716, p < 0.007), and the large display was rated higher than the small display (Z = -2.511, p < 0.012).

Results also showed a significant effect of display size on participants' impression of how efficiently they could complete the task (Friedman's χ²(2) = 14.15, p < 0.001). Post-hoc comparisons reveal that the cylindrical display was rated higher than the small display (Z = -2.738, p < 0.006) and the large display was also rated higher than the small display (Z = -2.653, p < 0.008).

We also found significant differences in how users experienced their overview of the data for the different display sizes (Friedman's χ²(2) = 13.82, p < 0.001). Post-hoc comparisons reveal that the cylindrical display was rated higher than the small display (Z = -2.694, p < 0.007) and the large display was also rated higher than the small display (Z = -2.766, p < 0.006).

Discussion
The results of our experiment suggest that there is a benefit to increasing the display size for list navigation tasks.
Results confirm our hypothesis (H1) that display size has a significant effect on navigation time: the large and cylindrical display sizes allowed for faster task completion. These results show that the current display sizes of smartwatches limit the ability to efficiently navigate through information, even if the interaction space is larger than the display. As expected, we also confirmed our control hypothesis (H2) that larger target distances resulted in longer navigation times.

Table 1: Mean (SD) navigation times (s).

                Small Display   Large Display   Cylindrical Display
5 items         7.21 (1.27)     2.92 (0.95)     2.60 (0.76)
20 items        (2.14)          8.03 (2.12)     6.94 (1.09)
80 items        (2.14)          (1.76)          (1.82)
160 items       (3.37)          (2.90)          (3.08)
Total           (0.64)          (0.58)          (0.57)

Participants took advantage of the larger interaction area. In the small display condition, many participants used the non-active area below the viewport for scrolling. This allowed them to scroll without occluding the active area. It demonstrates that our results are not confounded by the known input issues of small displays. It also suggests that for most currently available devices, which do not have the extended input area, the drawbacks of a small display could be even more prominent than the ones we found.
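The questionnaire analysis above (a Friedman test followed by Bonferroni-corrected pairwise post-hoc tests) can be sketched in a few lines of plain Python; the ratings below are made-up placeholders, not the study's responses:

```python
# Sketch of the questionnaire analysis pipeline. The Friedman statistic
# is computed directly; the ratings are fabricated example data, not the
# participants' actual responses.

def friedman_chi2(*samples):
    """Friedman chi-squared for k related samples (no tie handling)."""
    k, n = len(samples), len(samples[0])
    rank_sums = [0.0] * k
    for row in zip(*samples):
        ranked = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(ranked, start=1):
            rank_sums[j] += rank
    return (12.0 / (n * k * (k + 1))) * sum(r * r for r in rank_sums) \
        - 3.0 * n * (k + 1)

# Placeholder Likert ratings (1-5) for 12 participants per display size.
small = [2, 1, 2, 3, 2, 1, 2, 2, 3, 1, 2, 2]
large = [4, 3, 4, 4, 3, 4, 4, 3, 4, 4, 3, 4]
cylin = [5, 4, 5, 5, 4, 5, 5, 4, 5, 5, 4, 5]

chi2 = friedman_chi2(small, large, cylin)
CRITICAL = 5.991            # chi-squared critical value, df = 2, alpha = .05
alpha_posthoc = 0.05 / 3    # Bonferroni correction for 3 pairwise tests

print(chi2 > CRITICAL)      # True: ratings differ across display sizes
```

With a significant omnibus result, each pair of conditions would then be compared with a Wilcoxon signed-rank test evaluated against `alpha_posthoc`, mirroring the procedure reported in the paper.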
Figure 2: Typical hand positions for different display sizes.

Like most scrolling experiments, we observed that the task is composed of a number of sub-tasks: 1) the participant estimates the target position relative to their current position in the list; 2) rapidly scrolls towards the target, either under- or over-shooting; 3) brings the target into the viewport with slower and more precise scrolls; and 4) selects the target. When the target is already visible within the display, participants skip step 2), an opportunity provided by the large and cylindrical display sizes in the smallest target distance condition. For larger target distances, this particular benefit does not occur. The overall results, however, suggest that these two sizes provide a significant advantage for steps 1) and 3) by giving the participant a better view of surrounding targets. Specifically, we see that the absolute performance differences between target distance conditions are fairly stable across display size conditions, suggesting a constant advantage provided by increased display size. The relatively constant delta between navigation times for each list length is easily visible in a bar graph (Table 3).

The absolute navigation speeds differ from those observed by Perrault et al. [9]. This difference in task completion times was likely due to implementation differences in the scrolling physics model, which in our case was constrained by the slower refresh times of the E Ink display. This led to slower scrolling behavior, which affected absolute task completion times. Relative task completion times (the ratios between times to scroll through different list lengths) are, however, in agreement with their results.
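How a slow refresh rate constrains inertial scrolling can be illustrated with a toy physics model; this is our assumption for illustration, not the study's implementation, and the friction constant and item height are invented:

```python
# Toy inertial-scrolling model stepped once per rendered frame. At the
# E-Paper's 12.5 fps, each frame covers 80 ms, so velocity decays in
# coarse steps. The friction constant and item height are assumptions.

REFRESH_FPS = 12.5            # display refresh rate from the paper
FRICTION_PER_FRAME = 0.9      # assumed fraction of velocity kept per frame
ITEM_HEIGHT_PX = 100          # assumed pixel height of one list item

def coast_distance(v0_px_per_s):
    """Total distance (px) a fling coasts after the finger lifts,
    stepping the physics once per rendered frame."""
    dt = 1.0 / REFRESH_FPS    # 80 ms between frames
    v, dist = v0_px_per_s, 0.0
    while abs(v) * dt >= 1.0:     # stop below one pixel per frame
        dist += v * dt
        v *= FRICTION_PER_FRAME
    return dist

items_covered = coast_distance(2000.0) / ITEM_HEIGHT_PX
print(round(items_covered, 1))    # list items covered by one fling
```

Because the physics only advances when a frame is drawn, a coarse frame interval like 80 ms bounds how quickly flings can be chained and corrected, which is one plausible mechanism behind the slower absolute times reported above.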
Table 3: Task completion times (s) for each target distance (5, 20, 80, and 160 items) and display type (small, large, cylindrical), shown as a bar graph.

Effects of a Cylindrical Display
BIMANUAL INTERACTIONS
During our experimental evaluation, we observed distinct strategies in how participants interacted with the different display sizes (Figure 2). Many participants chose to support their left hand on the table, as our experiment required them to scroll through lists for an extended period of time, which they reported to be tiring even with breaks. With the small display size, participants often rested their entire palm on the table (Figure 2 - A). In the large display size condition, participants often lifted their hands, supporting the weight with their fingers (Figure 2 - B), while orienting the active display area towards their face. In the cylindrical display condition, participants usually lifted their hand from the table (Figure 2 - C) to leverage bimanual interactions. We noticed three ways in which participants used bimanual interaction with the cylindrical display.
Figure 3: Bimanual swipe gesture.

Bimanual swiping was generally used for faster scrolling (Figure 3). When participants were close to the target but it was not immediately visible, they would rotate their wrist to bring it into view. Participants also used the rotation of their left hand to correct for the actions of the right: to accommodate the inertial scrolling, they commonly rotated their wrist in response to an overshoot or in anticipation of an upcoming target. These behaviors are supported by the questionnaire results. When asked to rate appropriateness for bimanual interactions, 75% of the participants stated that the cylindrical display supported bimanual interactions (rating it with a 4 or 5), compared to 41.7% for the large display and only 16.7% for the small display condition.

MOBILE INTERACTIONS
The reason participants lifted their wrist off the table during the large display size condition was to orient the display towards their face. Viewed from the right angle, the viewport spanned the entire width of the wrist. When the active display area was oriented towards the face, the large and cylindrical display sizes were visually identical. This, however, is true only if a user's body is in the correct pose for interacting with the display. The use of bimanual interactions for completing the search task points to another affordance of the cylindrical display: it can be viewed from any angle. Although the difference between the task completion times for the large and cylindrical display sizes was not significant, we believe this resulted from the static nature of our experimental setup. In day-to-day life, our bodies, and especially our hands, are usually in motion. Outside of a laboratory setting, we would expect this property of the cylindrical display to provide additional benefits over the large display.
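The pose-aware idea, sliding content around the cylinder so it always faces the viewer, can be illustrated with a small sketch. This is our illustration, not DisplaySkin's tracking code; the coordinate convention and viewport sizes are assumptions, with only the 944-pixel dimension taken from the apparatus description:

```python
# Illustrative sketch of a pose-aware cylindrical viewport: given the
# wrist's rotation relative to the viewer, slide the active viewport
# around the cylinder so content stays centred on the strip that faces
# them. The angle convention and sizes are assumptions.

CIRCUMFERENCE_PX = 944     # assumed pixels around the wrist (354 x 944 panel)

def viewport_offset(wrist_angle_deg, viewport_px,
                    circumference_px=CIRCUMFERENCE_PX):
    """Pixel offset of a viewport centred on the strip facing the viewer.

    wrist_angle_deg: wrist rotation relative to the viewer; 0 means the
    top of the wrist faces them (an illustrative convention).
    """
    centre = (wrist_angle_deg % 360) / 360.0 * circumference_px
    return (centre - viewport_px / 2) % circumference_px

# As the wrist rotates, the viewport follows, so the user is never
# locked into one pose to see the content.
print(viewport_offset(0, 200))     # viewport straddles the top of the wrist
print(viewport_offset(90, 200))    # viewport has slid a quarter turn
```

This is the property the discussion points to: a flat viewport is fixed to one strip of the wrist, whereas a cylindrical display can relocate its content to whatever strip the current pose exposes.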
Conclusion
In this paper, we evaluated the effects of display size on navigation times for a scrolling task on a wrist-worn device. Our results demonstrate a significant benefit of larger display sizes with respect to task efficiency. This suggests that, while increasing the interaction area has its own advantages, there is also value in creating wrist-worn devices with larger displays and new form factors. At the same time, a display that wraps around the entire wrist was not significantly faster than one that covers the top of the wrist. Users can, however, view a cylindrical display from any angle; they are not constrained to a specific pose. This freedom allowed the participants to explore different positions of the arm and the wrist, in turn inspiring them to navigate with bimanual gestures. So while the cylindrical display was not more efficient than the large display in our controlled experiment, the form factor may provide additional benefits during mobile interaction.

References
1. Brearley, H.C. Time Telling through the Ages. Doubleday, Page & Co, New York.
2. Jesse Burstyn, Paul Strohmeier, and Roel Vertegaal. 2015. DisplaySkin: Exploring Pose-Aware Displays on a Flexible Electrophoretic Wristband. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '15).
3. Chris Harrison and Scott E. Hudson. 2009. Abracadabra: wireless, high-precision, and unpowered finger input for very small mobile devices. In Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology (UIST '09).
4. Chris Harrison, Shilpa Ramamurthy, and Scott E. Hudson. 2012. On-body interaction: armed and dangerous. In Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction (TEI '12).
5. David Holman, Jesse Burstyn, Ryan Brotman, Audrey Younkin, and Roel Vertegaal. 2013. Flexkit: a rapid prototyping platform for flexible displays. In Adjunct Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology (UIST '13 Adjunct).
6. Kent Lyons, David Nguyen, Daniel Ashbrook, and Sean White. 2012. Facet: a multi-segment wrist worn system. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology (UIST '12).
7. Ian Oakley and Doyoung Lee. 2014. Interaction on the edge: offset sensing for small devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14).
8. Simon Olberding, Kian Peen Yeo, Suranga Nanayakkara, and Jürgen Steimle. 2013. AugmentedForearm: exploring the design space of a display-enhanced forearm. In Proceedings of the 4th Augmented Human International Conference (AH '13).
9. Simon T. Perrault, Eric Lecolinet, James Eagan, and Yves Guiard. 2013. WatchIt: simple gestures and eyes-free interaction for wristwatches and bracelets. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13).
10. Paul Strohmeier. 2015. DIY IR sensors for augmenting objects and human skin. In Proceedings of the 6th Augmented Human International Conference (AH '15).
11. Aneesh P. Tarun, Byron Lahey, Audrey Girouard, Winslow Burleson, and Roel Vertegaal. 2011. Snaplet: using body shape to inform function in mobile flexible display devices. In CHI '11 Extended Abstracts on Human Factors in Computing Systems (CHI EA '11).
12. Robert Xiao, Gierad Laput, and Chris Harrison. 2014. Expanding the input expressivity of smartwatches with mechanical pan, twist, tilt and click. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14).
13. Cheng Xu and Kent Lyons. 2015. Shimmering Smartwatches: Exploring the Smartwatch Design Space. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '15).
More informationEvaluation of a Tricycle-style Teleoperational Interface for Children: a Comparative Experiment with a Video Game Controller
2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication. September 9-13, 2012. Paris, France. Evaluation of a Tricycle-style Teleoperational Interface for Children:
More informationWristWhirl: One-handed Continuous Smartwatch Input using Wrist Gestures
WristWhirl: One-handed Continuous Smartwatch Input using Wrist Gestures Jun Gong 1, Xing-Dong Yang 1, Pourang Irani 2 Dartmouth College 1, University of Manitoba 2 {jun.gong.gr; xing-dong.yang}@dartmouth.edu,
More informationMonoTouch: Single Capacitive Touch Sensor that Differentiates Touch Gestures
MonoTouch: Single Capacitive Touch Sensor that Differentiates Touch Gestures Ryosuke Takada University of Tsukuba 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573, Japan rtakada@iplab.cs.tsukuba.ac.jp Buntarou
More informationDrumtastic: Haptic Guidance for Polyrhythmic Drumming Practice
Drumtastic: Haptic Guidance for Polyrhythmic Drumming Practice ABSTRACT W e present Drumtastic, an application where the user interacts with two Novint Falcon haptic devices to play virtual drums. The
More informationMeasuring FlowMenu Performance
Measuring FlowMenu Performance This paper evaluates the performance characteristics of FlowMenu, a new type of pop-up menu mixing command and direct manipulation [8]. FlowMenu was compared with marking
More informationMicrosoft Scrolling Strip Prototype: Technical Description
Microsoft Scrolling Strip Prototype: Technical Description Primary features implemented in prototype Ken Hinckley 7/24/00 We have done at least some preliminary usability testing on all of the features
More informationFlick-and-Brake: Finger Control over Inertial/Sustained Scroll Motion
Flick-and-Brake: Finger Control over Inertial/Sustained Scroll Motion Mathias Baglioni, Sylvain Malacria, Eric Lecolinet, Yves Guiard To cite this version: Mathias Baglioni, Sylvain Malacria, Eric Lecolinet,
More informationPaint with Your Voice: An Interactive, Sonic Installation
Paint with Your Voice: An Interactive, Sonic Installation Benjamin Böhm 1 benboehm86@gmail.com Julian Hermann 1 julian.hermann@img.fh-mainz.de Tim Rizzo 1 tim.rizzo@img.fh-mainz.de Anja Stöffler 1 anja.stoeffler@img.fh-mainz.de
More informationOpen Archive TOULOUSE Archive Ouverte (OATAO)
Open Archive TOULOUSE Archive Ouverte (OATAO) OATAO is an open access repository that collects the work of Toulouse researchers and makes it freely available over the web where possible. This is an author-deposited
More informationInteracting within Virtual Worlds (based on talks by Greg Welch and Mark Mine)
Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine) Presentation Working in a virtual world Interaction principles Interaction examples Why VR in the First Place? Direct perception
More informationMany Fingers Make Light Work: Non-Visual Capacitive Surface Exploration
Many Fingers Make Light Work: Non-Visual Capacitive Surface Exploration Martin Halvey Department of Computer and Information Sciences University of Strathclyde, Glasgow, G1 1XQ, UK martin.halvey@strath.ac.uk
More informationHaptics for Guide Dog Handlers
Haptics for Guide Dog Handlers Bum Jun Park, Jay Zuerndorfer, Melody M. Jackson Animal Computer Interaction Lab, Georgia Institute of Technology bpark31@gatech.edu, jzpluspuls@gmail.com, melody@cc.gatech.edu
More informationObjective Data Analysis for a PDA-Based Human-Robotic Interface*
Objective Data Analysis for a PDA-Based Human-Robotic Interface* Hande Kaymaz Keskinpala EECS Department Vanderbilt University Nashville, TN USA hande.kaymaz@vanderbilt.edu Abstract - This paper describes
More informationNovel Modalities for Bimanual Scrolling on Tablet Devices
Novel Modalities for Bimanual Scrolling on Tablet Devices Ross McLachlan and Stephen Brewster 1 Glasgow Interactive Systems Group, School of Computing Science, University of Glasgow, Glasgow, G12 8QQ r.mclachlan.1@research.gla.ac.uk,
More informationFigure 1. The game was developed to be played on a large multi-touch tablet and multiple smartphones.
Capture The Flag: Engaging In A Multi- Device Augmented Reality Game Suzanne Mueller Massachusetts Institute of Technology Cambridge, MA suzmue@mit.edu Andreas Dippon Technische Universitat München Boltzmannstr.
More informationIntegration of Hand Gesture and Multi Touch Gesture with Glove Type Device
2016 4th Intl Conf on Applied Computing and Information Technology/3rd Intl Conf on Computational Science/Intelligence and Applied Informatics/1st Intl Conf on Big Data, Cloud Computing, Data Science &
More informationTesting of the FE Walking Robot
TESTING OF THE FE WALKING ROBOT MAY 2006 1 Testing of the FE Walking Robot Elianna R Weyer, May 2006 for MAE 429, fall 2005, 3 credits erw26@cornell.edu I. ABSTRACT This paper documents the method and
More informationA Gestural Interaction Design Model for Multi-touch Displays
Songyang Lao laosongyang@ vip.sina.com A Gestural Interaction Design Model for Multi-touch Displays Xiangan Heng xianganh@ hotmail ABSTRACT Media platforms and devices that allow an input from a user s
More informationPhonePaint: Using Smartphones as Dynamic Brushes with Interactive Displays
PhonePaint: Using Smartphones as Dynamic Brushes with Interactive Displays Jian Zhao Department of Computer Science University of Toronto jianzhao@dgp.toronto.edu Fanny Chevalier Department of Computer
More informationFlexStylus: Leveraging Bend Input for Pen Interaction
FlexStylus: Leveraging Bend Input for Pen Interaction Nicholas Fellion Carleton University Ottawa, Canada nicholas.fellion@carleton.ca Thomas Pietrzak University of Lille Villeneuve d Ascq, France thomas.pietrzak@univ-lille1.fr
More informationGazemarks-Gaze-Based Visual Placeholders to Ease Attention Switching Dagmar Kern * Paul Marshall # Albrecht Schmidt * *
CHI 2010 - Atlanta -Gaze-Based Visual Placeholders to Ease Attention Switching Dagmar Kern * Paul Marshall # Albrecht Schmidt * * University of Duisburg-Essen # Open University dagmar.kern@uni-due.de,
More informationEvaluation of a Soft-Surfaced Multi-touch Interface
Evaluation of a Soft-Surfaced Multi-touch Interface Anna Noguchi, Toshifumi Kurosawa, Ayaka Suzuki, Yuichiro Sakamoto, Tatsuhito Oe, Takuto Yoshikawa, Buntarou Shizuki, and Jiro Tanaka University of Tsukuba,
More informationMarkus Schneider Karlsruhe Institute of Technology (KIT) Campus Süd, Fritz-Erlerstr Karlsruhe, Germany
Katrin Wolf Stuttgart University Human Computer Interaction Group Sim-Tech Building 1.029 Pfaffenwaldring 5a 70569 Stuttgart, Germany 0049 711 68560013 katrin.wolf@vis.uni-stuttgart.de Markus Schneider
More informationThis is a postprint of. The influence of material cues on early grasping force. Bergmann Tiest, W.M., Kappers, A.M.L.
This is a postprint of The influence of material cues on early grasping force Bergmann Tiest, W.M., Kappers, A.M.L. Lecture Notes in Computer Science, 8618, 393-399 Published version: http://dx.doi.org/1.17/978-3-662-44193-_49
More informationVEWL: A Framework for Building a Windowing Interface in a Virtual Environment Daniel Larimer and Doug A. Bowman Dept. of Computer Science, Virginia Tech, 660 McBryde, Blacksburg, VA dlarimer@vt.edu, bowman@vt.edu
More informationA Technique for Touch Force Sensing using a Waterproof Device s Built-in Barometer
Late-Breaking Work B C Figure 1: Device conditions. a) non-tape condition. b) with-tape condition. A Technique for Touch Force Sensing using a Waterproof Device s Built-in Barometer Ryosuke Takada Ibaraki,
More informationCollaboration on Interactive Ceilings
Collaboration on Interactive Ceilings Alexander Bazo, Raphael Wimmer, Markus Heckner, Christian Wolff Media Informatics Group, University of Regensburg Abstract In this paper we discuss how interactive
More informationTwo-Handed Interactive Menu: An Application of Asymmetric Bimanual Gestures and Depth Based Selection Techniques
Two-Handed Interactive Menu: An Application of Asymmetric Bimanual Gestures and Depth Based Selection Techniques Hani Karam and Jiro Tanaka Department of Computer Science, University of Tsukuba, Tennodai,
More informationCOMET: Collaboration in Applications for Mobile Environments by Twisting
COMET: Collaboration in Applications for Mobile Environments by Twisting Nitesh Goyal RWTH Aachen University Aachen 52056, Germany Nitesh.goyal@rwth-aachen.de Abstract In this paper, we describe a novel
More informationSerendipity: Finger Gesture Recognition using an Off-the-Shelf Smartwatch
Serendipity: Finger Gesture Recognition using an Off-the-Shelf Smartwatch Hongyi Wen 1 Julian Ramos Rojas 2 Anind K. Dey 2 1 Department of Computer Science and Technology, Tsinghua University, Beijing,
More informationExploring Virtual Depth for Automotive Instrument Cluster Concepts
Exploring Virtual Depth for Automotive Instrument Cluster Concepts Nora Broy 1,2,3, Benedikt Zierer 2, Stefan Schneegass 3, Florian Alt 2 1 BMW Research and Technology Nora.NB.Broy@bmw.de 2 Group for Media
More informationWaveForm: Remote Video Blending for VJs Using In-Air Multitouch Gestures
WaveForm: Remote Video Blending for VJs Using In-Air Multitouch Gestures Amartya Banerjee banerjee@cs.queensu.ca Jesse Burstyn jesse@cs.queensu.ca Audrey Girouard audrey@cs.queensu.ca Roel Vertegaal roel@cs.queensu.ca
More information3D Interactions with a Passive Deformable Haptic Glove
3D Interactions with a Passive Deformable Haptic Glove Thuong N. Hoang Wearable Computer Lab University of South Australia 1 Mawson Lakes Blvd Mawson Lakes, SA 5010, Australia ngocthuong@gmail.com Ross
More informationMulti-User Multi-Touch Games on DiamondTouch with the DTFlash Toolkit
MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Multi-User Multi-Touch Games on DiamondTouch with the DTFlash Toolkit Alan Esenther and Kent Wittenburg TR2005-105 September 2005 Abstract
More informationNAVIGATIONAL CONTROL EFFECT ON REPRESENTING VIRTUAL ENVIRONMENTS
NAVIGATIONAL CONTROL EFFECT ON REPRESENTING VIRTUAL ENVIRONMENTS Xianjun Sam Zheng, George W. McConkie, and Benjamin Schaeffer Beckman Institute, University of Illinois at Urbana Champaign This present
More informationGestureCommander: Continuous Touch-based Gesture Prediction
GestureCommander: Continuous Touch-based Gesture Prediction George Lucchese george lucchese@tamu.edu Jimmy Ho jimmyho@tamu.edu Tracy Hammond hammond@cs.tamu.edu Martin Field martin.field@gmail.com Ricardo
More informationRendering Moving Tactile Stroke on the Palm Using a Sparse 2D Array
Rendering Moving Tactile Stroke on the Palm Using a Sparse 2D Array Jaeyoung Park 1(&), Jaeha Kim 1, Yonghwan Oh 1, and Hong Z. Tan 2 1 Korea Institute of Science and Technology, Seoul, Korea {jypcubic,lithium81,oyh}@kist.re.kr
More informationUsing Hands and Feet to Navigate and Manipulate Spatial Data
Using Hands and Feet to Navigate and Manipulate Spatial Data Johannes Schöning Institute for Geoinformatics University of Münster Weseler Str. 253 48151 Münster, Germany j.schoening@uni-muenster.de Florian
More informationToolkit For Gesture Classification Through Acoustic Sensing
Toolkit For Gesture Classification Through Acoustic Sensing Pedro Soldado pedromgsoldado@ist.utl.pt Instituto Superior Técnico, Lisboa, Portugal October 2015 Abstract The interaction with touch displays
More informationNavigating the Virtual Environment Using Microsoft Kinect
CS352 HCI Project Final Report Navigating the Virtual Environment Using Microsoft Kinect Xiaochen Yang Lichuan Pan Honor Code We, Xiaochen Yang and Lichuan Pan, pledge our honor that we have neither given
More informationBeats Down: Using Heart Rate for Game Interaction in Mobile Settings
Beats Down: Using Heart Rate for Game Interaction in Mobile Settings Claudia Stockhausen, Justine Smyzek, and Detlef Krömker Goethe University, Robert-Mayer-Str.10, 60054 Frankfurt, Germany {stockhausen,smyzek,kroemker}@gdv.cs.uni-frankfurt.de
More informationRunning an HCI Experiment in Multiple Parallel Universes
Author manuscript, published in "ACM CHI Conference on Human Factors in Computing Systems (alt.chi) (2014)" Running an HCI Experiment in Multiple Parallel Universes Univ. Paris Sud, CNRS, Univ. Paris Sud,
More informationEnabling Cursor Control Using on Pinch Gesture Recognition
Enabling Cursor Control Using on Pinch Gesture Recognition Benjamin Baldus Debra Lauterbach Juan Lizarraga October 5, 2007 Abstract In this project we expect to develop a machine-user interface based on
More informationMultimodal Metric Study for Human-Robot Collaboration
Multimodal Metric Study for Human-Robot Collaboration Scott A. Green s.a.green@lmco.com Scott M. Richardson scott.m.richardson@lmco.com Randy J. Stiles randy.stiles@lmco.com Lockheed Martin Space Systems
More informationEvaluating Haptic and Auditory Guidance to Assist Blind People in Reading Printed Text Using Finger-Mounted Cameras
Evaluating Haptic and Auditory Guidance to Assist Blind People in Reading Printed Text Using Finger-Mounted Cameras TACCESS ASSETS 2016 Lee Stearns 1, Ruofei Du 1, Uran Oh 1, Catherine Jou 1, Leah Findlater
More informationRemote Shoulder-to-shoulder Communication Enhancing Co-located Sensation
Remote Shoulder-to-shoulder Communication Enhancing Co-located Sensation Minghao Cai and Jiro Tanaka Graduate School of Information, Production and Systems Waseda University Kitakyushu, Japan Email: mhcai@toki.waseda.jp,
More informationBluetooth Low Energy Sensing Technology for Proximity Construction Applications
Bluetooth Low Energy Sensing Technology for Proximity Construction Applications JeeWoong Park School of Civil and Environmental Engineering, Georgia Institute of Technology, 790 Atlantic Dr. N.W., Atlanta,
More informationEarTouch: Turning the Ear into an Input Surface
EarTouch: Turning the Ear into an Input Surface Takashi Kikuchi tkiku393760@gmail.com Yuta Sugiura sugiura@keio.jp Katsutoshi Masai masai@imlab.ics.keio.ac.jp Maki Sugimoto sugimoto@ics.keio.ac.jp ABSTRACT
More informationGaze-enhanced Scrolling Techniques
Gaze-enhanced Scrolling Techniques Manu Kumar Stanford University, HCI Group Gates Building, Room 382 353 Serra Mall Stanford, CA 94305-9035 sneaker@cs.stanford.edu Andreas Paepcke Stanford University,
More informationTouch Your Way: Haptic Sight for Visually Impaired People to Walk with Independence
Touch Your Way: Haptic Sight for Visually Impaired People to Walk with Independence Ji-Won Song Dept. of Industrial Design. Korea Advanced Institute of Science and Technology. 335 Gwahangno, Yusong-gu,
More informationShopping Together: A Remote Co-shopping System Utilizing Spatial Gesture Interaction
Shopping Together: A Remote Co-shopping System Utilizing Spatial Gesture Interaction Minghao Cai 1(B), Soh Masuko 2, and Jiro Tanaka 1 1 Waseda University, Kitakyushu, Japan mhcai@toki.waseda.jp, jiro@aoni.waseda.jp
More informationGesture Control By Wrist Surface Electromyography
Gesture Control By Wrist Surface Electromyography Abhishek Nagar and Xu Zhu Samsung Research America - Dallas 1301 E. Lookout Drive Richardson, Texas 75082 Email: {a.nagar, xu.zhu}@samsung.com Abstract
More informationDo Stereo Display Deficiencies Affect 3D Pointing?
Do Stereo Display Deficiencies Affect 3D Pointing? Mayra Donaji Barrera Machuca SIAT, Simon Fraser University Vancouver, CANADA mbarrera@sfu.ca Wolfgang Stuerzlinger SIAT, Simon Fraser University Vancouver,
More informationIntegrated Driving Aware System in the Real-World: Sensing, Computing and Feedback
Integrated Driving Aware System in the Real-World: Sensing, Computing and Feedback Jung Wook Park HCI Institute Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA, USA, 15213 jungwoop@andrew.cmu.edu
More informationINTERACTION AND SOCIAL ISSUES IN A HUMAN-CENTERED REACTIVE ENVIRONMENT
INTERACTION AND SOCIAL ISSUES IN A HUMAN-CENTERED REACTIVE ENVIRONMENT TAYSHENG JENG, CHIA-HSUN LEE, CHI CHEN, YU-PIN MA Department of Architecture, National Cheng Kung University No. 1, University Road,
More informationMobile Audio Designs Monkey: A Tool for Audio Augmented Reality
Mobile Audio Designs Monkey: A Tool for Audio Augmented Reality Bruce N. Walker and Kevin Stamper Sonification Lab, School of Psychology Georgia Institute of Technology 654 Cherry Street, Atlanta, GA,
More informationPaperPhone: Understanding the Use of Bend Gestures in Mobile Devices with Flexible Electronic Paper Displays
PaperPhone: Understanding the Use of Bend Gestures in Mobile Devices with Flexible Electronic Paper Displays Byron Lahey1,2, Audrey Girouard1, Winslow Burleson2 and Roel Vertegaal 1 1 2 Human Media Lab
More information