
Test of pan and zoom tools in visual and non-visual audio haptic environments

Charlotte Magnusson (*), Teresa Gutierrez (**), Kirsten Rassmus-Gröhn (*)
(*) Lund University, Lund, Sweden
(**) LABEIN, Bilbao, Spain
E-mail: charlotte.magnusson@certec.lth.se, tere@labein.es, kirre@certec.lth.se

Published in: ENACTIVE/07, Proceedings of the 4th International Conference on Enactive Interfaces, Grenoble, France, November 19-22, 2007, pp. 165-168.

Abstract

To enable visually impaired users to experience large virtual 3D models with a relatively small haptic device, a test of a set of pan and zoom tools has been performed. The pan tools tested (keyboard, pressing the sides of the limiting box, and click & drag using the PHANToM) were all found to be useful. For the zoom tools, the discrete keyboard press design worked well, while the drag type zoom implemented worked poorly and will need to be redesigned. The test results show different preferences in visual and non-visual navigation, indicating the need for specially designed interaction utilities for the non-visual case.

1. Introduction

In most haptic virtual environments for the blind, the working area of the haptic device (such as the PHANToM) limits the size and complexity of the virtual environment that can be displayed. Haptic non-visual panning and zooming using a PHANToM device has been studied to some limited extent. Within the ENORASI study described in [1], a simple pan function was tested by a subset of the test users. Zoom functions to gain access to greater detail in virtual haptic line graphs are also suggested by Roberts et al. [2], a test of a set of pan and zoom functions has been reported in [3], and functions of this type were also included in the virtual audio-haptic traffic environment described in [4]. The studies described in this paper were designed to further investigate pan and zoom utilities in a more realistic environment.

2. Test environment

The environment used was essentially the traffic model described in [4] (see figure 1), but with the model no longer divided into five parts; instead it was joined into a single model.

Figure 1. Views from the traffic environment model used.

Three different pan or scroll functions were implemented, similar to the ones used in [3]: moving the world with the arrow keys, moving the world by pushing the sides of the limiting box with the PHANToM stylus, and moving the world by clicking & dragging with the PHANToM stylus. When the arrow keys were pressed, the world moved in the direction of the key pressed. When the user pushed the limiting box, the world moved as if the box was actually being pushed by the user over the model. For the click & drag pan function the world became attached to the PHANToM stylus, although to avoid unpredicted movements in the vertical direction the movement was restricted to the horizontal plane. The two discrete move functions (arrow keys and limiting box) allowed for different move lengths, and the move length to be used was selected by the user.

For the zoom we used two basically different implementations: stepwise zoom by pressing a key on the keyboard, and click & drag zoom using the PHANToM stylus. The click & drag zoom was implemented based on a rubber band metaphor, i.e. two points can be dragged with respect to each other to enlarge or reduce the size of the model (as you would enlarge or reduce a rubber band). Since the interaction device (the PHANToM) only has one interaction point, the position of the first point was indicated by a point-and-click action; the user then pointed the stylus at the second point, held down the switch and dragged to resize the world.
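
To make the two drag-based tools concrete, the following minimal Python sketch shows the pan offset restricted to the horizontal plane and the rubber-band scale factor. The coordinate convention (y as the vertical axis) and all names are assumptions for illustration, not details of the actual implementation.

    import numpy as np

    def drag_pan_offset(stylus_now, stylus_at_click):
        # Click & drag pan: the world follows the stylus, but the vertical
        # component is suppressed so the motion stays in the horizontal plane.
        delta = np.asarray(stylus_now, dtype=float) - np.asarray(stylus_at_click, dtype=float)
        delta[1] = 0.0  # y assumed vertical
        return delta

    def rubber_band_scale(anchor, stylus_at_click, stylus_now):
        # Rubber-band zoom: the anchor is fixed by a point-and-click action,
        # and the model is scaled by the ratio of the current to the initial
        # distance between the anchor and the dragged point.
        d0 = np.linalg.norm(np.asarray(stylus_at_click, dtype=float) - np.asarray(anchor, dtype=float))
        d1 = np.linalg.norm(np.asarray(stylus_now, dtype=float) - np.asarray(anchor, dtype=float))
        return d1 / d0 if d0 > 0 else 1.0
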
The stepwise zoom used fixed zoom factors of 0.8 for zooming out and 1.25 for zooming in. It was implemented so that the tip of the PHANToM stylus was always in the same position relative to the world before and after the zoom.
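
The fixed-point behaviour follows directly from the world-to-device mapping: if a world point w is displayed at p = scale * w + translation, then scaling by a factor k while replacing the translation t with p0 - k * (p0 - t) leaves the point under the stylus tip p0 where it was. A minimal Python sketch under these assumptions (the names and the mapping itself are illustrative, not taken from the paper):

    import numpy as np

    ZOOM_OUT, ZOOM_IN = 0.8, 1.25  # the fixed step factors used in the test

    def stepwise_zoom(scale, translation, stylus_tip, factor):
        # One zoom step about the stylus tip: the new translation is chosen
        # so that the world point under the tip keeps the same device
        # position before and after the zoom.
        tip = np.asarray(stylus_tip, dtype=float)
        t = np.asarray(translation, dtype=float)
        return factor * scale, tip - factor * (tip - t)

    # Example: one zoom-in step about an arbitrary tip position.
    scale, translation = stepwise_zoom(1.0, np.zeros(3), np.array([0.05, 0.0, -0.02]), ZOOM_IN)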

To test the importance of knowing the contact point with respect to the world, users were asked to do the stepwise zoom in two different ways: in contact with the model, and lifting the stylus a distance above the world.

3. Test setup

The test was performed by 12 sighted users (age 23-60). To speed up the learning process, and also to compare sighted interaction with interaction using only touch and hearing, the test users first performed the test with vision and after that without visual feedback. The order in which the tools within the pan and zoom groups were tested was permuted to avoid learning effects. The pan tools were tested before the zoom tools in both the visual and the non-visual case. All test persons started the test with visual feedback, and were initially allowed to familiarize themselves with the environment and with the different move lengths. The move lengths were not tested individually; instead the users were instructed to use whatever lengths they wanted, and the usage of the different length settings was recorded.

After the initial familiarization, the pan test task was performed with all three pan tools. The pan task was to locate the bus stop in the south part of the environment and then to move the PHANToM stylus to a rocket in the north part of the environment, making use of the specified pan tool whenever panning was needed. The initial position of the world was always the same (showing the middle part of the world). The zoom test task was to first locate the bus stop and then to zoom in and point to three objects at the bus stop: the waste paper basket, the sofa and the stairs. This task was done for all three zoom tools. As a final task the user was asked to do the pan and zoom tasks once again, but this time using his or her favorite tools; the user could use any of the tested tools, and the tool use was recorded. After this, all these test tasks were performed again, but without visual feedback.

4. Results

All users were able to complete the test tasks both with and without visual feedback. The results were analyzed with analysis of variance (ANOVA), where the independent within-group variable was the navigational strategy used. Post hoc tests were done using the Tukey test, and the significance level was set to 0.05 throughout the analyses.

Table 1. Means and standard deviations (SD) of the time to complete the different pan tasks.

    Pan method                   Mean (s)   SD (s)
    Arrow keys, vision              112       82
    Limiting box, vision            111       50
    PHANToM drag, vision             92       46
    Arrow keys, no vision           447      346
    Limiting box, no vision         320      109
    PHANToM drag, no vision         372      234
    Favorite pan, vision             76       38
    Favorite pan, no vision         198       91

For the pan tools, the ANOVA for the dependent variable time to complete (see table 1) revealed significant differences (F(7,77) = 14.1, p < 0.05). As expected, the post hoc test showed significant differences between all three pan methods with vision and all three non-visual pan methods, while the difference between the favorite pan with and without vision, although showing a tendency in this direction, did not turn out to be significant (Q(8,77) = 3.2). In the non-visual condition, the use of the arrow keys and the PHANToM drag was significantly slower than the final test in which the favorite tool(s) were used (Q(8,77) = 6.5 and 4.5, respectively). A tendency (Q(8,77) = 3.2) was also seen for the comparison with the limiting box.
No such effect was seen for the visual case, which indicates a stronger training effect for the non-visual interaction. As expected, all the non-visual pan methods were significantly slower than the favorite pan with vision.

In the visual condition, the favorite tool selected for the final task by 8 users was the arrow keys, while 5 users preferred the drag (one user used both). None of the users preferred pushing the limiting box in the visual case. For the non-visual condition, the tools preferred by the users in the final pan test task were different: 6 users preferred the PHANToM drag, 4 users preferred the arrow keys and 2 users preferred pushing the limiting box.

For the discrete pan tools, the most often used move factors were 10, 20 and 60 in both conditions. The results in the non-visual condition were somewhat more pronounced (probably because the users were by then more familiar with the environment), but the same move factors were most frequently used. The actual length of a move was calculated as 0.0016 m (the proxy radius) times the move factor, so a move factor of 60 corresponds to 0.0016 × 60 = 0.096 m. In the current environment a move factor of 60 corresponded to a page up/page down type of move: if the environment was moved in the north-south direction, the move was done so that the points visible at the bottom south position would be visible at the top north position after the move.
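
The comparisons above, for both the pan and the zoom data, rest on a repeated-measures ANOVA followed by Tukey post hoc tests. A minimal sketch of such an analysis in Python with statsmodels is given below; the data are synthetic placeholders with roughly the right shape (12 subjects, one time per condition), not the study's measurements, and pairwise_tukeyhsd is an ordinary between-groups Tukey HSD that only approximates a within-subject post hoc procedure.

    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Synthetic placeholder data (NOT the study's measurements): one
    # completion time per subject and pan condition, in long format.
    rng = np.random.default_rng(0)
    conditions = ["arrows_vis", "box_vis", "drag_vis",
                  "arrows_novis", "box_novis", "drag_novis"]
    rows = [{"subject": s, "condition": c,
             "time": 100.0 + 40.0 * i + rng.normal(0.0, 25.0)}
            for s in range(12) for i, c in enumerate(conditions)]
    df = pd.DataFrame(rows)

    # Repeated-measures ANOVA with condition as the within-subject factor.
    print(AnovaRM(df, depvar="time", subject="subject", within=["condition"]).fit())

    # Pairwise Tukey HSD comparisons at alpha = 0.05.
    print(pairwise_tukeyhsd(df["time"], df["condition"], alpha=0.05))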

Table 2. Means and standard deviations (SD) of the time to complete the different zoom tasks.

    Zoom method                          Mean (s)   SD (s)
    Keyboard, no contact, vision             94       34
    Keyboard, with contact, vision           87       62
    Zoom drag, vision                       135      102
    Keyboard, no contact, no vision         196       64
    Keyboard, with contact, no vision       168       63
    Zoom drag, no vision                    215      105
    Favorite zoom, vision                    66       32
    Favorite zoom, no vision                144       91

The results for the zoom are summarized in table 2. The zoom drag turned out to be hard to use, which also shows up in the post hoc test: while both keyboard zooms with vision were significantly faster than all the zoom tools without vision, this was not the case for the zoom drag. The tendency for the favorite zoom with vision to be faster than the non-visual favorite is more pronounced than for the pan tools (Q(8,77) = 4.2), but it is not significant (the critical value at the 0.05 level is 4.4). Comparing the zoom tools with the corresponding favorites did not show any significant differences (less of a training effect). This could be because the zoom tools were tested after the pan tools, but the zoom task also appeared to benefit less from visual interaction (some of the objects the user should locate were usually hidden from view). In the non-visual condition, one very experienced PHANToM user liked the zoom drag tool (both because this tool provided some feedback about the amount of zoom and because one only needed to zoom once). Finally, looking only at the mean values, we see that after some training, and with a choice of interaction techniques, the completion times for the non-visual case are only 2-3 times longer than the times obtained with vision.

5. A related study

A related study was performed at LABEIN, Bilbao, Spain. This test was done using the GRAB [5] two-point haptic device, and tested the way visually impaired users were able to interact with audio-haptic maps and graphs (figure 2). Since the applications used in this test include pan and zoom utilities, it provides an interesting complement to the results presented above.

Figure 2. The types of graphs used in the LABEIN test.

5.1. Pan and zoom implementation

The pan function in the LABEIN test was implemented so that the position of the control finger was fixed relative to the virtual graph scene; the movement of the control finger was then used for translating the scene. While the users were panning, they received a resistance force to indicate that the workspace was being moved, and the user was unable to move outside the graph scene (a sketch of this design is given after section 5.2). This implementation was similar to the click & drag panning tested in the Lund test (although in Lund you could pan outside the model). The zoom function implemented for the LABEIN tests was designed in a similar way to the stepwise zoom function used in the Lund test: as the size was changed by pressing a keyboard key, the control finger remained in the same position relative to the world, while the other finger would end up in a different position due to the size change.

5.2. Test results

The LABEIN test was performed with visually impaired users: the map application was tested by 4 congenitally blind persons and 5 persons with low vision, while the graphs application was tested by 4 congenitally blind persons and 6 persons with low vision. The users were allowed a training session before the actual test to familiarize themselves with the applications.
For the graphs application, 7 users thought the advanced utilities such as pan and zoom were easy to use, while 3 thought they were not too easy but not too hard. For the maps application, 8 users thought these utilities were easy to use and one user thought they were not too easy but not too hard. Looking at the user feedback received, two users with low vision stated that the panning was not intuitive: they were used to working with magnifiers, and therefore expected to move the workspace (or the tool) rather than the graph. Some users found it hard to realize that a graph did not fit in the workspace (indicating the usefulness of feedback on this point). It was also seen that it could be useful to be able to select the range of the zooming and scaling, or at least to provide the user with feedback about the range.
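
A minimal Python sketch of the drag-style pan described in section 5.1, with the resistance force and the workspace clamping; the viscosity constant and the bound parameters are placeholders, not values from the LABEIN implementation.

    import numpy as np

    RESISTANCE = 20.0  # placeholder viscosity, N per (m/s)

    def pan_step(translation, finger_velocity, dt, lo, hi):
        # One haptic frame of the drag-style pan: the scene follows the
        # control finger, the translation is clamped so the scene cannot be
        # pushed out of the workspace, and a viscous force opposing the
        # motion signals that the workspace is being moved.
        v = np.asarray(finger_velocity, dtype=float)
        new_t = np.clip(np.asarray(translation, dtype=float) + v * dt, lo, hi)
        force = -RESISTANCE * v
        return new_t, force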

6. Discussion and conclusion

The results of the Lund traffic environment pan and zoom test show that all the pan function designs work quite well. In contrast with [3], the PHANToM drag now turns out to be the most popular pan tool for non-visual interaction. The usefulness of the drag function is further strengthened by the results obtained at LABEIN. Still, all three tool designs were shown to be useful (as in [3]). One problem with the arrow key panning is knowing which way the world will move; in the present implementation it is the model that moves in the direction of the arrows. Some users initially had problems with this, which indicates the importance of providing the user with feedback on what has happened. In Lund no users had problems with the direction of the click & drag type of panning, while in the LABEIN test two users had problems with the panning direction for this type of panning.

The test results show the need for at least two (maybe three) different move lengths for the discrete pan operations: one really large, like a page down/page up, and one shorter for finer adjustments. In the current test the proportions of the three most used lengths were 1:2:6. One problem was the fact that it was possible to move the model far away from the workspace. This indicates that, when the size of the world is known, an environment like this should simply not allow the model to move completely out of the workspace (already implemented in the LABEIN applications).

For the zooming operations it was clear that the design where the user is in contact with the same point of the model before and after the zoom works well. Despite this, several users complained about the lack of feedback on how much the world was zoomed, and more feedback on this point needs to be added. Both these results agree with the results from the LABEIN test. The design of the drag zoom function was not a success. Despite this, the user responses indicate that some kind of direct manipulation type zoom could be quite useful; further investigation is clearly needed on this point.

The results obtained also support the earlier study by Jansson and Ivås [6], which shows the importance of training: our results for the pan tools indicate a training effect, particularly for the non-visual interaction. The Lund test highlights the fact that user preferences depend a lot on whether or not the users have access to visual feedback. This is a reminder for anyone developing non-visual applications that the interaction needs to be specially designed for the non-visual case. Finally, this test confirms that it is quite possible to understand and interact with a large and complex haptic environment, also in the non-visual case.

7. Acknowledgements

The study was carried out with the financial assistance of the European Commission, which co-funded the IP ENABLED (FP6-2003-IST-2, No. 004778) and the NoE ENACTIVE - ENACTIVE Interfaces (FP6-2002-IST-1, No. IST-2002-002114). The authors are grateful to the EU for the support given in carrying out these activities.

References

[1] C. Magnusson, K. Rassmus-Gröhn, C. Sjöström and H. Danielsson, "Navigation and recognition in complex haptic virtual environments - reports from an extensive study with blind users", EuroHaptics 2002, Edinburgh, UK.

[2] J.C. Roberts, K. Franklin and J. Cullinane, "Virtual Haptic Exploratory Visualization of Line Graphs and Charts", The Engineering Reality of Virtual Reality 2002, Mark T. Bolas, editor, volume 4660B of Electronic Imaging Conference, page 10, IS&T/SPIE, January 2002.
[3] C. Magnusson and K. Rassmus-Gröhn, "Non-visual Zoom and Scrolling Operations in a Virtual Haptic Environment", EuroHaptics 2003, Dublin, Ireland.

[4] C. Magnusson and K. Rassmus-Gröhn, "A Virtual Traffic Environment for People with Visual Impairments", Visual Impairment Research, Vol. 7, No. 1, pp. 1-12, 2005.

[5] R. Iglesias, S. Casado, T. Gutiérrez, J.I. Barbero, C.A. Avizzano, S. Marcheschi and M. Bergamasco, "Computer graphics access for blind people through a haptic and audio virtual environment", HAVE 2004, Ottawa, Canada, 2-3 October 2004.

[6] G. Jansson and A. Ivås, "Can the efficiency of a haptic display be increased by short-time practice in exploration?", Proceedings of Haptic Human-Computer Interaction 2001.