From Room Instrumentation to Device Instrumentation: Assessing an Inertial Measurement Unit for Spatial Awareness

Alaa Azazi, Teddy Seyed, Frank Maurer
University of Calgary, Department of Computer Science
2500 University Dr. NW
{alaa.azazi, teddy.seyed, frank.maurer}@ucalgary.ca

ITS '14, November 16-19, 2014, Dresden, Germany.

ABSTRACT
Current implementations of spatially-aware multi-surface environments rely heavily on instrumenting the room with tracking technologies such as the Microsoft Kinect or Vicon cameras. Prior research, however, has shown that real-world deployments of such approaches raise feasibility issues and leave users uncomfortable with the technology in the environment. In this work, we address these issues by examining the use of a dedicated inertial measurement unit (IMU) in a multi-surface environment (MSE). We performed a limited user study and present results suggesting that the measurements provided by an IMU do not offer value over sensor fusion techniques for spatially-aware MSEs.

Author Keywords
Inertial tracking systems; inertial measurement unit; indoor navigation systems; gestures and interactions; HCI; multi-surface applications; API design.

ACM Classification Keywords
H.5.2 [Information Interfaces and Presentation]: User Interfaces - Input devices and strategies.

INTRODUCTION
Multi-surface environments (MSEs) integrate a variety of devices - smartphones, tablets, digital tabletops, and large wall displays - into a single interactive environment [6]. These environments allow information and interaction to be spread across and between devices, and enable users to take advantage of the distinctive affordances of each device. For example, information can be shared among the devices in the environment, with a digital tabletop serving as a public sharing space for the information while a tablet is used for its private components. Spatially-aware MSEs use the spatial layout of the environment to support cross-device spatial interactions such as flicking [3] or picking and dropping [5]. In the previous example, spatial awareness allows a user to perform a flick gesture with the tablet towards the digital tabletop to transfer information. To support such interactions, the environment needs knowledge such as the location and orientation of the devices within it.

Building spatially-aware MSEs and interactions introduces a number of challenges from a systems engineering perspective. A key challenge, and the one that motivates the work presented here, is the choice of tracking sensors that provide spatial awareness in MSEs. This choice affects the cost of instrumenting the room and the set-up effort required, especially with tracking technologies such as Vicon cameras (www.vicon.com) or the Microsoft Kinect (www.microsoft.com/en-us/kinectforwindows).

One potential solution is to integrate attachable, high-precision inertial measurement units (IMUs) into the multi-surface environment. An IMU attached to a mobile device becomes responsible for calculating both the position and the orientation of that device in the MSE. In the work presented here, we evaluated an IMU to determine its accuracy for location and orientation tracking within spatially-aware MSEs. Specifically, we evaluated the applicability and usability of the SmartCube IMU, developed at the Alberta Center for Advanced MNT Products (ACAMP, www.acamp.ca). Our work answered two major questions: how accurate are the position and orientation measurements returned by the SmartCube, and is it a feasible alternative to sensor fusion techniques?

The remainder of the paper is organized as follows: the next section reviews established research on tracking within spatially-aware MSEs. The approach, design and setup of the experiment are introduced next, followed by the results of the experiment. Finally, we discuss the implications of these results and possible future work.

RELATED WORK
The research space of multi-surface environments has become well defined over the past few years, with a significant amount of research conducted from both the human-computer interaction and the systems engineering perspectives. Multi-surface environments can be divided into two categories: non-spatially aware and spatially aware environments.

Comparing Environments
Non-spatially aware MSEs do not have a model of the spatial relationships between the devices and the users in an environment. Consequently, selecting a device to interact with is done either explicitly, by choosing a device from a list, or implicitly, by always sending to a single device. Spatially aware MSEs, in contrast, maintain a model of the spatial relationships of the devices and users in the environment. This creates opportunities for more dynamic inter-device interactions based on properties such as proximity or orientation. Spatial awareness in multi-surface environments is often achieved by fusing different sensor data either at an environmental level or at the level of an individual user and their device. Comparing the two approaches, non-spatially aware MSEs are typically less expensive to implement than their spatially aware counterparts, since they do not require tracking hardware to identify the spatial layout of the environment. However, they provide a less engaging user experience, as interaction flows in a manner that is less natural for users.

Building Spatially Aware Environments
Building a spatially aware environment requires integrating a number of components, such as the tracking hardware and the software running on the different surfaces in the environment.

Instrumenting an Environment
Current implementations of spatial MSEs rely heavily on instrumenting the environment using sensor fusion techniques, where sensors track users or marked objects within the environment. An example of an API for building such environments is the Multi-Surface Environment API (MSE-API) developed by Burns et al. [1], which fuses lower-end tracking systems (such as the Microsoft Kinect) with device-embedded sensors. The Proximity Toolkit by Marquardt et al. [4] is another toolkit for building spatially aware environments, using the higher-precision but marker-based Vicon motion tracking technology. A significant drawback of these toolkits for system engineers, however, is that sensor fusion approaches require continually instrumenting the environment and calibrating applications, making them difficult to scale to larger areas. Another challenge, from a usability perspective, is that users feel uncomfortable and unfamiliar with technologies that track their movements [7].

Instrumenting for Users and their Devices
Instrumenting for users and their devices is an alternative, but largely unexplored, implementation technique for building spatially-aware MSEs. It relies on equipping the devices themselves with dedicated, specialized sensors to create a spatially-aware environment. A recent example of this approach is Project Tango by Google (www.google.com/atap/projecttango), where a mobile device is equipped with customized sensors and software that track the motion of the device in 3D space. This custom design provides real-time position and orientation information for the device, creating a 3D model of the environment.
Using purely dedicated and specialized sensors on individual devices to replace sensor fusion techniques is an approach that has not been deeply evaluated for multi-surface environments in the research literature. This motivates the work presented in this paper: evaluating the applicability of purely dedicated device sensors for providing spatial awareness in multi-surface environments.

THE SMARTCUBE IMU
In collaboration with the Alberta Center for Advanced MNT Products (ACAMP), we chose to evaluate the SmartCube IMU (Figure 1) for providing spatial awareness in MSEs.

Figure 1: The experimental SmartCube inertial measurement unit.

The SmartCube is a 2 cm³ IMU module that combines IMU functionality with pressure, positioning and temperature sensing. The cube uses a modular design in which the different components are stacked vertically as layers; each layer is segregated by function and developed individually. The IMU layer provides access to three independent acceleration channels and three angular rate channels through the embedded 3D digital accelerometer and gyroscope.
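Whether this computation happens on the cube or on the host device, turning these six raw channels into a pose estimate typically means integrating the angular rates once to obtain orientation, and rotating the accelerations into the room frame, removing gravity, and integrating twice to obtain a position relative to the calibration point. The minimal C# sketch below illustrates that dead-reckoning step under these simplifying assumptions; the ImuSample and PoseEstimator names are hypothetical and are not the SmartCube's actual interface.

// Illustrative only: a naive dead-reckoning step from raw 6-axis IMU samples.
// Names (ImuSample, PoseEstimator) are hypothetical, not the SmartCube API.
using System.Numerics;

public struct ImuSample
{
    public Vector3 AccelerationMs2;   // 3 acceleration channels, m/s^2, device frame
    public Vector3 AngularRateRadS;   // 3 angular rate channels, rad/s, device frame
    public float DeltaTimeS;          // time since the previous sample, seconds
}

public class PoseEstimator
{
    public Quaternion Orientation = Quaternion.Identity; // device-to-room rotation
    public Vector3 Velocity = Vector3.Zero;
    public Vector3 Position = Vector3.Zero;              // relative to the calibration point

    // At rest the accelerometer measures the reaction to gravity (+g along room "up").
    static readonly Vector3 GravityUp = new Vector3(0, 0, 9.81f);

    public void Update(ImuSample s)
    {
        // Integrate the angular rates once to update orientation.
        Vector3 dTheta = s.AngularRateRadS * s.DeltaTimeS;
        float angle = dTheta.Length();
        if (angle > 1e-6f)
        {
            Quaternion dq = Quaternion.CreateFromAxisAngle(Vector3.Normalize(dTheta), angle);
            Orientation = Quaternion.Normalize(Orientation * dq);
        }

        // Rotate acceleration into the room frame, remove gravity,
        // then integrate twice to update velocity and position.
        Vector3 linearAccel = Vector3.Transform(s.AccelerationMs2, Orientation) - GravityUp;
        Velocity += linearAccel * s.DeltaTimeS;
        Position += Velocity * s.DeltaTimeS;
        // Any bias or noise in linearAccel is integrated twice, so the position error
        // grows quickly over time -- consistent with the drift observed in our study.
    }
}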

Figure 2: A study participant performing the study tasks. Figure 2(a) provides an overview of the user study setup, showing a participant holding an IMU connected to a Microsoft Surface tablet, and the wall-display surface. Figure 2(b) illustrates the different objectives of the study, with the user starting at the calibration point and then walking to marked points.

USER STUDY
The primary goal of our initial user study was to evaluate a dedicated inertial device tracking approach (specifically the SmartCube) for spatially aware MSEs. The tasks of our user study are based on prior research by Voida et al., which focused on moving content between devices [8], a common task in MSEs [6, 9]. Specifically, we looked at the accuracy of the orientation and position data from the SmartCube and its impact on tracking within multi-surface environments.

Apparatus
The study was conducted using ACAMP's SmartCube, serving as the dedicated tracking device. A specialized C# application was written to display a set of targets on a large wall display connected to a PC. This application allowed us to simulate sending content from a tablet to the shown targets. A Microsoft Surface tablet application was also created to communicate data from the SmartCube. Data was recorded from the tablet application to capture detailed spatial information - position, tilt and orientation - for each of the performed tasks. To treat distance consistently, predetermined locations were marked on the floor and participants were instructed to move between these locations for certain tasks.

Participants
Ten unpaid volunteers participated in the study. Participants were recruited by word of mouth. All participants had a background in computer science, and no participants were excluded based on experience with tablets or motion tracking systems.

Procedure
The user study addressed a content-sending task, which allows the accuracy of the SmartCube to be evaluated in spatially-dependent interactions between devices in the environment. Figure 2(a) illustrates the primary scenario for this user study. At the start of each experiment, an application is started on the large wall display, a mobile application is started on the tablet, and the user is asked to stand at a marked calibration point in the room. The experiment accomplishes four objectives.

In the first objective, the user is instructed to walk to a number of different marked points in the room, as shown in Figure 2(b), with the application recording the position measurements at each point. The goal of this objective is to evaluate the accuracy of the position measurements returned by the SmartCube, independent of all other interactions.

In the second and third objectives, the user is instructed to send content to a number of visual targets shown on the display, one target at a time, by rotating the device in 3D space. The application records the success or failure of each attempt. The goal of these objectives is to evaluate the accuracy of targeting based on the orientation measurements returned by the SmartCube, independent of the user's position. We provided two conditions, one with visual feedback and one without, to examine the issues with error and how tolerant users could be of position and orientation inaccuracy.
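For these targeting objectives, and for the combined fourth objective described below, the system judges where the device is pointing by computing the virtual intersection of a beam coming from the tablet with the wall display. The following minimal sketch shows one way such an intersection test can be computed, assuming the display lies in a known plane in room coordinates and the device's forward axis is known; the types, names and tolerance parameter are our own illustration, not the study application's actual code.

// Illustrative sketch: deciding whether the tablet points at a target on the wall
// display by intersecting a beam from the device with the display plane.
// All names and the plane/target/tolerance parameters are hypothetical.
using System;
using System.Numerics;

public static class BeamTargeting
{
    // Returns the hit point on the display plane in room coordinates,
    // or null if the device is pointing away from the display.
    public static Vector3? IntersectDisplay(
        Vector3 devicePosition,       // device position from the tracker (room frame)
        Quaternion deviceOrientation, // device orientation from the tracker
        Vector3 displayOrigin,        // any point on the display plane
        Vector3 displayNormal)        // unit normal of the display plane
    {
        // Assume the beam leaves the device along its local -Z ("forward") axis.
        Vector3 beamDir = Vector3.Normalize(
            Vector3.Transform(new Vector3(0, 0, -1), deviceOrientation));

        float denom = Vector3.Dot(beamDir, displayNormal);
        if (Math.Abs(denom) < 1e-6f) return null;   // beam parallel to the display

        float t = Vector3.Dot(displayOrigin - devicePosition, displayNormal) / denom;
        if (t < 0) return null;                     // display is behind the user

        return devicePosition + beamDir * t;        // intersection point on the plane
    }

    // A targeting attempt succeeds if the hit point lands within a tolerance of the target.
    public static bool HitsTarget(Vector3 hitPoint, Vector3 targetCentre, float toleranceM)
        => Vector3.Distance(hitPoint, targetCentre) <= toleranceM;
}

A deviation measure like the one reported in Figures 3 and 4 could then be taken as the distance between this hit point and the target's centre on the display.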

In the fourth objective, the user is instructed to walk to a random point in the room each time a new target is shown on the display. The user is then instructed to send content to the shown target, with the application recording the success or failure of each attempt. The goal of this objective is to evaluate the accuracy of the combined measurements of the SmartCube. We used the sensor information to compute the virtual intersection of a beam coming from the tablet with the wall display.

RESULTS
From our 10 participants we collected a total of 480 readings (10 participants × 12 commands × 4 objectives), classified according to the objectives discussed previously. Sending content to the display from a fixed location, without visual feedback, showed a success rate of 7%, deviating 21.4 cm from the target on average (Figure 3). Performing the same task with visual feedback of position on the large wall display had a higher success rate of 21%, with target deviation averaging 20.6 cm (Figure 4). Tasks that depend on the location measurements returned by the SmartCube showed negative results and proved to be unusable, with a success rate of 0% and deviations from the target of 1 to 3 meters.

In general, the early feedback from the study participants indicated that attaching an external module to the tablet was impractical and reduced the tablet's mobility. The participants thought that visual feedback was crucial for understanding the system's perspective of the room. They commented, however, that the measurements returned by the SmartCube, as shown through the visual feedback, were inconsistent, and they were generally uncomfortable with having to face the wrong direction in order to send to the target on the large wall display.

Figure 3: Degree of error in unsuccessful attempts without visual feedback (minimum, maximum and average deviation from target, in cm, per study participant).

Figure 4: Degree of error in unsuccessful attempts with visual feedback (minimum, maximum and average deviation from target, in cm, per study participant).

DISCUSSION
An interesting observation from the study and the participants' comments was the use of visual feedback to offset sensor inaccuracy. This may suggest that providing visual feedback for multi-surface interactions is valuable and allows users to compensate for inaccurate tracking technologies or for multi-surface environments that require constant calibration. Overall, our results, although initial, resurface the discussion of purely sensor-based versus sensor-fusion-based approaches to spatial awareness in multi-surface environments. Both approaches still require environment setup, at both the infrastructure and the application level. Comparatively, however, the setup time required for the purely sensor-based approach is significantly less than that of sensor-fusion-based techniques, and initial comments from the participants indicate that prior issues with practical real-world feasibility and user comfort are addressed [7]. Looking forward, self-contained integrated sensor approaches that are more accurate (e.g. Google's Tango) may also provide a more feasible alternative to inertial tracking and room instrumentation.
FUTURE WORK
Our future work following this initial study is multi-faceted. One potential research direction is to exploit the modular approach of the SmartCube to add further sensors - such as a compass and GPS - alongside the gyroscope and accelerometer in a mobile device, in order to provide potentially more accurate spatial information. Secondly, we intend to run a full comparative study of spatial awareness in multi-surface environments using both the SmartCube and a sensor-fusion-based approach. Finally, we also intend to compare different types of purely sensor-based approaches, such as Google's Tango.
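As a rough illustration of the first direction, the sketch below shows one common way a compass could be combined with the gyroscope: a complementary filter in which the gyroscope provides smooth short-term heading changes and the compass corrects long-term drift. This is our own hypothetical sketch of the idea, not part of the SmartCube firmware or the study software, and the blending factor is an assumed value.

// Illustrative complementary filter: fuse gyro-integrated yaw with a compass heading.
// Purely a sketch of the future-work idea; names and the blending factor are hypothetical.
public class HeadingFilter
{
    private double yawDeg;           // current heading estimate, degrees
    private readonly double alpha;   // weight given to the gyro path (e.g. 0.98)

    public HeadingFilter(double initialCompassHeadingDeg, double alpha = 0.98)
    {
        yawDeg = initialCompassHeadingDeg;
        this.alpha = alpha;
    }

    public double Update(double gyroYawRateDegS, double compassHeadingDeg, double dtS)
    {
        // Short term: trust the gyroscope (smooth, but drifts over time).
        double gyroPrediction = yawDeg + gyroYawRateDegS * dtS;

        // Long term: pull the estimate toward the compass (noisy, but drift-free).
        // Blend along the shortest angular path so 359° -> 1° does not swing the wrong way.
        double error = NormalizeAngle(compassHeadingDeg - gyroPrediction);
        yawDeg = NormalizeAngle(gyroPrediction + (1.0 - alpha) * error);
        return yawDeg;
    }

    private static double NormalizeAngle(double deg)
    {
        deg %= 360.0;
        if (deg > 180.0) deg -= 360.0;
        if (deg < -180.0) deg += 360.0;
        return deg;
    }
}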

CONCLUSION
In this paper, we explored the use of a purely sensor-based approach as an alternative to the typical room-instrumentation-based approaches for providing spatial awareness in MSEs. This was motivated by prior research indicating the challenges of room-instrumentation-based approaches [7]. We approached the problem by collaborating with ACAMP and using their SmartCube IMU as the tracking device. Our results indicate that additional work is needed for this technology to become a feasible alternative to room instrumentation techniques; however, we hope this initial work will trigger greater interest in purely sensor-based techniques in the multi-surface research community.

REFERENCES
1. Burns, C., Seyed, T., Hellmann, T., Sousa, M. C., and Maurer, F. A Usable API for Multi-Surface Systems. Proc. BLEND 2013.
2. Chung, H., Ojeda, L., and Borenstein, J. Sensor Fusion for Mobile Robot Dead-Reckoning with a Precision-Calibrated Fiber Optic Gyroscope. Proc. ICRA 2001, Vol. 4 (2001), 3588-3593.
3. Dachselt, R., and Buchholz, R. Natural Throw and Tilt Interaction between Mobile Phones and Distant Displays. Ext. Abstracts CHI 2009, ACM Press (2009).
4. Marquardt, N., Diaz-Marino, R., Boring, S., and Greenberg, S. The Proximity Toolkit: Prototyping Proxemic Interactions in Ubiquitous Computing Ecologies. Proc. UIST 2011, ACM Press (2011), 315-326.
5. Rekimoto, J. Pick-and-Drop: A Direct Manipulation Technique for Multiple Computer Environments. Proc. UIST 1997, ACM Press (1997), 31-39.
6. Seyed, T., Burns, C., Sousa, M. C., Maurer, F., and Tang, A. Eliciting Usable Gestures for Multi-Display Environments. Proc. ITS 2012, ACM Press (2012), 41-50.
7. Seyed, T., Sousa, M. C., Maurer, F., and Tang, A. SkyHunter: A Multi-Surface Environment for Supporting Oil and Gas Exploration. Proc. ITS 2013.
8. Voida, S., Podlaseck, M., Kjeldsen, R., and Pinhanez, C. A Study on the Manipulation of 2D Objects in a Projector/Camera-Based Augmented Reality Environment. Proc. CHI 2005, ACM Press (2005), 611-620.
9. Yatani, K., Tamura, K., Hiroki, K., Sugimoto, M., and Hashizume, H. Toss-It: Intuitive Information Transfer Techniques for Mobile Devices Using Toss and Swing Actions. IEICE Trans. Inf. Syst. 89, 1 (2006), 150-157.