ExTouch: Spatially-aware embodied manipulation of actuated objects mediated by augmented reality

The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters.

Citation: Shunichi Kasahara, Ryuma Niiyama, Valentin Heun, and Hiroshi Ishii. 2013. exTouch: spatially-aware embodied manipulation of actuated objects mediated by augmented reality. In Proceedings of the 7th International Conference on Tangible, Embedded and Embodied Interaction (TEI '13). ACM, New York, NY, USA, 223-228.
As Published: http://dx.doi.org/10.1145/2460625.2460661
Publisher: Association for Computing Machinery (ACM)
Version: Author's final manuscript
Accessed: Sun Nov 18 16:34:11 EST 2018
Citable Link: http://hdl.handle.net/1721.1/79861
Terms of Use: Creative Commons Attribution-Noncommercial-Share Alike 3.0
Detailed Terms: http://creativecommons.org/licenses/by-nc-sa/3.0/

extouch: Spatially-Aware Embodied Manipulation of Actuated Objects Mediated by Augmented Reality

Shunichi Kasahara (Sony Corporation, Shunichi.Kasahara@jp.sony.com), Ryuma Niiyama (ryuma@media.mit.edu), Valentin Heun (heun@media.mit.edu), Hiroshi Ishii (ishii@media.mit.edu)

ABSTRACT
As domestic robots and smart appliances become increasingly common, they require a simple, universal interface to control their motion. Such an interface must support simple selection of a connected device, highlight its capabilities, and allow intuitive manipulation. We propose "extouch", an embodied, spatially-aware approach to touching and controlling devices through an augmented-reality-mediated mobile interface. The extouch system extends the user's touchscreen interactions into the real world by enabling spatial control over the actuated object. When users touch a device shown in live video on the screen, they can change its position and orientation through multi-touch gestures or by physically moving the screen in relation to the controlled object. We demonstrate that the system can be used for applications such as an omnidirectional vehicle, a drone, and moving furniture for a reconfigurable room.

Author Keywords
Tangible interfaces, direct manipulation, embodied interaction, augmented reality, spatially aware interfaces

ACM Classification Keywords
H.5.2 [Information interfaces and presentation]: User Interfaces: Input devices and strategies; H.5.1 [Multimedia Information Systems]: Artificial, augmented, and virtual realities; I.3.6 [Methodology and techniques]: Interaction techniques.

General Terms
Design, Human Factors

INTRODUCTION
Smart appliances such as networked TVs, motorized window shades, and digitally controlled lights are increasingly common in domestic settings, yet users have to adapt to their complicated operation. Moreover, the growth of robotic technology is bringing new kinds of products into the living space, such as pan/tilt security cameras, robotic vacuum cleaners, and robotic lawn mowers. These products require not only simple on/off control but also complicated 3D motion control in space. These situations have led to the idea of a universal remote controller.

Figure 1. extouch: the user can simply touch and drag the target device through live video on the screen.

A universal controller running on a portable computer (often a smartphone or tablet) has benefits over a conventional switch/button interface: it offers unlimited interaction design through software, control of unreachable objects, and secure remote operation. On the other hand, existing universal controllers require undesirable effort to select a target device from among multiple devices and are not well suited for advanced motion control. Recent approaches for direct selection of the target device are based on hand gestures and laser pointers [1]. Another approach for device selection uses a touch screen with a camera-captured image [2][4].
To provide assistive guidance for a target device with complicated actions and functions, overlaying images based on an augmented reality approach is a standard technique [5]. In addition, spatial interaction with virtual objects overlaid on live video is a highly discussed issue [6]. However, developing intuitive interfaces for three-dimensional motion control of physical objects in real life is a challenge that has not yet been solved elegantly from the user's point of view.

To address these issues, we propose the "extouch" system (Fig. 1). Users can control the physical motion of a target device by simply touching and dragging it through first-person live video on the screen. The system is based on a portable computer with a built-in camera and does not require external equipment such as a large-scale motion capture system or ceiling cameras. We demonstrate that the system is applicable to smart devices such as an omnidirectional vehicle, a drone, and moving furniture.

RELATED WORK
There have been several studies of intuitive interfaces for controlling objects in the living space. Laser Gesture [1] is a system for operating mobile robots by using a laser pointer and a ceiling camera that detects the pointer. It is convenient but not applicable to complex motion, and it also faces laser safety issues. Another promising approach uses camera-captured images on a touch screen. The idea of controlling devices through a video image was introduced by Tani et al. [2] in the Object-Oriented Video project, in which users manipulate digital functions through the captured video. CRISTAL [4] is a system for manipulating home electronics in a room via a tabletop multi-touch interface that uses a perspective image of the room. For physical motion control, TouchMe [3] is a system for tele-operating a mobile robot through a touch panel. These approaches provide real-time visual feedback and intuitive touch interaction. However, they require the user to reason about the spatial relation between the miniature world on the monitor and the real world, much as when reading a scale map.

SPATIALLY-AWARE INTERACTION
We present the concept of "spatially-aware interaction" to transfer the benefits of direct manipulation to the touch screen interface. Spatially-aware interaction satisfies the following features: 1) user-egocentric manipulation, and 2) a spatially consistent relation between the virtual world and the real world.

Figure 2. Spatial projection of the user touch input into the target motion.

User-egocentric manipulation means that the action of the target object is always described in the user's coordinate system, independent of the object's posture. In contrast, a conventional gamepad/joystick sends commands based on the object's coordinate system. Spatial consistency means that the camera-captured image presented on the control interface always corresponds to the real world in terms of position, orientation, and scale. For example, a tourist reading a scale map while walking has to mentally convert direction, standpoint, and scale. In spatially-aware interaction, the virtual world on the screen emulates the user's egocentric view in real time.
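To make the contrast with an object-centric controller concrete, the following minimal sketch (illustrative only, not the implementation described here) rotates a drag direction from the user's camera frame into the object's body frame, assuming the object's rotation relative to the camera is available from the AR pose matrix; the function and variable names are illustrative.

```python
import numpy as np

def user_to_object_command(drag_dir_user, R_object_in_camera):
    """Rotate a drag direction given in the user's (camera) frame into the
    object's body frame, so that "drag right on screen" moves the object
    to the user's right no matter how the object itself is oriented.

    drag_dir_user      -- 3-vector: desired motion in the camera frame
    R_object_in_camera -- 3x3 rotation of the object w.r.t. the camera,
                          taken from the AR pose matrix
    """
    # A conventional gamepad would treat drag_dir_user as a body-frame
    # command directly; spatially-aware interaction rotates it first.
    return R_object_in_camera.T @ drag_dir_user

# The same user-frame drag yields a different body-frame command
# depending on the object's current orientation.
yaw = np.pi / 2  # object rotated 90 degrees about the vertical axis
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
print(user_to_object_command(np.array([1.0, 0.0, 0.0]), R))
```

With a gamepad the drag vector would be interpreted directly in the object's body frame; here the same on-screen drag produces a different body-frame command depending on how the object happens to be oriented, so that "drag right" always moves the object to the user's right.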

EXTOUCH
User interaction
The extouch system provides motion control of actuated objects based on spatially-aware interaction mediated by augmented reality. A user holds a mobile device with a touch screen and a back-mounted camera and points it at the target object. Once the object comes into the range of the camera, superimposed virtual graphics are rendered on the captured image using the AR approach. Users control the target object by dragging the superimposed graphics with multi-touch gestures on the screen or by physically moving the device. The physical object then moves to match the location of the on-screen graphics, and users can see the result through the live video on the screen.

Table 1. extouch interaction for motion control: users can change the position and rotation through multi-touch gestures or by physically moving the mobile device.

Implementation overview
The extouch system employs augmented reality recognition technology to obtain a three-dimensional pose matrix (a homogeneous transformation matrix) that describes the position and orientation of the target object in relation to the screen. The system spatially projects the user's input into the target's motion by using the egocentric live image from the back-mounted camera and the touch screen (Fig. 2). The system performs visual servo control to bring the physical object closer to the desired position, using the difference between the recognized object position and the dragged virtual graphics position.

Motion Control Interaction
The fundamental metaphor used in extouch for remote motion control of physical objects is touch and drag. Users can change the position and rotation of the target object through two types of touch-and-drag interaction; for convenience, we call these gestural grammars screen drag and physical drag (Table 1). When a user touches the screen, a control plane corresponding to the object's movements is determined based on the touch points. Once the control plane is determined, the touch points are projected into the recognized coordinate system based on the pose matrix of the control plane and the camera parameters. Finger motion on the touch screen and physical movement of the device while touching both result in motion of the projected point on the control plane; therefore, screen drag and physical drag can be used simultaneously.

Screen Drag
Users can gesture with one or two fingers on the touch screen. For translational motion, a one-finger drag controls planar motion, conveying the impression that the finger is applying a force that moves the object. For vertical motion, a two-finger pinch in/out gesture is used, based on the metaphor that an object looks larger when it is closer. For rotation, a two-finger rotation gesture (a well-known multi-touch gesture) controls the roll axis, and a two-finger drag gesture controls the pitch and yaw axes. This design resolves the ambiguity between translational and rotational motion by assigning them one-finger and two-finger gestures, respectively.

Physical Drag
Physical movement of the device while touching the screen also moves the projected point on the control plane, just as screen drag does. This allows the user to control translational and rotational motion by physically moving the mobile device while touching the object on the screen. The interaction provides the metaphor that the finger fixes the spatial relationship between the object and the finger.

Figure 3. Virtual graphics show the virtual destination subject to motion constraints. Left: planar translational motion; right: one-axis rotation.

Figure 4. Visual servoing process: the motion control signal for the actuated object is generated from the difference between the recognized object position and the dragged virtual graphics position.
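The projection itself can be sketched as a ray-plane intersection under a pinhole camera model. The sketch below is illustrative rather than the actual implementation, and the names K and T_plane_in_camera are assumptions; the pose matrix is the one obtained from AR recognition.

```python
import numpy as np

def touch_to_control_plane(u, v, K, T_plane_in_camera):
    """Project the touch point (u, v), in pixels, onto the control plane.

    K                 -- 3x3 intrinsics of the back-mounted camera
    T_plane_in_camera -- 4x4 pose of the control plane in the camera frame
                         (the plane is z = 0 in its own coordinate system)
    Returns the intersection point in the control plane's coordinates.
    """
    # Back-project the pixel into a viewing ray in the camera frame.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])

    # Express the camera center and the ray in the plane's frame.
    R, t = T_plane_in_camera[:3, :3], T_plane_in_camera[:3, 3]
    origin_plane = -R.T @ t      # camera center in plane coordinates
    dir_plane = R.T @ ray_cam    # ray direction in plane coordinates

    # Intersect with the plane z = 0: origin_z + s * dir_z = 0.
    s = -origin_plane[2] / dir_plane[2]
    return origin_plane + s * dir_plane
```

A screen drag changes (u, v) while the pose stays fixed; a physical drag changes T_plane_in_camera while the finger stays put. Both move the same projected point on the control plane, which is why the two gestures compose.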
Graphic motion affordance
When objects are recognized in the camera image, prompting graphics are overlaid on the camera-captured image in real time. Moreover, once a specific control plane is determined, assistive graphics are rendered to indicate the motion constraints of the object (Fig. 3). These guide graphics also show the virtual destination of the object in real time, which lets users confirm where the target object is going to move in the real world. In our current implementation, when the user releases the finger, the virtual destination moves back toward the current position and orientation of the object with an animated transition.

Visual Servoing
To achieve the position and orientation designated by the user, the system performs visual servo control (Fig. 4). The pose difference is calculated from the current pose matrix of the object and the target pose matrix, mapped into a control command, and sent to the selected object.
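As a rough illustration of this loop (the controller structure and gain below are assumptions, not details taken from the implementation), a single proportional visual-servo step might look as follows, given 4x4 pose matrices from the AR tracker.

```python
import numpy as np

def servo_step(T_current, T_target, gain=0.5):
    """One proportional step of the visual-servo loop in Fig. 4.

    T_current -- 4x4 pose of the object as currently recognized
    T_target  -- 4x4 pose of the dragged virtual destination
    Returns (linear velocity in the object frame, yaw rate).
    """
    R, t = T_current[:3, :3], T_current[:3, 3]

    # Position error expressed in the object's own frame.
    error_pos = R.T @ (T_target[:3, 3] - t)

    # Heading error about the vertical axis (the prototypes described
    # below each rotate about a single axis).
    R_err = R.T @ T_target[:3, :3]
    error_yaw = np.arctan2(R_err[1, 0], R_err[0, 0])

    # The command shrinks as the object approaches the dragged destination.
    return gain * error_pos, gain * error_yaw
```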

APPLICATION
We implemented the extouch system with several different actuated objects to understand its real-world performance. To explore the effectiveness of extouch's interaction design, we chose devices with various degrees of freedom of movement: an omnidirectional vehicle, a drone, and moving furniture.

We employ image-based AR recognition technology (Vuforia [7]), which recognizes a natural image as a planar target object using registered dictionary data. Our prototype system is composed of a mobile application running on an Apple iPad 2 and actuated objects connected via a wireless network.

Omnidirectional robot
Wheeled mobile robots are the most popular form of robot for office and home use. We built an omnidirectional mobile robot to simulate practical robots such as cleaning robots and telepresence robots. The robot has two degrees of freedom (DoF) for planar translation and 1 DoF for rotation about a vertical axis (Fig. 1).

Flying drone control
Control of unmanned aerial vehicles (UAVs), commonly known as drones, is an emerging field. The hovering flight of a quadcopter drone is suited to carrying instruments, for instance for aerial video and photo shooting. Our mini quadcopter has 3 DoF for translation and 1 DoF for yaw rotation (Fig. 5a).

Transformable furniture control
Transformable furniture is a novel approach to utilizing robotics technology in future homes. We used a motorized 1-DoF wall/rack for the test (Fig. 5b).

Figure 5. Application examples: flying drone (a), transformable furniture (b).
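The command protocol between the mobile application and the devices is not detailed here. Purely as a hypothetical illustration of how one generic servo command could serve devices with such different capabilities, each device could expose a mask over a six-component velocity command; the names and masks below are assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical capability masks over a generic command
# [vx, vy, vz, roll_rate, pitch_rate, yaw_rate].
DEVICE_DOF = {
    "omni_robot":  np.array([1, 1, 0, 0, 0, 1]),  # planar translation + yaw
    "quadcopter":  np.array([1, 1, 1, 0, 0, 1]),  # 3-DoF translation + yaw
    "moving_wall": np.array([1, 0, 0, 0, 0, 0]),  # single linear axis
}

def command_for(device, full_command):
    """Keep only the components the device can actuate; zero the rest."""
    return DEVICE_DOF[device] * np.asarray(full_command, dtype=float)

print(command_for("quadcopter", [0.2, 0.0, 0.1, 0.0, 0.0, 0.3]))
```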
DISCUSSION
The drag metaphor is used to command an absolute target position through the touch screen. Although the physical object cannot pursue the target as quickly as the user's gesture, owing to the limitations of motion control, the superimposed virtual graphic greatly helps to buffer this gap. In our system, the control signal is determined by visual servo control so that the target object approaches an absolute target position from the user's point of view; thus the user does not need to reason about position differences or control commands. One advantage of the extouch gesture design is that screen drag and physical drag are not exclusive and can be performed simultaneously, which enables continuous interaction despite the limited screen size of mobile devices.

The prototype application suggests the need for further evaluation of the user's cognitive load regarding spatial perception. In particular, manipulation from arbitrary viewing angles with the proposed system should be assessed against a fixed-camera system. Validating the usability of the system at first sight with inexperienced users is another open issue.

CONCLUSION
We introduced "extouch", a spatially-aware approach to controlling devices through an augmented-reality-mediated mobile interface. The system spatially projects the user's input into the target's motion by using the egocentric camera-captured image and multi-touch gestures. Users can perform both screen drags (moving a finger on the screen) and physical drags (fixing a finger and moving the screen physically) simultaneously; this allows embodied, continuous interaction that is not limited by the screen size. Preliminary exploration of the control of an omnidirectional vehicle, a drone, and motorized furniture shows the potential effectiveness for various types of actuated objects. We achieved a spatial extension of touch screen interaction through a spatially-aware approach based on augmented reality. We envision that the proposed spatially-aware interaction will further enhance human physical ability through the spatial extension of user interaction.

ACKNOWLEDGMENTS
We thank Kent Larson and Hasier Larrea for their help with the moving wall system and for discussions about human-home interaction.

REFERENCES
1. Ishii, K., Zhao, S., Inami, M., Igarashi, T., and Imai, M. Designing Laser Gesture Interface for Robot Control. In Proc. INTERACT 2009, Part 2, pp. 479-492.
2. Tani, M., Yamaashi, K., Tanikoshi, K., Futakawa, M., and Tanifuji, S. Object-oriented video: interaction with real-world objects through live video. In Proc. CHI 1992, pp. 593-598.
3. Hashimoto, S., Ishida, A., Inami, M., and Igarashi, T. TouchMe: An Augmented Reality Based Remote Robot Manipulation. In Proc. ICAT 2011 (The 21st International Conference on Artificial Reality and Telexistence).
4. Seifried, T., Haller, M., Scott, S., Perteneder, F., Rendl, C., Sakamoto, D., and Inami, M. CRISTAL: Design and Implementation of a Remote Control System Based on a Multi-touch Display. In Proc. ITS 2009, pp. 37-44.
5. van Krevelen, D. W. F., and Poelman, R. A Survey of Augmented Reality Technologies, Applications and Limitations. The International Journal of Virtual Reality, 2010, Vol. 9, No. 2, pp. 1-20.
6. Bimber, O., and Raskar, R. Spatial Augmented Reality: Merging Real and Virtual Worlds. A K Peters, Ltd., Wellesley, Massachusetts.
7. Qualcomm, Vuforia: http://www.qualcomm.com/solutions/augmented-reality