A Mixed Reality Approach to Human-Robot Interaction

James Young
University of Calgary
2500 University Drive NW
Calgary, AB T2N 1N4
jyoung@cpsc.ucalgary.ca

Ehud Sharlin
University of Calgary
2500 University Drive NW
Calgary, AB T2N 1N4
ehud@cpsc.ucalgary.ca

Abstract
This paper offers a mixed reality approach to human-robot interaction (HRI) which exploits the fact that robots are both digital and physical entities. We use mixed reality (MR) to integrate digital interaction into the physical environment, allowing users to interact with robots' ideas and thoughts directly within the shared physical interaction space. We also present a taxonomy which we use to organise and classify the various interaction techniques that this environment offers. We demonstrate this environment and taxonomy by detailing two interaction techniques, thought crumbs and bubblegrams, and, to evaluate these techniques, we offer the design of an implementation prototype.

Keywords
Mixed reality, human-robot interaction

ACM Classification Keywords
H.5.2 [Information interfaces and presentation]: User interfaces---input devices and strategies, interaction styles

Copyright is held by the author/owner(s). CHI 2006, April 22–27, 2006, Montreal, Canada. ACM 1-xxxxxxxxxxxxxxxxxx.

Introduction
Robot technology is advancing steadily, and some believe that a robotics revolution is upon us [7]. As such, it is important that we understand the various issues and problems surrounding HRI and develop effective interfaces for working with robots.

Current HRI interfaces often fail to acknowledge that robots are simultaneously physical and digital entities, and this separation of interaction spaces can hinder communication between humans and robots [2]. MR offers one solution to this problem by augmenting parts of the physical world with computer data (commonly accomplished using projectors or head-mounted displays (HMDs)), allowing a human user to interact with digital information directly within their physical interaction space. Combining the physical and MR interaction spaces in this way provides an environment which humans and robots can use to interact. We call this environment the MR Integrated Environment (MRIE, pronounced "merry").

Given that robots are generally autonomous and mobile, they have a very large and dynamic physical interaction space. With the MRIE, we offer an environment which allows robots to utilise this space both physically and digitally, resulting in an extremely flexible interaction environment which robots can use to express their digital ideas and thoughts.

There are many possibilities for human-robot interaction within the MRIE. As a method of organising them, we introduce a taxonomy of the MRIE which we use to classify and compare various interaction techniques. This taxonomy maps the MRIE onto four variables: virtuality, lifespan, ownership, and activity. To demonstrate the use of this taxonomy, two MRIE interaction techniques (thought crumbs and bubblegrams) are presented. Furthermore, we detail the design of a preliminary prototype which we will use to realise and evaluate the ideas and techniques presented here.

Related Work
MR has been used as a means of combining digital information with the physical world in various applications, including animating storybooks [1], controlling robots [5], and assisting with medical surgery [3]. Most MR techniques can be classified as using either head-mounted display (HMD) visualisation or projective visualisation. HMD visualisation offers portability and flexibility, since HMDs are often lightweight and can be connected to a wearable computer; however, HMDs may constrict the user's vision due to a poor field of view and low resolution. Projective visualisation [8], on the other hand, can be integrated seamlessly into the user's entire field of view, allowing the user to make full use of their natural vision. The downside is that projectors are generally less portable than HMDs, and they require a projection surface and appropriate lighting to work well.

There has been work on organising the wide range of interaction techniques offered by MR. For example, Milgram and Kishino map MR interaction to a taxonomy which classifies techniques as lying somewhere between the purely physical environment and a completely virtual environment [6]. The literature offers other taxonomies as well, such as a taxonomy for multi-robot systems which uses task type and criticality for classification [10].

While mixed reality has been used in various interaction applications, there has been limited work applying it to HRI. Our MRIE is unique in that it offers an integrated digital and physical environment shared between humans and autonomous robots. We also offer a taxonomy for the MRIE, similar to other taxonomies in the literature; Milgram's taxonomy is simply a single category within ours. Just as Milgram's taxonomy of MR offers a clear way to categorise various MR interaction systems, our taxonomy expands on it and offers similar benefits for analysing MRIE interaction techniques.

The MRIE deals primarily with humans and robots interacting in a shared physical space, so portability and flexibility are very important aspects of our prototype plan. As such, the plan incorporates HMDs for the mixed reality interface.

A Taxonomy for the MRIE
Our taxonomy maps the MRIE onto four key variables, lifespan, ownership, activity, and virtuality, providing criteria which we use to describe MRIE interaction techniques.

The lifespan variable determines how long instances of an MRIE interaction technique last. For example, a robot may place a permanent MR element into the environment for information purposes, resulting in an arbitrarily long or permanent lifespan. On the other hand, a robot may display a surprise mark which is designed to disappear soon after, resulting in a very short lifespan.

The ownership variable determines which robot in the MRIE, if any, owns the technique instance. This variable also accommodates partial ownership by other entities in the environment, so that entities may have control to alter instances created by a different entity. The owner robot can decide which other entities can view or edit aspects of a technique instance.

The activity variable determines how active an interaction technique instance is, including the level at which it attracts attention and how it responds to interaction. An example of a technique with very low activity is an MR element which displays a static decoration on a wall; this technique does not actively invite attention and does not react to interaction attempts. An example of a technique with high activity is an MR interactive menu system which incorporates three-dimensional animation and sound for interaction purposes.

The virtuality variable is based on Milgram's taxonomy and the virtuality continuum which he uses to represent it [6]; it categorises the representation technique as lying somewhere between purely physical and purely virtual. A purely physical technique could involve physically touching a robot, while a purely virtual technique could use virtual reality to control a robot.
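To make the four variables concrete, the sketch below shows one way a technique instance could be described along the taxonomy's axes. This is our own Python illustration, not part of the paper's system; all names (TaxonomyProfile, Virtuality, Activity) are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Virtuality(Enum):
    """Rough position on Milgram's reality-virtuality continuum [6]."""
    PHYSICAL = 0   # e.g. physically touching a robot
    MEDIUM = 1     # MR elements overlaid on the physical scene
    VIRTUAL = 2    # e.g. controlling a robot from full VR


class Activity(Enum):
    """How strongly an instance invites and reacts to interaction."""
    LOW = 0        # static decoration; ignores interaction attempts
    MEDIUM = 1     # offers basic expansions of the presented data
    HIGH = 2       # animated, interactive menus with sound


@dataclass
class TaxonomyProfile:
    """Classification of one MRIE interaction-technique instance."""
    lifespan_seconds: Optional[float]  # None = until explicitly destroyed
    owner: Optional[str]               # None = public, unowned element
    activity: Activity
    virtuality: Virtuality
```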

Interaction Techniques
This section presents two techniques within the MRIE, bubblegrams and thought crumbs, illustrating how the MRIE and taxonomy can be used.

Bubblegrams
Bubblegrams are MRIE interaction techniques based on comic-style thought and speech bubbles, with the idea that they represent a robot's thoughts and expressions. They use MR to overlay the bubbles onto the physical interaction scene, floating next to the robot which generated them. Bubblegrams can also be interactive, offering interfaces such as status displays or system menus, resembling an interactive physical display.

Lifespan: Bubblegrams are designed for short-term and specific interaction, and are generally not used for long-term tasks. For example, a surprise bubblegram floating over a robot's head may last for five seconds.

Ownership: Following the comic-style bubble motivation, bubblegrams have a single owner and are represented as being spatially attached to that owner in the MRIE interaction space.

Activity: While interactivity is not implied by the thought-bubble motivation, bubblegrams offer a wide range of activity, from a static graphic with no interactivity to a full-fledged animated and interactive menu.

Virtuality: Bubblegrams have medium virtuality, since they can actively bring complex digital data into the user's interaction space.

Figure 1: Artistic rendition of a bubblegram.

Thought Crumbs
Thought crumbs, inspired by the bread crumbs in the children's story Hansel and Gretel, are an interaction technique which uses pieces of digital information to represent a robot's thoughts or observations. Thought crumbs are left behind in the MRIE by virtually attaching an MR element to a specific physical location. For example, search and rescue robots may use thought crumbs to leave information such as temperature levels at particular locations.

Lifespan: Thought crumbs can have a lifespan of any length. A short-lived thought crumb may be a note left by a cleaning robot after cleaning a floor to say that the floor is wet; this thought crumb would expire after approximately ten minutes. A long-term thought crumb could be a set of directional arrows left by a robot to direct a flow of traffic; these arrows might be left until explicitly destroyed, possibly weeks later.

Ownership: Thought crumbs, once placed, are public elements within the shared environment and have no owner. Being independent MR elements, any entity within the space has full access to modify or destroy them. For example, a cleaning robot may destroy thought crumbs which marked the areas it had to clean, or a human may remove thought-crumb notes left behind by a cleaning robot.

Activity: Thought crumbs generally offer low to medium activity. Interactive thought crumbs only give basic expansions of the data already presented; an example could be a box which displays a piece of text, with scroll bars on the side to scroll through it. The activity of a thought crumb can also depend on its age, such that the age is conveyed to human users. For example, an old thought crumb may look wrinkled, faded, or rusty.

Virtuality: Thought crumbs have medium virtuality, as they actively use virtual techniques to convey information.

Figure 2: Artistic rendition of a thought crumb.
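Continuing the hypothetical TaxonomyProfile sketch from the taxonomy section, the two techniques map onto the four variables as two differently configured profiles. The values below simply restate the paper's own examples (a five-second surprise bubblegram; a ten-minute "wet floor" thought crumb); the robot identifier is invented.

```python
# Continues the TaxonomyProfile / Activity / Virtuality sketch above.

# A surprise bubblegram: short-lived and attached to its single owner.
surprise_bubblegram = TaxonomyProfile(
    lifespan_seconds=5.0,
    owner="aibo-1",                # hypothetical robot identifier
    activity=Activity.LOW,         # a static graphic with no interactivity
    virtuality=Virtuality.MEDIUM,
)

# A "wet floor" thought crumb: public, place-anchored, expires later.
wet_floor_crumb = TaxonomyProfile(
    lifespan_seconds=10 * 60.0,    # roughly ten minutes
    owner=None,                    # public: any entity may modify or destroy it
    activity=Activity.MEDIUM,      # basic expansion of the data presented
    virtuality=Virtuality.MEDIUM,
)
```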

Applications
Imagine a futuristic human-robot collaborative search and rescue team which uses the MRIE as a versatile and dynamic interaction environment. As the team enters a burning building, the robot team members rush ahead, surveying the building and leaving behind MR thought crumbs. These thought crumbs augment the humans' vision, suggesting routes to take and highlighting observations along the way, such as locked icons on doors which the robots found to be obstructed, or skull-and-crossbones poison icons representing dangerous gas levels. On finding a survivor or victim, a robot notifies the human team members, and the humans can follow that robot's thought crumbs directly to the person.

Upon encountering a human team member, a robot can display a bubblegram, popping up an MR system menu which can be used to get status information about the robot or to give the robot commands. Humans can also issue queries to the robot, for example about other robots or the vital signs of a survivor, and receive live results in an active bubblegram. This scenario does not use decoration-style interaction techniques, because decorations are long-term static elements, whereas the search and rescue team is unlikely to frequent the same rescue locations.

Preliminary Prototype
To build a working prototype of the MRIE, we have selected an Icuiti DV920 HMD and a Toshiba tablet PC, in combination with a Sony AIBO robot dog, as our MR platform. Next, computer vision is needed to identify AIBOs and locations in space for our interaction techniques. We will use an object recognition technique [9] in combination with a pre-mapped and tagged environment as our interaction space; the tags in the environment simulate the points which entities would use for the interaction techniques. We have already successfully implemented this technique for AIBO detection. Thirdly, an input method must be implemented which allows human users to interact with the MR elements; we are currently using the tablet interface, but will explore alternatives. Finally, a framework will be created to implement the shared MR space; we have designed a central network server which will be used by all entities within the space. Once implemented, this prototype can be used to evaluate the MRIE and the techniques presented here.
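The object recognition technique cited above [9] is the Viola-Jones boosted cascade of simple features, for which OpenCV provides a standard implementation. The following is only a rough sketch of how the detection step might look, not the authors' actual code; the cascade file aibo_cascade.xml stands in for a hypothetical model trained on AIBO images.

```python
import cv2  # OpenCV ships a Viola-Jones boosted-cascade detector [9]

# Hypothetical cascade trained on AIBO images; OpenCV only bundles
# cascades for faces and similar targets, so this file is an assumption.
detector = cv2.CascadeClassifier("aibo_cascade.xml")

cap = cv2.VideoCapture(0)  # camera feeding the MR view
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each hit is an (x, y, w, h) box where an AIBO was detected; an MR
    # element such as a bubblegram could be anchored to this region.
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("MRIE prototype view", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```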

Future Work
The immediate future work is to finish the prototype implementation and to evaluate the MRIE techniques presented. Comprehensive evaluation criteria must be developed as a means to compare the particular techniques; from there, various user studies will be conducted to test the system against those criteria.

There is room to add depth to our taxonomy by breaking each existing category into multiple, narrower categories. For example, the activity variable could be broken down into response to interaction and representation technique.

Entity-independent MR elements, such as thought crumbs, reside within the MRIE. It would be interesting to embed AI into these elements to create agent-like entities which reside purely within the mixed reality portion of the MRIE, just as humans reside purely in the physical portion and robots reside in both.

Conclusion
Robot technology is advancing steadily, and it is important that we understand the various issues and problems surrounding interaction with robots. The fact that robots reside and interact in both the digital and physical worlds introduces interesting interaction challenges. To meet these challenges, we propose the MR Integrated Environment (MRIE), which provides a virtual environment of graphics and sound integrated directly into the real world. Using mixed reality, this environment allows a human user to interact with a robot's ideas and thoughts directly within the shared physical interaction space. We also offer a taxonomy of the MRIE as a method of classifying the various interaction techniques that the MRIE offers. We demonstrate the MRIE and taxonomy through the introduction and discussion of two interaction techniques, thought crumbs and bubblegrams, and the mapping of these onto the taxonomy. This paper also presented the design outline of a preliminary prototype which, once developed, can be used as a platform for evaluating various MRIE techniques in practice.

References
[1] Billinghurst, M., Kato, H., and Poupyrev, I. The MagicBook: Moving Seamlessly Between Reality and Virtuality. IEEE Computer Graphics and Applications 21, 3 (2001).
[3] Grimson, W., Ettinger, G., Kapur, T., Leventon, M., Wells, W., and Kikinis, R. Utilizing Segmented MRI Data in Image-Guided Surgery. International Journal of Pattern Recognition and Artificial Intelligence 11, 8 (Feb. 1998), 1367–1397.
[5] Milgram, P., Drascic, D., and Zhai, S. Applications of Augmented Reality in Human-Robot Communication. In Proc. of the 1993 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (1993), pp. 1244–1249.
[6] Milgram, P., and Kishino, F. A Taxonomy of Mixed Reality Visual Displays. IEICE Transactions on Information and Systems E77-D, 12 (Dec. 1994), 1321–1329.
[7] Norman, D. A. Emotional Design: Why We Love (or Hate) Everyday Things. Basic Books, New York, 2004.
[8] Raskar, R., Welch, G., and Fuchs, H. Spatially Augmented Reality. In First IEEE Workshop on Augmented Reality (IWAR '98) (Nov. 1998).
[9] Viola, P., and Jones, M. Rapid Object Detection Using a Boosted Cascade of Simple Features. In Proc. CVPR 2001.
[10] Yanco, H., and Drury, J. Classifying Human-Robot Interaction: An Updated Taxonomy. In 2004 IEEE International Conference on Systems, Man and Cybernetics (Oct. 2004), vol. 3, pp. 2841–2846.