Context-Aware Interaction in a Mobile Environment

Daniela Fogli 1, Fabio Pittarello 2, Augusto Celentano 2, and Piero Mussio 1

1 Università degli Studi di Brescia, Dipartimento di Elettronica per l'Automazione, Via Branze 38, 25123 Brescia, Italia {fogli,mussio}@ing.unibs.it
2 Università Ca' Foscari di Venezia, Dipartimento di Informatica, Via Torino 155, 30172 Mestre (Ve), Italia {pitt,auce}@dsi.unive.it

Abstract. This paper addresses context awareness of user interaction in real spaces where a number of places devoted to interaction are defined, following a concept called the interaction locus (IL). In the IL a coordinated set of information notifies the user about the specific nature of the place he/she has entered. The interaction takes place through mobile devices which manage the context of the user, and is mediated by two agents called the genius loci and the numen of the user. Context awareness is achieved by cooperation between the two agents, which interact according to the user history and the interaction opportunities of the place. An implementation architecture suited for mixed reality environments is described, and a case study related to cultural heritage is presented.

1 Introduction

In this paper we elaborate on a novel approach to interaction in lightweight mixed reality environments, i.e., mixed reality environments [5] where humans interact with small portable devices, which has been presented and discussed in earlier papers [1,3]. Here we present an architecture for implementing Experiential Interaction Paradigms (EIPs) in a context-aware mobile environment [2]. EIPs extend the Positional Interaction Paradigms, in which humans participate in the interaction with their body, whose position is tracked by some input device and treated as one of the main data for interaction. In EIPs the experience of a user interacting with a system becomes an important source of knowledge. To deal with such knowledge we propose a methodology and a system architecture for observing the interaction and recognizing recurrent user behaviors. The approach aims at supporting the user during navigation and interaction, and also at supporting the designer in discovering usability problems and consequently improving the system design. It is based on a set of cooperating agents [4], which keep track of the initial background of the human involved in the experience and of the relevant interactions, in order to facilitate further interaction. The agents become aware of the context and adapt their behavior to it.

L. Chittaro (Ed.): Mobile HCI 2003, LNCS 2795, pp. 434-439, 2003. © Springer-Verlag Berlin Heidelberg 2003

The approach is also based on the interaction locus (IL) concept, introduced in the context of research aimed at resolving current weaknesses of interaction inside 3D environments [6]. In this paper an interaction locus is a connected portion of space characterized by the presence of an underlying base world, the possibility of perceiving when a user enters and exits the IL, and the presence of identifiable interaction devices which support the exchange of information between the user and the world. In a mixed reality environment an interaction locus is the part of the world to which experiences are attached, i.e., the part of the world in which the user can interact with the embedded computing devices. Different types of interaction devices receive user input and provide information to the user: interactive objects, on which the user operates directly and which change their state or appearance as a consequence of the interaction; artifacts, mediators of the interaction between the user and the world, which make evident to the user interaction opportunities that would otherwise remain unknown; and dynamic information objects, which modify their state or appearance, e.g., show up or hide, as a consequence of the user's interaction with other devices in the environment.
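As an illustration of these notions, the IL and its three kinds of interaction devices could be modeled as follows. This is a sketch under our own naming assumptions, in Python for illustration only; it is not the data model used by the authors.

```python
# Hypothetical data model for an interaction locus (IL) and its devices;
# names and attributes are ours, chosen only to make the distinctions concrete.
from dataclasses import dataclass, field
from enum import Enum, auto


class DeviceKind(Enum):
    INTERACTIVE_OBJECT = auto()   # operated on directly; changes state or appearance
    ARTIFACT = auto()             # mediates interaction; reveals otherwise unknown opportunities
    DYNAMIC_INFO_OBJECT = auto()  # shows up or hides as a side effect of interaction elsewhere


@dataclass
class InteractionDevice:
    name: str
    kind: DeviceKind
    state: str = "idle"


@dataclass
class InteractionLocus:
    name: str
    bounds: tuple                                  # geometric limits of the connected portion of space
    devices: list = field(default_factory=list)    # InteractionDevice instances
    base_world: str = ""                           # identifier of the underlying base world
```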
2 Agent-Mediated Interaction

The interaction between a user and the objects of an interaction locus is observed and mediated by two agents, one associated with the locus and the other associated with the user.

The agent bound to the IL knows the information opportunities and the interaction possibilities of the specific place, and is therefore able to assist the user in exploring it. Borrowing the terminology of ancient Roman religion, we call such an agent a genius loci, a kind of local divinity who takes care of the place by giving visitors the opportunity to get the most benefit from its exploration. The agent can perceive user behavior and actions such as entering and exiting the IL, moving freely or along paths, interacting with artifacts and interactive objects, etc. Its perception is of course limited to what the user is able to manifest about him/herself; the agent therefore depends on the presence of a number of sensors in the scene, i.e., the devices which sense the user actions.

The user also has his/her own genius, which we call a numen, a kind of guardian angel who follows the user during navigation, accumulating and managing knowledge about him/her. The numen knows the user's character (the profile), accumulates the exploration history across several places, and interacts with the genii of the different places in order to give them information about how to help the user in his/her visit.

The two agents mediate the interaction between a user and a rich and differentiated environment by accumulating, maintaining and exchanging knowledge about the user and the interaction place. Figure 1 pictorially describes this scenario in a virtual exhibition application; the genius loci is represented by a snake.

The communication protocol between a user's numen and a genius loci is activated when the user enters an IL and consists of four steps.
(1) The genius loci reacts to the user's presence and starts a dialog with the numen by asking for information about the user.
(2) The numen gives the genius the requested information. In particular, the numen knows two kinds of information: a user profile, which is a static collection of properties and data about the user, and a user history, which is a set of data about previous user actions collected during exploration and transmitted by the genii of the other loci visited.
(3) On the basis of the information provided by the numen, the genius is able to modify, if needed, the properties of interaction in that locus [1].
(4) On exiting the interaction locus, the genius loci returns to the numen the result of its observations, i.e., the patterns of interaction discovered for that locus. The numen decides, according to its own knowledge base (i.e., the user profile and the knowledge accumulated from other genii's observations), how to consider the received information, which is processed according to the numen's rules.

More details can be found in [1].
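A minimal sketch of this four-step exchange is given below. Python is used purely for illustration, since the paper does not specify how the agents are implemented, and all class, method and field names are hypothetical.

```python
# Illustrative sketch of the numen / genius loci protocol; all names are hypothetical.

class Numen:
    """Agent bound to the user: holds the profile and the cross-locus history."""
    def __init__(self, profile):
        self.profile = profile          # static properties and data about the user
        self.history = []               # data transmitted by the genii of visited loci

    def provide_information(self):                     # step (2)
        return {"profile": self.profile, "history": list(self.history)}

    def receive_observations(self, observations):      # step (4)
        # How the received patterns are weighed depends on the numen's own rules.
        self.history.append(observations)


class GeniusLoci:
    """Agent bound to an interaction locus (IL)."""
    def __init__(self, locus_name):
        self.locus_name = locus_name
        self.observed_patterns = {}

    def on_user_enters(self, numen):                    # step (1)
        info = numen.provide_information()
        self.adapt_interaction(info)                    # step (3)

    def adapt_interaction(self, info):
        # Tailor the locus behavior to the user profile and to past visits.
        self.already_visited = any(
            obs.get("locus") == self.locus_name for obs in info["history"])

    def on_user_exits(self, numen):                     # step (4)
        numen.receive_observations(
            {"locus": self.locus_name, "patterns": self.observed_patterns})
```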

Both types of agents are context-aware. The context of the numen is constituted by the user and by the genius loci: the numen adapts its knowledge according to the data received from the genius loci and to the current state of the user. The context of the genius loci is constituted by the user and by the numen: the genius loci determines its own behavior according to the data about the user obtained from the numen, to its current perception of the user, and to its knowledge of the IL itself.

Fig. 1. The genius loci mediating the user actions

3 An Implementation Architecture

Figure 2 describes an architecture targeted at distributed context-aware interaction in mixed reality environments, implementing the concepts described above. Monitoring the user position in the real world requires special attention, because of the errors that occur when determining it with, e.g., GPS technology. To reduce the impact of these errors, we separate the components that monitor the user position from the representational and proactive part of the system.

Two different 3D representations of the real scene where the user interaction takes place are modeled: the 3D base world with its associated experience layer, containing a 3D model of the scene to display on the PDA and a set of ILs mapped over this model together with their genii loci, and the 3D internal representation of the real scene. The latter representation is not meant for visualization purposes and contains only a georeferenced set of volumetric sensors that have the same geometric limits as the ILs of the experience layer. The positional parameters of the user received from the GPS are compared with the limits of these volumetric sensors.

The result of the monitoring activity is passed to the filter component, which interprets the data. For example, if the user stays in a certain position for some time, the filter may infer that the user is not moving because he/she is looking at something interesting, and is therefore probably inside an interaction locus, even if the raw GPS data indicate that the user is in an "empty" area. The filtered information is then passed on as updated positional data for the current camera of the 3D experience layer, i.e., the viewpoint of the 3D scene as seen by the user on the PDA screen. The filter may be programmed to send a continuous correction of the raw data, or to send the coordinates of a significant, stable point of view on a certain IL once it has inferred that the user is inside it. The software simulator discussed below uses the second approach.
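A rough sketch of such a position filter, in Python for illustration only, might look as follows; the dwell threshold, the jitter tolerance and the nearest-IL fallback are our own assumptions, not values or policies taken from the paper.

```python
# Sketch of the position filter: raw GPS fixes are matched against the georeferenced
# volumetric sensors; a prolonged stop in an "empty" area is interpreted as the user
# standing inside (or looking at) an IL. Thresholds and names are assumptions.
import time

DWELL_SECONDS = 10.0     # assumed: how long the user must stay put to infer interest
EPSILON = 2.0            # assumed: metres of GPS jitter tolerated between fixes

def inside(bounds, pos):
    (xmin, ymin, zmin), (xmax, ymax, zmax) = bounds
    x, y, z = pos
    return xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax

def distance(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

class PositionFilter:
    def __init__(self, volumetric_sensors):
        # volumetric_sensors: {il_name: ((xmin, ymin, zmin), (xmax, ymax, zmax))},
        # mirroring the geometric limits of the ILs in the experience layer.
        self.sensors = volumetric_sensors
        self.last_pos = None
        self.still_since = None

    def update(self, raw_pos, now=None):
        """Return the IL the user is (inferred to be) in, or None."""
        now = time.time() if now is None else now
        for il_name, bounds in self.sensors.items():
            if inside(bounds, raw_pos):
                self.last_pos, self.still_since = raw_pos, None
                return il_name                       # raw fix already inside an IL
        # Raw fix is in an "empty" area: infer interest from lack of movement.
        if self.last_pos is not None and distance(raw_pos, self.last_pos) < EPSILON:
            if self.still_since is None:
                self.still_since = now
            elif now - self.still_since >= DWELL_SECONDS:
                return self.nearest_il(raw_pos)      # assume the user is at this IL
        else:
            self.still_since = None
        self.last_pos = raw_pos
        return None

    def nearest_il(self, pos):
        def center(bounds):
            low, high = bounds
            return tuple((a + b) / 2 for a, b in zip(low, high))
        return min(self.sensors, key=lambda n: distance(pos, center(self.sensors[n])))
```

A GPS simulator like the one in the prototype would simply call update() with each streamed fix and forward the returned IL, or a viewpoint derived from it, to the experience layer.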

Concerning the user input, we can distinguish two different categories of data: the user profile, which is collected from the user at the beginning of the interaction, and usage data, which are generated by the user during the interaction. In addition, a third category of data coming from the environment (time, weather, light, etc.) can be meaningful for the interaction and may therefore be monitored by the system.

A specific component of the architecture, the experience selector, composes the mixed reality experience by selecting the experience layer and the associated 3D internal representation. The experience selection is guided by the user through the choice of his/her profile at the beginning of the interaction. The experience selector composes the experience according to the mapping between experiences and user profiles determined in the authoring phase [6]. The user is allowed to change the profile during the interaction; in that case, the experience selector is responsible for coordinating the actions necessary to conclude the current experience in a consistent way and to load the experience layer related to the new choice.

The filtered user position and the user interaction with the mobile device are caught by a set of sensors embedded in the experience layer. Their activation and the changes of their significant parameters are communicated to the script components of the world. These script components are embedded in the experience layer and implement the genii loci. Their function is to collect the user actions perceived through the sensors in specific areas of the environment (the ILs) and to mediate the actions of the associated interactive objects. Each script component runs a separate computation process. The state of the computation is kept consistent by a different component, the experience handler, which is external to the 3D world and implements the numen entity discussed above. The experience handler receives from the script components information concerning the usage data; in addition, it accesses the user profile and the environmental data as input for its coordination activity. The accumulated knowledge is filtered and passed to the script components, which pilot the interaction objects.

A software simulator of the architecture described in Figure 2 has been built. Its main components, a GPS simulator for streaming user position data and a prototypical user interface, have been implemented. We used VRML, a language for describing the geometry and interaction primitives of a 3D environment, for building the experience layer and all the components embedded in it. GeoVRML [7], an extension of VRML that allows georeferenced data to be represented accurately, has been used for building the 3D internal representation of the environment.
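To make the division of labor concrete, here is a minimal sketch of the experience selector and the experience handler described above. Python is used only for illustration; the component names follow the paper, but the method names and data shapes are our assumptions.

```python
# Illustrative sketch of the experience selector and experience handler;
# component names follow the paper, everything else is assumed.

class ExperienceSelector:
    def __init__(self, experiences_by_profile):
        # experiences_by_profile: {profile: (experience_layer, internal_3d_representation)},
        # i.e., the mapping determined in the authoring phase.
        self.experiences = experiences_by_profile
        self.current = None

    def select(self, profile):
        """Compose the mixed reality experience for the chosen (or newly chosen) profile."""
        if self.current is not None:
            self.conclude_current()            # close the running experience consistently
        self.current = self.experiences[profile]
        return self.current

    def conclude_current(self):
        pass                                   # placeholder: persist state, unload the layer


class ExperienceHandler:
    """External component implementing the numen: keeps the overall state consistent."""
    def __init__(self, user_profile, environment_data):
        self.user_profile = user_profile
        self.environment_data = environment_data   # e.g., time, weather, light
        self.usage_data = []

    def on_script_event(self, il_name, event):
        # Usage data reported by a script component (a genius loci) for its IL.
        self.usage_data.append((il_name, event))
        return self.filter_knowledge(il_name)

    def filter_knowledge(self, il_name):
        # Return only the accumulated knowledge relevant to this IL, so that its
        # script component can pilot the associated interactive objects.
        return [event for name, event in self.usage_data if name == il_name]
```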

Fig. 2. An architecture for interaction in mixed reality environments

4 A Case Study

In order to test the potential of the system in a real case study, we started a collaboration with the National Archaeological Museum of Altino, located near a Roman archaeological site close to Venice, Italy. We used three different sets of interaction loci to map the archaeological area, corresponding to three different user profiles: the student, the average visitor and the expert. We are currently working on the implementation of different proactive behaviors.

In the experimental scenario, visitors are free to wander through the open area. Each time the user enters a locus he/she has never visited, a 3D representation of the IL is visualized in the lower part of the PDA interface, an auditory description of the locus starts, and a portion of a Roman coin is added in the upper right part of the interface. When the user enters the last locus left to visit, an additional message informs him/her that there are no other interesting places in the archaeological area. If, instead, the user enters a locus already visited, no portion of the coin is added and an alternative text or audio message advises him/her of this condition.

This behavior results from the interaction between the numen and the genius loci. Each time the user enters an IL, the genius loci notifies the numen of the event. The numen passes to the genius loci the accumulated patterns of interaction related to previous user actions, which include the information that the current locus has already been visited. The genius loci uses this information either to activate the standard behavior (add a new portion to the coin and start the standard description of the location) or to start the alternative messages discussed above.
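Read as a rule evaluated by the genius loci with the history received from the numen, this behavior could be sketched as follows; the interface methods and message texts are hypothetical, and the history format follows the earlier protocol sketch.

```python
# Sketch of the case-study behavior at the Altino site; interface methods,
# message texts and the history format are illustrative assumptions.

def on_enter_locus(locus, numen, interface, all_loci):
    history = numen.provide_information()["history"]
    visited = {obs["locus"] for obs in history}

    if locus not in visited:
        interface.show_3d_representation(locus)    # lower part of the PDA interface
        interface.play_audio_description(locus)    # standard description of the location
        interface.add_coin_fragment()              # a new portion of the Roman coin
        if visited | {locus} == set(all_loci):     # last locus left to visit
            interface.notify("There are no other interesting places in the archaeological area.")
    else:
        # Already visited: no coin fragment, alternative text or audio message instead.
        interface.notify("You have already visited this place.")
```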

References

[1] Celentano, A., Fogli, D., Mussio, P., Pittarello, F.: Agents for Distributed Context-Aware Interaction. Proc. Workshop on Artificial Intelligence in Mobile Systems (AIMS 2002), ECAI Conference, Lyon, France, July 2002, pp. 29-36.
[2] Chen, G., Kotz, D.: A Survey of Context-Aware Mobile Computing Research. Technical Report TR2000-381, Dartmouth College, Department of Computer Science, 2000.
[3] Fogli, D., Mussio, P., Celentano, A., Pittarello, F.: Toward a Model-Based Approach to the Specification of Virtual Reality Environments. Proc. Multimedia Software Engineering (MSE 2002), Newport Beach (CA), USA, December 2002, pp. 148-155.
[4] Jennings, N. R.: An Agent-Based Approach for Building Complex Software Systems. Communications of the ACM 44(4) (2001) 35-41.
[5] Milgram, P., Kishino, F.: A Taxonomy of Mixed Reality Visual Displays. IEICE Transactions on Information Systems, Vol. E77-D, No. 12 (1994) 1321-1329.
[6] Pittarello, F.: Accessing Information Through Multimodal 3D Environments: Towards Universal Access. Universal Access in the Information Society 2(2) (2003) 1-16.
[7] Reddy, M., Iverson, L.: GeoVRML 1.1 Specification. Web3D Consortium (2002). http://www.geovrml.org/1.1/doc/index.html