New Human-Computer Interactions using tangible objects: application on a digital tabletop with RFID technology


Sébastien Kubicki 1, Sophie Lepreux 1, Yoann Lebrun 1, Philippe Dos Santos 1, Christophe Kolski 1 and Jean Caelen 2

1 LAMIH - UMR 8530, University of Valenciennes and Hainaut-Cambrésis, Le Mont-Houy, F-59313 Valenciennes Cedex 9, France, {firstname.name}@univ-valenciennes.fr
2 Multicom, Laboratoire d'Informatique de Grenoble (LIG), BP 53, 38041 Grenoble Cedex 9, France, jean.caelen@imag.fr

Abstract. This paper presents a new kind of interaction between users and a tabletop. The table described is interactive and associated with tangible, traceable objects using RFID technology. As a consequence, new Human-Computer Interactions involving these tangible objects become possible. The multi-agent architecture of the table is also explained, as well as a case study based on a scenario.

Keywords: Human-Computer Interaction, RFID, tabletop, tangible objects, Multi-Agent System.

1 Introduction

Tabletops are a far cry from the personal computers in use today. Indeed, with the concept of the interactive table, we can imagine a collaborative, co-localized workspace that brings several users to work at the same time. Dietz and Leigh [1] propose an interactive table called DiamondTouch. They suggest an example application in which a plumber and an electrician work together on the same table; each participant can modify only the plans associated with his or her own field. Nowadays, applications and platforms which allow simultaneous collaboration between users, such as multi-finger interaction or real-time document sharing [2], remain unusual. Therefore, current research aims at exploring the possibilities of such new technologies [3]. Shen et al. [4] propose a software toolkit named DiamondSpin to simplify the development of interactive applications using the DiamondTouch tactile interactive table.
DiamondSpin is a toolkit for the efficient prototyping of, and experimentation with, multi-user concurrent interfaces for interactive shared displays. It allows document positioning and orientation on a tabletop surface and also supports multiple work areas within the same digital tabletop. Besacier et al. [5] propose a set of metaphors [6] in direct relationship with the traditional use of a paper sheet on an interactive table: a document can be handled in a virtual way (for example, turned over or grouped with a set of documents). These new interactions make it possible for several co-localized users around an interactive table to work in a collaborative way. However, even though several applications have been proposed for tabletops, very few of them use tangible objects to interact with users.

In this paper, we describe a new type of tabletop based on RFID (Radio Frequency IDentification) technology, which enables users to manipulate tangible objects equipped with RFID tags (offering the possibility of storing data of different types). Several participants around the table can thus interact and work in a collaborative way on applications using physical objects (as in design or production tasks, games, etc.). This paper details the TTT project, the multi-agent architecture of the tabletop, and a case study.

2 TTT Project

The TTT (interactive Table with Tangible and Traceable objects) project proposes an alternative vision of the way tangible objects can be used in conjunction with an interactive tabletop. To this end, a new technology was implemented using RFID tags stuck on objects by the RFIdées 1 company. Four partners are involved in the TTT project: two laboratories (LAMIH 2, LIG 3) and two companies (CEA 4, RFIdées).

The magnetic table is connected to a computer and includes an array of RFID antennas in the form of "tiles". Figure 1(a) shows the prototype v1 of the digital table; the different RFID antennas which compose the table can be distinguished, delimited by the black line. The prototype is composed of so-called tiles (Fig. 1(b)), each containing 64 antennas (8 x 8) per 2.5 cm². Each tile contains a DSP processor which reads the RFID antennas, an antenna multiplexer and a communication processor. The reading strategies are prioritized, and the code is distributed between the antenna-reading processor, the processor in charge of multiplexing and the host computer. The table measures about one square meter and contains 25 tiles (5 x 5), i.e. 1600 antennas in total. The tiles are connected to each other via a control interface linked to the host computer by an Ethernet bus. At this time, the prototype v3 can communicate with all the layers of the structure; we explain this layered structure in the next section. The delay between two displacements is acceptable (we are able to play a marbles-like game with some RFID tags), but it could be further improved in the prototype v4.

1 www.rfidees.fr
2 www.univ-valenciennes.fr/lamih
3 www.liglab.fr
4 www.cea.fr
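As a rough illustration of how tag positions could be resolved on such a tiled surface, the sketch below maps a (tile, antenna) pair onto the 40 x 40 global antenna grid (25 tiles of 8 x 8 antennas) and polls every tile. The hardware driver is mocked and all names are our own, not those of the TTT firmware.

```python
# Hypothetical sketch: locating RFID tags on a 5 x 5 grid of tiles,
# each tile carrying an 8 x 8 antenna matrix (1600 antennas in total).

TILES_PER_SIDE = 5
ANTENNAS_PER_SIDE = 8   # antennas per tile side (8 x 8 per tile)

def global_position(tile_row, tile_col, ant_row, ant_col):
    """Map a (tile, antenna) pair to a cell of the 40 x 40 global grid."""
    return (tile_row * ANTENNAS_PER_SIDE + ant_row,
            tile_col * ANTENNAS_PER_SIDE + ant_col)

def scan(read_tile):
    """Poll every tile; read_tile(tr, tc) returns a dict mapping
    (ant_row, ant_col) -> tag_id for the tags that tile currently sees."""
    positions = {}
    for tr in range(TILES_PER_SIDE):
        for tc in range(TILES_PER_SIDE):
            for (ar, ac), tag_id in read_tile(tr, tc).items():
                positions[tag_id] = global_position(tr, tc, ar, ac)
    return positions

# Mock driver: a single tag sitting on tile (1, 2), antenna (3, 4).
mock = lambda tr, tc: {(3, 4): "tag-42"} if (tr, tc) == (1, 2) else {}
print(scan(mock))   # {'tag-42': (11, 20)}
```

The prioritized reading strategies mentioned above would replace the naive exhaustive loop in `scan` on the real hardware.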

(a) (b) Fig. 1. (a) Prototype v1 of the table and (b) a tile containing 8 x 8 antennas (prototype v3)

2.1 Structure and communication

An architecture comprising three layers has been adopted for the table (Fig. 2):

1. The Capture and Interface layer handles tangible objects provided with one or more tags per object and creates a Java object associated with a form.
2. The Traceability layer handles the events associated with the objects and communicates the modifications of object positions to the applicative layer.
3. The Application layer manages the specificities of the application associated with the table.

Figure 2 also shows the data flows between the layers. Data can only move from one layer to the adjacent one and must pass through an applicative interface. This interface serves as the connection between the layers and defines the exit and entry points; it is through it that two layers are linked and able to communicate. The applicative layer is broken up into two parts:

- the part integrating the Multi-Agent System (MAS), whose supervisor agent, called Genius, has a total vision of the virtual and physical (tangible) objects and knows all the characteristics of the application (role of each object, rules of the game, and so on);
- the Human-Computer Interaction (HCI) part, which is responsible for communicating with the users and makes it possible to transmit virtual information (for example, the displacement of a virtual object by the user).

Fig. 2. The three layers composing the TTT Project
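As a minimal sketch of this layered flow (class and method names are our own assumptions, not those of the TTT implementation), each layer talks only to its neighbour through a narrow interface:

```python
# Illustrative sketch of the Capture -> Traceability -> Application
# pipeline; every event crosses exactly one layer boundary at a time.

class CaptureLayer:
    def __init__(self, upper):
        self.upper = upper
    def tag_detected(self, tag_id, pos):
        # Wrap the raw RFID reading into an event for the next layer.
        self.upper.on_event({"tag": tag_id, "pos": pos})

class TraceabilityLayer:
    def __init__(self, upper):
        self.upper = upper
        self.history = []            # trace of every object movement
    def on_event(self, event):
        self.history.append(event)   # keep the object's history
        self.upper.on_object_moved(event["tag"], event["pos"])

class ApplicationLayer:
    def __init__(self):
        self.locations = {}          # application-level object state
    def on_object_moved(self, tag_id, pos):
        self.locations[tag_id] = pos

app = ApplicationLayer()
trace = TraceabilityLayer(app)
capture = CaptureLayer(trace)
capture.tag_detected("switch-1", (11, 20))
print(app.locations)                 # {'switch-1': (11, 20)}
```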

2.2 The Multi-Agent System

The structure of a Multi-Agent System can be described in multiple ways according to the context of the application concerned. The main concepts used here are the following: tangible agents (represented by physical objects such as a book, a counter or a mobile phone), virtual agents (displayed if they represent a digital object, such as a colored zone), and the users (the people interacting with the application). It is thus possible to identify the various possible strong interactions (Fig. 4), a strong interaction being one that results from the dependence of one element on another. In this case, we find the different interactions between the tangible or virtual agents, and between the users and the tangible or virtual agents. For example, a tangible agent can act on a virtual agent's location, but not the opposite: a tangible object can modify a virtual object's location, whereas the reverse goes against physical laws and is not, in our case, possible.

With this interactive table, it will be possible to interact with both tangible and virtual objects; for virtual objects, however, a tactile technology must be available. Two solutions are possible:

1. The next prototype can include a tactile display (Fig. 3), permitting direct interaction with the fingers.
2. We can adopt the solution of an interactive glove with RFID tags to simulate a tactile technology.

That is why we distinguish the possibilities of interaction between users and agents (Fig. 4). When touch input is not implemented, the user can only interact with tangible objects, so only the tangible agents associated with each object are involved. If touch input is available, the user can interact with both the tangible and the virtual objects.

(a) (b) Fig. 3. An example of tactile display (a) and inclusion of RFID tags on a glove (b)
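The agent organization just described, with located agents mirroring tangible objects and reporting to the supervising Genius agent, could be sketched as follows. The class and method names are illustrative assumptions, not the authors' API:

```python
# Sketch: located agents push their state up to the Genius, which
# maintains the global map of the table and answers queries about
# object locations and roles.

class Genius:
    def __init__(self):
        self.map = {}                     # global view of the table
    def notify(self, agent):
        self.map[agent.name] = (agent.role, agent.location)
    def where_is(self, name):
        return self.map[name][1]
    def role_of(self, name):
        return self.map[name][0]

class LocatedAgent:
    """Software reflection of one tangible object on the table."""
    def __init__(self, name, role, genius):
        self.name, self.role, self.genius = name, role, genius
        self.location = None
    def move_to(self, location):
        self.location = location          # the physical object moved
        self.genius.notify(self)          # propagate to the global map

g = Genius()
pen = LocatedAgent("pen", "switch", g)
pen.move_to((3, 5))
print(g.where_is("pen"), g.role_of("pen"))   # (3, 5) switch
```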

Fig. 4. The possible interactions between agents and users

The Genius agent is a software entity able to answer any question from the users about the objects' locations and their roles. This agent is the central point of the Multi-Agent System; the agents of the application interact with it to announce or modify their location. The multi-agent organization proposed follows a hierarchical structure with flexible levels. At the top of this hierarchy, the Genius agent can be considered as an observer agent (or even a coordinator, depending on the application) which establishes the link between the users and the agents. On the last level, the located agents (tangible agents dependent on a physical object) provide a reflection of the tangible objects present on the interactive table. Each object is associated with an agent which holds the characteristics of this object, such as its role, location and environment. These agents return the various internal modifications to the Genius, which knows the general map of the table.

2.3 Interactions between the table and users

Initially, we must agree that all the virtual objects have to be defined when designing the application. All these objects are initialized when the application starts. After that, the user can interact with these virtual objects to move them, for example with a finger or with an object equipped with an RFID tag. The first stage of development consists in initializing the virtual and tangible objects which will be used in the application; that is why we distinguish two types of users:

- The end user of the application does not need particular knowledge to use it. He or she can be of any age, knowledgeable in data processing or not; he or she just has to know the rule(s) of the game or the principle of the application.
- The user known as the administrator has a complete knowledge of the application and can intervene on it (internal modification). However, one can suppose that an end user could take the role of administrator for some operations; the Human-Computer Interface will therefore have to be easy to use.

We have presented the layered structure defined during the TTT project. The Capture and Interface layer manages the interactions on the table; it transmits information to the Traceability layer, which creates the history of the different objects. The Traceability layer then transmits information to the Applicative layer, more exactly to the Multi-Agent System, which gives a role to each object and informs the Human-Computer Interaction part so that it can display the result to the users. To illustrate an application using the table, we now propose an example which uses the entire layered structure and shows the possible interactions with users.

3 Case study and modeling

We propose an example application which illustrates the structure presented above. After presenting the application, called Luminous Zone, we describe a scenario using it. The scenario emphasizes the use of Luminous Zone, but we explain in particular the first stage, which consists in initializing the different objects: before using the application with some objects, the user must initialize all the objects to be used. Two sequence diagrams (Fig. 5 and 6) modeled with UML 2 show the communication between the layers.

3.1 Example

In order to illustrate the architecture used (Fig. 2), we propose an example in which the table has to illuminate a zone specified beforehand, according to the location of a switch object (a tangible object initialized previously). In this example, there are several objects, each with a role.
First, the tangible objects can be anything (a pen, a counter, a book, for example). If the user places a tangible object in one of the (projected) virtual colored zones, the lighting zone (LEDs included in the table) lights up in the color of the zone in which the tangible object (the switch) is located. It is possible to use several switches: for example, the colored zone could be divided according to the number of users (supposing one side per user, four users would be the maximum).
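The Luminous Zone rule just described could be sketched as follows; the zone geometry, zone names and colors are invented for the demonstration and do not come from the actual application:

```python
# Sketch: when a switch object stands inside a projected colored zone,
# that zone's LEDs light up in the zone's color.

ZONES = {                                  # name -> ((x0, y0, x1, y1), color)
    "north": ((0, 0, 40, 10), "red"),
    "south": ((0, 30, 40, 40), "blue"),
}

def zone_color(pos):
    """Return the color of the zone containing pos, or None."""
    x, y = pos
    for (x0, y0, x1, y1), color in ZONES.values():
        if x0 <= x < x1 and y0 <= y < y1:
            return color
    return None

def lit_colors(switch_positions):
    """One color per switch currently standing inside a zone."""
    return [c for p in switch_positions if (c := zone_color(p)) is not None]

# Two switches in zones, one on neutral ground in the middle.
print(lit_colors([(5, 5), (20, 35), (20, 20)]))   # ['red', 'blue']
```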

3.2 Scenario

We propose a sequence diagram modeled with UML 2 (Fig. 5) to illustrate a scenario using the table with the Luminous Zone application. The user moves an object whose role is switch. The displacement is detected by the Capture and Interface layer, which transmits the information to the Traceability layer. The latter sends the new location of the object to the Genius. In the MAS layer, each object (tangible or virtual) is represented by an agent, and each agent questions its local environment in order to know its location relative to the other objects. Here, the agent associated with the switch object checks whether it is set in a colored zone; if so, this agent transmits to the Genius the need to light one of the luminous zones. After reception of the data from the MAS layer, the HCI layer assigns a color to the luminous zone and lights it.

Fig. 5. A scenario using the table with the Luminous Zone application

To use the application, all the objects must be initialized. For that, a user interface is necessary; it has to offer the choice to name an object and to define its roles and behaviors. The sequence diagram in Fig. 6 shows these aspects.
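This initialization step could be sketched as a simple registry binding each RFID tag to a name, a role and a behavior before the application starts. The registry shape and function names are assumptions made for illustration:

```python
# Sketch: an administrator binds a freshly placed tagged object to its
# application-level name, role and behavior before the game begins.

registry = {}

def init_object(tag_id, name, role, behavior):
    """Bind a tagged object to its role; a tag can be bound only once."""
    if tag_id in registry:
        raise ValueError(f"tag {tag_id} already initialized")
    registry[tag_id] = {"name": name, "role": role, "behavior": behavior}

def role_of(tag_id):
    return registry[tag_id]["role"]

init_object("tag-42", "red pen", "switch", behavior="toggle-zone")
print(role_of("tag-42"))      # switch
```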

Fig. 6. Initialization of a new object and its association with a role and a behavior

4 Conclusion

The table has the original characteristic of interacting directly with users and tangible objects. This new working aspect distinguishes it from the interactive tables currently available and opens a new avenue of research in HCI as well as in MAS. The association between an interactive table and a Multi-Agent System is original and brings promising possibilities [7]. HCI will be used for direct interaction with the users, allowing a simple and intuitive use of the applications of the table. It will propose innovations in terms of HCI in the use of an interactive table which manages tangible and traceable objects. Some examples of this are the modification of context

[8] and [9] during the initialization of a tangible object (detection of the user, loading of personal parameters, and so on), and the adaptation to the context [10] and [11] (mono- or multi-user, modification of the environment, and so on). Our objective is now to apply this research and use the different UML diagrams to develop a set of applications using the table and its specificities. At this time, the first application, Luminous Zone, is under test, and a new version of the table is under development by the RFIdées company.

Acknowledgements. The present research work is supported by the "Agence Nationale de la Recherche" (ANR). We would also like to thank our two partner companies in the TTT project: RFIdées and the CEA.

References

1. Dietz, P., Leigh, D.: DiamondTouch: A multi-user touch technology. In: UIST '01: Proceedings of the 14th Annual ACM Symposium on User Interface Software and Technology, ACM Press, Orlando, Florida, 2001, pp. 219-226.
2. Wu, M., Balakrishnan, R.: Multi-finger and whole hand gestural interaction techniques for multi-user tabletop displays. In: UIST '03: Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology, ACM, Vancouver, Canada, 2003, pp. 193-202.
3. Couture, N., Rivière, G., Reuter, P.: GeoTUI: A tangible user interface for geoscience. In: Proceedings of the Second ACM International Conference on Tangible and Embedded Interaction, Bonn, Germany, 2008, pp. 89-96.
4. Shen, C., Vernier, F., Forlines, C., Ringel, M.: DiamondSpin: An extensible toolkit for around-the-table interaction. In: CHI '04: International Conference on Human Factors in Computing Systems, ACM Press, New York, 2004, pp. 167-174.
5. Besacier, G., Rey, G., Najm, M., Buisine, S., Vernier, F.: Paper metaphor for tabletop interaction design. In: HCII '07: Human-Computer Interaction International. LNCS, Springer, Heidelberg, 2007, pp. 758-767.
6. Agarawala, A., Balakrishnan, R.: Keepin' it real: Pushing the desktop metaphor with physics, piles and the pen. In: Proc. CHI 2006, pp. 1283-1292.
7. Adam, E., Mandiau, R.: Flexible roles in a holonic multi-agent system. In: Marik, V., Vyatkin, V., Colombo, A. (eds.), Holonic and Multi-Agent Systems for Manufacturing, Third International Conference on Industrial Applications of Holonic and Multi-Agent Systems (HoloMAS 2007), LNCS 4659, Springer-Verlag, Regensburg, 2007, pp. 59-70.
8. Dey, A. K., Salber, D., Futakawa, M., Abowd, G. D.: An architecture to support context-aware applications. GVU Technical Reports.
9. Hariri, M. A., Tabary, D., Lepreux, S., Kolski, C.: Context aware business adaptation toward user interface adaptation. Communications of SIWN, 3, 2008, pp. 46-52.
10. Sottet, J.-S., Calvary, G., Coutaz, J., Favre, J.-M., Vanderdonckt, J., Stanciulescu, A., Lepreux, S.: A language perspective on the development of plastic multimodal user interfaces. Journal of Multimodal User Interfaces, 1, 2007, pp. 1-12.
11. Lepreux, S., Hariri, M. A., Rouillard, J., Tabary, D., Tarby, J.-C., Kolski, C.: Towards multimodal user interfaces composition based on UsiXML and MBD principles. In: Jacko, J. (ed.), HCI International 2007, 12th International Conference, 2007, pp. 134-143.