Applying the Wizard-of-Oz Framework to Cooperative Service Discovery and Configuration
Anders Green, Helge Hüttenrauch, Kerstin Severinson Eklundh
KTH NADA, Interaction and Presentation Laboratory, Stockholm, SWEDEN

Abstract

This paper describes how the Wizard-of-Oz framework can be applied to a service robotics scenario. A scenario, the Home Tour Scenario, involving a collaborative service discovery and configuration multimodal dialogue for the robot is described. The role of the wizard operators producing dialogue and robot movements is discussed, as well as the specific simulation tools used: the Dialogue Production Tool and the Joystick Navigation Tool. Some attention is paid to the pilot studies performed as a preparation for the unified Home Tour Scenario.

1. Introduction

One of the challenges for human-robot interaction is to study actual use of service robots with cognitive capabilities. Since these robots do not yet exist as products and prototyping is costly, there is a need to simulate robot interfaces to enable the study of ordinary users engaging in human-robot communication. In this paper we discuss Wizard-of-Oz simulation as a research methodology in HRI. We also describe how it is currently used for the study of interaction in a particular scenario called the Home-Tour Scenario, as a part of the ongoing European project COGNIRON. The scenario is aimed at interactive configuration of the system and joint service discovery involving multi-modal human-robot communication. The main goals of the simulation framework described in this paper are to:

- Evaluate the simulated prototype system by providing relevant data on human-robot communication.
- Enable the discovery of characteristics of multimodal communication that are specific to human-robot interaction.
- Provide the means to verify patterns of human face-to-face dialogue that recur in human-robot communication.
2. Wizard-of-Oz as a research method

High-fidelity simulation, or Wizard-of-Oz, is a methodology used for simulation of high-level functions in an interactive system [1, 2]. One of the most common uses of Wizard-of-Oz is as a means of finding out how users react to a system that uses natural language as an interface. The general idea is to simulate those parts of the system that require the most effort in terms of development, or to assess the suitability of the chosen interface metaphor. The methodology has been used since the 1970s in different types of applications. An important feature of Wizard-of-Oz simulation is that it enables the interaction designer to act as the system [2]. This is also the source of particular problems, and usually there is a need to develop special tools to facilitate the tasks of the wizards. The data that we get from a Wizard-of-Oz study are to a large extent qualitative. The users' experience of interacting with the simulated system can be assessed with methods similar to those of regular user interface usability testing, such as questionnaires and interviews. The data generated from interactions may consist of video footage and sound recordings of the users' actions, recordings of screen interaction, and data from the wizard interface such as system actions and actions to promote analysis, e.g., annotations and synchronization information (provided during sessions). The motivations for the use of a Wizard-of-Oz study seem to revolve around three themes: conceptualization, data collection, and testing.

2.1. Tryouts and conceptualizing

Simulations can be set up rapidly, enabling the designer to conceptualize and try out new interface ideas and providing important early design experience during pre-implementation phases of system development [3]. At the same time, users may get an impression of how the system is intended to support their tasks.
By carefully considering the scenario, data from a relevant setting for the task can be obtained by simulating the system in a way that is similar to what may be expected at a later stage [1]. The Wizard-of-Oz methodology can also be used to assess the suitability of different modalities, allowing the designer to make design decisions from the perspective of a potentially large design space and, during this process, discover particular design flaws.
2.2. Training and education

Data collected in the trials may be used as an information source for various approaches aimed at gaining knowledge from empirical data. One common goal of data collection is to obtain material from which technical components such as phonetic models, lexicons, grammars, and dialogue models can be extracted and trained. Maulsby [2] noted that system designers benefit from becoming a wizard, and in that way increase their ability to understand what kind of interaction their system should support. Another typical result from a Wizard-of-Oz data collection is to publish the data as a corpus that can be used for different kinds of purposes. A corpus may have different formats and different purposes, but some common data found in corpora are linguistic data [4], as well as task strategies and error handling strategies that are implicit within the interactions.

2.3. Tests and evaluation

Data collected during the trials can be used to obtain quantitative measures of system performance. One example is automatic evaluation of individual system components, e.g., speech recognizers [4]. Questionnaire responses from the simulation phase of system development can be used as a comparison benchmark for later working prototypes. From a research point of view, the Wizard-of-Oz methodology can be used to test hypotheses of human-machine communication, e.g., inducing social behaviours from users that support the hypothesis that computers can be considered to be social actors [5]. In the following sections, we describe in detail an ongoing study of the Home-Tour Scenario and how we have applied the Wizard-of-Oz method, including the tools developed and lessons learned so far.

3. Simulating the Home Tour Scenario

The Home-Tour Scenario concerns three different phases of interaction. First there is an Introduction phase, aimed at acquainting the user and the robot with each other.
The second is the Demonstration phase, where the user shows the robot around the environment and teaches it places and objects using speech and gesture input. The demonstration phase may be interleaved with the Validation phase, where the aim of the interaction is to test that the robot has learned what it has been told by the user.

Figure 1. The simulated robot with a camera, a signal LED and directional microphones

The simulated robot

The robot targeted in the simulation environment is equipped with an onboard camera and far-field speech recognition microphones (Figure 1). However, these input devices are a part of the simulation, i.e., they are visible to the user but are not actually used within the system. We will also be able to collect audio and video data from the camera and microphones to be used for development of components for the interface at a later stage. The wizard attempts to map users' actions to states of an envisioned Person Attention System (PAS). The information from the attention system is used (in the real system) to control speech recognition depending on cues of user behavior picked up by the robot's sensors. Visual feedback is given to the user by means of a camera showing the direction in which the robot is looking, together with a signal lamp (a light emitting diode) which is lit when the user is positioned appropriately in front of the robot. Thus the camera works as an anthropomorphic gaze, providing an apparent front of the robot. The interaction distance, i.e., the range within which the robot is able to recognize the user, is limited to a certain interval (in meters). The dialogue capability of the robot, based on spoken dialogue and gesture input, is simulated by the wizard. As output, the wizard uses synthesized speech that is produced onboard the robot using a text-to-speech system. The phonetic characteristics of the synthesized speech affect the perceived character of the robot and need to be carefully considered.
We have therefore chosen a speech synthesizer that can be parameterized to provide an appropriate character for the robot while assuring that the speech remains comprehensible.

4. Dialogue handling

In order to simulate a system that works as a unified whole, we have used the dialogue patterns provided by one of the collaboration partners in the COGNIRON project (University of Bielefeld). These dialogue patterns were used as a starting point when creating a dialogue
design with the aim that the system will be perceived as habitable and complete by the user. This means that there should be few points in the interaction where the dialogue breaks down because there is no model to handle the user's behavior. However, it does not mean that the user will be able to utter an arbitrary command to the system and expect the robot to understand it, but rather that there is a relevant help message for those cases when there is no obvious response for the particular error event according to the task model, making it possible to recover from the situation at hand. The manner in which we have reworked the dialogue patterns results in a task-oriented dialogue model together with one or more patterns for error handling. The task-oriented model handles the behavior that the system displays under normal (positive) circumstances. Errors that may be detected by the system (through the wizard) are associated with the error handling dialogue for a particular task. The relation between the different dialogue patterns is such that smooth transitions between them are possible; switching from the Follow dialogue to Show object can, for instance, be done by saying "this is the cup".

Interleaved tasks

The type of dialogues that the implemented system should incorporate, and hence needs to be simulated using the wizard environment, encompasses the following general dialogue tasks, which may be performed in sequence or interleaved during the trial session. Thus the user will be able to:

- Introduce/close interactions. In the beginning and at the end of the interactions the user may introduce herself. The robot will then state that the user has been recognized and remembered. Likewise, the robot acknowledges that the user has finished the interactions, e.g., by saying good bye.
- Demonstrate objects and locations to the robot using speech and deictic gestures together with the follow behavior of the robot.
- Validate the learning process of the robot by querying it for information about the objects and locations and ordering it to go to locations and to find objects in the environment.

5. Scenario descriptions

The underlying assumption for introducing the user to a simulation of a natural language user interface is to provide the freedom to interact in a way that seems natural to the user, without actually implementing the system for real. However, it is important to provide a set of constraints that bring some realism into the situation of use. On the abstract level we have chosen to describe three kinds of information for each task or phenomenon we wish to study in a simulated scenario:

- User instructions provide information that enables the user to perform the intended task. These instructions answer the question: What should the user do?
- Behavior hypotheses capture the designer's beliefs about what the users will do in the scenario. The hypotheses answer the question: What will the user do?
- Robot behavior specifies what the robot does, either as controlled by the wizard operator or as an autonomous system. The behavior description answers the question: What should the robot do?

This information not only constrains the situation of use but also sets the focus on what questions we want to answer in the specific scenario. A document of this kind may contain descriptions for several tasks at different levels of abstraction, ranging from an overall description of the robot's behavior to detailed descriptions of single tasks like specifying a new location or showing an object. Our intention is that these descriptions should work both as an aid for the wizard and as a constraining factor. However, since speed of execution is vital for the success of the simulation scenario, the wizards need to train on the task using the descriptions as a guide to the scenario.
We also see this as an iterative process, and the aim is to develop the descriptions together within the design team. The following simplified example describes the instruction to the user, what we believe the user will do, and what the robot should do. In this example the task for the user is to navigate the robot using the follow behavior of the robot.

User instruction: The robot is able to follow you on your command. Stand so that the robot is able to detect you with its camera, and then say "follow me".

Behavior hypothesis: The user will get in front of the camera to say the command. The user will turn around and walk while monitoring the robot. The user will vary her speed to see what the robot does.

Robot behavior: The robot follows the user at a distance of approximately 1-3 meters. The robot follows the user and stops if the user stops. If the user moves closer to the robot, it turns towards the user. It does not move in any direction before the distance increases and becomes larger than the threshold (1 m).
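The three-part scenario description above could be held in a simple data structure so that the design team can iterate on it together. The following is a hypothetical sketch (the class and field names are our own, not taken from the COGNIRON tools):

```python
from dataclasses import dataclass

@dataclass
class ScenarioTask:
    """One task in a scenario description document."""
    name: str
    user_instruction: str          # What should the user do?
    behavior_hypotheses: list      # What will the user do?
    robot_behavior: list           # What should the robot do?

# The Follow task from the example above, encoded as one record.
follow_task = ScenarioTask(
    name="Follow",
    user_instruction=("The robot is able to follow you on your command. "
                      "Stand so that the robot is able to detect you with "
                      "its camera, and then say 'follow me'."),
    behavior_hypotheses=[
        "The user will get in front of the camera to say the command.",
        "The user will turn around and walk while monitoring the robot.",
        "The user will vary her speed to see what the robot does.",
    ],
    robot_behavior=[
        "Follow the user at a distance of approximately 1-3 meters.",
        "Stop if the user stops.",
        "If the user moves closer than the threshold (1 m), turn towards "
        "the user but do not move until the distance exceeds the threshold.",
    ],
)
```

A document of this kind would then simply be a list of such records at different levels of abstraction, which the wizards can rehearse against before the trials.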
6. Wizard Tools

In a multimodal interaction scenario, using two or more wizards seems to be a typical choice to reduce the amount of work for each of the wizards. However, since the aim is to produce the appearance of a single system, the wizard operators need training to act as one system. They also need to collaborate closely during the trial runs. The task allocation between wizards has been given some consideration when constructing the wizard tools for the study. In our trial setup we have considered the case where two wizard operators control the robot. We have allocated two main interactive tasks: dialogue and navigation. Aside from the interactive tasks, both wizards may perform some simple manual logging and administrative tasks, such as changing log files and synchronizing the audio and video capture. The purpose of the tools is mainly to speed up the process of providing robot responses by the wizard(s) during the experiments. Even if we foresee some possibility of using the behaviors and dialogue models explicitly and implicitly developed within the tools directly as a kind of robot programming, we must stress that this is not the intended use of these tools. Another purpose of the tools is that it should be relatively easy to use them at different stages of development. At this stage, when the final version of the architecture of the real robot system is still pending, we are striving to modularize our system, e.g., by using XML schemas, so that it will be able to work as a module side by side with real components of the system. This will allow for an open-ended approach where real components are tested together with simulated ones, thereby allowing user testing to influence the design also at later stages in the design process.

Joystick Navigator

Using the Joystick Navigator tool, the wizard is able to directly control the robot's movements using a standard gaming joystick. This tool is used for navigation tasks such as follow behaviors.
It may also be used to simulate point-to-point navigation. For this task the most important feedback to the wizard is provided by directly monitoring the robot itself. The robots we are working with (PeopleBot and Nomadic Scout) are equipped with ultrasonic rangers. The data from the sensors positioned at the middle, left and right are collected and visualized as vertical bars (Figure 3). The middle bar is color coded to visualize the position of the user. The middle bar is fairly large so that it may be visible in the peripheral field of view when the wizard operator is focused on the robot and the user on the floor. The left and right bars are grey, allowing the wizard to get information when focusing on the interface while remaining invisible in the periphery otherwise. The gaming joystick (a Logitech Wingman) provides a set of programmable buttons that can be used to implement shortcuts that facilitate quick responses and speed up wizard reactions.

Dialog Production Tool

To provide multimodal dialogue capabilities for the robot, the wizard responsible for the interaction has a tool that provides an interface to a large collection of phrases at a glance (Figure 2). Since the dialogue interface we are simulating is aimed at a task-oriented type of dialogue, we have assumed that phrases may have two functional types: task-related and feedback. This is reflected in the interface, where the left table holds task-related phrases and the right table holds feedback phrases. To handle phrases containing locations or objects we have added columns to hold objects and locations. When a task-oriented phrase containing a type marker (e.g. LOCATION) is selected, a dialogue window containing the set of expanded phrases for the possible locations is displayed. This makes it possible to produce hundreds of phrases in one or two steps.
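The expansion of type markers into concrete phrases can be illustrated with a small sketch. This is a hypothetical reconstruction of the idea, not the actual tool code; the template strings, marker names and value lists are invented for illustration:

```python
# Hypothetical sketch: expanding type markers (e.g. LOCATION) in
# task-related phrase templates into sets of concrete phrases.

TEMPLATES = [
    "I am going to the LOCATION",
    "Is the OBJECT in the LOCATION?",
    "This is the OBJECT",
]

# Example value lists; in the tool these would come from the
# object and location columns of the interface.
VALUES = {
    "LOCATION": ["kitchen", "living room", "hallway"],
    "OBJECT": ["cup", "book"],
}

def expand(template: str) -> list:
    """Replace every type marker in the template with each of its values."""
    phrases = [template]
    for marker, values in VALUES.items():
        if marker in template:
            phrases = [p.replace(marker, v) for p in phrases for v in values]
    return phrases

# Selecting one template yields the whole set of concrete phrases,
# which is how hundreds of phrases can be produced in one or two steps.
print(expand("I am going to the LOCATION"))
# -> ['I am going to the kitchen', 'I am going to the living room',
#     'I am going to the hallway']
```

A template containing two markers expands combinatorially, so a handful of templates and value lists already covers a large phrase collection.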
The dialogue production tool also contains fields for commands to produce simultaneous robot actions, e.g., sending a move command while letting the robot say "moving forward". If the robot has an expressive character, e.g. the CERO character [3], that can be used to provide conversational feedback, such commands may be sent as well.

7. Data collection

One of the goals when conducting Wizard-of-Oz simulations within a research context is to collect data in a way that may serve several purposes. In a multimodal interaction scenario it is possible to collect a large amount of data on various aspects of use. It is therefore important to determine at an early stage what type of data needs to be collected, in relation to the research issues. In our current study we collect video images of the scenario with the user in focus. We also collect data from the Joystick Navigator Tool, such as the movement of the joystick and the positioning data, together with readings from the ultrasonic rangers.
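Because the wizard actions, sensor readings and audio/video recordings must later be analyzed together, every logged event needs a timestamp against a shared clock. The following is a hypothetical sketch of such a log format; the record fields and file name are our own invention, not the project's actual format:

```python
import json
import time

def log_event(logfile, source: str, event: dict) -> None:
    """Append one timestamped record to the session log.

    The shared timestamp makes it possible to synchronize wizard
    actions and sensor data with the video and audio recordings.
    """
    record = {"t": time.time(), "source": source, **event}
    logfile.write(json.dumps(record) + "\n")

# Example session: joystick movement, ultrasonic ranger readings
# and a dialogue phrase, all logged against the same clock.
with open("session_01.log", "w") as f:
    log_event(f, "joystick", {"axis_x": 0.2, "axis_y": -0.8, "button": None})
    log_event(f, "sonar", {"left": 1.4, "middle": 0.9, "right": 2.1})
    log_event(f, "dialogue", {"phrase": "I am following you"})
```

One record per line keeps the log easy to append to during a trial and easy to filter by source during analysis.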
Figure 2. The Wizard Dialog Tool

From the robot we may store video images from the onboard camera. Apart from storing video from the robot, we will use a SICK laser range finder to collect data that may be used to inform the design of, and evaluate, person tracking systems being developed in parallel with the dialogue system.

Pre-studies and early experiences

We have performed pre-studies in order to train the wizards and to explore the potential for Wizard-of-Oz simulation in a service robotics setting. So far we have let about ten users guide the robot using a combination of speech and gestures in a simplified task: to make the robot pass over marks on the floor that correspond to a map held by the user. In this study a single wizard controlled both the dialogue and the movements of the robot. We found that this was at the very limit of what the wizard could handle. After some re-design of the wizard software, and by assuming a fairly restrictive dialogue model, a single wizard could interpret and react upon simultaneous gesture and speech input. Within the Home-Tour scenario we have also performed a study intended to work as a combined training and data collection trial for the Follow behavior. Here we let approximately 35 users guide the robot between two points placed 15 meters apart. The users were simply told that they could ask the robot to follow them. The Dialog Wizard then responded using the Dialog Production Tool (described above), and the Navigation Wizard let the robot follow the user over the floor using the joystick.

Figure 3. The Joystick Navigator Tool

Rather surprisingly, we found that the wizards, after some runs, made the robot display a kind of regular, or patterned, behavior. For instance, some users seemed to take a step back and monitor the robot as it moved towards the goal.
At some points the wizard(s) failed to notice this, and the wizard involuntarily made the robot display what would seem to be a plan-based robot behavior rather than the reactive behavior we assume for the follow behavior of the Home Tour scenario. In the coming months we will pre-test other parts of the Home Tour scenario, focusing on developing the different dialogue parts of the scenario to work with simulated follow behavior in a unified and complete scenario.

8. The validity of Wizard-of-Oz

Sometimes the Wizard-of-Oz framework is criticized for being a method that does not provide the data necessary for evaluating practical dialogue; e.g., Wizard-of-Oz methods typically fail to involve users that bring real tasks to the system. This should be seen in contrast to what is generally believed, namely that the Wizard-of-Oz method is an open-ended method for collection of data on user behavior. Fitts [6] proposed a list describing what people do better than machines and what machines do better than people. In a simulation scenario, the problem of function allocation is twofold. First we need to consider the characteristics of the system we are simulating, and secondly we need to think about the function allocation of the system we are using for the simulation. Following Sheridan [7], we may see Fitts' List as a set of accepted statements making up the foundation for how to reason when designing function allocation. According to Fitts, people are better than machines at detecting small changes in the environment, perceiving patterns, improvising, memorizing, making judgments and reasoning inductively. Machines are better at responding quickly to signals, applying great force, storing and erasing information, and reasoning deductively [7]. When we design a system with the aim of simulating a real system, we may consider Fitts' List when reasoning about function allocation between the wizard and the real parts of the system. It is fruitful to think in terms of "what-ifs": if the system were to perform this task, what would it look like in terms of machine behavior, and is it possible for the operator to simulate this behavior in a realistic manner? In the Human-Robot Interaction domain there are several compelling reasons for performing simulations. First of all there is the matter of the task that the robot is performing. In many of the envisioned scenarios, human-robot communication is focused on things that people in general do not pay much attention to: negotiation of simple tasks [3], detecting people, and describing routes [8]. Service robots are not supposed to replace someone; they are instead, in a specific and limited way, forming new social relations with people and taking part in the co-creation of new specialized human-robot sublanguages. Even in scenarios where robots could be useful for actually replacing people, in dirty, dangerous or distant situations, the conversation between field workers, like divers or rescue workers, would probably be of little use to designers of human-robot communication, given the technical constraints of the robot's interaction sensors.
Secondly, there are arguments that conversational systems behaving exactly like humans should not be the final aim when designing speech interfaces. People treat computers differently than people [1]. This is most likely to extend to robots as well. We therefore assume that users regard the robot as a kind of collaborative tool with which they may choose to interact in a way that is similar, but not equal, to interaction with other human beings. Hence there is a need to collect Wizard-of-Oz corpora rather than using a human-human corpus of interaction in a similar domain (if such can be found).

9. Conclusion

We have described a method for performing hi-fi simulation for interactive service robots in the Wizard-of-Oz framework. Even if there is a certain amount of work involved in setting up a Wizard-of-Oz study like the one described in this paper, we are of the opinion that seeing the system working before it is actually built provides useful insights about the future system. It thus provides a transparent way of visualizing an interactive scenario both to users and designers, allowing for evaluation of simulated prototypes. Another important aspect of simulation is that it provides the opportunity to collect data that is the closest we get to a real system; in this sense simulation provides a peephole into future scenarios of human-robot communication. Thus these kinds of scenarios may be studied as instances of human-robot communication per se, rather than as a more clear-cut systems evaluation.

Acknowledgements

The work described in this paper was partially conducted within the EU Integrated Project COGNIRON ("The Cognitive Companion") and funded by the European Commission Division FP6-IST Future and Emerging Technologies under Contract FP.

References

[1] N. Dahlbäck, A. Jönsson, and L. Ahrenberg, "Wizard of Oz studies - why and how," Knowledge-Based Systems, vol. 6.
[2] D. Maulsby, S. Greenberg, and R.
Mander, "Prototyping an Intelligent Agent through Wizard of Oz," in INTERCHI'93, 1993.
[3] A. Green and K. S. Eklundh, "Task-oriented Dialogue for CERO: a User-centered Approach," in Proceedings of the 10th IEEE International Workshop on Robot and Human Interactive Communication, Bordeaux/Paris.
[4] G. Antoniol, R. Cattoni, M. Cettolo, and M. Federico, "Robust Speech Understanding for Robot Telecontrol," in Proceedings of the 6th International Conference on Advanced Robotics, Tokyo, Japan, 1993.
[5] C. Nass, J. Steuer, and E. R. Tauber, "Computers are Social Actors," in Proceedings of CHI'94, Boston, MA, USA.
[6] P. M. Fitts, "Human engineering for an effective air navigation and traffic control system."
[7] T. B. Sheridan, "Function allocation: algorithm, alchemy or apostasy?," International Journal of Human-Computer Studies, vol. 52.
[8] J. Fry, H. Asoh, and T. Matsui, "Natural Dialogue with the JIJO-2 Office Robot," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 2, 1998.
FP7 ICT Call 6: Cognitive Systems and Robotics Information day Luxembourg, January 14, 2010 Libor Král, Head of Unit Unit E5 - Cognitive Systems, Interaction, Robotics DG Information Society and Media
More informationContext-sensitive speech recognition for human-robot interaction
Context-sensitive speech recognition for human-robot interaction Pierre Lison Cognitive Systems @ Language Technology Lab German Research Centre for Artificial Intelligence (DFKI GmbH) Saarbrücken, Germany.
More informationHuman Robot Dialogue Interaction. Barry Lumpkin
Human Robot Dialogue Interaction Barry Lumpkin Robots Where to Look: A Study of Human- Robot Engagement Why embodiment? Pure vocal and virtual agents can hold a dialogue Physical robots come with many
More informationrobot BIRON, the Bielefeld Robot Companion.
BIRON The Bielefeld Robot Companion A. Haasch, S. Hohenner, S. Hüwel, M. Kleinehagenbrock, S. Lang, I. Toptsis, G. A. Fink, J. Fritsch, B. Wrede, and G. Sagerer Bielefeld University, Faculty of Technology,
More informationHaptic Camera Manipulation: Extending the Camera In Hand Metaphor
Haptic Camera Manipulation: Extending the Camera In Hand Metaphor Joan De Boeck, Karin Coninx Expertise Center for Digital Media Limburgs Universitair Centrum Wetenschapspark 2, B-3590 Diepenbeek, Belgium
More information2. Publishable summary
2. Publishable summary CogLaboration (Successful real World Human-Robot Collaboration: from the cognition of human-human collaboration to fluent human-robot collaboration) is a specific targeted research
More informationKnowledge Representation and Cognition in Natural Language Processing
Knowledge Representation and Cognition in Natural Language Processing Gemignani Guglielmo Sapienza University of Rome January 17 th 2013 The European Projects Surveyed the FP6 and FP7 projects involving
More informationCHAPTER 8 RESEARCH METHODOLOGY AND DESIGN
CHAPTER 8 RESEARCH METHODOLOGY AND DESIGN 8.1 Introduction This chapter gives a brief overview of the field of research methodology. It contains a review of a variety of research perspectives and approaches
More informationRobotic Applications Industrial/logistics/medical robots
Artificial Intelligence & Human-Robot Interaction Luca Iocchi Dept. of Computer Control and Management Eng. Sapienza University of Rome, Italy Robotic Applications Industrial/logistics/medical robots Known
More informationSECOND YEAR PROJECT SUMMARY
SECOND YEAR PROJECT SUMMARY Grant Agreement number: 215805 Project acronym: Project title: CHRIS Cooperative Human Robot Interaction Systems Period covered: from 01 March 2009 to 28 Feb 2010 Contact Details
More informationTraffic Control for a Swarm of Robots: Avoiding Group Conflicts
Traffic Control for a Swarm of Robots: Avoiding Group Conflicts Leandro Soriano Marcolino and Luiz Chaimowicz Abstract A very common problem in the navigation of robotic swarms is when groups of robots
More informationIntelligent driving TH« TNO I Innovation for live
Intelligent driving TNO I Innovation for live TH«Intelligent Transport Systems have become an integral part of the world. In addition to the current ITS systems, intelligent vehicles can make a significant
More informationEvaluation of Passing Distance for Social Robots
Evaluation of Passing Distance for Social Robots Elena Pacchierotti, Henrik I. Christensen and Patric Jensfelt Centre for Autonomous Systems Royal Institute of Technology SE-100 44 Stockholm, Sweden {elenapa,hic,patric}@nada.kth.se
More informationACTIVE, A PLATFORM FOR BUILDING INTELLIGENT OPERATING ROOMS
ACTIVE, A PLATFORM FOR BUILDING INTELLIGENT OPERATING ROOMS D. GUZZONI 1, C. BAUR 1, A. CHEYER 2 1 VRAI Group EPFL 1015 Lausanne Switzerland 2 AIC SRI International Menlo Park, CA USA Today computers are
More informationProf. Subramanian Ramamoorthy. The University of Edinburgh, Reader at the School of Informatics
Prof. Subramanian Ramamoorthy The University of Edinburgh, Reader at the School of Informatics with Baxter there is a good simulator, a physical robot and easy to access public libraries means it s relatively
More informationCognitive Systems and Robotics: opportunities in FP7
Cognitive Systems and Robotics: opportunities in FP7 Austrian Robotics Summit July 3, 2009 Libor Král, Head of Unit Unit E5 - Cognitive Systems, Interaction, Robotics DG Information Society and Media European
More informationDipartimento di Elettronica Informazione e Bioingegneria Robotics
Dipartimento di Elettronica Informazione e Bioingegneria Robotics Behavioral robotics @ 2014 Behaviorism behave is what organisms do Behaviorism is built on this assumption, and its goal is to promote
More informationYears 9 and 10 standard elaborations Australian Curriculum: Digital Technologies
Purpose The standard elaborations (SEs) provide additional clarity when using the Australian Curriculum achievement standard to make judgments on a five-point scale. They can be used as a tool for: making
More informationHuman-Robot Interaction in Service Robotics
Human-Robot Interaction in Service Robotics H. I. Christensen Λ,H.Hüttenrauch y, and K. Severinson-Eklundh y Λ Centre for Autonomous Systems y Interaction and Presentation Lab. Numerical Analysis and Computer
More informationBenchmarking Intelligent Service Robots through Scientific Competitions. Luca Iocchi. Sapienza University of Rome, Italy
RoboCup@Home Benchmarking Intelligent Service Robots through Scientific Competitions Luca Iocchi Sapienza University of Rome, Italy Motivation Development of Domestic Service Robots Complex Integrated
More informationMethodology for Agent-Oriented Software
ب.ظ 03:55 1 of 7 2006/10/27 Next: About this document... Methodology for Agent-Oriented Software Design Principal Investigator dr. Frank S. de Boer (frankb@cs.uu.nl) Summary The main research goal of this
More informationLearning and Interacting in Human Robot Domains
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS PART A: SYSTEMS AND HUMANS, VOL. 31, NO. 5, SEPTEMBER 2001 419 Learning and Interacting in Human Robot Domains Monica N. Nicolescu and Maja J. Matarić
More informationHUMAN-COMPUTER INTERACTION: OVERVIEW ON STATE OF THE ART TECHNOLOGY
HUMAN-COMPUTER INTERACTION: OVERVIEW ON STATE OF THE ART TECHNOLOGY *Ms. S. VAISHNAVI, Assistant Professor, Sri Krishna Arts And Science College, Coimbatore. TN INDIA **SWETHASRI. L., Final Year B.Com
More informationBirth of An Intelligent Humanoid Robot in Singapore
Birth of An Intelligent Humanoid Robot in Singapore Ming Xie Nanyang Technological University Singapore 639798 Email: mmxie@ntu.edu.sg Abstract. Since 1996, we have embarked into the journey of developing
More informationSemi-Autonomous Parking for Enhanced Safety and Efficiency
Technical Report 105 Semi-Autonomous Parking for Enhanced Safety and Efficiency Sriram Vishwanath WNCG June 2017 Data-Supported Transportation Operations & Planning Center (D-STOP) A Tier 1 USDOT University
More informationHMM-based Error Recovery of Dance Step Selection for Dance Partner Robot
27 IEEE International Conference on Robotics and Automation Roma, Italy, 1-14 April 27 ThA4.3 HMM-based Error Recovery of Dance Step Selection for Dance Partner Robot Takahiro Takeda, Yasuhisa Hirata,
More informationAn Example Cognitive Architecture: EPIC
An Example Cognitive Architecture: EPIC David E. Kieras Collaborator on EPIC: David E. Meyer University of Michigan EPIC Development Sponsored by the Cognitive Science Program Office of Naval Research
More informationArgumentative Interactions in Online Asynchronous Communication
Argumentative Interactions in Online Asynchronous Communication Evelina De Nardis, University of Roma Tre, Doctoral School in Pedagogy and Social Service, Department of Educational Science evedenardis@yahoo.it
More informationLEGO MINDSTORMS CHEERLEADING ROBOTS
LEGO MINDSTORMS CHEERLEADING ROBOTS Naohiro Matsunami\ Kumiko Tanaka-Ishii 2, Ian Frank 3, and Hitoshi Matsubara3 1 Chiba University, Japan 2 Tokyo University, Japan 3 Future University-Hakodate, Japan
More informationEnd-to-End Infrastructure for Usability Evaluation of ehealth Applications and Services
End-to-End Infrastructure for Usability Evaluation of ehealth Applications and Services Martin Gerdes, Berglind Smaradottir, Rune Fensli Department of Information and Communication Systems, University
More informationDomain Understanding and Requirements Elicitation
and Requirements Elicitation CS/SE 3RA3 Ryszard Janicki Department of Computing and Software, McMaster University, Hamilton, Ontario, Canada Ryszard Janicki 1/24 Previous Lecture: The requirement engineering
More informationTowards Intuitive Industrial Human-Robot Collaboration
Towards Intuitive Industrial Human-Robot Collaboration System Design and Future Directions Ferdinand Fuhrmann, Wolfgang Weiß, Lucas Paletta, Bernhard Reiterer, Andreas Schlotzhauer, Mathias Brandstötter
More informationOutline. Agents and environments Rationality PEAS (Performance measure, Environment, Actuators, Sensors) Environment types Agent types
Intelligent Agents Outline Agents and environments Rationality PEAS (Performance measure, Environment, Actuators, Sensors) Environment types Agent types Agents An agent is anything that can be viewed as
More informationMulti-Robot Cooperative System For Object Detection
Multi-Robot Cooperative System For Object Detection Duaa Abdel-Fattah Mehiar AL-Khawarizmi international collage Duaa.mehiar@kawarizmi.com Abstract- The present study proposes a multi-agent system based
More informationDetecticon: A Prototype Inquiry Dialog System
Detecticon: A Prototype Inquiry Dialog System Takuya Hiraoka and Shota Motoura and Kunihiko Sadamasa Abstract A prototype inquiry dialog system, dubbed Detecticon, demonstrates its ability to handle inquiry
More informationA CYBER PHYSICAL SYSTEMS APPROACH FOR ROBOTIC SYSTEMS DESIGN
Proceedings of the Annual Symposium of the Institute of Solid Mechanics and Session of the Commission of Acoustics, SISOM 2015 Bucharest 21-22 May A CYBER PHYSICAL SYSTEMS APPROACH FOR ROBOTIC SYSTEMS
More informationSafe Speech by Knowledge
Safe Speech by Knowledge Fredrik Kronlid 2013-09-24 Vehicle safety Table of Contents 1 Executive Summary 3 2 Background 3 3 Objective 4 4 Project Realisation 5 4.1 Analysis 5 4.2 User Study 6 4.3 Implementation
More informationEvaluating the Augmented Reality Human-Robot Collaboration System
Evaluating the Augmented Reality Human-Robot Collaboration System Scott A. Green *, J. Geoffrey Chase, XiaoQi Chen Department of Mechanical Engineering University of Canterbury, Christchurch, New Zealand
More informationAN AUTONOMOUS SIMULATION BASED SYSTEM FOR ROBOTIC SERVICES IN PARTIALLY KNOWN ENVIRONMENTS
AN AUTONOMOUS SIMULATION BASED SYSTEM FOR ROBOTIC SERVICES IN PARTIALLY KNOWN ENVIRONMENTS Eva Cipi, PhD in Computer Engineering University of Vlora, Albania Abstract This paper is focused on presenting
More informationAn Evaluation Framework. Based on the slides available at book.com
An Evaluation Framework The aims Explain key evaluation concepts & terms Describe the evaluation paradigms & techniques used in interaction design Discuss the conceptual, practical and ethical issues that
More informationRobot to Human Approaches: Preliminary Results on Comfortable Distances and Preferences
Robot to Human Approaches: Preliminary Results on Comfortable Distances and Preferences Michael L. Walters, Kheng Lee Koay, Sarah N. Woods, Dag S. Syrdal, K. Dautenhahn Adaptive Systems Research Group,
More informationEdgewood College General Education Curriculum Goals
(Approved by Faculty Association February 5, 008; Amended by Faculty Association on April 7, Sept. 1, Oct. 6, 009) COR In the Dominican tradition, relationship is at the heart of study, reflection, and
More informationDevelopment of a general purpose robot arm for use by disabled and elderly at home
Development of a general purpose robot arm for use by disabled and elderly at home Gunnar Bolmsjö Magnus Olsson Ulf Lorentzon {gbolmsjo,molsson,ulorentzon}@robotics.lu.se Div. of Robotics, Lund University,
More informationFlexible Cooperation between Human and Robot by interpreting Human Intention from Gaze Information
Proceedings of 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems September 28 - October 2, 2004, Sendai, Japan Flexible Cooperation between Human and Robot by interpreting Human
More informationRobotic Systems ECE 401RB Fall 2007
The following notes are from: Robotic Systems ECE 401RB Fall 2007 Lecture 14: Cooperation among Multiple Robots Part 2 Chapter 12, George A. Bekey, Autonomous Robots: From Biological Inspiration to Implementation
More informationDesign Science Research Methods. Prof. Dr. Roel Wieringa University of Twente, The Netherlands
Design Science Research Methods Prof. Dr. Roel Wieringa University of Twente, The Netherlands www.cs.utwente.nl/~roelw UFPE 26 sept 2016 R.J. Wieringa 1 Research methodology accross the disciplines Do
More informationWhat will the robot do during the final demonstration?
SPENCER Questions & Answers What is project SPENCER about? SPENCER is a European Union-funded research project that advances technologies for intelligent robots that operate in human environments. Such
More informationAffordance based Human Motion Synthesizing System
Affordance based Human Motion Synthesizing System H. Ishii, N. Ichiguchi, D. Komaki, H. Shimoda and H. Yoshikawa Graduate School of Energy Science Kyoto University Uji-shi, Kyoto, 611-0011, Japan Abstract
More informationInteraction Design in Digital Libraries : Some critical issues
Interaction Design in Digital Libraries : Some critical issues Constantine Stephanidis Foundation for Research and Technology-Hellas (FORTH) Institute of Computer Science (ICS) Science and Technology Park
More informationSTUDY ON REFERENCE MODELS FOR HMI IN VOICE TELEMATICS TO MEET DRIVER S MIND DISTRACTION
STUDY ON REFERENCE MODELS FOR HMI IN VOICE TELEMATICS TO MEET DRIVER S MIND DISTRACTION Makoto Shioya, Senior Researcher Systems Development Laboratory, Hitachi, Ltd. 1099 Ohzenji, Asao-ku, Kawasaki-shi,
More informationA Conceptual Modeling Method to Use Agents in Systems Analysis
A Conceptual Modeling Method to Use Agents in Systems Analysis Kafui Monu 1 1 University of British Columbia, Sauder School of Business, 2053 Main Mall, Vancouver BC, Canada {Kafui Monu kafui.monu@sauder.ubc.ca}
More informationDoes the Appearance of a Robot Affect Users Ways of Giving Commands and Feedback?
19th IEEE International Symposium on Robot and Human Interactive Communication Principe di Piemonte - Viareggio, Italy, Sept. 12-15, 2010 Does the Appearance of a Robot Affect Users Ways of Giving Commands
More informationDesigning for End-User Programming through Voice: Developing Study Methodology
Designing for End-User Programming through Voice: Developing Study Methodology Kate Howland Department of Informatics University of Sussex Brighton, BN1 9QJ, UK James Jackson Department of Informatics
More informationData-Driven HRI : Reproducing interactive social behaviors with a conversational robot
Title Author(s) Data-Driven HRI : Reproducing interactive social behaviors with a conversational robot Liu, Chun Chia Citation Issue Date Text Version ETD URL https://doi.org/10.18910/61827 DOI 10.18910/61827
More informationSocio-cognitive Engineering
Socio-cognitive Engineering Mike Sharples Educational Technology Research Group University of Birmingham m.sharples@bham.ac.uk ABSTRACT Socio-cognitive engineering is a framework for the human-centred
More informationNCCT IEEE PROJECTS ADVANCED ROBOTICS SOLUTIONS. Latest Projects, in various Domains. Promise for the Best Projects
NCCT Promise for the Best Projects IEEE PROJECTS in various Domains Latest Projects, 2009-2010 ADVANCED ROBOTICS SOLUTIONS EMBEDDED SYSTEM PROJECTS Microcontrollers VLSI DSP Matlab Robotics ADVANCED ROBOTICS
More informationE90 Project Proposal. 6 December 2006 Paul Azunre Thomas Murray David Wright
E90 Project Proposal 6 December 2006 Paul Azunre Thomas Murray David Wright Table of Contents Abstract 3 Introduction..4 Technical Discussion...4 Tracking Input..4 Haptic Feedack.6 Project Implementation....7
More informationINTERACTION AND SOCIAL ISSUES IN A HUMAN-CENTERED REACTIVE ENVIRONMENT
INTERACTION AND SOCIAL ISSUES IN A HUMAN-CENTERED REACTIVE ENVIRONMENT TAYSHENG JENG, CHIA-HSUN LEE, CHI CHEN, YU-PIN MA Department of Architecture, National Cheng Kung University No. 1, University Road,
More informationIntroduction to Foresight
Introduction to Foresight Prepared for the project INNOVATIVE FORESIGHT PLANNING FOR BUSINESS DEVELOPMENT INTERREG IVb North Sea Programme By NIBR - Norwegian Institute for Urban and Regional Research
More informationSensors & Systems for Human Safety Assurance in Collaborative Exploration
Sensing and Sensors CMU SCS RI 16-722 S09 Ned Fox nfox@andrew.cmu.edu Outline What is collaborative exploration? Humans sensing robots Robots sensing humans Overseers sensing both Inherently safe systems
More informationMulti-sensory Tracking of Elders in Outdoor Environments on Ambient Assisted Living
Multi-sensory Tracking of Elders in Outdoor Environments on Ambient Assisted Living Javier Jiménez Alemán Fluminense Federal University, Niterói, Brazil jjimenezaleman@ic.uff.br Abstract. Ambient Assisted
More informationAR-Enhanced Human-Robot-Interaction Methodologies Algorithms
AR-Enhanced Human-Robot-Interaction Methodologies Algorithms Nils Andersson, Angelos Argyrou, Frank Nägele, Fernando Ubis, Urko Esnaola Campos, Maite Ortiz de Zarate, Robert Wilterdink Presenter: Nils
More informationInteracting within Virtual Worlds (based on talks by Greg Welch and Mark Mine)
Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine) Presentation Working in a virtual world Interaction principles Interaction examples Why VR in the First Place? Direct perception
More information* Intelli Robotic Wheel Chair for Specialty Operations & Physically Challenged
ADVANCED ROBOTICS SOLUTIONS * Intelli Mobile Robot for Multi Specialty Operations * Advanced Robotic Pick and Place Arm and Hand System * Automatic Color Sensing Robot using PC * AI Based Image Capturing
More informationTA2 Newsletter April 2010
Content TA2 - making communications and engagement easier among groups of people separated in space and time... 1 The TA2 objectives... 2 Pathfinders to demonstrate and assess TA2... 3 World premiere:
More informationMulti-modal System Architecture for Serious Gaming
Multi-modal System Architecture for Serious Gaming Otilia Kocsis, Todor Ganchev, Iosif Mporas, George Papadopoulos, Nikos Fakotakis Artificial Intelligence Group, Wire Communications Laboratory, Dept.
More informationQUTIE TOWARD A MULTI-FUNCTIONAL ROBOTIC PLATFORM
QUTIE TOWARD A MULTI-FUNCTIONAL ROBOTIC PLATFORM Matti Tikanmäki, Antti Tikanmäki, Juha Röning. University of Oulu, Computer Engineering Laboratory, Intelligent Systems Group ABSTRACT In this paper we
More informationAir Marshalling with the Kinect
Air Marshalling with the Kinect Stephen Witherden, Senior Software Developer Beca Applied Technologies stephen.witherden@beca.com Abstract. The Kinect sensor from Microsoft presents a uniquely affordable
More informationUsability Evaluation of Multi- Touch-Displays for TMA Controller Working Positions
Sesar Innovation Days 2014 Usability Evaluation of Multi- Touch-Displays for TMA Controller Working Positions DLR German Aerospace Center, DFS German Air Navigation Services Maria Uebbing-Rumke, DLR Hejar
More informationDESIGN AGENTS IN VIRTUAL WORLDS. A User-centred Virtual Architecture Agent. 1. Introduction
DESIGN GENTS IN VIRTUL WORLDS User-centred Virtual rchitecture gent MRY LOU MHER, NING GU Key Centre of Design Computing and Cognition Department of rchitectural and Design Science University of Sydney,
More informationPOLICY SIMULATION AND E-GOVERNANCE
POLICY SIMULATION AND E-GOVERNANCE Peter SONNTAGBAUER cellent AG Lassallestraße 7b, A-1020 Vienna, Austria Artis AIZSTRAUTS, Egils GINTERS, Dace AIZSTRAUTA Vidzeme University of Applied Sciences Cesu street
More informationAutonomous Robotic (Cyber) Weapons?
Autonomous Robotic (Cyber) Weapons? Giovanni Sartor EUI - European University Institute of Florence CIRSFID - Faculty of law, University of Bologna Rome, November 24, 2013 G. Sartor (EUI-CIRSFID) Autonomous
More informationAutonomic gaze control of avatars using voice information in virtual space voice chat system
Autonomic gaze control of avatars using voice information in virtual space voice chat system Kinya Fujita, Toshimitsu Miyajima and Takashi Shimoji Tokyo University of Agriculture and Technology 2-24-16
More informationREBO: A LIFE-LIKE UNIVERSAL REMOTE CONTROL
World Automation Congress 2010 TSI Press. REBO: A LIFE-LIKE UNIVERSAL REMOTE CONTROL SEIJI YAMADA *1 AND KAZUKI KOBAYASHI *2 *1 National Institute of Informatics / The Graduate University for Advanced
More information