Investigating Spatial Relationships in Human-Robot Interaction

Helge Hüttenrauch, Kerstin Severinson Eklundh, Anders Green, Elin A. Topp
School of Computer Science and Communication (CSC)
Royal Institute of Technology (KTH), Stockholm, Sweden
{hehu, kse, green, topp}@csc.kth.se

In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2006), Oct. 9-15, 2006, Beijing, China.

Abstract - Co-presence and embodied interaction are two fundamental characteristics of the command and control situation for service robots. This paper presents a study of the spatial distances and orientations of a robot with respect to a human user in an experimental setting. Relevant concepts of spatiality from social interaction studies are introduced and related to Human-Robot Interaction (HRI). A Wizard-of-Oz study quantifies the observed spatial distances and spatial formations. However, it is argued that a simplistic parameterization and measurement of spatial interaction misses its dynamic character and might be counterproductive in the design of socially appropriate robots.

Index Terms - spatiality in human-robot interaction.

I. INTRODUCTION

Humans engaged in physical activities deal with spatial relationships. The physical mass and degrees of freedom of body, head, and limbs must be orchestrated for movement or manipulation based upon sensory perception and cognitive abilities. The necessary understanding of spatiality is claimed to have evolutionary origins that shaped not only perception but also the spatial metaphors of everyday language [17].

A service robot that operates in the co-presence of a human user may become engaged in activities that are determined by the human's and the robot's co-presence, mobility, multimodal communication, and embodied interaction [6]. Trained by daily experience, humans are in general skilled in dealing with other people, in managing space, and in handling objects. Signaling, whether through nonverbal or verbal expressions, is well understood, building upon the ability to notice other people's body movements, gaze exchanges, gestures, or facial expressions [4]. Furthermore, signaling in and through the environment is possible and is anchored within sociocultural context and practice [2]; e.g., a closed door can signify "please do not disturb me right now" if this convention is well established and adhered to.

Interactive mobile robots are machines that test many human assumptions about interactive artefacts by pushing the borderline of our understanding of, differentiation between, and reactions towards what is alive or inanimate [16], [20]. The robots' self-locomotion and the attribution of body movement as an expression of their own intentions are contributing factors. As humans and robots interact, this attribution of character to robots might influence humans' spatial behaviour in the presence of such devices.

Fig. 1. User teaching the robot objects

This paper investigates spatial management in a human-robot interaction scenario as illustrated in figure 1: a user guides her new robot around with a follow-me behaviour and shows it the operation area. In this way the user teaches the robot places and objects that will allow it to perform service missions afterwards. At the intended locations, the user and the robot need to position themselves so that objects or locations can be shown and named to the robot through speech dialogue. Our research questions related to this scenario are: How do the spatial distances and orientations of a user in relation to a robot vary throughout a cooperative task performed in a home-like environment? Can patterns of spatial HRI behaviour be identified to guide the design of robots' spatial conduct?

To investigate these questions we performed a study with 22 subjects in which we observed and recorded movement and positioning during the interaction, in order to understand how robot motion and interaction behaviours can be designed to be perceived as socially appropriate. We are interested in this spatial management behaviour because it requires active monitoring of, and dynamic reaction to, each other's movements and position changes. We also want to determine how the robot should select appropriate movement behaviours when interacting with a human user in a spatial context. Understanding posture and positioning changes in HRI is a prerequisite to reading one another's signalling through joint spatial management; we assume it is used in parallel with other communication modalities such as spoken utterances. To find the relevant features of such spatial interaction between a robot and a user, we let a robot interact with users and analysed the interaction for variations in distance and spatial orientation.

The remainder of the paper is organized as follows: the next section gives the background on relevant concepts from social interaction studies and related research in robotics. Section III presents the user study we conducted. Finally, section IV discusses the findings of the study.

II. BACKGROUND

Many disciplines contribute to our understanding of spatial (inter-)action in the co-presence of people and (interactive) artefacts. Below, relevant concepts such as Hall's proxemics and Kendon's F-formation system are introduced and discussed for their possible significance in HRI.

A. Hall's Proxemics

Hall studied interpersonal distances and coined the term proxemics [10], i.e., "the interrelated observations and theories of man's use of space as a specialized elaboration of culture" [ibid, p. 1]. In the human-robot interaction context of posture and positioning, three findings are of main importance: the classification of interpersonal distances into four classes, the realization of cultural differences in spatial behaviour, and, last but not least, man's perception of space. From his observations in the USA, Hall concluded that social interaction is based upon and governed by four interpersonal distances: intimate (0-0.46 meters), personal (0.46-1.22 meters), social (1.22-3.66 meters), and public (>3.66 meters).

The combination of measurable spatial parameters, human ergonomic and kinetic capabilities, different social roles and interactions, as well as typical characteristics of interaction situations, makes Hall's interpersonal distances interesting for HRI. It might be hypothesized that most co-present HRI exchanges and reciprocal adaptations between a human and a robot will happen in the social and the personal distances. The public distance is of interest as it seems an appropriate distance at which to signal that an exchange can happen or is about to happen. The social and the personal distances seem appropriate in theory to facilitate both communication and the exchange of goods (for example, manipulation with a robotic arm). The intimate distance seems better suited for exchanges with, e.g., so-called mental commit robots like the seal robot Paro [18], where touch is an intended interaction modality.
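To make the zone boundaries concrete, the following minimal sketch (ours, not from the paper) classifies a measured human-robot distance into Hall's four zones; the thresholds are Hall's values in meters as cited above:

    def hall_zone(distance_m: float) -> str:
        """Map a human-robot distance in meters to one of Hall's four zones."""
        if distance_m < 0.46:
            return "intimate"
        if distance_m < 1.22:
            return "personal"
        if distance_m <= 3.66:
            return "social"
        return "public"

    # Example: a user commanding the robot from 0.9 m is in the personal zone.
    assert hall_zone(0.9) == "personal"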
B. Kendon's F-formation system

Kendon's F-formation system [12] is based upon the observation that people often group themselves in spatial formations, e.g., in clusters, lines, circles, or other patterns. The term formation expresses the dynamic aspect of such a spatial arrangement, i.e., the need to actively sustain it during interaction. This can be observed as small, well-synchronized movements of the participating interactors.

An F-formation arises when two or more people form a shared space between them to which they have equal and direct access due to their sustained spatial and orientational configuration. The behavioural organization and movement patterns used to sustain this F-formation are called an F-formation system. The F-formation system can be applied directly to an interactive encounter between a robot and a human: between the two, a so-called transactional segment or o-space is established (marked with ellipses in figure 2), i.e., a space into which both participants are able to look and speak, and in which they can handle objects of shared interest.

Fig. 2. Kendon's F-formation arrangements

Kendon showed that joint activities and spatial interactions are supported by certain F-formation arrangements, which are thus often encountered in prototypical situations. In the vis-à-vis arrangement (figure 2, left) two participants normally face one another directly; an L-shape arrangement (figure 2, middle) usually indicates a joint system in which something is shared in the o-space. The side-by-side configuration (figure 2, right) allows two participants to stand closely together and face the same direction. This arrangement often occurs in situations where both interactors face an outer edge given externally by the environment, e.g., in the form of a table or a wall. For HRI it is important to note that all F-formation arrangements support a triadic relationship between the two interactors and one or more objects of shared interest, e.g., objects that a robot should learn.
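As an illustration of how the three prototypical arrangements could be tested for in sensor data, here is a rough heuristic sketch (our assumption, not the coding scheme used in this study): it classifies an instantaneous pair of body headings by the angle between them, ignoring the sustained o-space that a true F-formation additionally requires:

    def f_formation(user_heading_deg: float, robot_heading_deg: float,
                    tolerance_deg: float = 30.0) -> str:
        """Classify two body headings into a prototypical F-formation:
        ~180 deg apart -> vis-a-vis, ~90 deg -> L-shape, ~0 deg -> side-by-side."""
        # Smallest absolute angle between the two headings, in [0, 180].
        diff = abs((user_heading_deg - robot_heading_deg + 180.0) % 360.0 - 180.0)
        if abs(diff - 180.0) <= tolerance_deg:
            return "vis-a-vis"
        if abs(diff - 90.0) <= tolerance_deg:
            return "L-shape"
        if diff <= tolerance_deg:
            return "side-by-side"
        return "none"

A robust classifier would additionally check that the two transactional segments actually overlap and that the configuration is sustained over time, as Kendon's definition demands.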

C. Spatiality in HRI

Several systems have been designed or studied to enable robots to actively manage spatiality in interaction with humans. Yoda and Shiota [22] take the need for safety when passing a human in a hallway as motivation to develop control strategies for the robot. Three types of encounters were anticipated as test cases for their control algorithm, including a standing, a walking, and a running person. Nakauchi and Simmons [13] present another approach by first collecting empirical data on how people stand in line. They use these data to model a set of behaviours for a robot that needs to get into a queue, wait, and advance in the queue to be serviced along with other people. Butler and Agah [3] varied a robot's movement behaviours and performed a user study to evaluate how different robot speeds and distances were perceived by users; however, no interactive task was performed by the robot or user during this experiment. A study reported by Althaus et al. [1] used a complex, room-based sensor array to track the fine movements and spatial adaptations of a group of people and a robot during the robot's initial appearance, its joining of the group, and finally its departure. The authors concluded that the spatial adaptation observed for the humans could be matched by the robot reacting (in turn) with a dynamic adaptation of its positioning. Prassler et al. [15] introduced a robot wheelchair control system that allowed the system to stay close to an accompanying person in a crowded subway station, i.e., robot movement (with a person) in a highly dynamic context could be demonstrated. Other people (besides the accompanying person) in this public space were treated as dynamic obstacles to be avoided. In [19], Topp and Christensen also addressed the dynamic, joint movement of a robot and its user. However, their robot operation setting is confined to an indoor office space, and the interaction is focused on providing a robot navigation component that can follow users with its laser-based tracking system during a so-called Human Augmented Mapping mission. Using Hall's interpersonal distances as parameters in a robotic system, Pacchierotti et al. [14] recently devised an algorithm that allows robots to pass people in hallways.

III. USER STUDY

To investigate the spatial distances and orientations of a user interacting with a robot, we designed a study based upon the idea of a "Home Tour" [5], where a user shows a robot around and teaches it places and objects in an office or home-like environment.

A. Scenario and setup

In our trial scenario a user has received a robot and is ready to use it for the first time. To introduce the robot to the environment, it needs to be shown around to learn relevant places and objects. Once the robot has learned these, the user is encouraged to test the robot: users could send the robot on a search or find mission to verify that it could find locations or previously encountered objects. The task embedded in the HRI scenario was thus for invited trial users to (a) get familiar with the robot and navigate it by letting it follow him or her, (b) teach it places and objects, (c) validate already taught places and objects, and (d) handle the practical interaction with the robot, including an initial opening and a closing.

The robot used in this study is an ActivMedia Performance PeopleBot. It comes equipped with an on-board pan-tilt-zoom camera. Trial users were told that this camera was employed by the robot for object and place recognition. They were also informed that the microphones placed upon the robot were used by the interactive speech system, enabling the commanding of the robot by speech.
The trial was conducted in a room approximately five by five meters in size, furnished with IKEA living-room furniture, including different tables, a bookshelf, and two sofas (see figure 3). Indicated with numbers are the entrance, the bookshelf, the Wizard-of-Oz control station (with a video camera), a small table with a telephone, a low coffee table upon which different objects such as a remote control and magazines were placed, two sofas, a TV and VCR combination placed on a small table, and finally a small dining table with a fruit bowl, a coffee cup, etc.

Fig. 3. Living-room experiment area

The trial subjects were recruited within the Royal Institute of Technology, i.e., young technical students of both genders. A requirement for selection was that they did not work with or perform research in robotics or computer vision, as this was judged necessary for a robot encounter with inexperienced users. We conducted 22 trials (after 4 initial pilot trials for trial adjustments) with 9 women and 13 men. Participants were rewarded with a cinema ticket for their time and effort.

Upon arrival, participants received an introduction to the robot and the task, both in written form and as a short demonstration by one of the experiment leaders. They were then asked to use the robot to teach it new places and objects and to validate these. A time limit was set for the interaction: after 15 minutes a sound indicating empty batteries for the robot was played to end the trial. Upon completion of the experiment, users were asked to fill in a questionnaire before being debriefed and told that the robot's behaviours during the experiment were simulated. The robot behaviours were controlled by two experiment leaders, who used a wireless robot navigation and on-board camera control and a speech synthesizer to produce spoken dialogue in a Wizard-of-Oz setup [8].

B. Data collection

Multiple sources were used for data collection during the trial. An external video camera taped the trial in audio and video from the experiment leaders' position and perspective. Four webcams placed in the room's corners, running at a frame rate of about 1 Hz, recorded the interaction; the webcam images ensured that the user's and the robot's movements, postures, and gestures were captured from different angles to avoid possible occlusions. Data from a laser range finder on the robot were stored and analysed with the help of a person tracking system [19]; these data provide information about the spatial distance and positioning of the user under the condition that the user is within a 180-degree half-circle in front of the robot. A system log stored all commands that were sent to the robot. The different systems were synchronized against a local Network Time Protocol (NTP) server; together with this timing information, the robot trials can thus be replayed in a simulator at a later point in time. Finally, a digital recorder on the robot itself recorded the spoken commands for detailed speech dialogue analysis and future speech recognition training.
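The person tracker of [19] is beyond the scope of this paper; as a hedged stand-in, the sketch below shows the kind of (distance, bearing) estimate a forward-facing laser scan can yield, with the same 180-degree limitation noted above. The function name and thresholds are our own illustrative choices:

    import numpy as np

    def nearest_return(ranges_m, fov_deg=180.0, max_range_m=4.0):
        """Crude stand-in for a person tracker: return (distance, bearing)
        of the closest laser return in front of the robot, or None if no
        return is within range. A real tracker (cf. [19]) clusters returns
        and follows them over time instead of taking the single minimum."""
        r = np.asarray(ranges_m, dtype=float)
        angles = np.linspace(-fov_deg / 2.0, fov_deg / 2.0, len(r))  # degrees
        r = np.where(r < max_range_m, r, np.inf)  # ignore far returns
        i = int(np.argmin(r))
        if not np.isfinite(r[i]):
            return None  # user outside the laser's field of view
        return float(r[i]), float(angles[i])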
C. Data analysis

To find the relevant spatial interaction patterns and ways to categorize them, we first carefully examined the data of a few trials. After this first round of fine-tuning we settled on the following analysis process. As a starting point, the timeline of the external video was taken to synchronize the interaction transcriptions. Based upon these synchronized transcriptions, the interaction was categorized into three interaction episodes termed FOLLOW (user guiding the robot around), SHOW (user teaching the robot places and objects), and VALIDATE (user testing the taught places and objects by sending the robot on missions to find them again). Another category of interaction was termed BREAKDOWN, i.e., scenes where miscommunication and/or task-level incidents led to interruptions in the interaction. Often this was accompanied by repair attempts through speech dialogue, adaptations of position towards the interaction partner, speech command repetitions, or a change of interaction strategy altogether (see [7] for details).

For each of the identified interaction episodes, the initial posture and positioning, i.e., the distance, orientation, body posture, gesture(s), utterances, and dynamic positioning changes within the episode itself, were annotated. The spatial formation of the user and the robot was analysed with the help of the laser range finder data and a visual inspection tool (see figure 4). As the laser range finder data is only available when the user is standing within a 180-degree half-circle in front of the robot, the visual inspection tool was applied in situations in which the user was standing behind the robot or laser data was unavailable.

Figure 4. Visual inspection tool

The visual inspection tool displays the different webcam images simultaneously and supports the annotation of posture and positioning by pressing predefined keys on a keyboard. Before being loaded into this visual annotation tool, still images are first overlaid and fused with a calibration image, so that virtual dots on the images mark a grid with which distances and positions can be calculated. With this aid, marks in the trial environment that could possibly have biased users to align themselves were avoided. Image sequences can be played back and forth, giving the possibility to quickly annotate movements, positioning, and postures.

The outcome of the analysis is what we have termed a thick description, giving literally frame-by-frame commented observations from the trials. These thick descriptions are accompanied by numerical, quantitative interaction episode descriptions (including still-image sequences for illustration) for each of the observed Follow, Show, and Validate episodes. This analysis has been conducted for 11 trials so far, i.e., only half of the available data have been subjected to this in-depth spatial interaction analysis.

Focusing on the questions posed initially with respect to the spatial distances and formations of the robot and the human, the following section presents the results on spatial management during the Follow, Show, and Validate episodes, as analysed from eleven trial sessions comprising a total of N=321 HRI initiations.

A. Findings

Tables I-III give the summarized findings for the HRI episodes Follow, Show, and Validate, as introduced above, for eleven trial subjects. Column 1 (numbering from left to right) gives the trial subject's identity; column 2 holds the number of episodes encountered. Episodes were then categorized according to Hall's interpersonal distances of intimate, personal, and social, depicted in columns 3-5, by checking the metric distance between the robot and the user and classifying it according to the appropriate Hall distance. Finally, a categorization according to Kendon's F-formation arrangements was made: columns 6-8 give the number of events recorded as vis-à-vis, L-shape, or side-by-side F-formations.

TABLE I. FOLLOW-EPISODES ANALYSIS
TABLE II. SHOW-EPISODES ANALYSIS
TABLE III. VALIDATE-EPISODES ANALYSIS

Note that the subtotals do not necessarily add up to the absolute number of episodes, because subjects also initiated missions while not being in one of the Kendon F-formations analyzed.

Subjects were free to decide for themselves how to conduct the trial in detail. Some chose to first iterate Follow and Show missions to teach places and objects to the robot before trying Validate missions with a few selected places and objects. As an alternative strategy, subjects could keep a strict sequential order of Follow, Show, and Validate after one another. The preference to iterate multiple Follow and Show missions first, together with the observation that Validate missions take longer than Follow or Show missions, explains why only 93 Validate missions were observed.

For Hall's interpersonal distances it is striking how predominant the personal zone is: independent of mission type, subjects preferred to position themselves in the range of 1 to 4 feet (0.45 to 1.2 meters). Interestingly, the number of subjects who commanded a Follow, Show, or Validate mission from the intimate zone is much smaller than, for example, the robot approach distances reported by Walters et al. [21]. Those authors asked subjects to move toward the robot as far as they felt comfortable and reported that up to 40% of their subjects came closer than 0.45 meters. Our figures on users entering an intimate distance towards the robot are much smaller, as given in Tables I-III above: Follow = 5 (4.6%), Show = 7 (5.8%), and Validate = 12 (12.9%) of mission initiations occurred while the user was within the intimate Hall distance. Although both experiments used an ActivMedia PeopleBot (see footnote 2), the high number of people coming very close to the robot was not encountered in our experiment.

Looking at the Kendon F-formations, a dominance of the vis-à-vis (or face-to-face) positioning of the user towards the robot can be noted, independent of interaction episode. The L-shape F-formation arrangement is observed less often in comparison; especially in the Follow episodes, L-shape formations are rarely encountered. The Validate and Show episodes seem better suited to being handled in an L-shape formation, as can be seen from its more frequent occurrence. Especially for the Show episodes, used to present and label objects and places in the environment for the robot, the L-shape formation seems more natural.
Footnote 2: Note that Walters et al. [21] had modified their robot: the on-board camera position was different; additionally, a lifting arm was mounted on the robot.
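To show how a table of this kind can be derived from annotated episodes, here is a small sketch; the record layout is hypothetical, and hall_zone is the classifier sketched in section II-A:

    from collections import Counter

    # Hypothetical episode records produced by the annotation step, e.g.:
    # {"subject": 3, "type": "SHOW", "distance_m": 0.8, "formation": "L-shape"}
    def tally(episodes, episode_type):
        """Count Hall zones and F-formations for one episode type."""
        selected = [e for e in episodes if e["type"] == episode_type]
        zones = Counter(hall_zone(e["distance_m"]) for e in selected)
        formations = Counter(e["formation"] for e in selected)
        return zones, formations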

Fig. 5. Robot-centric laser data plot showing the distance between robot and subject during different interaction episodes

Side-by-side F-formation arrangements were rarely encountered; most often they occurred in the Follow episode. This spatial formation, facing an outer edge together, is likely very dependent upon the environment in which the human-robot interaction is conducted. The living-room setup, e.g., its furniture, might, apart from the bookshelf, simply not provide many situations for this formation to appear.

An important limitation of Tables I-III above should be explicitly mentioned: each occurrence in the tables is based upon a clearly identifiable, often speech-dialogue-initiated, interaction episode of Follow, Show, or Validate. It is thus the starting point that was taken as the marker of the spatial relationship between the robot and a subject. This limits the categorization to a static perspective, i.e., the dynamics of change over (even short) time periods is not covered.

That this missing dynamic aspect might have deeper implications can be seen in figure 5. It shows the laser tracking plot of a subject's distance (footnote 3) from the robot-centric perspective. The user approaches the robot (coming into the view of the laser range finder) and starts the Follow-1 episode (depicted by boxes below the graph) after standing still for a short while in front of the robot at a distance of about 1.2 meters. After spoken dialogue initiation, the subject takes a step away from the robot and waits for the robot's initial movement as feedback (visible as an increase of the distance, then again a decrease). When the robot starts its movement, this feedback signal is taken up by the subject, who starts walking towards a corner of the room, rapidly increasing the distance to the robot (peaking at about 2.3 meters). Arriving at a goal position, the subject stops and turns around, waiting for the robot. The robot's approach towards the now stationary subject produces the sharply falling flank at the end of Follow-1. Once the robot has reached the subject's position, the trial participant makes an observable position and orientation switch that marks the beginning of the following two Show episodes. These are then initiated one after another without noticeable changes in the subject's position, as shown by the almost horizontal distance line of Show-1 and Show-2. Note, however, the small changes in distance just before and at the end of the Show episodes (pointed out by arrows in the graph). Almost unnoticeable in the video data, these small alignment movements can often be found in the data to signify transitions from one interaction episode into another. We find these micro-spatial adaptations interesting as they might in the future provide a possibility for sensor-perception-based triggers indicating that new interaction tasks or episodes are being prepared. The subject's mission depicted in figure 5 continues with multiple Follow episodes; the illustrated example finally ends with another Show. While somewhat disturbed by laser-sensor jitter, even the Show-4 episode is characterized by a straight horizontal line.

Footnote 3: As a graphical reduction, the orientation data of the subject was removed from the plot.

From the data we have analyzed so far, we saw that different HRI interaction episodes also produce different spatial patterns in the sensor readings that monitor the (subject) user's movements and positioning.
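A sensor-side detector for such micro-spatial adaptations could be as simple as the following sketch; the window size and shift bounds are invented for illustration and would need tuning against data like that shown in figure 5:

    def micro_adaptations(distances_m, window=5, min_shift_m=0.05, max_shift_m=0.30):
        """Return sample indices where the user's distance shifted by a small
        amount over a short window -- the kind of alignment movement that,
        in our data, often precedes a transition between episodes."""
        events = []
        for t in range(window, len(distances_m)):
            shift = abs(distances_m[t] - distances_m[t - window])
            if min_shift_m <= shift <= max_shift_m:
                events.append(t)
        return events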

Summarizing, we describe the observed dimensions and differences of the interaction episodes Follow, Show, and Validate by their characteristics. Follow is best typified as a paired, dynamic, user-initiative-driven joint activity, which can be seen, e.g., from the dynamics of the distance/orientation measurements and the high frequency of spatial changes. Show instead has a paired, static, jointly interactive character: movements are confined to small adaptive and cooperative engagements, and each of the interactors can be acting or reacting in shaping the interaction progress. Finally, and as tried in our scenario, Validate is neither paired nor tightly coupled: once it is initiated by the user, the robot acts autonomously while the user becomes at best a supervisor monitoring the progress, or possibly starts a side activity altogether. What becomes more important with this type of interaction episode is thus how the robot and the user come together again and continue their joint track of interaction once the Validate mission has finished.

IV. DISCUSSION

A descriptive analysis of static measurements showed that Hall's personal distance, i.e., a distance between robot and user in the range of 0.46 to 1.22 meters, was preferred in 73% of the observed Follow, 85% of the observed Show, and 78% of the observed Validate mission initiations. Furthermore, Kendon's vis-à-vis F-formation arrangement was found to be the prevailing one among the spatial configurations tested for. A note of caution was raised about the applicability of the terms of both Hall and Kendon, however: the dynamic changes and transitions from one interaction episode into another are difficult to express in terms of Hall's interpersonal distances and Kendon's F-formation arrangements. Kendon's F-formation arrangements are dynamically sustained by small position changes, but the lead-in to and lead-out of these formations by a human and a robot need to be carefully studied. A simplistic parameterization of the preferred Hall distances and Kendon F-formation configurations alone therefore seems unsuited to achieving socially appropriate robot behavior. A more successful alternative might be to make other robot interaction components aware of both the communicative and the coordinating requirements of spatial interaction. Examples would be to allow the spoken dialogue model to trigger spatial behavior signaling, or preemptive robot movements as spatial prompts in HRI [9].

A. Design Implications

Findings from this trial might be applied to test the following robot design enhancements and behavior strategies to improve spatial management in HRI:

- Testing an interactive robot in its targeted usage scenario in an early design phase will reveal spatial management challenges that can be used to improve the robot's performance and HRI; detailed findings might differ according to the context studied.

- Established user preferences in interaction distance (personal to social) and formation (vis-à-vis or L-shape) depend upon the specific interaction episode. The transitions between different interaction episodes should be carefully evaluated and designed for, with appropriate HRI spatial management behaviors of the robot.
- Looking at the available internal states and perceptual data from the robot's sensory system, e.g., the laser range finder (figure 5), it seems possible to evaluate the current interaction state and spatial management in HRI by looking for recurring and characteristic patterns such as user-quickly-leaving, robot-approaching-standing-user, user-standing-still-and-showing-object, small alignment movements, etc. This context interpretation could be used as input in the design of an interaction planner (see the sketch after this list).

- The robot's perceptual capabilities should allow for an extended view of the operation environment and human movements within it; the 180 degrees in front of the robot used here (given by the coverage of the laser range finder) appear too limiting in this regard. A perceptual extension could enable the tracking of user positions in a full 360° around the robot. This would, for example, give the robot the chance to detect users approaching it from behind.
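A first, deliberately naive version of such a context interpreter might label the laser distance stream with rules like the following; all names and thresholds are our illustrative assumptions, not an implemented component:

    def label_state(d_now_m, d_prev_m, dt_s=1.0,
                    moving_mps=0.3, still_mps=0.05):
        """Rule-based labels over consecutive distance readings, echoing
        the recurring patterns named in the list above."""
        speed = (d_now_m - d_prev_m) / dt_s  # positive: distance increasing
        if speed > moving_mps:
            return "user-quickly-leaving"
        if speed < -moving_mps:
            return "robot-approaching-standing-user"  # or user approaching
        if abs(speed) <= still_mps:
            return "user-standing-still"
        return "small-alignment-movement"

Labels of this kind could then feed the interaction planner suggested above, for instance to decide when the robot should re-establish a formation.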

B. Future Work

The study reported here was constrained in several respects to keep complexity at a level that allowed us to experiment with and investigate aspects of spatial interaction with a robot. Potential directions for extending our work include, as a first step, gradually phasing out the Wizard-of-Oz control elements and replacing them with working robot system components. Especially the teleoperation control of the robot's locomotion (and orientation) will be substituted, to validate our findings and to compare them with experimental HRI data on spatial positioning that is governed only by implemented robot behaviors. We are also interested in extending our trial setup to multiple rooms, possibly making it necessary to traverse narrow passages together, to examine further how elements of the physical environment shape the spatial cooperation between a human and a robot.

V. ACKNOWLEDGEMENT

The work described in this paper was conducted within the EU Integrated Project COGNIRON ("The Cognitive Robot Companion") and was funded by the European Commission Division FP6-IST Future and Emerging Technologies under Contract FP6-IST.

REFERENCES

[1] Althaus, P., Ishiguro, H., Kanda, T., Miyashita, T., and Christensen, H. I., "Navigation for human-robot interaction tasks," in Proc. of the IEEE International Conference on Robotics and Automation (ICRA), vol. 2, April 2004.
[2] Argyle, M., Social Interaction. London: Tavistock Publications Ltd., 1969.
[3] Butler, J. T. and Agah, A., "Psychological effects of behavior patterns of a mobile personal robot," Autonomous Robots, vol. 10, March 2001.
[4] Clark, H. H. and Krych, M. A., "Speaking while monitoring addressees for understanding," Journal of Memory and Language, vol. 50, Elsevier, 2004.
[5] COGNIRON, Annex 1 - Description of Work, EU project document, contract number FP6-IST.
[6] Dourish, P., Where The Action Is: The Foundations of Embodied Interaction. MIT Press, 2001.
[7] Green, A., Severinson Eklundh, K., Wrede, B., and Li, S., "Integrating Miscommunication Analysis in Natural Language Interface Design for a Service Robot," submitted.
[8] Green, A., Hüttenrauch, H., and Severinson Eklundh, K., "Applying the Wizard-of-Oz Framework to Cooperative Service Discovery and Configuration," in Proc. of the 13th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN), Kurashiki, Japan, 2004.
[9] Green, A., Hüttenrauch, H., and Severinson Eklundh, K., "Making a Case for Spatial Prompting in Human-Robot Communication," submitted.
[10] Hall, E. T., The Hidden Dimension: Man's Use of Space in Public and Private. London: The Bodley Head Ltd, 1966.
[11] Kanda, T., Ishiguro, H., Imai, M., and Ono, T., "Body Movement Analysis of Human-Robot Interaction," in International Joint Conference on Artificial Intelligence (IJCAI 2003), 2003.
[12] Kendon, A., Conducting Interaction: Patterns of Behavior in Focused Encounters. Cambridge University Press, 1990.
[13] Nakauchi, Y. and Simmons, R., "A social robot that stands in line," in Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2000.
[14] Pacchierotti, E., Christensen, H. I., and Jensfelt, P., "Human-Robot Embodied Interaction in Hallway Settings: A Pilot Study," in IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN), 2005.
[15] Prassler, E., Bank, D., and Kluge, B., "Motion Coordination between a Human and a Mobile Robot," in Proc. of the 2002 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Lausanne, Switzerland, October 2002.
[16] Reeves, B. and Nass, C., The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. New York: Cambridge University Press, 1996.
[17] Shepard, R. N., "Ecological constraints on internal representation: Resonant kinematics of perceiving, imagining, thinking, and dreaming," Psychological Review, vol. 91, 1984.
[18] Shibata, T., Wada, K., and Tanie, K., "Subjective evaluation of a seal robot in the National Museum of Science and Technology in Stockholm," in Robot and Human Interactive Communication (RO-MAN 2003), 2003.
[19] Topp, E. A. and Christensen, H. I., "Tracking for Following and Passing Persons," in Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2005.
[20] Turkle, S., Life on the Screen. New York: Simon and Schuster, 1995.
[21] Walters, M. L. et al., "The Influence of Subjects' Personality Traits on Personal Spatial Zones in a Human-Robot Interaction Experiment," in 2005 IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN), 2005.
[22] Yoda, M. and Shiota, Y., "The mobile robot which passes a man," in Proc. of the IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN), September 1997.


More information

UNIT VI. Current approaches to programming are classified as into two major categories:

UNIT VI. Current approaches to programming are classified as into two major categories: Unit VI 1 UNIT VI ROBOT PROGRAMMING A robot program may be defined as a path in space to be followed by the manipulator, combined with the peripheral actions that support the work cycle. Peripheral actions

More information

Dipartimento di Elettronica Informazione e Bioingegneria Robotics

Dipartimento di Elettronica Informazione e Bioingegneria Robotics Dipartimento di Elettronica Informazione e Bioingegneria Robotics Behavioral robotics @ 2014 Behaviorism behave is what organisms do Behaviorism is built on this assumption, and its goal is to promote

More information

Human Robot Dialogue Interaction. Barry Lumpkin

Human Robot Dialogue Interaction. Barry Lumpkin Human Robot Dialogue Interaction Barry Lumpkin Robots Where to Look: A Study of Human- Robot Engagement Why embodiment? Pure vocal and virtual agents can hold a dialogue Physical robots come with many

More information

Contents. Mental Commit Robot (Mental Calming Robot) Industrial Robots. In What Way are These Robots Intelligent. Video: Mental Commit Robots

Contents. Mental Commit Robot (Mental Calming Robot) Industrial Robots. In What Way are These Robots Intelligent. Video: Mental Commit Robots Human Robot Interaction for Psychological Enrichment Dr. Takanori Shibata Senior Research Scientist Intelligent Systems Institute National Institute of Advanced Industrial Science and Technology (AIST)

More information

Evaluating the Augmented Reality Human-Robot Collaboration System

Evaluating the Augmented Reality Human-Robot Collaboration System Evaluating the Augmented Reality Human-Robot Collaboration System Scott A. Green *, J. Geoffrey Chase, XiaoQi Chen Department of Mechanical Engineering University of Canterbury, Christchurch, New Zealand

More information

Prospective Teleautonomy For EOD Operations

Prospective Teleautonomy For EOD Operations Perception and task guidance Perceived world model & intent Prospective Teleautonomy For EOD Operations Prof. Seth Teller Electrical Engineering and Computer Science Department Computer Science and Artificial

More information

Touch & Gesture. HCID 520 User Interface Software & Technology

Touch & Gesture. HCID 520 User Interface Software & Technology Touch & Gesture HCID 520 User Interface Software & Technology Natural User Interfaces What was the first gestural interface? Myron Krueger There were things I resented about computers. Myron Krueger

More information

Multi-Modal User Interaction

Multi-Modal User Interaction Multi-Modal User Interaction Lecture 4: Multiple Modalities Zheng-Hua Tan Department of Electronic Systems Aalborg University, Denmark zt@es.aau.dk MMUI, IV, Zheng-Hua Tan 1 Outline Multimodal interface

More information

FP7 ICT Call 6: Cognitive Systems and Robotics

FP7 ICT Call 6: Cognitive Systems and Robotics FP7 ICT Call 6: Cognitive Systems and Robotics Information day Luxembourg, January 14, 2010 Libor Král, Head of Unit Unit E5 - Cognitive Systems, Interaction, Robotics DG Information Society and Media

More information

* Intelli Robotic Wheel Chair for Specialty Operations & Physically Challenged

* Intelli Robotic Wheel Chair for Specialty Operations & Physically Challenged ADVANCED ROBOTICS SOLUTIONS * Intelli Mobile Robot for Multi Specialty Operations * Advanced Robotic Pick and Place Arm and Hand System * Automatic Color Sensing Robot using PC * AI Based Image Capturing

More information

1 Abstract and Motivation

1 Abstract and Motivation 1 Abstract and Motivation Robust robotic perception, manipulation, and interaction in domestic scenarios continues to present a hard problem: domestic environments tend to be unstructured, are constantly

More information

Motivation and objectives of the proposed study

Motivation and objectives of the proposed study Abstract In recent years, interactive digital media has made a rapid development in human computer interaction. However, the amount of communication or information being conveyed between human and the

More information

Evolving High-Dimensional, Adaptive Camera-Based Speed Sensors

Evolving High-Dimensional, Adaptive Camera-Based Speed Sensors In: M.H. Hamza (ed.), Proceedings of the 21st IASTED Conference on Applied Informatics, pp. 1278-128. Held February, 1-1, 2, Insbruck, Austria Evolving High-Dimensional, Adaptive Camera-Based Speed Sensors

More information

Topic Paper HRI Theory and Evaluation

Topic Paper HRI Theory and Evaluation Topic Paper HRI Theory and Evaluation Sree Ram Akula (sreerama@mtu.edu) Abstract: Human-robot interaction(hri) is the study of interactions between humans and robots. HRI Theory and evaluation deals with

More information

HAND-SHAPED INTERFACE FOR INTUITIVE HUMAN- ROBOT COMMUNICATION THROUGH HAPTIC MEDIA

HAND-SHAPED INTERFACE FOR INTUITIVE HUMAN- ROBOT COMMUNICATION THROUGH HAPTIC MEDIA HAND-SHAPED INTERFACE FOR INTUITIVE HUMAN- ROBOT COMMUNICATION THROUGH HAPTIC MEDIA RIKU HIKIJI AND SHUJI HASHIMOTO Department of Applied Physics, School of Science and Engineering, Waseda University 3-4-1

More information

Human Robotics Interaction (HRI) based Analysis using DMT

Human Robotics Interaction (HRI) based Analysis using DMT Human Robotics Interaction (HRI) based Analysis using DMT Rimmy Chuchra 1 and R. K. Seth 2 1 Department of Computer Science and Engineering Sri Sai College of Engineering and Technology, Manawala, Amritsar

More information

A*STAR Unveils Singapore s First Social Robots at Robocup2010

A*STAR Unveils Singapore s First Social Robots at Robocup2010 MEDIA RELEASE Singapore, 21 June 2010 Total: 6 pages A*STAR Unveils Singapore s First Social Robots at Robocup2010 Visit Suntec City to experience the first social robots - OLIVIA and LUCAS that can see,

More information

Mobile Robots Exploration and Mapping in 2D

Mobile Robots Exploration and Mapping in 2D ASEE 2014 Zone I Conference, April 3-5, 2014, University of Bridgeport, Bridgpeort, CT, USA. Mobile Robots Exploration and Mapping in 2D Sithisone Kalaya Robotics, Intelligent Sensing & Control (RISC)

More information

Chapter 2 Understanding and Conceptualizing Interaction. Anna Loparev Intro HCI University of Rochester 01/29/2013. Problem space

Chapter 2 Understanding and Conceptualizing Interaction. Anna Loparev Intro HCI University of Rochester 01/29/2013. Problem space Chapter 2 Understanding and Conceptualizing Interaction Anna Loparev Intro HCI University of Rochester 01/29/2013 1 Problem space Concepts and facts relevant to the problem Users Current UX Technology

More information

Universidade de Aveiro Departamento de Electrónica, Telecomunicações e Informática. Interaction in Virtual and Augmented Reality 3DUIs

Universidade de Aveiro Departamento de Electrónica, Telecomunicações e Informática. Interaction in Virtual and Augmented Reality 3DUIs Universidade de Aveiro Departamento de Electrónica, Telecomunicações e Informática Interaction in Virtual and Augmented Reality 3DUIs Realidade Virtual e Aumentada 2017/2018 Beatriz Sousa Santos Interaction

More information

Fig. 1. User closely observing the robot during the HRI trial soding Thethe. reported in [4]

Fig. 1. User closely observing the robot during the HRI trial soding Thethe. reported in [4] The 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN06), Hatfield, UK, September 6-8, 2006 What's in the gap? Interaction transitions that make HRI work Helge Hiittenrauch,

More information

Benchmarking Intelligent Service Robots through Scientific Competitions: the approach. Luca Iocchi. Sapienza University of Rome, Italy

Benchmarking Intelligent Service Robots through Scientific Competitions: the approach. Luca Iocchi. Sapienza University of Rome, Italy Benchmarking Intelligent Service Robots through Scientific Competitions: the RoboCup@Home approach Luca Iocchi Sapienza University of Rome, Italy Motivation Benchmarking Domestic Service Robots Complex

More information

Perception. Read: AIMA Chapter 24 & Chapter HW#8 due today. Vision

Perception. Read: AIMA Chapter 24 & Chapter HW#8 due today. Vision 11-25-2013 Perception Vision Read: AIMA Chapter 24 & Chapter 25.3 HW#8 due today visual aural haptic & tactile vestibular (balance: equilibrium, acceleration, and orientation wrt gravity) olfactory taste

More information

MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT

MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT F. TIECHE, C. FACCHINETTI and H. HUGLI Institute of Microtechnology, University of Neuchâtel, Rue de Tivoli 28, CH-2003

More information

Social Navigation - Identifying Robot Navigation Patterns in a Path Crossing Scenario

Social Navigation - Identifying Robot Navigation Patterns in a Path Crossing Scenario Social Navigation - Identifying Robot Navigation Patterns in a Path Crossing Scenario Christina Lichtenthäler 1, Annika Peters 2, Sascha Griffiths 3, Alexandra Kirsch 4 1 Institute for Advanced Study,

More information

Towards affordance based human-system interaction based on cyber-physical systems

Towards affordance based human-system interaction based on cyber-physical systems Towards affordance based human-system interaction based on cyber-physical systems Zoltán Rusák 1, Imre Horváth 1, Yuemin Hou 2, Ji Lihong 2 1 Faculty of Industrial Design Engineering, Delft University

More information

Context-Aware Interaction in a Mobile Environment

Context-Aware Interaction in a Mobile Environment Context-Aware Interaction in a Mobile Environment Daniela Fogli 1, Fabio Pittarello 2, Augusto Celentano 2, and Piero Mussio 1 1 Università degli Studi di Brescia, Dipartimento di Elettronica per l'automazione

More information

Narrative Guidance. Tinsley A. Galyean. MIT Media Lab Cambridge, MA

Narrative Guidance. Tinsley A. Galyean. MIT Media Lab Cambridge, MA Narrative Guidance Tinsley A. Galyean MIT Media Lab Cambridge, MA. 02139 tag@media.mit.edu INTRODUCTION To date most interactive narratives have put the emphasis on the word "interactive." In other words,

More information

Objective Data Analysis for a PDA-Based Human-Robotic Interface*

Objective Data Analysis for a PDA-Based Human-Robotic Interface* Objective Data Analysis for a PDA-Based Human-Robotic Interface* Hande Kaymaz Keskinpala EECS Department Vanderbilt University Nashville, TN USA hande.kaymaz@vanderbilt.edu Abstract - This paper describes

More information

Sound rendering in Interactive Multimodal Systems. Federico Avanzini

Sound rendering in Interactive Multimodal Systems. Federico Avanzini Sound rendering in Interactive Multimodal Systems Federico Avanzini Background Outline Ecological Acoustics Multimodal perception Auditory visual rendering of egocentric distance Binaural sound Auditory

More information

Dorothy Monekosso. Paolo Remagnino Yoshinori Kuno. Editors. Intelligent Environments. Methods, Algorithms and Applications.

Dorothy Monekosso. Paolo Remagnino Yoshinori Kuno. Editors. Intelligent Environments. Methods, Algorithms and Applications. Dorothy Monekosso. Paolo Remagnino Yoshinori Kuno Editors Intelligent Environments Methods, Algorithms and Applications ~ Springer Contents Preface............................................................

More information