A Constructive Approach for Communication Robots. Takayuki Kanda


Abstract

In the past several years, many humanoid robots have been developed based on the most advanced robotics technologies. If these human-like bodies are used effectively during interaction, these robots will be capable of smooth and natural communication with humans. We believe such robots will exist as our partners in daily life and will serve to keep us informed through their communication functions. That is, communication robots will become a new form of information media. Unfortunately, robots are still incapable of this level of communication with humans. We presume that this is because of the following two reasons. First, many researchers assume communication robots are similar to industrial robots, which simply execute precise human commands. On the contrary, we believe that in order to achieve natural communication, robots must behave as our partners, not as our slaves, and actually perform bi-directional communication. The second problem is the complexity of communicative cues provided by the humanoid body. Communication robots need to fully articulate their bodies in order to communicate smoothly and naturally with humans. To solve these problems, our approach is to constructively build a communication robot: that is, to progressively develop it, analyze its interaction ability with humans, apply it in human society, and continue cycling through these steps over a long period of time. We believe this long-term constructive loop is necessary for developing social informatics products that interact with humans (including robots, software agents, and non-agent interfaces).

We utilize the same constructive design policy in the short-term development of interactive behaviors. Instead of assuming an internal model before development, we take a bottom-up construction approach. We start by preparing fundamental sensor-actuator behaviors for interacting with humans, and then progressively add more interactive behaviors, until humans believe the robot possesses a more lifelike existence than a simple automatic machine. We feel this short-term constructive approach lets us determine suitable methods for developing communication robots. Through the long- and short-term constructive process, we intend to discover how to effectively use a humanoid body for communication so that the robot can establish relationships and facilitate interaction with humans. Furthermore, we want to develop communication robots capable of participating in our daily lives. We believe such real-world immersion is essential for studying social robots (including communication robots). Our approach and results reported in this thesis are summarized as follows:

Development method of communication robots: We propose a software architecture for communication robots and a development support tool. The software architecture consists of communicative units for effective use of the humanoid body, situated modules for recognizing complex human behavior under specific situations (produced by the robot), and episode rules for controlling transitions between situated modules. The robot also has mechanisms for basic sociability: communication channels to other robots and computer networks, as well as person identification and adaptation. The development support software implements tools for editing, searching, and visualization, which enable our constructive approach, in particular the implementation of as many simple behaviors and relations as possible. Based on this development method, interactive humanoid robots with 102 situated modules and 884 episode rules have been developed.

Analysis of human-robot interaction: Through two experiments, relationships among human parameters

(personality, behaviors, evaluation) and robot parameters (behavior patterns, behaviors, internal status) are investigated. The first experiment shows the sufficient interactive performance of the developed robot, along with a comparison of the robot's behavior-transition patterns. The second demonstrates the importance of cooperative behaviors and presents a new analytical method based on precise measurement of body movements.

Analysis of social relationships: Basic social relationships among humans and robots are investigated. How to establish relationships is one of the important problems of human-robot communication. Our approach is to solve this problem by multi-robot cooperation during communication. We propose an effective cooperation model for multiple robots to promote human-robot communication, and then verify the mechanism.

Application of communication robots to human society: This is an important trial in which communication robots participate in real human society with a social role. We performed an exploratory experiment to investigate the long-term relationships between humans and partner robots that have a communication task. In the experiment, students at an elementary school interacted with two robots over 18 days. Each robot spoke in English with the Japanese students. We expected the students to form stable relationships with the robots by way of multi-modal communication and, as a result, improve their English listening and speaking abilities. The experimental results show the potential of partner robots in language education; they also provide considerable insights into developing partner robots that are well suited for immersion in human society.


Acknowledgments

I would like to express my sincere gratitude to Professor Toru Ishida for his continuous guidance, valuable advice, and helpful discussions. I would also like to thank Professor Hiroshi Ishiguro at Osaka University for his patient and inspiring guidance. I gratefully acknowledge the valuable comments of the other members of my thesis committee, Professor Haruo Hayashi and Professor Hiroshi Okuno at Kyoto University. Professor Tetsuro Sakai at Kyoto University and Professor Shin'ichi Yuta at Tsukuba University, my vice advisers, also gave me many useful suggestions. I wish to thank Dr. Norihiro Hagita, Dr. Kiyoshi Kogure, and Dr. Kenji Mase at ATR (Advanced Telecommunications Research Institute International), where I completed large parts of the work in this thesis. I also wish to express my special thanks to Associate Professor Tetsuo Ono at Future University Hakodate, Assistant Professor Michita Imai at Keio University, Associate Professor Shoji Itakura at Kyoto University, and Takeshi Maeda at Vstone Corporation for their constructive discussions and kind help in developing Robovie's interactive behaviors and designing the experiments. Brief mention should be made of the work by my colleagues at ATR. Hitoshi Miyauchi (from Mitsubishi Heavy Industries, Ltd.) helped develop the Episode Editor. The experiment in the elementary school for English education was performed with Takayuki Hirano (Japan Advanced Institute of Science and Technology), Daniel Eaton (University of British Columbia), Kayoko Iwase (Doshisha University), and Masahiro Shiomi (Wakayama University). I am also thankful to the other members of

ATR for their kind support. Thanks are also due to all the subjects who participated in my experiments. Without their participation, I would not have been able to advance my research. The English education experiment owes much to the teachers and students at the elementary school attached to Wakayama University. Finally, I wish to thank all members of Professor Ishida's laboratory for their help and fruitful discussions. This research was supported in part by a grant from the Telecommunications Advancement Organization of Japan.

Contents

1 Introduction
  1.1 Background
  1.2 Objectives and Approaches
  1.3 Outline of the Thesis
2 Toward Communication Robots
  2.1 Communication between Robots and Humans
  2.2 Elemental Technology of Using Humanoid Body
  2.3 Integration Technology
  2.4 Evaluation Technology
  2.5 Summary
3 Development Method Based on the Constructive Approach
  3.1 Constructive Approach
    3.1.1 Function-Based Architecture and Behavior-Based Architecture
    3.1.2 Constructive Approach to Communication Robots
  3.2 Hardware of a Humanoid Robot
  3.3 Software Architecture
    3.3.1 Communicative Unit
    3.3.2 Situated Module
    3.3.3 Reactive Module
    3.3.4 Episode Rule
    3.3.5 Person Identification
    3.3.6 Public and Private Episode Rule
    3.3.7 Communication Functions Using Computer Networks
    3.3.8 Sensor and Actuator Module
  3.4 Implemented Behaviors
  3.5 Episode Editor
    3.5.1 Implementation Support of Episode Rules
    3.5.2 Searching Support of Episode Rules
    3.5.3 Visualization Support of Internal Status
    3.5.4 Verification of the Visualization Support
  3.6 Summary
4 Analysis of Human-Robot Interaction
  4.1 Comparison of Behavior Pattern
    4.1.1 Behavior Patterns
    4.1.2 Experiment Method
    4.1.3 Results
    4.1.4 Discussion
  4.2 Numerical Analysis of Human-Robot Interaction
    4.2.1 Analysis of Body Movement
    4.2.2 Experiment Method
    4.2.3 Results
    4.2.4 Evaluation of the Implemented Behaviors
    4.2.5 Discussion
  4.3 Toward Establishing Evaluation Scales for Human-Robot Interaction
  4.4 Summary
5 Analysis of Basic Social Relationships
  5.1 Chained Relationship Model
  5.2 Hypothesis
  5.3 Experiment Method
  5.4 Results
  5.5 Discussion
  5.6 Summary
6 Application
  6.1 An Educational Task of Communication Robots
  6.2 Hypothesis
  6.3 Experiment Method
  6.4 Results
    6.4.1 Results for Long-Term Relationship: First Grade Students
    6.4.2 Results for Long-Term Relationship: Comparison with Sixth Grade
    6.4.3 Results for Foreign Language Education: Speaking Opportunity
    6.4.4 Results for Foreign Language Education: Hearing Ability
    6.4.5 Children's Behaviors toward the Robots
  6.5 Summary
7 Discussion
  7.1 Future Direction of the Constructive Approach
  7.2 Future Directions of Human-Robot Interaction Analysis
  7.3 Remaining Problems for Future Social Robots
8 Conclusion
Bibliography
Publications


List of Figures

2.1 Communication robot research
3.1 Hybrid architecture
3.2 Robovie: an interactive humanoid robot
3.3 Environment and the settings of a cognitive experiment using Robovie
3.4 Results of a cognitive experiment using Robovie
3.5 Communicative unit and situated module
3.6 Implementation of situated module
3.7 Three kinds of transitions of situated module
3.8 Illustrated example of the transitions of the current situated modules ruled by episode rules
3.9 Attached antenna and tags
3.10 Software architecture
3.11 Readable area of person identification system
3.12 Scene of experiment on participant-observer distinction
3.13 Participant-observer distinction of person identification system
3.14 Public and private episode
3.15 Example of robot-robot communication
3.16 Example of implemented interactive behaviors
3.17 Display screen of Episode Editor
3.18 Editing screen of Episode Editor
3.19 Searching screen of Episode Editor
3.20 Result of the experiment about Episode Editor
4.1 Compared three behavior patterns of the robot
4.2 Illustration of the comparison of the impressions
4.3 The time of first touches
4.4 Emergence of interpersonal behaviors
4.5 Subjects' utterances to the robot
4.6 Attached markers (left) and obtained 3-D numerical position data of body movement (right)
4.7 Illustration of entrainment score
5.1 Chained relationship model
5.2 Outline of the experiment
5.3 Scenes of the experiment
5.4 Subjects' understandings of the utterance
5.5 Subjects' behaviors toward the robot
5.6 Give responses to the robot
5.7 Comparison of subjective voice quality and impressions of the robot
6.1 Environment of the elementary school
6.2 Wireless tags embedded in nameplates
6.3 Example of applied questionnaires about English sentences
6.4 The scene of the experiment of the first day (for first grade students)
6.5 The scene of the experiment after the first week (for first grade students)
6.6 Transition of interaction with children (1st grade)
6.7 The scene of the experiment of the first day (for sixth grade students)
6.8 The scene of the experiment after the first week (for sixth grade students)
6.9 Transition of interaction with children (6th grade)
6.10 Transition of children's English utterance
6.11 Transition of English hearing score (left: first grade, right: sixth grade)
7.1 Scene of an experiment for animate-inanimate distinction based on imitating behaviors of babies
7.2 Scene of an experiment for animate-inanimate distinction based on theory of mind mechanism of children
7.3 Future directions and applications of social robots that participate in our daily life


List of Tables

3.1 Grammar of episode rule
4.1 Results of impression (SD ratings)
4.2 Factor pattern of the SD ratings (Varimax normalized)
4.3 Comparison of subjects' impressions (factor scores of the SD ratings)
4.4 Analysis of the subjects' behaviors
4.5 Example of conversations between a subject and the robot
4.6 The adjective pairs for subjective evaluation, and the mean and standard deviation as the result
4.7 Results of body movements
4.8 Correlation between body movements and subjective evaluation
4.9 Subjects' personalities and their correlation with subjective evaluation and body movements
4.10 Standardized partial regression coefficients obtained by multiple linear regression analysis
4.11 Worst five situated modules based on average entrainment score
4.12 Best five situated modules based on average entrainment score
5.1 Subjects' understandings of the utterance and behaviors toward the robot
6.1 Results about the change of children's behaviors at an elementary school
6.2 Transition of scores of English hearing test
6.3 Comparison of friend-related behaviors

Chapter 1 Introduction

1.1 Background

There are two research directions in robotics: one is to develop task-oriented robots that work in limited environments, and the other is to develop communication robots that communicate with humans and participate in human society. An industrial robot is an instance of the former direction. It works in factories on specific tasks, such as assembling industrial parts. On the other hand, a communication robot will exist as our partner in daily life. It will not just interact with humans, but behave socially in human society and communicate with humans. As well as performing physical support functions, these robots will act as a new medium for information communications. Computer interfaces are rapidly becoming embodied. Embodiment allows interfaces to share non-language information such as facial expression, eye gaze, pointing, and positional movement. Since robots have real bodies, they can perform non-language communication more effectively. For example, possessing a real body enables interfaces to move around and be touched anywhere. It is important for communication robots to effectively use their bodies to transfer non-language information.

Many humanoid robots have been developed in the past several years. In the task of developing robots that work in our daily life, we feel that by possessing a humanoid body a robot is much more capable of smoothly and naturally communicating with humans. Just as a human can easily communicate with a peer by such non-verbal mechanisms as eye contact or hand gestures, a robot's human-like body would encourage human-robot communication. Previous research has proposed various kinds of communicative behaviors made available by human-like bodies. For instance, the eye is a very important communicative body part, so eye gaze and eye contact are often implemented in robots. Since eyes allow humans to share attention with their peers, robots that have a joint-attention mechanism have been developed. Robots' arms are also used for making gestures. For instance, Ono and his colleagues have verified the importance of eye contact, arm gestures, and appropriate positional relationships (orientation of body direction) in a route guide robot [Ono01]. As mentioned above, various body parts have been explored with respect to their importance to non-verbal communication. If we effectively combine these fundamental mechanisms, humans will be able to communicate with robots smoothly and naturally, as if they were communicating with humans. Consequently, robots could participate in human society and perform various communication roles. For example, if a person cannot use a computer, they could simply communicate with a robot by using not only language but also body gestures and eye movements, and have the robot perform the desired computer work. With this feature, the robot can be a control device in a ubiquitous environment. This is desirable because people prefer to communicate with a real entity, such as a robot with a human-like body, rather than with a simple wall in which many sensors are embedded. These communication robots can also become our companions. SONY has already developed a pet robot named AIBO and created a market for such companion robots. Whereas animal pets and pet robots can only perform emotional interactions, humanoid robots have wider possibilities.
If robots can communicate with humans in vocal languages, their conversation ability creates friendlier relationships with humans and can be used to console elderly persons. In fact, such a trial has already been started [Noguchi02]: in that study, a stuffed-animal robot was placed in an

aged person's house, where it holds conversations and passes on messages. Similarly, a humanoid robot will behave as our partner through its conversation ability and will keep us informed through its communicative functions. In other words, communication robots will become a new form of information media.

1.2 Objectives and Approaches

Unfortunately, robots still cannot smoothly and naturally communicate with humans. We feel this is due to the following two reasons. First, many researchers assume communication robots are similar to industrial robots, which must only perform precise human commands. By looking at communication between humans, we clearly see this assumption is flawed. In this situation, communication is rather one-way: humans order and robots follow. We feel that for natural communication it is necessary that robots behave as our partners, not as our slaves, and perform bi-directional communication. The second problem is communicative relationships. It is important to establish a relationship before trying to communicate something. Specifically, humans have difficulty in understanding a robot's utterance without such a relationship, whereas they are able to understand once the relationship is established [Ono00]. We believe communication robots can establish such a relationship by effectively using their bodies. In human communication, non-verbal communication is believed to transfer more information than verbal communication, and communicative relationships are established through the exchange of non-verbal information such as eye contact.

As for the development method of such a communication robot, no existing method satisfies these requirements. We need to realize the autonomy of robots in bi-directional communication; that is to say, a software mechanism that autonomously interacts with humans. Traditional software architectures of intelligent robots were designed for achieving a particular task. Although Brooks proposed a behavior-based architecture called

subsumption architecture, which realizes close coupling to the real world [Brooks86], it still has problems with development. Communication robots need various interactive behaviors that encourage humans to interact with them. Thus, it is difficult to directly apply previous architectures to communication robots, which need to handle dynamic human behaviors.

To solve these problems, our approach is to constructively build a communication robot: that is, to progressively develop it, analyze its interaction ability with humans, apply it in human society, and continue cycling through these steps over a long period of time. We believe this long-term constructive loop is necessary for developing social informatics products that interact with humans (including robots, software agents, and non-agent interfaces). Once we have developed a communication robot, we can evaluate and analyze it, and then use the results obtained by evaluating the effectiveness of its body to determine the next step of development. We cannot predict human attitudes, reactions, and behaviors toward the developed products. By repetitively going through the loop, such products can achieve an existence in our daily lives and perform information-related tasks for humans. For our robot, the first step is the daily communication task, which we believe is the initial task for any communication robot.

We utilize the same constructive design policy in the short-term development of interactive behaviors. Instead of assuming an internal model before development, we take a bottom-up construction approach. We start by preparing fundamental sensor-actuator behaviors for interacting with humans, and then progressively add more interactive behaviors, until humans believe the robot possesses a more lifelike existence than a simple automatic machine. Each behavior is prepared by combining fundamental units of communicative behavior such as eye contact and pointing. Thus, it is easy for developers to implement.
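The control flow just described — behaviors assembled from communicative units into situated modules, with simple rules governing transitions between them — can be illustrated with a minimal sketch. This is only an illustrative example under assumed names (SituatedModule, EpisodeRule, step, and StubSensors are all hypothetical), not the actual Robovie implementation.

```python
# Minimal sketch of situated-module execution with rule-driven transitions.
# All class and function names are hypothetical illustrations.

class SituatedModule:
    """A behavior for one situation: act, then interpret the human's response."""
    def __init__(self, name, action):
        self.name = name
        self.action = action          # communicative units composed into one behavior

    def execute(self, sensors):
        self.action()                 # e.g., make eye contact, point, speak
        return sensors.recognize()    # responses are predictable in this situation

class EpisodeRule:
    """Maps (current module, observed result) to the next situated module."""
    def __init__(self, current, result, next_module):
        self.current, self.result, self.next_module = current, result, next_module

def step(current, sensors, rules):
    """Run the current module once and pick the next one via the episode rules."""
    result = current.execute(sensors)
    for rule in rules:
        if rule.current is current and rule.result == result:
            return rule.next_module
    return current                    # no rule matched: stay in the current situation

# Tiny demo with stub behaviors and a stub sensor reading.
class StubSensors:
    def recognize(self):
        return "touched"

greet = SituatedModule("greet", lambda: None)
handshake = SituatedModule("handshake", lambda: None)
rules = [EpisodeRule(greet, "touched", handshake)]
print(step(greet, StubSensors(), rules).name)  # prints "handshake"
```

The point of the sketch is that each module can stay simple: because the module's own action produces a situation with only a few likely human responses, recognition only has to discriminate among those responses, and the episode rules carry the conversational structure.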
We feel this short-term constructive approach lets us determine suitable methods for developing communication robots. Through the long- and short-term constructive process, we intend to discover how to effectively use a humanoid body for communication so that the robot can establish relationships and facilitate interaction with humans. Furthermore, we want to develop communication robots capable of

participating in our daily lives. We believe such real-world immersion is essential for studying social robots (including communication robots). Our approach can be summarized as follows:

Create autonomous interactive behaviors for communication robots

Establish methods and tools for the development of communication robots

Discover evaluation methods of human-robot interaction (both subjective and objective measurements)

Analyze social relationships between humans and robots

Apply the developed robot to real human society

1.3 Outline of the Thesis

This thesis consists of eight chapters, including this introductory chapter. As background, Chapter 2 introduces related past research and the perspective of communication robots, which serve to motivate the necessity of a humanoid body in human-robot communication. The main support for this argument comes from the readily observable non-verbal information in human communication. Humanoid robots can use their bodies to facilitate interaction. Previous research has invented many subsets of communicative behaviors that the humanoid body affords. However, the problem of how to combine these fundamental behaviors in a totally embodied communication robot remains unsolved.

Chapter 3 proposes a development methodology and software architecture for communication robots. This methodology is based on our constructive approach. Instead of assuming models and intermediate representations (such as an emotional model) before starting development, we adopt a bottom-up approach. This means we start by preparing appropriate sensor-actuator behaviors for interacting with humans and progressively

add more. We believe that the combination of many simple and appropriate sensory-motor modules generates perceived complexity, intelligence, and communication ability in the robot. The manner in which elemental behaviors are combined to develop interactive behaviors is explained; the elemental behaviors were developed in previous cognitive science research. These interactive behaviors also ease the recognition of complex human behaviors: by using enough physical expression to produce a situation with predictable human responses, distinguishing among those possible responses becomes much easier. A simple rule for controlling the execution of interactive behaviors is introduced, along with a development support tool that facilitates implementation of the components of this mechanism.

Chapter 4 details an analysis of human-robot interaction. Two experiments are reported in the chapter. The first experiment concerns the behavior patterns of the robot; it shows us the effects of the robot's internal parameters on humans. The second experiment concerns the correlation between body movements made by the robot and the human. The experimental results indicate the importance of cooperative behaviors and the effect of human internal parameters (personality). With these experiments, relationships among human parameters (personality, behaviors, evaluation) and robot parameters (patterns, behaviors, internal status) are investigated.

Chapter 5 is dedicated to the investigation of basic social relationships among humans and robots. How to establish relationships is one of the important problems in human-robot interaction. We feel multi-robot cooperation in communication with a human facilitates the establishment of relationships. In this chapter, we propose an effective cooperation model for multiple robots to promote human-robot communication, and then verify the mechanism.
Chapter 6 reports a trial of using a communication robot in human society. The robot is used for English education at an elementary school (English is the primary foreign language taught in Japan). In the school, the robot behaves as a foreign child, not as a teacher. The experimental results show important findings on long-term relationships and effects on language education.

Chapter 7 discusses future directions for the constructive approach and for human-robot interaction analysis, which will follow this study. Lastly, future social robots (including communication robots) and the work necessary for developing successful social robots are discussed.

Chapter 8 concludes the thesis by summarizing the results obtained through this research.


Chapter 2 Toward Communication Robots

Over the past several years, many humanoid robots have been developed. We believe humanoid robots can realize natural and smooth communication with humans by using their human-like bodies. The body allows the transfer of non-verbal and non-language information in addition to language information. In this chapter, the communication technology necessary for future communication robots to communicate with humans smoothly and naturally is discussed.

2.1 Communication between Robots and Humans

Compared with computer agents, robots possess a physical body in the real world. By using robots' bodies effectively, we can realize smooth and natural communication between humans and robots. We start by surveying inter-human communication research. In particular, knowledge of how humans use their bodies during communication is very useful in developing communication technology for robots. In human communication, non-verbal communication is considered to transfer more information than the verbal kind. Non-verbal communication consists of static and dynamic features. Static features are related to

appearance: characteristics of the body such as eye color, hair, and clothes. Dynamic features are such things as facial expression, eye movements (eye contact and eye gaze), body gestures, touching, paralanguage (i.e., pitch and intonation of voice), and positional distance. Some of this non-verbal information can be symbolized. In addition to symbolic information, non-language information is often used in communication. Considering non-language communication, Sperber proposed relevance theory [Sperber95], whereby humans communicate by inferring the minds of others. This contrasts with the code model of communication, where a sender gives information (signals) to a receiver using a presupposed common code for encoding and decoding. The code model is only applicable to communication in which both sender and receiver have common symbols corresponding to the signals. In contrast, relevance theory explains communication in which humans refer to current situations and environments to infer others' thinking. According to this theory, even just moving around or opening a window can be considered a means of communication. Computer interfaces are rapidly becoming embodied to make use of such non-language information. For example, there are computer agents with means of facial expression [Schiano00], eye gaze [Fukayama02, Garau01], pointing, and positional movement [Isbister00]. Since robots have real bodies, their potential is wider and more varied than that of computer agents. Presence in the physical world enhances the capability of a computer interface. The visual effect in 3-D real space is far more impressive than that of a 2-D screen. Furthermore, it enables interfaces to move around [Paulos98] and be touched at any location [Burgard98, Schulte99]. These are also means of communication. Meanwhile, many humanoid robots have also been developed [Hirai98, Lim00, Cheng00].
We believe that a humanoid robot will become an interface medium. A human-like body lets humans intuitively understand the robot's gestures and causes them to unconsciously behave as if they were communicating with a human instead of a robot. This could allow the robot to perform communication tasks in human society, such as being a route guide.

Thus, if a humanoid robot effectively uses its body, it will become a physical interface medium that people can communicate with as if the medium were human. In developmental psychology, there are various research works on the basic human ability to distinguish between animate and inanimate objects [Rakison01]. These works study whether humans regard robots as communicative, like animals, or just as mechanical objects. Human infants classify things as animate or inanimate. This distinction is made by observing the initiation of movements, trajectories, autonomy, and contingency of the target. For example, since a ball starts moving and goes straight when a human touches it, it is considered inanimate. We believe that if robots have the features of an animate thing, humans will consider them targets of communication and continually communicate with them.

2.2 Elemental Technology of Using Humanoid Body

Previous research works have proposed various kinds of communicative behaviors made possible by human-like robots. For instance, the eye is a very important communicative body part; eye gaze and eye contact are therefore often implemented in robots. For example, Nakadai and his colleagues developed a robot that tracks a speaking person [Nakadai01]. Matsusaka and his colleagues also developed a robot that uses eye contact [Matsusaka99]. From these works, we can see that the eyes play an important role in conveying communicative intention to humans. Furthermore, eyes allow us to share attention with other people. Scassellati developed a robot as a testbed for a joint-attention mechanism [Scassellati00]; that is, the robot follows the other's gaze in order to share attention. Kozima and his colleagues also developed a robot that has a joint-attention mechanism [Kozima01]. Imai and his colleagues used a robot's arms as well as its eyes to establish joint attention and verified the effectiveness [Imai01].

A robot's arms are used for making gestures. Ono and his colleagues have verified the importance of eye contact, arm gestures, and appropriate positional relationships (orientation of body direction) in a route guide robot. In this research, it was found that body movements are used not only for visually understanding what the speaker says but also for synchronizing the communication. The speaker's body movements entrain hearers, establishing a relationship between them [Ono01]. Such unconscious synchronization of body movements is called entrainment. Synchronized body movements have been highlighted in developmental psychology, for example by [Condon74]; Watanabe and his colleagues have developed a robot that induces entrainment by using body movements such as nodding [Watanabe01]. We believe that entrainment is necessary for communication robots to maintain relationships with humans. Regarding facial expression, many research works mainly focus on expressing emotions. By developing actuators, Kobayashi and his colleagues developed a robot with a human-like face that can perform human-like emotional expression [Kobayashi95]. Breazeal and her colleagues developed a face robot named Kismet, which expresses emotions even though it only has mechanically simple eyes, a mouth, ears, eyelids, and eyebrows [Breazeal99]. As mentioned above, various body parts have been used for communication. In addition, physical contact behaviors such as touching have been focused on, mainly in research works seeking suitable robot behaviors in response to a human's touch. Sato and his colleagues developed a stuffed-animal robot and showed that it is useful for alleviating the pain of medical care [Sato97]. Hoshino and her colleagues developed a sensor suit that covers the entire body of the stuffed animal [Hoshino98]. They are applying it to human-robot interaction based on touching in our daily life.
Naya and his colleagues have implemented touch sensors on a pet robot and classified humans' touching behaviors [Naya99]. Personal space between humans and robots has also been studied [Nakauchi02]; this feature lies between the static and dynamic features of non-verbal communication. We believe the above-mentioned studies on static

and dynamic aspects will be integrated to develop a totally embodied robot system.

2.3 Integration Technology

The purpose of our research is to make robots participate in human society. Thus, it is important to integrate elemental technologies to realize a robot that really works in our daily life. Although elemental technologies have been developed, researchers are still struggling to develop such a communication robot. In public spaces, several robots have already started to work. Burgard and his colleagues developed a guide robot for a museum that autonomously shows exhibits and behaves interactively by using a simple interface such as buttons [Burgard98]. Schulte and his colleagues pursued more natural interaction and developed a guide robot that has a head [Schulte99]. Asoh and his colleagues developed a robot named Jijo-2 that autonomously moves around an office environment and obtains information about the environment by interacting with humans [Asoh97]. Robots that work in our daily life were envisioned more than 30 years ago [Thring64]. In Japan, many robots have been presented recently, such as a pet-like healing robot [Shibata01], a dog-like autonomous robot named AIBO by SONY [Fujita01], a communication robot for the elderly by Matsushita Electric Co. [Noguchi02], and a personal robot by NEC [Papero]. These robots are going to be used practically and will become interfaces between humans and information infrastructures such as the Internet. Like these robots, communication robots, which behave socially as our partners in daily life, are being put to practical use. A communication robot needs to work autonomously and interact with humans. We believe that it needs not only to effectively use its body but also to behave animately and induce entrainment. The combination of using a body, life-like behaviors, and entrainment encourages humans to interact with the robot in a natural and smooth way (Figure 2.1). A robot that has these features

Figure 2.1: Communication robot research.

will be capable of conveying much more non-verbal information than computer agents displayed on a screen. Furthermore, it has the socially oriented feature that an ELIZA-like automatic chat system has, but with physical-body characteristics as well. In the development methodology of communication robots, there are mainly two directions. One is a top-down approach based on a certain model of control. For example, Ogata and his colleagues developed a robot that has a self-preservation function based on a human brain model. They intend to develop a mind in the robot under the hypothesis that "as it becomes more complex and intends to self-preserve, it will come to possess emotion" [Ogata99]. Miwa and his colleagues implemented a 3-D mental model in a face robot to interact with humans [Miwa01]. These developments are model-based approaches, where the model and intermediate representations are defined first, and then the details of the robot system are implemented. In

contrast to this approach is the bottom-up approach: elements of behaviors are prepared first, and then a communication robot is constructed by putting the elements together. Our approach follows the second direction. We have implemented 102 modules that generate behaviors (named situated modules) and 884 partial relationships among the modules (named episode rules) [Kanda02b], which generate varying and complex interactive behaviors. The details of the approach are introduced in Chapter 3.

2.4 Evaluation Technology

In the development of communication robots, an evaluation method is as important as the development method. With task-oriented robots, we can evaluate their performance according to physical measures such as speed and accuracy. These measures help us to improve the performance. For communication robots, we need to apply psychological measures. This is because the performance of these robots, which interact with humans, is discussed based on how they influence humans with respect to the naturalness, smoothness, and duration of communication. Such evaluation methods are still at a developing stage. For realizing communication robots, we believe it is important and necessary to repeatedly improve both the software architecture and the evaluation method. First approaches to determining suitable evaluation methods were made by Matsuura's evaluation of a performance robot [Matsuura91] and Shibata's evaluation of robot arm movements [Shibata98]. So far, evaluations of human-robot interaction have often been performed with basic psychological questionnaires. Nakata and his colleagues analyzed the effects of expressing emotions and intention [Nakata98]. In this research, they compared the simple behaviors of their stuffed-animal robot, and concluded that passive behaviors produce familiar impressions. It is interesting that they used familiarity for evaluation of the robot.
Ogata and his colleagues studied emotional communication by evaluating the impressions of the robot [Ogata99]. Such impression evaluation has also been performed for evaluating a robot that controls / does not control its eye direction [Kanda01]. These questionnaire-based methods have contributed to the development of evaluation methods; however, the evaluation ability of such questionnaires is strongly limited. To supplement the evaluation ability, several researchers have adopted unobtrusive measures. Unobtrusive measures are used in psychological research [Webb99], and they have the merit that measurement does not obstruct experiments. Mizoguchi and his colleagues have already employed the spatial distance between humans and a robot [Mizoguchi97]. Besides spatial distance, Nakata and his colleagues applied the Laban theory of dance to evaluation. Based on the theory, they implemented simple behaviors in a stuffed-animal robot, compared the subjects' impressions of the robot, and are trying to find physical parameters that correlate with the impressions [Nakata01]. There are also approaches that use unconscious human behaviors such as eye contact [Kanda02a] and an electroencephalograph [Honna00].

2.5 Summary

In this chapter, the related studies pertinent to realizing communication robots that will participate in human society were reported. In Section 2.1, we argued that robots that naturally communicate with humans need to effectively use non-verbal information, as humans do in communication. To this effect, there are many research works on the use of robots' bodies (discussed in Section 2.2). In contrast, the integration of elemental technologies (Section 2.3) and the evaluation of communication robots (Section 2.4) are still at an early development stage. Meanwhile, information infrastructures are growing, and society is aging. The necessity for communication robots, which offer humans natural and smooth communication as an interface to the infrastructure, is becoming larger and larger. Therefore, it is necessary and important to study communication robots that effectively use their bodies for communication.

Chapter 3

Development Method Based on the Constructive Approach

Recent progress in robotics research has brought with it a new research direction: communication robots. They will behave autonomously and communicate naturally and smoothly with humans by effectively using their bodies. Thus, it is necessary to prepare a robot body that has rich physical expression ability, like humans. To achieve autonomy in communication, a software mechanism is needed that autonomously interacts with humans. Previous software architectures for intelligent robots were designed to achieve particular tasks. Thus, it is difficult to apply them directly to a communication robot, which needs to deal with dynamic human behaviors. In this chapter, we focus on realizing natural and smooth human-robot interaction and propose a software mechanism and development method for autonomous communication robots. Our approach, named the constructive approach, is based on combining as many simple and appropriate sensory-motor modules as possible (Section 3.1). A humanoid robot has been developed for the study (Section 3.2). Along with the approach, we propose an architecture for communication robots (Section 3.3). The interactive behaviors are designed with knowledge about the robot's embodiment obtained from cognitive experiments (Section 3.3.1), and then implemented as situated modules with

situation-dependent sensory data processing for understanding complex human behaviors (Section 3.3.2). Reactive modules are also prepared for reactive behaviors (Section 3.3.3). The relationships between behaviors are implemented as rules governing execution order (named episode rules) to maintain a consistent context for communication (Section 3.3.4). This mechanism using situated modules and episode rules is expanded to communication between multiple robots and humans. Section 3.3.5 describes the functions of multi-person identification and adaptation. The identification function is implemented by using a wireless tag system. Based on person identification, social episode rules allow the robot to simultaneously interact with multiple people (explained in Section 3.3.6). Section 3.3.7 reports the mechanism that allows multiple robots to behave cooperatively to promote human-robot interaction. As a result of using the constructive approach, we have implemented 102 situated modules and 884 episode rules, which generate a complicated switching of behaviors. To support the constructive approach, a development tool named Episode Editor (Section 3.4) has been prepared, which visually expresses the complex relationships and the execution of many simple behaviors.

3.1 Constructive Approach

Intelligent robot research (mainly on locomotive robots) started at SRI in the latter half of the 1960s [Nilsson84]. At the beginning, research topics were mainly about the hardware of sensors and manipulators, and the navigation of a robot with a small number of sensors such as sonars, a laser range finder, and vision. As the elemental technologies matured, research focused on architectures that integrate the elemental technologies (such as sensors and actuators) to achieve a complex task. Recently, robotics research has begun to approach communication robots. A communication robot needs to work in complex environments in which many humans exist, recognize and respond to various human behaviors, and autonomously interact with humans.

Figure 3.1: Hybrid architecture.

For such a robot, we need to consider a software architecture that makes robots continually work and interact with humans. Our approach is the constructive approach, where developers prepare a large number of simple, appropriate sensor-action behaviors. From the viewpoint of hierarchy, it is similar to the behavior-based system proposed by Brooks. However, it is characterized by a large number of simple behaviors, which generate complex internal status and behaviors, and then produce perceived intelligence. In addition, "constructive" implies a long-term constructive approach: develop a communication robot, evaluate it, apply it to the real world, and then develop a next-generation robot based on the results. We believe that such an iterative constructive loop is necessary to realize a communication robot that participates in human society.

3.1.1 Function-Based Architecture and Behavior-Based Architecture

One of the typical architectures for intelligent robots is a function-based architecture. The architecture consists of function modules that observe the world through sensors, represent environments based on sensor information, analyze the representation by using knowledge databases, plan actions, and execute the planned actions. The function modules are connected in a line, and the information processed by each module is sent to the next module. Thus, sensing and action are coupled through various intermediate representations. This loose coupling causes problems [Brooks91]. For example, a robot

often needs to execute actions reactively in response to sensor input. However, it is difficult for the function-based architecture to perform such reactive behaviors. On the other hand, Brooks [Brooks86] proposed a behavior-based architecture called the subsumption architecture that realizes close coupling to the real world. The unique concept of the subsumption architecture, which consists of reactive modules, is not to utilize any explicit internal representations, but to refer to the real world as its own model. Both the traditional function-based and behavior-based architectures have merits and demerits. The behavior-based architecture is superior in reactivity to the function-based architecture, whereas the traditional function-based architecture is needed to realize deliberative behaviors based on environment representations. These architectures should be integrated, and several researchers have already proposed such integrated architectures. Arkin and his colleagues [Arkin93] used potential fields to represent both reactive and deliberative behaviors. Behaviors represented with potential fields are easily combined, and the robot can determine its actions by vector computation. However, the problem with this approach is that no single representation can express all kinds of robot behaviors. Another approach is to prepare special modules that can handle environment representations within the behavior-based architecture. This approach is more popular, and several researchers have proposed its use [Yuta90, Fleury94, Kuniyoshi97, Oka97, Parker98]. As shown in Figure 3.1, the hybrid architecture can be obtained by adding new modules that deal with internal representations in addition to the sensors and actuators.
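As a rough sketch of the hybrid architecture in Figure 3.1 (the function names, sensor keys, and control flow below are our own illustrative assumptions, not the thesis's implementation), a deliberative module reads an internal representation in much the same way a reactive module reads sensors, and a triggered reactive behavior suppresses the deliberative one:

```python
# Minimal sketch of a hybrid (reactive + deliberative) control step.
# All names and data layouts here are hypothetical illustrations.

def reactive_avoid(sensors, actuators):
    """Tight sensor-actuator coupling: no internal model involved."""
    if sensors["sonar_front"] < 0.3:       # obstacle within 30 cm
        actuators["wheels"] = "turn_left"
        return True                        # subsumes the deliberative layer
    return False

def deliberative_navigate(internal_map, actuators):
    """Deliberation over an internal representation (a planned path)."""
    if internal_map["path"]:
        actuators["wheels"] = internal_map["path"].pop(0)

def control_step(sensors, internal_map, actuators):
    # The reactive behavior, when triggered, suppresses the deliberative one,
    # mirroring how the hybrid architecture layers the two module types.
    if not reactive_avoid(sensors, actuators):
        deliberative_navigate(internal_map, actuators)
```

The internal map is accessed exactly like an external sensor, which is the point of the hybrid arrangement: the robot behaves by consulting both the external and internal worlds.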
Based on this architecture, the robot behaves by accessing both the external and internal worlds.

3.1.2 Constructive Approach to Communication Robots

In addition to reactivity, communication robots need various interactive behaviors, which encourage humans to interact with them, and they must then recognize humans' behavioral responses. The robots should be capable of working in complex environments like those of daily life, and of interacting with very dynamic beings: humans.

Figure 3.2: Robovie: an interactive humanoid robot.

To solve this problem, we have adopted a hybrid architecture based on situated recognition [Ishiguro99]. For human-robot communication, this situated approach is effective since robots can create specific situations while communicating with humans. This architecture also allows us to easily and progressively add new behaviors. Each behavior is simple, but performs appropriate sensor-actuator actions in a particular situation. Intelligently switching between behaviors generates complexity and perceived intelligence. Consequently, humans will believe they can communicate with the robot, and will desire to do so. Our constructive approach means that we continually implement behaviors until humans think the robot is an intelligent being, beyond a simple automatic machine. When we implemented 40 behaviors and 300 relationships between them on a humanoid robot that possessed enough sensors and physical capacity to express itself, people were able to relate to the robot interpersonally [Kanda02a]. Such interpersonal behaviors were also observed in the interaction between a child and the robot [Ishiguro01]. Today, the number of behaviors is above 100. We believe this is a good starting

point to discuss the robot's intelligence and the mechanisms to achieve it. Regarding the design of how the robot interacts, we consider the active approach to interaction desirable to make up for imperfect sensory-processing technologies. Ono and his colleagues discussed the importance of bi-directional communication [Ono00]. They argued that robots should not simply obey the commands of humans but should communicate with them as equals. Currently, sensory-recognition technology is not sufficient to recognize every human behavior. In our approach, robots are proactive in initiating interactions and entice humans to respond adaptively to their actions. The robot's embodiment (head, eyes, arms, etc.) helps it actively entrain humans during interaction.

3.2 Hardware of a Humanoid Robot

Before we explain the software architecture, we must detail the hardware of the robot. A robot named Robovie has been developed (Figure 3.2). The robot, which has a human-like appearance, is designed for communication with humans. Like a human, it has various senses, such as vision, touch, and audition. With its human-like body and sensors, the robot performs interactive behaviors that are meaningful to humans. The size of an interactive robot is important. In order not to give a fearful impression to humans, we decided the height would be 120 cm, the same as a junior-school student. The diameter is 40 cm and the weight is about 40 kg. The robot has two arms (4*2 DOF), a head (3 DOF), two eyes (2*2 DOF for gaze control), and a mobile platform (2 driving wheels and 1 free wheel). The robot has various sensors: 16 skin sensors covering the major parts of the robot, 10 tactile sensors around the mobile platform, an omnidirectional vision sensor, 2 microphones to listen to human voices, and 24 ultrasonic sensors for detecting obstacles. The eyes have a pan-tilt mechanism with direct-drive motors and are used for stereo vision and gaze control.
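The specification above can be summarized in structured form. The figures are taken directly from the text; the dictionary layout and field names are our own summary, not part of the Robovie software:

```python
# Robovie's hardware specification as listed in the text above.
# The dictionary layout and key names are our own (hypothetical) summary.
ROBOVIE_SPEC = {
    "height_cm": 120,
    "diameter_cm": 40,
    "weight_kg": 40,
    "dof": {"arms": 4 * 2, "head": 3, "eyes": 2 * 2},
    "mobile_platform": {"driving_wheels": 2, "free_wheels": 1},
    "sensors": {"skin": 16, "tactile": 10, "omnidirectional_vision": 1,
                "microphones": 2, "ultrasonic": 24},
}

total_dof = sum(ROBOVIE_SPEC["dof"].values())  # 8 + 3 + 4 = 15
```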
The skin sensors are important for realizing interactive behaviors. We have developed sensitive skin sensors using pressure-sensitive conductive rubber. Another important point in the design is battery life. The robot can function for 4 hours and charges its battery by autonomously looking for battery-charging stations. With these actuators and sensors, the robot can generate the behaviors required for communication with humans. Robovie is a self-contained autonomous robot. It has a Pentium III PC on board for processing sensory data and generating behaviors. The operating system is Linux. Since the Pentium III PC is sufficiently fast and Robovie does not require precise real-time control like a legged robot, Linux is the best solution for easy and quick development of Robovie's software modules.

3.3 Software Architecture

We developed a software architecture for the robot. The basic components of the system are situated modules and episode rules. Each situated module is implemented by coupling communicative units. The robot system sequentially executes situated modules, and episode rules govern their execution order. The architecture has merits for both development and the use of the body's communicative features. For the constructive approach, the basic strategy of implementation is as follows:

1. Develop situated modules for various situations.
2. Define the basic execution order of situated modules with episode rules for sequential transitions.
3. Add episode rules for reactive transitions.
4. Modify implemented episode rules, and specify episode rules of negation to suppress the execution of situated modules in a particular long-term context.

To implement and modify episode rules, the episode editor (described in

Section 3.4) aids developers. In particular, the visualization support function of the episode editor is helpful for the above steps.

3.3.1 Communicative Unit

To make the best use of the physical expression ability of the robot's body, a new kind of collaborative work between cognitive science and robotics has been performed. Cognitive science, especially its ideas on the practical use of the robot's body properties for communication, helps us design more effective robot behaviors. In turn, the developed robot, which has enough physical expression ability, can be used for verifying theories in cognitive science. Below, a collaborative experiment in cognitive science (fully reported in [Ishiguro01]) is briefly explained, along with the ideas about the robot's body properties we obtained. Then, the mechanism by which to incorporate these ideas into the software is reported. This enables easy development and rich human-robot interaction.

An Example of a Cognitive Experiment

Mutually entrained gestures are important for smooth communication between a robot and a human. An experiment was performed to confirm this. The experiment focused on the interaction between a subject (a human who interacts with the robot in the experiment) and the robot while the robot explains route directions to the subject. Figure 3.3 displays the environment and the parameters of the experiment. By using several different robot gestures while teaching the route, the relationships between the subjects' entrained gestures and the level of their understanding of the robot's utterances were investigated. The experiments consist of the following phases:

1. The subject and the robot move from S to A and from R to A, respectively.

Figure 3.3: Environment and the settings of a cognitive experiment using Robovie.

2. The subject asks the robot the route to the lobby (B). The robot says "Go forward, turn right, ..." while performing corresponding gestures at several levels. Entrained gestures (unconscious movements of the hands or elbows synchronized with the robot's gestures, shown in Figure 3.4) appeared in many subjects.

3. The subject tries to go to the lobby.

As a result of such cognitive experiments, we obtained the following important ideas:

1. Rich robot behaviors induce various human communicative gestures that help the understanding of robot utterances.

2. Expression of attention by the robot (such as pointing at something with its hands) guides the human's focus.

3. The robot's eye contact indicates that the robot intends to communicate with the human.

Figure 3.4: Results of a cognitive experiment using Robovie.

4. Sharing of a joint viewing point (a proper positional relation) establishes a situation where the human can easily understand the robot's utterance.

Implementation of Communicative Units

We used these ideas to implement communicative units in the software architecture. A communicative unit (communicative sensory-motor unit) is a very basic unit that realizes a sensory-motor action for natural and effective human-robot communication. The experiments in cognitive science

Figure 3.5: Communicative unit and situated module.

produced several essential ideas about the robot's body properties. Each communicative unit is based on one of these ideas. More specifically, we have implemented "gaze at object," "eye contact," "nod," and so forth, as shown in Figure 3.5. Although the number of implemented ideas is not so great to date, we can continuously develop such communicative units through this interdisciplinary approach. We consider that the communicative ability of the robot will increase in proportion to the development of communicative units.

3.3.2 Situated Module

In linguistics, an adjacency pair is a well-known unit of conversation in which the first expression of the pair requires the second expression to be of a certain type. For example, "greeting and response" and "question and answer" are considered pairs. Similarly, we consider that human-robot interaction can be divided into action-reaction pairs. That is, when a human takes an action toward a robot, the robot reacts to the human's action; and when the robot takes an action toward the human, the human reacts to that action. In other words, the continuation of these actions and reactions forms the

Figure 3.6: Implementation of a situated module. (Dark-gray boxes indicate behaviors realized by communicative units, and the white box is realized by a specific implementation for this behavior.)

interaction. In reality, although action and reaction should occur equally, the recognition ability of the robot is not as powerful as that of a human. Consequently, the robot actively takes actions rather than waiting to react to a human's actions in order to maintain communication. Each situated module is designed to realize a certain action-reaction pair in a particular situation, where the robot mainly takes an action and recognizes the human's reaction. Deviations from this assumption are handled by reactive transitions and reactive modules.

Precondition, Indication, and Recognition Parts

Each situated module consists of precondition, indication, and recognition parts, as shown in Figure 3.5. They are implemented by using communicative units. By checking the precondition, the robot knows whether the situated module is executable or not. For example, the situated module

Figure 3.7: Three kinds of transitions of situated modules: (a) sequential transition (the human reacts to the robot); (b) reactive transition (the robot reacts to the human's interruption); (c) activation of reactive modules (the robot reacts; no transition).

that asks to shake hands is executable when a human (a moving object located near the robot) is in front of the robot. The situated module that talks about the weather by retrieving weather information from the Internet is not executable (its precondition is not satisfied) when the robot cannot access the Internet. By executing the indication part, the robot takes an action to interact with humans. For example, in the handshake module, the robot says "Let's shake hands" and offers its hand. This behavior is realized by combining communicative units for eye contact and for maintaining positional relationships (repositioning its body to face the human), and then speaking the sentence

"Let's shake hands" and making the appropriate body movement to offer its hand. As shown in Figure 3.5, the indication part of a situated module is implemented by coupling communicative sensory-motor units and directly supplementing other sensory-motor units (particular utterances, positional movements, and so forth). Figure 3.6 shows an example of implementing a situated module that realizes a poster-pointing behavior. In the figure, dark-gray boxes are communicative units, and the white box (the utterance "Look at the poster") is a directly implemented behavior. The recognition part is designed to recognize human reactions incited by the indication part. In other words, it is an expectation of the human's reaction. The situated module itself produces a particular situation between the robot and the human. By expecting the situation and limiting the information it needs to process for that situation, the robot can recognize complex human behaviors with simple sensory data processing. When the recognition is performed by vision, we call it situated vision.

Sequential and Reactive Transitions of Situated Modules, and Reactive Modules

After the robot executes the indication part of the current situated module, it recognizes the human's reaction with the recognition part. Then it records the result values corresponding to the recognition result and transits to the next executable situated module (Figure 3.7 (a)). The next module is decided by the result values of the current situated module and the execution history of situated modules (the episode). This sequential transition is governed by the episode rules. The episode rules allow for consistent transitions between the situated modules. However, sequential transition by the episode rules does not represent all the transition patterns needed for human-robot communication. There are two other types of transitions: interruption and deviation. Let us consider the following situation.
When two people are talking and a telephone suddenly rings, they will stop talking and respond to the telephone call. Interruptions and deviations like this are handled as reactive transitions. A reactive transition is also defined by the episode rules (Figure 3.7 (b)). If a reactive transition is assigned to the current situation and the precondition of the assigned next situated module is satisfied, the robot stops executing the current situated module and immediately transits to the next situated module. This software mechanism enables developers to easily and progressively implement situated modules. As discussed in Section 3.3.1, communicative units correspond to the elemental interactive behaviors that effectively use the humanoid body for smooth communication. Since a situated module produces a particular situation during the indication part, it is easy to prepare appropriate recognition functions (sensory processing) for the predictable situation the indication part causes. Thus, this mechanism supports our constructive approach.

3.3.3 Reactive Module

Reactive modules are also prepared to deal with interruptions, but in this case the robot does not quit executing the current situated module when a reactive module is activated (Figure 3.7 (c)). Instead, the robot system executes the reactive module in parallel with the current situated module. For example, we implemented a reactive module that makes the robot gaze at the part of its body being touched. When the robot talks to a human and the human suddenly touches the robot's arm, the robot gazes at the arm to indicate that it has noticed the touch, but continues talking. Similar to the subsumption architecture [Brooks86], upper-hierarchy modules (situated modules) can suppress lower ones (reactive modules). As the literature in developmental psychology shows [Rakison01], humans find animateness in contingent movements and goal-directed behaviors. We believe reactive behaviors are important because they let humans find animateness in communication robots, as they do in other humans; and in doing so, humans maintain relationships with the robots.
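The structure described above — a situated module's three parts, plus reactive modules that fire without causing a transition — might be sketched as follows. The class and function names are our own illustrative assumptions, not the actual Robovie code:

```python
# Hypothetical sketch of situated and reactive modules; all names are ours.

class SituatedModule:
    """One action-reaction pair: precondition, indication, recognition."""
    def __init__(self, name, precondition, indication, recognition):
        self.name = name
        self.precondition = precondition  # is this module executable now?
        self.indication = indication      # robot's action toward the human
        self.recognition = recognition    # expected human reaction -> result value

    def execute(self, world):
        self.indication(world)            # e.g. say "Let's shake hands"
        return self.recognition(world)    # e.g. "accepted" / "no reaction"

class ReactiveModule:
    """Fires in parallel with the situated module; causes no transition."""
    def __init__(self, trigger, action):
        self.trigger = trigger
        self.action = action

    def maybe_run(self, world):
        if self.trigger(world):
            self.action(world)            # e.g. gaze at the touched body part

def run_step(current, reactive_modules, world):
    for rm in reactive_modules:
        rm.maybe_run(world)               # reactive behavior runs alongside...
    return current.execute(world)         # ...while the situated module continues
```

For instance, a "gaze at touched arm" reactive module can set a gaze target while a handshake situated module keeps talking, mirroring the arm-touch example above.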

3.3.4 Episode Rule

An episode is a sequence of interactive behaviors that a human and a robot exchange. From the viewpoint of the robot, it is a sequence of executed situated modules and their results. The robot memorizes the episode in its internal status. The episode rules guide the robot into a new episode of interaction with humans by controlling transitions between situated modules. The module control component determines the execution order of situated modules by referring to the implemented episode rules and the internal status. It is important to keep consistency in communication; this is the motivation for applying a rule mechanism in the software architecture. In our approach, robots take the initiative to interact with humans. Although such an active interaction approach works effectively for a while in certain situations, it lacks mechanisms for maintaining a consistent context for long-term interaction. For example, if a robot asks a human "Where are you from?" and then asks the same question a few minutes later, this will obviously create a strange impression on the human. Therefore, we apply a rule-based constraint mechanism in addition to active interaction. Traditionally, production systems such as [Georgeff89, Ishida95] are a popular rule-based approach. All episode rules are compared with the current situated module and the execution history of situated modules to determine which situated module to execute next. The system performs this comparison in the background of the current situated module's execution and prepares the next-executable-module list. When the current module's execution finishes, the robot checks the preconditions of each situated module in the list. If the precondition is satisfied, it transits to that situated module. Each episode rule has a priority. If some episode rules conflict, the episode rule with higher priority is used. Table 3.1 shows the basic grammar of the episode rules.
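The selection process just described — matching rules against the execution history, resolving conflicts by priority, letting negative rules suppress candidates, and finally checking preconditions — can be sketched as follows. This is a hypothetical illustration under our own names and simplifications (patterns reduced to plain predicates), not the thesis's implementation:

```python
# Hypothetical sketch of the module-control selection loop; names are ours.

class EpisodeRule:
    """A pattern over the execution history proposes (or, if negation
    is set, suppresses) a candidate next situated module."""
    def __init__(self, pattern, next_module, priority=0, negation=False):
        self.pattern = pattern          # predicate over the episode history
        self.next_module = next_module  # module ID proposed or suppressed
        self.priority = priority
        self.negation = negation        # a "!" rule suppresses next_module

def select_next_module(rules, history, preconditions):
    """preconditions: module ID -> predicate saying if it is executable."""
    proposed, suppressed = {}, {}
    for r in rules:
        if not r.pattern(history):
            continue
        table = suppressed if r.negation else proposed
        # keep only the highest priority seen for each module ID
        if r.next_module not in table or r.priority > table[r.next_module]:
            table[r.next_module] = r.priority
    # a negative rule removes a candidate only if its priority is higher
    candidates = sorted(
        ((p, m) for m, p in proposed.items() if suppressed.get(m, -1) < p),
        reverse=True)
    for _, module_id in candidates:
        if preconditions[module_id](history):   # precondition check
            return module_id
    return None
```

With rules mirroring the Figure 3.8 conflict (a rule proposing BYE, one proposing HANDSHAKE, and a higher-priority negative rule on BYE), the loop selects HANDSHAKE, matching the behavior described in the text.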
Each situated module has a unique identifier called a module ID. < ModuleID = result value > refers to the execution history and the result value of a situated module, so < ModuleID1 = result value1 >< ModuleID2 = result value2 >... is a rule that refers to a previously executed sequence of situated modules (Table 3.1-1).

Table 3.1: Grammar of the episode rules
1. Basic structure describing an executed sequence: < ModuleID = result value > ... < ... > NextModule
2. Selective group (OR): (< ModuleID1 = result value1 > < ModuleID2 = result value2 >)
3. Repetition (n: minimum, m: maximum number of times): (...){n,m}
4. Negation of an episode rule: ! < ... > NextModule
5. Negation of a module ID or result value: ^< ModuleID = ^result value > NextModule

(< ModuleID1 = result value1 > < ModuleID2 = result value2 >) denotes a selective group (OR) of executed situated modules, and (...) denotes a block consisting of a situated module, a sequence of situated modules, or a selective group of situated modules (Table 3.1-2). As in a regular expression, repetition of a block is written (...){n,m}, where n gives the minimum number of times to match the block and m the maximum (Table 3.1-3). The negation of a whole episode rule is written with an exclamation mark "!": for example, ! < ... > ... < ... > NextModuleID (Table 3.1-4) means that the module NextModuleID will not be executed when the episode rule matches the current situation specified by < ... > ... < ... >. The negation of a module ID or a result value is written with a caret "^" (Table 3.1-5).

Figure 3.8 shows an example of transitions. At first, the robot explores the environment with EXPLORE. Then a human touches the robot's shoulder. This causes a reactive transition governed by episode rule 1 (Figure 3.8-1): the robot turns to the human by executing the situated module TURN. After TURN finishes, the robot starts to greet the human by executing GREET; this second transition is caused by episode rule 2 (Figure 3.8-2).

Figure 3.8: Illustrated example of the transitions of the current situated modules ruled by episode rules

Next, we explain what happens in the event of a conflict between episode rules (Figure 3.8-3). When GREET results in "No reaction," BYE is the candidate for the next situated module selected by episode rule 3, while HANDSHAKE is the candidate selected by episode rule 4. Episode rule 5 is a negative episode rule that suppresses the transition to BYE (it specifies that the robot should not say goodbye before the handshake once its exploration has been interrupted). If the priority of episode rule 5 is higher than that of episode rule 3, BYE is removed from the candidates for the next execution.

3.3.5 Person Identification

It is difficult to develop robots that identify people in real environments using visual and auditory sensors. With respect to audition, there are usually many people talking at the same time. With respect to vision, lighting conditions are complicated, and the shapes and colors of objects in a real environment are not simple enough for existing computer vision techniques to attain stable and robust recognition.

The identification function must be robust, because mistakes in human identification spoil the relationships between the robot and humans. For example, if a robot speaking with one person calls out another person's name, it will hurt the feelings of the person it is interacting with. To make matters worse, robots that work in public spaces need to distinguish among hundreds of humans and to identify the several humans surrounding them simultaneously; in real situations, thousands of people work together in office buildings, schools, and hospitals.

To solve this problem, we implemented a multi-person identification system for communication robots using a wireless tag system. Recent RFID (radio frequency identification) technologies have made contact-less identification cards practical; for example, several companies have already adopted contact-less IC cards for employee identification. Meanwhile, since cellular phones came into wide use, most people already carry wireless equipment. We expect wireless identification to become the public standard for person identification. With such a wireless system, robots become capable of robust, simultaneous recognition of many people, and can then behave adaptively toward humans using personal information and the memorized history of communication with particular people.

Hardware Mechanism

We adopted the Spider tag system [Spider] for wireless person identification. In this system, tags (shown in Figure 3.9) periodically transmit their IDs, and the reader receives these signals and passes the IDs to the computer onboard the robot. A tag is about 6 cm in size and is therefore easy for people to carry. The system uses 303 MHz radio waves. We attached the reader's antenna above the omnidirectional camera, since the robot body radiates strong noise in the radio spectrum. The reader's attenuation parameter can be controlled to adjust the detection range.
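As a toy model of this range adjustment (the detection ranges, step count, and function names below are illustrative assumptions, not the Spider system's actual interface):

```python
# Assumed readable range (metres) per attenuation step; the real reader has
# eight steps, each reducing the maximum receiver gain by 12.5%.
RANGE_BY_STEP = {1: 0.8, 2: 1.5, 3: 2.5, 4: 3.5}

def readable(tags, step):
    """Return the IDs of tags whose distance lies within the current range.
    tags: {tag_id: distance_in_metres}."""
    return {t for t, d in tags.items() if d <= RANGE_BY_STEP[step]}

def nearest_tags(tags):
    """Sweep from the narrowest range outward; the first non-empty reading
    approximates the nearest people despite the coarse distance resolution."""
    for step in sorted(RANGE_BY_STEP, key=RANGE_BY_STEP.get):
        found = readable(tags, step)
        if found:
            return found
    return set()
```

Sweeping the attenuation this way trades time for a rough distance estimate, which matters later when the robot must single out the nearest person.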

Figure 3.9: Attached antenna and tags

Software Mechanism

With this method of person identification, the robot system can interact with multiple people simultaneously. It classifies the people around it into a participant and observers, and then behaves adaptively toward each person. The software architecture includes four kinds of databases used in conjunction with person identification (Figure 3.10): a person ID database that stores an internal ID for each person, episode rules that control the execution order of the situated modules, public and private episodes that maintain the communication with each person, and a long-term individual memory that stores personal information. Module control governs the execution of situated modules by referring to the episode rules and the episodes (the history of communication), taking the people currently nearby into account.

The long-term individual memory serves the situated modules. It stores local information produced by executing particular situated modules as well as personal information such as a person's name. For example, a robot that teaches a foreign language in a school needs to manage each student's learning progress, such as previous answers to game-like questions given during a situated module.

Figure 3.10: Software architecture

This long-term memory is not only associated with a particular situated module but is also used to share data among several situated modules. For example, although the robot may know a person's name a priori, the situated module that calls the person by name will not become executable until another situated module that asks for the person's name has been executed successfully. Several situated modules use person identification. For example, one module calls the name of a person who is at a certain distance from the robot, which is useful for drawing that person into interaction. Another module plays the body-part game (the robot asks a person to touch its body parts by saying their names) and remembers the answers given by each person.

In linguistics, Clark classified talking people into two categories: participants and listeners [Clark96]. Participants are mainly speakers and hearers, while listeners just listen to the conversation and therefore bear no responsibility in it. Following Clark's work, we classify the humans located around the robot into two categories: participants and observers. Since we are concerned only with humans within the robot's awareness, our categories are similar to Clark's definition, except that observers do not include eavesdroppers, who listen in without the speaker's awareness.
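Returning to the long-term individual memory, the name-calling gate described above can be sketched minimally as follows (the class, key, and module names are hypothetical, not the implemented interfaces):

```python
class LongTermMemory:
    """Per-person memory shared among situated modules."""
    def __init__(self):
        self._store = {}                      # person_id -> {key: value}

    def remember(self, person_id, key, value):
        self._store.setdefault(person_id, {})[key] = value

    def recall(self, person_id, key):
        return self._store.get(person_id, {}).get(key)

def call_name_executable(memory, person_id):
    """A CALL_NAME module becomes executable only after an ASK_NAME module
    has successfully stored the person's name in long-term memory."""
    return memory.recall(person_id, "name") is not None
```

The point of the sketch is that the precondition of one situated module is expressed entirely in terms of what another module has written into the shared memory.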

In the software architecture, person identification identifies people and their roles in the interaction, as participant or observers. We believe the distance between the robot and humans also enables the robot to categorize them. As Hall discussed, there are several steps in the distance between conversing humans [Hall66]. According to his theory, a distance of less than 1.2 m is conversational, and a distance from 1.2 m to 3.5 m is social; people meeting for the first time often talk at the social distance. Our robot recognizes the nearest person within 1.2 m as the participant, and the remainder of the people within detectable distance (of the wireless identification system) as observers. The readable range is short enough that every detected person can be regarded as being within the robot's awareness.

Verification of Readable Distance

The performance of person identification was verified through two experiments, the first of which determines the readable area. The robot radiates much radio noise, which complicates the reception of signals from the tags. The requirement for the robot is to find all persons around it. In the experiment, a subject held tags at various distances from the robot in an indoor environment, and we measured how often the system detected each tag. As shown in Figure 3.11, ample performance of person identification was verified: the system stably detects subjects within 1.5 m. The reader has eight steps of attenuation, each step reducing the maximum gain of the receiver by 12.5%. As the attenuation setting is increased, the readable area decreases. The attenuation is shown as R in the graph, where the reader's gain is R/8 of its maximum. The graph indicates that we can detect the nearest people, even though the distance resolution is not very good.
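Putting the tag readings together with Hall's distance categories, the participant-observer split can be sketched as follows (the function and constant names are ours):

```python
CONVERSATIONAL_M = 1.2   # Hall's conversational distance, in metres

def classify(detected):
    """detected: {person_id: distance_m} for everyone the tag system reads.
    The nearest person inside conversational distance is the participant;
    all other detected people are observers."""
    near = {p: d for p, d in detected.items() if d < CONVERSATIONAL_M}
    participant = min(near, key=near.get) if near else None
    observers = [p for p in detected if p != participant]
    return participant, observers
```

If nobody stands inside the conversational distance, there is no participant and everyone detected is treated as an observer.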
The result with attenuation R=8 seems strange, because the readable area is smaller than for R=5, 6, and 7. We believe this is caused by the radio noise from the robot: with the least attenuation, the system is too susceptible to the noise to detect any tags.

Figure 3.11: Readable area of the person identification system

Evaluation of Multiple Person Identification

The person identification system in the robot continually detects the participant and observers with the wireless tag system. Having verified the basic performance of the tag system, we evaluated the performance of person identification and the participant-observer distinction through another experiment. The experiment was performed with three subjects, as shown in Figure 3.12. The distances between the subjects and the robot were measured with a motion capture system, which has high resolution in time (120 Hz) and space (1 mm in the room where we performed this experiment). The subjects wore a wireless tag and markers for the motion capture system, and moved around the space, sometimes interacting with the robot.

Figure 3.12: Scene of the experiment on participant-observer distinction

Figure 3.13 shows the result of the experiment. The upper graph shows the distance between each of the three subjects and the robot. When they interacted with the robot, they stood within 0.7 m of it. This distance was also observed in our previous experiment [Kanda02a]. We consider the conversation distance for the robot to be smaller than that for human adults because the robot asks humans for physical contact and is not very tall. The lower graph shows the persons detected by the tag system; the bold line indicates the subject detected as the participant, and the fine lines indicate the observers. The result is very good: the subjects within 1.2 m (the conversational distance of adults) are always detected, and the nearest subject is taken as the participant. The system also detected almost all subjects within 3.0 m. Because the system frequently changes the attenuation parameter to detect the participant, the time needed to detect and classify people is somewhat long, with a delay of about 10 seconds. Nevertheless, this is sufficient for the robot's detection mechanism, because it is of the same time order as the execution of each interactive behavior.

Figure 3.13: Participant-observer distinction of the person identification system

Discussion

In this subsection, we proposed a mechanism for communicating with multiple people simultaneously. The experiments verified the basic ability of the multi-person identification system and the distinction between participant and observers. The next step is the verification of social ability, which requires a real-world experiment performed in a real human society with a large number of subjects over a long term. There are many potential communicative applications for communication robots, and we have already assigned the robot such a task: foreign-language education in an elementary school, as shown in Fig. 9. We believe the multi-person communication ability will be very helpful in performing this educational task, and the experiment will verify the social ability of the mechanism.

Public and Private Episode Rule

By combining person identification (Section 3.3.5) and episode rules (Section 3.3.4), the robot system can behave socially among multiple people. This is an enhancement of the basic episode rules that uses individual identification and endues the robot with basic social ability. There are two types of episode, as shown in Figure 3.14: public and

private. The public episode is the sequence of all executed situated modules; in other words, it records the behaviors the robot exhibited to everyone present. The private episode, on the other hand, is a separate history for each person. By memorizing each person's history, the robot behaves adaptively toward the person who is participating in or observing the communication.

Episode Rules for the Public Episode

As discussed above, episode rules guide the robot into a new episode of interaction with humans by controlling transitions among the situated modules, and they give consistency to the robot's behavior sequence. When the robot switches between situated modules, all episode rules are checked against the current situated module and the episodes to determine the next one.

Episode Rules for the Private Episode

Here we introduce two characters, P and O, to specify participation and observation, respectively. If the character P or O appears at the beginning of an episode rule, the rule refers to the private episodes of the current participant or observers; otherwise, the rule refers to the public episode. If the first character inside an angle bracket is P or O, it indicates that the person experienced the module as a participant or as an observer. Thus, < P ModuleID = result value > is a rule meaning that the person participated in the execution of ModuleID and it resulted in result value. Omitting the first character means "the person participated in or observed it."

Examples

Figure 3.14 shows an example of public and private episodes, episode rules, and their relationships. The robot memorizes the public episode and the private episodes corresponding to each person. Episode rules 1 and 2 refer to the public episode, which realizes self-consistent behaviors of the robot.
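As a rough sketch of this private-episode lookup (the data layout and function names are our own, not the implemented architecture's):

```python
def experienced(private_episode, module_id, result, role=None):
    """private_episode: one person's history as (role, module_id, result)
    triples, with role 'P' for participant or 'O' for observer.
    role=None matches regardless of how the person experienced the module,
    mirroring the omitted first character in the rule grammar."""
    return any(m == module_id and v == result and (role is None or r == role)
               for r, m, v in private_episode)

# Roughly in the spirit of episode rule 4 below: skip SING for anyone who
# has already heard the song, whether as participant or observer.
def may_sing(private_episode):
    return not experienced(private_episode, "SING", "Sung")
```

The same predicate with role='P' or role='O' covers rules that care about how, not just whether, a person experienced a behavior.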

Figure 3.14: Public and private episodes

More specifically, episode rule 1 realizes a sequential transition: the robot will execute the situated module SING next if it is executing GREET and that module results in "Greeted". Similarly, episode rule 2 realizes a reactive transition: if a person touches the robot's shoulder, the precondition of TURN is satisfied, and the robot halts the execution of SING to start TURN. There are also episode rules that refer to private episodes. Episode rule 3 means that if no module in the participant's individual episode is GREET, the robot will execute GREET next. Episode rule 4 represents that if a person has already heard one of the robot's songs, the robot does not sing the same song again for a while. As these examples show, episode rules let the robot behave adaptively toward individuals by referring to the private episodes.

Communication Functions Using Computer Networks

In the future, there will be many communication robots in our daily lives, and these robots will be able to communicate with each other through hidden channels such as radio and infrared. We also feel it is important for those robots to communicate with each other using voice and gestures, even if they are actually communicating invisibly: this lets humans know that the robots can communicate with each other and interact with their surrounding environment. Toward such visible and audible robot-robot communication, the architecture has components for communication over computer networks. By connecting to a communication server, robots are able to execute behaviors synchronously. For example, a system using two humanoid robots realizes robot-robot communication with the following sequence:

1. Find and approach a colleague robot.
2. Start to send/receive data.
3. At the same time, both robots express the contents of the communication by voice and gestures.

An example of how these sequences are implemented is shown in Figure 3.15. Arrows in the figure represent the execution order of the modules and the data flow. Using these modules, R1 points at a poster on a wall and R2 says something about it. To an observer, it looks as if they are talking about something located in their surroundings; consequently, observers think the robots can interact with other individuals and with the surrounding environment. In this way, we can easily develop robots that interact with other robots and with their environments.

The architecture has another communication function: a new information infrastructure that lets robots keep humans informed by communicating with them in natural language. For example, when the robot and a human talk about the weather, the robot obtains weather information from the Internet; if the forecast is rain, it says, "It will rain tomorrow."
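Steps 2 and 3 of the sequence above can be mimicked with a toy message channel standing in for the communication server (all names and strings are invented for illustration):

```python
import queue

channel = queue.Queue()   # stands in for the communication server

def r1_point_at(topic):
    """R1 sends data about the topic and expresses the exchange visibly."""
    channel.put(topic)
    return f"R1 points at the {topic}"

def r2_comment():
    """R2 receives the same data and comments on it by voice."""
    topic = channel.get()
    return f"R2 says something about the {topic}"

# Step 1 (finding the colleague robot) is omitted; the shared channel lets
# the two robots stay consistent about what they are "discussing".
dialogue = [r1_point_at("poster"), r2_comment()]
```

The invisible channel carries the actual data, while the returned utterances are the visible, audible side of the exchange that observers perceive.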

Figure 3.15: Example of robot-robot communication

Sensor and Actuator Module

Here we briefly explain the remaining architecture components (shown in Figure 3.10). Inputs from sensors are pre-processed by sensor modules such as speech recognition and vision processing. For example, a sensor module currently supplies Japanese [Kawahara99] and English [ViaVoice] speech recognition with a 50-word vocabulary. Vision processing functions are implemented for tasks such as finding a human face, finding objects in the environment, and tracking clothing color. Actuator modules perform low-level control of the actuators according to orders from the situated modules.

Implemented Behaviors

We installed this software mechanism on the humanoid robot, whose task is to engage in daily communication the way children do. The number of developed situated modules reached one hundred: about 70 interactive behaviors such as shaking hands (Figure 3.16a), hugging (Figure 3.16b), playing paper-scissors-rock (Figure 3.16c), exercising (Figure 3.16d), greeting, kissing, singing a song, holding short conversations, and pointing to objects in the surroundings; 20 idling behaviors such as scratching its head and folding its arms; and 10 moving-around behaviors such as patrolling the surroundings and going to watch a nearby object.

Figure 3.16: Examples of implemented interactive behaviors: (a) shake hands, (b) hug, (c) paper-scissors-rock, (d) exercise

The transitions are basically implemented as follows: occasionally the robot asks humans for interaction by saying "Let's play, touch me," and exhibits idling and moving-around behaviors until a human acts in response; once a human acts toward the robot (touches it or speaks), the robot starts its friendly behaviors and continues them as long as the human reacts; when it stops executing friendly behaviors, it says goodbye and starts idling and moving-around behaviors once again.
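This transition policy amounts to a simple two-state machine; a sketch with invented module names:

```python
import random

IDLING = ["SCRATCH_HEAD", "FOLD_ARMS"]
MOVING = ["PATROL", "WATCH_NEARBY_OBJECT"]
FRIENDLY = ["HANDSHAKE", "HUG", "PAPER_SCISSORS_ROCK", "EXERCISE"]

def step(state, human_acted):
    """One step of the overall policy: idle until a human acts, exhibit
    friendly behaviors while the human keeps reacting, then say goodbye."""
    if state == "idle":
        if human_acted:
            return "play", random.choice(FRIENDLY)
        return "idle", random.choice(IDLING + MOVING)
    if state == "play":
        if human_acted:
            return "play", random.choice(FRIENDLY)
        return "idle", "BYE"
    raise ValueError(f"unknown state: {state}")
```

In the real architecture this policy is expressed declaratively through episode rules rather than hard-coded branches, but the resulting behavior cycle is the same.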

3.4 Episode Editor

We developed the Episode Editor, which visually displays the complex relationships among the executions of many simple situated modules. It enables us to intuitively develop a large number of situated modules and the relationships among them (defined by episode rules). The Episode Editor has three main functions, described in the following three subsections.

Implementation Support of Episode Rules

The Episode Editor's most obvious function is editing episode rules. In Figure 3.18, the left half of the screen displays the episode rule a developer is editing. Each box indicates one block of situated modules; if a complex block assigned by (...) is present, the block is expanded adjacent to the complex "..." box. The right-most box in the left half indicates the next executable situated module. The right half of the screen shows the details of the box selected on the left, where developers can specify the identifiers of the situated modules (module IDs), result values, repetition, and negation. This editing screen is opened from the main screen (Figures 3.17 and 3.18), which has two windows: a list of all implemented episode rules and the search results over episode rules. Developers can select an episode rule from the list and edit it.

Searching Support of Episode Rules

The second function of the Episode Editor is searching the implemented episode rules. There are three types of search: current search, recent history search, and direct search. In the current search (Fig. 7), the Episode Editor shows the current status (upper half of the screen) and the episode rules that match the current status and history (bottom half). In the history search, the user specifies a point in the history to search for the episode rules that matched at that time. In the direct search, the user selects situated modules from the main screen to specify a situation (such as < TURN = success >< GREET = greeted >) and searches for the matching episode rules.

Figure 3.17: Display screen of the Episode Editor

Figure 3.18: Editing screen of the Episode Editor

Figure 3.19: Searching screen of the Episode Editor

Visualization Support of Internal Status

The Episode Editor has two types of visualization support for past episodes: the positions of the situated modules reflect the whole execution history, while their size and color reflect the recent history and current state. The situated modules are placed according to their relationships (Figure 3.17), which are calculated from the execution history. For example, if situated module X is frequently executed after Y, then X and Y have a strong relationship and are displayed close to each other on the main screen. The force of this relationship is calculated by the following formula, known as the spring model method:

F_ij = K_ij * D_ij - R / D_ij^7    (3.1)

where i and j are situated modules; F_ij, D_ij, and K_ij are the force, distance, and spring constant between i and j, respectively; and R is the constant

for the force of repulsion. The spring constants are retrieved from the execution history, and the position of each situated module is calculated iteratively until all positions converge.

The Episode Editor also helps visualize the recent execution history of situated modules (Figure 3.17, upper half of the screen). Recently executed situated modules are connected and colored blue, and are drawn small because they were executed in the past. The current situated module is large and colored red. The candidates for the next execution are also large and colored green. All other situated modules are displayed very small.

Verification of the Visualization Support

Experiment

We performed an experiment to verify the effects of the visualization support function of the Episode Editor. Using this implementation, the robot interacted autonomously with two subjects (university students), each of whom interacted with the robot for ten minutes.

Result

Figure 3.20 indicates the result. There are two findings from the visually displayed result:

1. Meta-structure: When the robot interacts with humans, it has two states: playing with humans and showing idling behaviors. These were displayed visually on the screen: there are two clusters in both screens of Figure 3.20. The upper-right clusters are the situated modules of idling behaviors, and the center-to-bottom-left clusters are playing behaviors. Thus, the Episode Editor reveals the meta-structure of the situated modules. Rarely executed modules are separated into the upper-right and bottom-left corners.

Figure 3.20: Result of the experiment on the Episode Editor

2. Difference between subjects: On the screen for subject 2, the center-to-bottom-left cluster nearly separates into two (center and bottom-left); this separation does not appear on the screen for subject 1. In a detailed analysis, we found that this occurred because subject 1 played with the robot longer than subject 2. The center cluster on the screen for subject 2 consists of situated modules, such as greeting and handshaking, that the robot exhibits as initial playing behaviors. The robot exhibits such behaviors first, and if the human keeps reacting to these friendly behaviors, the robot continues to exhibit them. Subject 2 did not respond as eagerly to the friendly behaviors, so the interaction did not continue as long as that of subject 1. Thus, the initial playing behaviors form a cluster displayed separately from the other playing behaviors.

Discussion

The results indicate that the Episode Editor helps robot developers visualize the meta-structure of human-robot interactions and the individual variations within those interactions. We believe such intuitive visualization is important and necessary for our constructive approach of implementing a large number of behaviors and their relationships.

3.5 Summary

We proposed a robot architecture consisting of situated modules and episode rules for communication robots. The architecture has the following features: situated recognition to understand complex human behaviors, active interaction to make humans respond adaptively to the robot's actions, and episode rules to realize communication in a consistent context. Because a situated module is designed to work in a particular limited situation, developers can easily implement a large number of situated modules. The architecture also has a mechanism for communicating with multiple people and

robots. We believe this is a necessary social ability in communication. The Episode Editor helps developers implement the complex relationships among many situated modules as episode rules. Through experimentation, the Episode Editor was found to provide sufficient visualization support for humans to understand the relationships and execution of interactive behaviors. We consider such a development support tool indispensable for realizing communication robots that have a large number of behaviors.


Chapter 4

Analysis of Human-Robot Interaction

In this chapter, we report analyses of the interaction between humans and the developed robots. The robot behaves like a human child and attempts daily communication with humans. However, our purpose is not to develop an entertainment robot that behaves like a child: the robot uses its human-like body and voice to accomplish smooth and natural communication with humans. We intend to discover the mechanisms that communication robots can use to encourage humans to interact with them, based on the effective use of their embodiment.

This chapter presents two experiments. The first (reported in Section 4.1) concerns the behavior patterns of the robot and shows the effects of the robot's internal parameters on humans. The second (Section 4.2) concerns the interaction of body movements; its results indicate the importance of cooperative behaviors and the effect of a human internal parameter (personality).

4.1 Comparison of Behavior Pattern

In this experiment, we compared three behavior patterns of the robot and evaluated the performance of the implemented interactive behaviors. With task-oriented robots, performance can be evaluated with physical measures such as speed and accuracy, and these measures help us improve that performance. Similarly, we need to apply psychological measures to the evaluation of communication robots, because the performance of robots that interact with humans is measured by how they influence humans. To realize communication robots, we believe it is important and necessary to repeatedly improve both the software architecture and the evaluation methods.

Figure 4.1: The three compared behavior patterns of the robot

Behavior Patterns

In our approach, the robot takes the initiative to interact with humans, whereas traditional toy robots passively wait for human actions. We intend to analyze this strategy of the robot for inducing human actions. In human-robot communication, encouraging human actions is very important, because human reactions and voluntary actions are easier to recognize with situated recognition, which in turn fosters long-term relationships between humans and robots. To this end, we compared three behavior patterns (Figure 4.1): Passive, Active, and Complex.

Passive: The robot waits until a subject interacts. It says, "Let's play, touch me." When the subject touches the robot, it exhibits one of the

friendly behaviors. Then it waits again.

Active: The robot requests interaction from a subject. It says, "Let's play, touch me." Once a subject touches the robot, it continues the friendly behaviors as long as the subject reacts to them.

Complex: In addition to the Active pattern, the robot sometimes exhibits idling and daily-work (moving-around) behaviors instead of waiting.

The Passive pattern is similar to that of toy robots, which just wait for human actions. In the Active condition, the robot takes the initiative to interact with humans. In the Complex condition, the robot shows the Active-condition behaviors in addition to non-interactive behaviors.

Experiment Method

We employed 31 university students as subjects. For five minutes, each subject observed one of the above behavior patterns. Next, impressions of the robot were evaluated using the semantic differential (SD) method, as in our previous research [Kanda01]: subjects answered a questionnaire rating 28 adjective pairs (in Japanese) on 1-to-7 scales. In addition, the subjects' behaviors toward the robot were analyzed with unobtrusive measures.

Results

We report the results from the following six points of view.

Impressions of the Robot

Factor analysis was performed on the SD ratings of the 28 adjective pairs. Based on the Kaiser-Meyer-Olkin measure of sampling adequacy, five adjective pairs were omitted. According to the differences in eigenvalues, we adopted a solution consisting of five factors; the cumulative proportion of the final solution was 56.4%. The retrieved factor matrix was rotated by the varimax method (Tables 4.1 and 4.2). Based on the factor loadings, the factors were named the Evaluation, Familiarity, Potency, Sociability, and Activity factors. Standardized factor scores were calculated to make the results easy to interpret.

Figure 4.2: Illustration of the comparison of the impressions

We compared the factor scores of the three behavior patterns (Table 4.3, Figure 4.2). An ANOVA (analysis of variance) detected a significant difference in the Potency scores, and an LSD (least significant difference) test showed that the Passive scores are significantly higher than the Complex scores (p < 0.05). As a suggestion only, we also applied an LSD test to the Familiarity scores, where the difference among the three behavior patterns approached significance; here the Passive scores are significantly higher than the Active scores (p < 0.05). That is, the robot gives the best impressions when it behaves with the Passive pattern.
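For illustration, the F statistic of such a one-way ANOVA can be computed directly. This is a pure-Python sketch applied to made-up scores, not the actual factor scores of Table 4.3:

```python
def one_way_f(groups):
    """F statistic of a one-way ANOVA over several groups of scores."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical factor scores for three behavior patterns:
f = one_way_f([[1.0, 1.1, 0.9], [2.0, 2.1, 1.9], [3.0, 3.1, 2.9]])
```

A large F (between-group variance dominating within-group variance) is what licenses the follow-up pairwise LSD comparisons reported above.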

Table 4.1: Results of impressions (SD ratings) — mean and standard deviation for each of the 28 adjective pairs: Bad-Good, Cruel-Kind, Ugly-Pretty, Dull-Exciting, Cold-Warm, Inaccessible-Accessible, Mechanical-Humanlike, Unpleasant-Pleasant, Unfriendly-Friendly, Dislikeable-Likeable, Lonely-Cheerful, Unfavorable-Favorable, Unintelligent-Intelligent, Quiet-Showy, Simple-Complex, Blunt-Sharp, Empty-Full, Dark-Light, Passive-Active, Rigid-Frank, Slow-Rapid, Slow-Quick, Boring-Interesting, Selfish-Altruistic, Dangerous-Safe, Vague-Distinct, Calm-Agitated, Cowardly-Brave.

Table 4.2: Factor pattern of the SD ratings (Varimax normalized). Columns: Evaluation, Familiarity, Potency, Sociability, Activity, Communality. Rows: Good, Kind, Pretty, Exciting, Warm, Accessible, Humanlike, Pleasant, Friendly, Likeable, Cheerful, Favorable, Intelligent, Showy, Complex, Sharp, Full, Light, Active, Frank, Rapid, Quick, Interesting, and the proportion of variance explained by each factor.

Table 4.3: Comparison of subjects' impressions (factor scores of the SD ratings). Columns: Num. of subjects, Evaluation, Familiarity, Potency, Sociability, Activity. Rows: Passive, Active, Complex, p. Multiple comparison: #1: Passive > Active, #2: Passive > Complex.

Table 4.4: Analysis of the subjects' behaviors. Columns: Num. of subjects; Utterance (total): voluntary, answering; Interpersonal behavior: give response, synchronize, greet the robot; Dist. (cm); the time watching face (sec). Rows: Passive (10 subjects, voluntary 2 (5 in total), answering 6 (31 in total)), Active (11 subjects, voluntary 4 (12 in total), answering 7 (53 in total)), Complex (10 subjects, voluntary 2 (19 in total), answering 7 (77 in total)).

Spatial Distance between the Robot and Subjects

At the beginning of the experiment, the robot said "touch me," and then almost all subjects approached the robot. Some of them stood almost one meter away from the robot and approached it only when it asked to be touched; some stood so near the robot that its moving arm nearly collided with them. We measured the distance between the standing points of the subjects and the robot (Table 4.4: Distance to the robot). Generally, humans keep a distance of about 45 cm when they are talking familiarly [Hall66]. In the experiment, the average distance was 41 cm, which is shorter. We consider this is because the robot is not so tall and looks like a child in both appearance and behavior. In addition, physical contact between the robot and subjects, such as hugging and handshaking, contributed to decreasing the distance.

Touched Parts of the Robot

In the experiment, subjects touched the robot in response to its requests. The robot detected the touches with its skin sensors, and we analyzed the record of the sensory input. Although there was no significant difference among the three behavior patterns, we acquired meaningful findings about where the subjects touched. Table 4.4 indicates the result. "Num. of subjects" means the number of subjects who touched each part of the robot, "avg. of touches" means the average number of times the subjects touched each part, and "first touch" means the average time at which they first touched each part. For the first touch, we counted subjects who did not touch a part as touching it at 300 seconds (the end of the experiment). Figure 4.3 illustrates the relationship between the time since the experiment started and the number of subjects who had touched each part. Because a touch was the start signal of the interaction and the robot requested it, almost all subjects touched the robot within ten-odd seconds of the start. In communication among humans, psychologically, the parts most easily touched are the arms, shoulders, head, and body, in that order. The result of the experiment is similar. Thus, we consider that subjects touched the humanoid robot as if they were touching humans.

Eye Contact

Eye contact is known as one of the important forms of non-verbal communication, and our cognitive experiment has confirmed this as well. Using a communicative unit, the robot directs its cameras toward the human's face to make eye contact. In the questionnaires, seven subjects answered that the eye motion was impressive. Most of the subjects watched the face (around the cameras) during the experiment (Table 4.4: the time watching face); averaged over all subjects, this time was more than half of the five-minute experiment. Thus, the subjects focused their attention on the face of the robot.

Figure 4.3: The time of first touches

Subjects' Behaviors toward the Robot

Some subjects gave responses to the robot's utterances; some greeted the robot when it greeted them; some moved their bodies synchronously with the robot's body movements, such as pointing at a poster on a wall. We consider that these behaviors were performed with little intention to convey information to the robot. Rather, they were similar to what we unconsciously perform in daily communication among humans. Thus, each of these behaviors is a kind of interpersonal behavior (Table 4.4: interpersonal behavior). Figure 4.4 shows the number of subjects who performed such interpersonal behaviors and how many kinds they performed. In the Active condition, six subjects performed interpersonal behaviors and three of them performed more than one kind. On the other hand, in the Passive condition, two subjects performed one kind of interpersonal behavior. We believe that the

Figure 4.4: Emergence of interpersonal behaviors

subjects who performed these behaviors regarded the robot as a target of communication.

Subjects' Utterances

The subjects' utterances also showed that many subjects regarded the robot as a target of communication. The robot can speak more than 100 sentences, and it asks things like "Where are you from?" and says "Please talk to me." More than half of the subjects answered the robot's questions (Table 4.4: answering utterance), for example, "I'm from Kyoto." Some subjects voluntarily talked to the robot (Table 4.4: voluntary utterance), for example, "Let's shake hands." The average numbers of answering and voluntary utterances are shown in Figure 4.5. Table 4.5 gives an example of a typical conversation between a subject and the robot: the robot continually exhibited friendly behaviors and asked questions, and the subject answered them and asked the robot questions in return.

Figure 4.5: Subjects' utterances to the robot

Table 4.5: Example of a conversation between a subject and the robot (R: Robovie, S: Subject)

R: "Please talk to me."
S: "Let's shake hands."
R: "Let's shake hands." (It responded correctly, and they shook hands.) "Where are you from?"
S: "I'm from Nara. And you?"
R: "I'm from ATR." (Although it did not recognize the utterance, it apparently responded correctly, as designed.)

4.1.4 Discussion

The results of the experiment indicated that subjects interacted with the robot in a manner similar to how they communicate with humans. That is:

- Friendly spatial distance
- Similar body parts being easily touched
- Communication with eye contact
- Interpersonal behaviors
- Answering and voluntary utterances

Thus, they naturally interacted with the humanoid robot. We believe many subjects were absorbed in the interaction and regarded the robot as a target of natural communication. In other words, the interactive behaviors of the robot established relationships between humans and the robot.

These relationships between humans and robots have important roles in human-robot communication. Sperber proposed relevance theory [Sperber95], in which humans communicate by inferring the minds of others. This communication model differs from the code model, in which a sender gives information (signals) to a receiver using a presupposed common code for encoding and decoding. Based on relevance theory, Ono and his colleagues demonstrated the importance of relationships between humans and a robot [Ono00]: humans easily understand the utterances of the robot if they have built relationships with it.

The results of the experiment indicate that such communicative relationships are established by sufficient physical expression ability, the software architecture that incorporates cognitive knowledge, and the implemented interactive behaviors. For the communication robots that we are trying to realize, it is indispensable to establish these relationships with humans.

4.2 Numerical Analysis of Human-Robot Interaction

The experimental results shown in Section 4.1 are based on questionnaires and observation of videotape. Going further, we intend to analyze the interaction by precisely measuring the body movements of the humanoid robot and humans. A motion capture system allows precise and objective measurement, so that we can analyze the dynamic aspects of human-robot interaction. Based on this analysis, we try to identify the essential communicative behaviors that enable smooth communication.

Analysis of Body Movement

Previous works on the effective use of a robot's body have provided only subsets of communicative behaviors by focusing on particular body parts. In contrast, we intend to realize a robot that smoothly communicates with humans. In particular, total embodiment brings a mutually potentiating effect among body parts: a sufficiently equipped body allows the robot to perform a number of interactive behaviors, and the combination of the body and behaviors generates complexity and perceived intelligence. If people evaluate the robot as possessing intelligence, they will regard it as a kind of conversation partner. This total embodiment endows the robot with powerful communication ability. It is therefore important to develop such a humanoid robot and discover the essential behaviors for communication. The previous works, having focused on particular aspects of the body and behaviors, do not identify the essential communicative behaviors that use the whole body.

Our approach is to precisely measure the interaction between a humanoid robot and humans. We have developed an interactive humanoid robot with a totally embodied human-like body, and many interactive behaviors have been implemented for it. This encourages people to treat the robot as a human child. Based on this interaction, we try to identify the essential behaviors that enable smooth communication.

For the analysis of body movements, we utilize a motion capture system to measure precisely in time and space. Motion capture has been used for capturing and creating the 3-D motion of individual objects for applications such as computer games and movies, but not for analyzing the interaction of movements. The system's higher resolution and the objectivity of numerically recorded motions provide novel experimental equipment that replaces videotape in this area. We compare the interaction of body movements with subjective evaluations of the robot and with the results of a personality test given to the humans. We assume that personality governs how a human behaves toward others. Thus, the comparison reveals the process of forming an evaluation through the interaction of body movements.

Experiment Method

We performed an experiment to investigate the interaction of body movements between the developed robot and a human. The subjects were 26 university students. They were first shown an example of using the robot system. Then, they freely observed the robot for ten minutes in a rectangular room of 7.5 m by 10 m. The numerical record of the body movements was obtained through a motion capture system. After the experiment, subjects answered a questionnaire to give their evaluations of the robot (with the five adjective pairs shown in Table 4.6) and were given a personality test. These results allowed us to analyze the interaction numerically.

Results

Measurement of Body Movements

We adopted a motion capture system [Vicon] to numerically analyze the body movements. The system consists of 12 pairs of infrared cameras and infrared lights, and markers that reflect infrared signals. The cameras are set around the room, and the system calculates each marker's 3-D position from all camera images. The system has high resolution in both time (120 Hz) and space (1 mm in the room where we performed this experiment).

Table 4.6: The adjective pairs for subjective evaluation, with the mean and standard deviation of each: Good-Bad, Kind-Cruel, Pretty-Ugly, Exciting-Dull, Likable-Unlikable; and the overall evaluation score.

As shown in Figure 4.6, we attached ten markers to both the robot and the subjects at their heads (subjects wore a cap with markers attached), shoulders, necks, elbows, and wrists. By attaching markers to corresponding places on the robot and the subjects, we analyzed the interaction of their body movements. The three markers on the head detect the height, direction, and contact of the eyes. The markers on the shoulders and neck are used to calculate the distance between the robot and a subject and the distance each moved. The markers on the arms give us the moved distance of the two hands (the relative positions of the hands from the body) and the duration of synchronized movements (the periods during which the hand movements of the subject and the robot are highly correlated). We also analyzed touching behaviors by using an internal log of the robot's touch sensors.

As a result, the comparison between the body movements and the subjective evaluations indicates meaningful correlations. From the experimental results, well-coordinated behaviors such as eye contact and synchronized arm movements proved to be important. This suggests that humans make evaluations depending on their body movements.
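As one concrete reading of the "duration of synchronized movements" measure, the sketch below counts, in a sliding window over sampled hand speeds, how long the subject's and robot's movements are highly correlated. The window length, the correlation threshold, and the signals themselves are our assumptions, not the thesis's actual parameters.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0
    return cov / (sx * sy)

def synchronized_samples(subj_speed, robot_speed, window=10, thresh=0.8):
    """Count sliding-window positions where the two hand-speed series
    correlate above `thresh`; at 120 Hz, duration = count / 120 s."""
    n = min(len(subj_speed), len(robot_speed))
    return sum(
        1
        for i in range(n - window + 1)
        if pearson(subj_speed[i:i + window], robot_speed[i:i + window]) > thresh
    )

robot = [abs((i % 20) - 10) for i in range(60)]  # triangular "exercise" motion
mirroring = [0.9 * v + 1.0 for v in robot]       # subject copying the robot
unrelated = [3, 1, 4, 1, 5] * 12                 # subject moving independently

print(synchronized_samples(mirroring, robot))
print(synchronized_samples(unrelated, robot))
```

A mirroring subject correlates in every window, while independent motion yields almost no synchronized samples.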

Figure 4.6: Attached markers (left) and the obtained 3-D numerical position data of body movements (right)

Table 4.7: Results of body movements (mean and standard deviation). Measures: Distance (m), Eye contact (s), Eye height (m), Moved distance (m), Moved distance of hands (m), Synchronized movements (s), Touch (num. of times)

Subjective Evaluation: Evaluation Score

The semantic differential method was applied to obtain subjective evaluations on a 1-to-7 scale, where 7 is the most positive point on the scale. Table 4.6 indicates the adjective pairs used, with their averages and standard deviations. We chose these adjective pairs because they had high loadings on the evaluation factor for a robot in previous research (reported in Section 4.1). We calculated the evaluation score as the average of all adjective-pair scores.

Correlation between Body Movements and Subjective Impressions

Table 4.7 displays the numerical results for the body movements. For eye contact, the average time was 328 seconds, which is more than half of the experiment time. Since the robot's eye height is 1.13 m and the average subject eye height was 1.55 m, which is less than their average standing eye height of 1.64 m, several subjects sat down or stooped to bring their eyes to the same height as the robot's. The moved distance was larger than we expected, and it seemed that subjects were always moving little by little. For example, the robot sometimes turned, and the subjects correspondingly moved around the robot. Some subjects performed arm movements synchronized with the robot's behaviors, such as its exercising.

We then calculated the correlation between the evaluation score and the body movements (Table 4.8). Since the number of subjects is 26, a correlation is significant if its absolute value exceeds the corresponding critical value; we indicate the significant values in bold face in the table. Eye contact and synchronized movements showed significant correlations with the evaluation score. Among the body movements themselves, the following pairs showed significant correlations: eye contact-distance, eye contact-moved distance, synchronized movements-moved distance of hands, and synchronized movements-touch.
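The significance screening can be sketched as follows. The critical value used here (|r| of roughly 0.39 for n = 26, two-tailed p < .05) is the standard-table value we supply for illustration, and the per-subject numbers are hypothetical, not the thesis data.

```python
from statistics import mean

R_CRIT = 0.39  # approx. two-tailed 5% critical |r| for n = 26 (standard table)

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def significant(r, r_crit=R_CRIT):
    """Flag a correlation as significant (would be bold in Table 4.8)."""
    return abs(r) > r_crit

# Hypothetical per-subject measures (not the thesis data):
eye_contact = [310, 250, 400, 120, 360, 290]  # seconds of eye contact
evaluation = [5.2, 4.1, 6.3, 2.8, 5.9, 4.5]   # evaluation scores
r = pearson(eye_contact, evaluation)
print(round(r, 2), significant(r))
```

In the real analysis, this screen would be applied to every cell of the body-movement correlation matrix.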
However, these items (distance, moved distance, moved distance of hands, and touch) do not have significant correlations with the evaluation score. That is, only the well-coordinated behaviors correlate with the subjective evaluation. The isolated active body movements, such as coming close to the robot, moving the hands energetically, and touching the robot repetitively, do not relate to the evaluation.

Table 4.8: Correlation between body movements and subjective evaluation. Rows and columns: Evaluation, Dist., E.C., E.H., M.D., M.D.H., S.M., Touch (E.C.: eye contact, E.H.: eye height, M.D.: moved distance, M.D.H.: moved distance of hands, S.M.: synchronized movements)

Effect of Personality

To investigate the effect of personality differences among the subjects, we applied a personality test called the Yatabe-Guilford method. The test was designed for Japanese people by Dr. Yatabe in 1958 based on the Guilford-Zimmerman Temperament Survey (a 10-factor scale was used). In the test, subjects answer 120 questions on a 3-point rating scale, and the test reveals personality and temperament on 12 factors, which are shown in the left-most column of Table 4.9.

The obtained correlations between the personality test, the evaluation score, and the body movements are shown in Table 4.9. Depression and social extroversion have significant correlations with the evaluation. There are many significant correlations between personality and body movements. For example, lack of cooperativeness has a significant negative correlation with synchronized movements; that is, uncooperative subjects tended not to perform synchronized behaviors. Depression also has a significant negative correlation with synchronized movements and the evaluation

Table 4.9: Subjects' personalities and their correlations with the subjective evaluation and body movements. Columns: Eval., Dist., E.C., M.D., M.D.H., E.H., S.M., Touch. Rows: Depression, Cyclic Tendency, Inferiority, Nervousness, Lack of Objectivity, Lack of Cooperativeness, Lack of Agreeableness, General Activity, Rhathymia, Thinking Extroversion, Ascendance, Social Extroversion (E.C.: eye contact, E.H.: eye height, M.D.: moved distance, M.D.H.: moved distance of hands, S.M.: synchronized movements)

score. Subjects high in ascendance (a tendency to take leadership rather than to obey) tend to make eye contact. Subjects with high social extroversion do not walk around so much and evaluate the robot highly. It is easy to understand these results intuitively in terms of personality; in particular, there are relations between a cooperative personality and synchronized behaviors. We consider that the evaluation is formed through cooperative body movements as well as through the human's personality. Meanwhile, personality affects the body movements. This is simply modeled as in Fig. 6, which suggests that we can estimate a human's momentary evaluation of the robot by observing their current body movements. Humans' behaviors indicate how much they are entrained into the interaction with the robot.

Estimation of Momentary Evaluation: Entrainment Score

The results indicate that there are correlations between the subjective evaluation and the body movements. We performed a multiple linear regression analysis to estimate the evaluation score from the body movements. It reveals how much each body movement affects the evaluation. We then applied the obtained relations among the body movements to estimate a momentary evaluation score, which we call the entrainment score.

As a result of the multiple linear regression analysis, the standardized partial regression coefficients were obtained as shown in Table 4.10. The obtained multiple linear regression is the following:

    E = α_dist·DIST + α_ec·EC + α_eh·EH + α_md·MD + α_mdh·MDH + α_sm·SM + α_touch·TOUCH + α_const    (4.1)

where DIST, EC, EH, MD, MDH, SM, and TOUCH are the standardized values of the experimental results for the body movements. Since the evaluation was scored on a 1-to-7 scale, the evaluation score E is between 1 and 7. The multiple correlation coefficient is 0.77; thus 59% of the variance in the evaluation score is explained by the regression. The significance of the regression is shown by analysis of variance (F(7,18) = 3.71, p < 0.05).

Table 4.10: Standardized partial regression coefficients obtained by the multiple linear regression analysis: α_dist (distance), α_ec (eye contact), α_eh (eye height), α_md (moved distance), α_mdh (moved distance of hands), α_sm (synchronized movements), α_touch (touch)

The coefficients (Table 4.10) also indicate the importance of well-coordinated behaviors. Eye contact and synchronized movements positively affect the evaluation score. On the contrary, distance, moved distance, and touch seem to negatively affect it. The subjects who just

come close to the robot, move around, and touch it repeatedly do not evaluate the robot highly.

Because we can momentarily observe all of the body-movement terms in the regression (equation 4.1), we can estimate a momentary evaluation score by using the same relations among the body movements:

    E(t) = α_dist·DIST(t) + α_ec·EC(t) + α_eh·EH(t) + α_md·MD(t) + α_mdh·MDH(t) + α_sm·SM(t) + α_touch·TOUCH(t) + α_const    (4.2)

where designations such as DIST(t) are the momentary values of the body movements at time t. We named this momentary evaluation score the entrainment score, with the idea that the robot entrains humans into the interaction through its body movements and humans move their bodies according to their current evaluation of the robot. The evaluation score and the entrainment score satisfy the following equation, which represents our hypothesis that the evaluation forms during the interaction through the exchange of body movements:

    E = (1/T) ∫[0,T] E(t) dt    (4.3)

where T is the end time of the interaction.

Let us show the validity of the estimation by examining the entrainment score. Figure 4.7 shows the entrainment scores of two subjects. The horizontal axis indicates the time from the start to the end (600 seconds) of the experiment. The solid line indicates the entrainment score E(t), and the colored region indicates the average of E(t) from the start to time t (at the end time, this integrated value becomes the estimate of E). The upper graph is the score of a subject who interacted with the robot very well. She reported after the experiment: "It seems that the robot really looked at me because of its eye motion. I nearly regarded the robot as a human child with an innocent personality." At the other extreme, the lower graph is for a subject who became embarrassed and had difficulty interacting with the robot.

Figure 4.7: Illustration of the entrainment score

The entrainment-score graph of the first subject hovers around 5 and sometimes goes higher. This is because she came close to and talked with the robot while maintaining eye contact. She performed synchronized movements corresponding to the robot's exercising behaviors, which caused the larger values near 200 s. On the other hand, the graph of the second subject sometimes falls below 0. In particular, at the end of the experiment, it became unstable and even lower. He covered the robot's eye camera, touched it as if he were irritated, and went away from the robot. We consider these two examples to suggest the validity of the entrainment-score estimation.
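Equations (4.2) and (4.3) amount to a weighted sum of the standardized momentary measures and its running time average. The sketch below uses placeholder coefficients (the fitted values are those of Table 4.10, not reproduced here) and treats time as discrete samples.

```python
# Placeholder standardized partial regression coefficients (assumed values,
# not the fitted coefficients of Table 4.10).
COEFFS = {"dist": -0.3, "ec": 0.5, "eh": 0.1, "md": -0.2,
          "mdh": -0.1, "sm": 0.6, "touch": -0.2}
CONST = 4.0  # placeholder intercept on the 1-to-7 scale

def entrainment_score(m):
    """Equation (4.2): weighted sum of the standardized momentary
    body-movement values m at one instant."""
    return CONST + sum(COEFFS[k] * m[k] for k in COEFFS)

def evaluation_estimate(trajectory):
    """Equation (4.3), discretized: the evaluation estimate is the
    time average of the momentary entrainment scores."""
    return sum(entrainment_score(m) for m in trajectory) / len(trajectory)

# A moment with strong eye contact and synchronized movement scores high:
engaged = {"dist": -1.0, "ec": 1.5, "eh": 0.0, "md": -0.5,
           "mdh": 0.5, "sm": 1.2, "touch": 0.0}
print(entrainment_score(engaged))
```

The running average of these momentary scores is what the colored region in Figure 4.7 depicts.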

Table 4.11: Worst five situated modules based on average entrainment score

TICKLE: Tickle
APOLOGIZE: Apologize
NOT TURN: Say, "I'm busy," and refuse to play together
SLEEP POSE: A pose of sleeping
FULLY FED: A pose of being fully fed

Evaluation of the Implemented Behaviors

In the sections above, we explained the analysis of the body-movement interaction. We believe those results are applicable to other embodied agents with different appearances and internal mechanisms. Here, we evaluate the implemented behaviors. Although the application of this result is limited to our approach, the findings also demonstrate the validity and applicability of the entrainment score.

We calculated the evaluation score of each situated module as the average of the entrainment score while the module was executing. Tables 4.11 and 4.12 show the worst and best five modules and their scores, respectively. The worst modules are not very interactive: SLEEP POSE and FULLY FED do not respond to human action and exhibit behaviors such as a sleeping pose, and NOT TURN brushes off a human's hand while saying "I'm busy" when someone touches the robot's shoulder. The best modules are interactive modules that entrain humans into the interaction: EXERCISE and CONDUCTOR produce exercising and conductor-imitating behaviors, which induce synchronized body movements from the human, and the other best-rated modules produce attractive behaviors such as asking and calling, which induce human reactions. We believe that the entrainment scores give us plenty of information for developing the interactive behaviors of robots that communicate with humans.
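The per-module scoring described above — averaging the entrainment score over the moments when each situated module was executing — can be sketched as follows. The module names and score samples are illustrative only.

```python
from collections import defaultdict

def module_scores(log):
    """log: (module_id, entrainment_score) samples recorded while the
    module was executing; returns the average score per module."""
    acc = defaultdict(lambda: [0.0, 0])
    for module, score in log:
        acc[module][0] += score
        acc[module][1] += 1
    return {m: total / n for m, (total, n) in acc.items()}

# Hypothetical samples of the momentary entrainment score
log = [("EXERCISE", 5.9), ("EXERCISE", 5.6), ("SLEEP_POSE", 1.8),
       ("SLEEP_POSE", 2.2), ("LET_S_PLAY", 4.4)]
scores = module_scores(log)
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)
```

Sorting the averaged scores yields rankings like those of Tables 4.11 and 4.12.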

Table 4.12: Best five situated modules based on average entrainment score

EXERCISE: Exercise (5.75)
ASK SING: Ask humans, "May I sing a song?" (5.59)
CONDUCTOR: Pose imitating a musical conductor (4.85)
WHERE FROM: Ask humans, "Where are you from?" (4.55)
LET'S PLAY: Say, "Let's play, touch me"

Discussion

Cooperative Interaction in Body Movements

The experiment reveals that humans express their evaluations of the robot through their body movements. If a human evaluates the robot highly, the human behaves cooperatively with the robot, which in turn results in a still higher evaluation. That is, once humans establish cooperative relationships with the robot, they interact well with it and evaluate it highly. As for the evaluation of the implemented behaviors, the modules that entrain humans into the interaction were highly evaluated, such as asking questions that induce a human's answer and producing cheerful body movements that let humans mimic the movements. We believe that such entrainment can help establish cooperative relationships between humans and robots.

Meanwhile, the multiple linear regression explains 59% of the subjective evaluation. This is remarkable because the estimation is performed without regard to the contents or context of language communication. With speech recognition, the robot can talk with humans, although its ability is similar to that of a little child. Some of the subjects spoke to the robot; often, there were requests for the robot to present particular behaviors (especially behaviors it had performed before), and it sometimes responded correctly and sometimes incorrectly. To analyze this, we could use analytical methods such as conversation analysis; however, these are rather subjective. In contrast, the evaluation we reported uses only an objective measure: numerically obtained body movements, without context.

Applicability of Numerical Analysis of Body Movements

Our approach of estimating humans' evaluations from their body movements has wide possibilities. In this thesis, we used a humanoid robot to encourage humans to make body movements; however, the method would also be applicable to embodied agents on screens that let humans physically interact with them. In particular, embodied agents that perform eye movements on a large screen could easily elicit unconscious (not requested) human body actions. Moreover, humans probably express their evaluations as body movements while communicating with other humans, so this approach may also be applicable to human-human communication (both in the physical world and on screen).

Regarding the estimation of the momentary evaluation, it is applicable to different subjects (age, culture, etc.) and different agents (physical or virtual, body shape, behaviors, etc.): we can re-calibrate the multiple linear regression while performing the same subjective evaluation. Consequently, there are many potential uses. For example, an agent could learn and adjust its behavior by using this method. It could also estimate a human's personality from their body movements in order to behave adaptively with individual humans.

4.3 Toward Establishing Evaluation Scales for Human-Robot Interaction

In the development of communication robots, a method for evaluating human-robot interaction is as important as the development method. With traditional robots that only perform specific tasks, we can evaluate performance with physical measures such as speed and accuracy, and these measures help us improve the robot's performance. Similarly, we need to measure how communication robots influence humans during interaction.

Several questionnaire-based methods developed in the field of psychology have been applied to human-robot interaction, as discussed in Section 2.4. With several adjective pairs, the SD method allows us to evaluate the impressions of robots [Sato97, Nakata98]. Further, we applied factor analysis to adjective pairs and retrieved several factors in [Kanda01] and in this study. These principal factors are useful for understanding humans' internal views of a robot as well as for evaluating other interactions produced by a similar robot (like the experiment reported in Chapter 5).

Personality is also an important aspect of human-robot interaction, as in the research of [Okuno02]. We applied personality questionnaires in the study reported in Section 4.2: by applying a personality test to subjects, we determined the effect of their personality on their behaviors toward the robot. Further, we will be able to prepare a questionnaire to test how subjects perceive the robot's personality. By applying such a personality test together with a questionnaire about the robot's personality, we can analyze the personality aspects of human-robot interaction.

Nevertheless, the evaluation ability of such questionnaires is strongly limited. We need to apply a questionnaire after the experiment, and the questionnaire is obtrusive, so we have to stop or interrupt the interaction. To supplement this kind of evaluation, several researchers have adopted unobtrusive measures. Unobtrusive measures are used in psychological research [Webb99], and they have the merit that measurement does not obstruct experiments. In human-robot interaction analysis, measures such as spatial distance and apparent arm motions have been used as a basis for such evaluation. Furthermore, in this study we have tried to analyze the dynamic aspect of human-robot interaction.
Because we obtain the analytical data automatically by using the motion capture system, the obtained results are objective and precise in both time and space, whereas previous research was based on observation of videotape. We consider this a first step in the analysis of the dynamic aspect of human-robot interaction.

In this chapter, we reported two experiments: one about the behavior patterns of the robot, and the other about body-movement interaction. Each of them covers a part of the relationships between human parameters (personality, behaviors, evaluation) and robot parameters (behavior patterns, behaviors, internal status). With the aforementioned questionnaire-based methods, we can measure static aspects, such as the human's internal evaluation after the interaction, the human's personality, and the robot's personality as seen by the human. If we can establish a dynamic evaluation method, we can measure the behavioral interaction between a human and a robot and estimate the human's internal evaluation. Thus, it is interesting future work to find such an analytical method for the dynamic aspects and to evaluate the synthesized relationships among them.

4.4 Summary

We have reported two experiments on human-robot interaction. In Section 4.1, we reported an experiment in which we verified the robot's performance in interacting with humans through the behaviors of the subjects toward the robot. In particular, many subjects interacted with the robot in a manner similar to how they communicate with other humans. They performed interpersonal behaviors, such as giving responses to the robot, and voluntarily spoke to it. Thus, they regarded the robot as a target of natural communication; that is, relationships between humans and robots are established by the interactive behaviors of the robot. We think this ability to establish relationships is necessary for communication robots. Regarding the comparison of behavior patterns, the Passive pattern was highly evaluated in terms of the humans' impressions of the robot; meanwhile, the other experimental results suggested that the Passive pattern was rather ineffective in inducing the subjects' behaviors toward the robot.

In Section 4.2, we reported a numerical analysis method for body movements in human-robot interaction. The results of the analysis indicate positive correlations between cooperative body movements and subjective evaluations. Furthermore, the multiple linear regression reveals the detailed numerical relations among the body movements.
The regression explains 59% of the subjective evaluation without regard to language communication. This indicates a synergistic effect between body movements and subjective evaluations: humans express their evaluations through their body movements, and cooperative movements also produce higher evaluations.

Recently, embodied communication has come into wide use in computing systems. On computer screens, we often see embodied agents, such as a helper agent moving around the screen or a character agent for guidance on a large display. Communication robots that have human-like bodies are also being put to practical use for educational tasks (Chapter 6), mental healing tasks, and so forth. Our approach of numerically analyzing body movements is widely applicable to such tasks in embodied communication, for example designing the behaviors of agents and analyzing human communication.
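As an illustration of the analysis above, the following sketch shows how a coefficient of determination such as the reported 59% can be computed from movement and evaluation data. This is not the thesis's analysis code: the thesis used multiple linear regression over several body-movement measures, while this sketch uses a single predictor for brevity, and the data values are made up.

```python
# Illustrative sketch (not the thesis code): ordinary least squares and R^2
# for a single hypothetical predictor (cooperative-movement score) against
# a subjective evaluation score.

def linear_regression(xs, ys):
    """Ordinary least squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    a = my - b * mx
    return a, b

def r_squared(xs, ys):
    """Fraction of the variance in ys explained by the linear fit."""
    a, b = linear_regression(xs, ys)
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

# Hypothetical data: cooperative body-movement score vs. subjective evaluation.
movement = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
evaluation = [2.1, 2.9, 4.2, 4.0, 5.1, 6.2]
print("R^2 =", round(r_squared(movement, evaluation), 2))
```

An R² of 0.59, as in the thesis, would mean the body-movement measures account for 59% of the variance in the subjective evaluations.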

Chapter 5

Analysis of Basic Social Relationships

Recent advances in robotics technology enable robots to provide people with a more natural interface for communication. Traditional research on human-robot interaction focused mainly on communication between people and a single robot and on the robot's internal workings. We consider it important for these robots to interact with other robots and with their local environment as well, and our developed robot has the mechanisms to realize such interaction. In this chapter, we propose an effective cooperation method for multiple robots in the interest of promoting human-robot communication, and then verify the mechanism.

5.1 Chained Relationship Model

How to establish relationships between humans and robots is one of the important problems of human-robot interaction, and previous research [Ono00] shows the importance of such relationships. Our approach is to solve this problem through multi-robot cooperation in communication. We believe that the cooperative behaviors of robots, especially the expression of their communication and their interaction with objects in the environment, allow robots to communicate naturally with humans: humans observe the robot-robot communication, and through this observation establish relationships with the robots.

Figure 5.1: Chained relationship model

Figure 5.1 represents this communication model, named chained relationships. The robots first display their communication with each other and their interaction with the environment, and then add the observing human into this chain of relationships. Since the observation establishes relationships between the human and the robots, the human can communicate naturally and smoothly with the robots. In this model, it is especially important to show triad relationships. In developmental psychology [Moore95], pointing behavior is known as a form of triad expression. At an early stage of their development, human infants cannot build relationships among more than two things; that is, they only form dyads: human to human, or human to object. After further development, however, they can share their attention with others through such pointing behavior. This joint attention mechanism forms the triad: human to object to human.

Basically, this multi-robot cooperation is accomplished through the communication mechanism over the computer network (discussed in Section 3.3.7) and the pointing behavior. Because a joint attention mechanism based on pointing has already been studied [Imai01], we implemented pointing behavior using the results of that study. For example, we used eye-contact behavior to express communicative intention, and drew the human's attention by looking and pointing at an object.
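The cooperation mechanism described above, executing situated modules in synchrony over the network, can be sketched as follows. This is an assumed design, not the Robovie implementation: the module names and utterances are hypothetical, and two in-process queues stand in for the network channel of Section 3.3.7.

```python
# Minimal sketch (assumed design): two robots execute situated modules in
# lockstep by exchanging completion messages, so their talk-and-point
# exchange appears as coherent robot-robot communication.
from queue import Queue

class Robot:
    def __init__(self, name, channel_in, channel_out):
        self.name = name
        self.channel_in = channel_in    # messages from the partner robot
        self.channel_out = channel_out  # messages to the partner robot
        self.log = []

    def execute(self, module, utterance):
        """Run one situated module, then notify the partner it finished."""
        self.log.append((module, utterance))
        self.channel_out.put((self.name, module))

    def wait_for_partner(self):
        """Block until the partner reports completion of its module."""
        return self.channel_in.get()

# Two queues stand in for the computer-network channel.
a_to_b, b_to_a = Queue(), Queue()
r1 = Robot("R1", b_to_a, a_to_b)
r2 = Robot("R2", a_to_b, b_to_a)

# A scripted pointing episode (hypothetical content).
r1.execute("point_poster", "Look at that poster.")
r2.wait_for_partner()
r2.execute("reply", "It is interesting!")
r1.wait_for_partner()
r1.execute("point_subject", "Let's ask that person.")

print([m for m, _ in r1.log])  # modules R1 executed, in order
```

The key design point is that neither robot proceeds until its partner's module has finished, which is what makes the synchronized exchange look like genuine turn-taking to an observer.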

5.2 Hypothesis

We analyzed how the expression of communication between robots and of their interaction with the surrounding environment affects a human observer; that is, how the observation of robot-robot communication, including the expression of robot-robot communication and the expression of triads, influences human-robot communication. By executing situated modules in synchrony, two robots appear to interact and communicate by voice and gesture. In addition, a robot points at an object in its environment, such as a poster on a wall, and then talks about the object. Thus, observers come to think the robot can interact with others and with the environment.

5.3 Experiment Method

We performed an experiment to verify the effect of our developed multi-robot system, which expresses robot-robot communication and the ability to interact with the surrounding environment. We used 36 subjects (18 men, 18 women). Each subject observed robot-robot communication, and then one of the robots talked to the subject. There were three patterns of robot-robot communication; comparison of the three patterns indicates the effects of the robot-robot interaction on the communication between the subjects and the robot.

Figure 5.2 shows the outline of the experiment. The room is 4.5 m square; R1 and R2 indicate Robovies, and S indicates a subject. The sequence of the experiment was as follows:

1. A subject was instructed to observe the robots and to respond to them if they came close to him/her.
2. At A, the two robots communicated according to one of the conditions below.
3. Robovie R2 came close to the subject at B.
4. R2 communicated with the subject.

Figure 5.2: Outline of the experiment

There were three conditions of robot-robot communication:

Point: R1 and R2 talk while pointing at the poster (Figure 5.3 point) and at the subject. Then, R2 approaches the subject.

Talk: R1 and R2 talk while performing body movements as complex as those in Point (Figure 5.3 talk). However, they do not point at anything.

None: The two robots do not talk at all; R2 immediately approaches the subject.

After the robot-robot communication (Figure 5.3 all-1), R2 came to the subject (Figure 5.3 all-2), greeted the subject at B (Figure 5.3 all-3), asked to shake hands (Figure 5.3 all-4), and then pointed at R1 and said, "It is interesting" (Figure 5.3 all-5). Finally, it said, "Bye-bye" (Figure 5.3 all-6).

Figure 5.3: Scenes of the experiment (point-1, point-2, talk-1, talk-2, all-1 through all-6)

Table 5.1: Subjects' understanding of the utterance and behaviors toward the robot (conditions: Point, Talk, None; behaviors: Greet, Handshake, Nod, Headshake, Pointing, Gaze, Goodbye)

5.4 Results

Table 5.1 shows the results of the experiment. Nod, look doubtful, and pointing mean that subjects gave these responses to the robot's utterance; gaze means that subjects gazed at R1 when R2 pointed at R1. These results demonstrate two effects on human-robot communication: observation of robot-robot communication and of interaction with the environment causes humans to communicate naturally with robots, and to understand the utterances of the robots.

Effects on Understanding of the Robot's Utterances

After the experiment, we asked the subjects, "What did R2 indicate by pointing?" (R2 pointed at R1 after it came close to the subject.) All subjects in the Point condition understood that R2 pointed at R1. On the other hand, half of the subjects in the Talk and None conditions did not understand it. The number of subjects who understood it, and who also gazed in the pointing direction (that is, understood it immediately), is illustrated in Figure 5.4. A chi-squared test proved significant differences in the number of subjects who understood the pointing utterance (χ²(2) = 9.00, p < .05) and in the number who understood it while gazing in the pointing direction (χ²(2) = 11.59, p < .01). Analysis of residuals shows that the number of subjects in the Point condition who understood it is significantly larger (residual r = 2.99, p < .01).

Figure 5.4: Subjects' understanding of the utterance

Thus, it was shown that visible and audible communication between robots, together with their interaction with the environment, improves subjects' understanding of the robot's utterances.

Effects on Promoting Human-Like Natural Communication

Many subjects responded to the robot's utterance as if the robot were human (Figure 5.5). We analyzed the relationship between the experimental conditions and the responses (Figure 5.6). Here, "give response" means that the subject nodded, looked doubtful, or pointed. We think some subjects gazed in the pointing direction instead of giving responses. A chi-squared test proved a significant difference in the number of subjects who gave responses or gazed in the pointing direction (χ²(2) = 9.00, p < .05). The analysis of residuals indicates that the number of subjects in the Point condition who performed these responses is significantly greater than in the other conditions (residual r = 2.27, p < .05), and that the number of subjects in the None condition who did not perform these responses is significantly greater than in the other conditions (residual r = 2.83, p < .01).
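The chi-squared test used above can be sketched as follows. The counts are inferred rather than taken from the thesis tables: assuming 12 subjects per condition, with all Point subjects understanding the pointing and half of the Talk and None subjects not understanding it, the statistic reproduces the reported χ²(2) = 9.00.

```python
# Sketch of a Pearson chi-squared test of independence on a contingency
# table of counts. The table below is an inferred reconstruction, not
# copied from the thesis.

def chi_squared(table):
    """Pearson chi-squared statistic for a list of [row][col] counts."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: Point, Talk, None; columns: understood, did not understand.
table = [[12, 0], [6, 6], [6, 6]]
stat = chi_squared(table)
print(stat)  # 9.0
# The critical value for df = (3-1)*(2-1) = 2 at alpha = .05 is 5.991,
# so the association is significant: chi^2(2) = 9.00, p < .05.
print(stat > 5.991)
```

In practice a library routine such as SciPy's `chi2_contingency` would be used instead of hand-rolling the statistic; the sketch just makes the arithmetic behind the reported value visible.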

Figure 5.5: Subjects' behaviors toward the robot ((1) Greet, (2) Handshake, (3) Pointing, (4) Goodbye)

5.5 Discussion

The results indicated the effects of the system that expresses robot-robot communication. Moreover, we were able to check the validity of the results from two aspects. First, we tested whether the subjects had simply become accustomed to hearing the robot's voice; that is, whether the subjects in the Point and Talk conditions could understand the robot's synthesized voice more easily than the subjects in the None condition merely through exposure. After the experiment, we asked the subjects about the ease of hearing the robot's utterances (Figure 5.7). An ANOVA (analysis of variance) showed no significant difference (F(2,33) = 0.25). This supports our conclusion that the observation of robot-robot communication and of the robots' interaction with the environment aids understanding of a robot's utterances more than simply listening to the robots' voices does.

Figure 5.6: Subjects who gave responses to the robot

Second, we were concerned about possible negative effects of robot-robot communication; for example, some subjects might think that a robot talking with another robot is strange or fearful. However, there seemed to be no such negative effects. We investigated the impressions the robots left on the subjects to check for negative effects (Figure 5.7) and found no significant differences (F(2,33) = 0.21, 0.51, 0.19, and 0.28 for familiarity, evaluation, enjoyment, and activity, respectively).

In previous research [Ono00], the communicative relationships between a human and a robot have been discussed: once such relationships are built, humans can understand the utterances of the robot. From the viewpoint of communicative relationships, a human can join the network of relationships among the robots and the environment, communicating naturally and smoothly with them.
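The one-way ANOVA used in the comparisons above can be sketched as follows. The ease-of-hearing ratings below are hypothetical, not the experimental data; the point is that groups with nearly equal means yield a small F statistic, matching the reported lack of a significant difference.

```python
# Sketch (not the thesis analysis code) of a one-way ANOVA comparing
# ease-of-hearing ratings across the Point / Talk / None conditions.

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of sample groups."""
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical 5-point ease-of-hearing ratings for each condition.
point = [4, 3, 5, 4]
talk = [4, 4, 3, 5]
none = [3, 4, 4, 4]
f = one_way_anova_f([point, talk, none])
# With df = (2, 9), the .05 critical value is about 4.26; f is far below it,
# so no significant difference would be claimed for this toy data.
print(round(f, 2))
```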

Figure 5.7: Comparison of subjective voice quality and impressions of the robot

5.6 Summary

We proposed a human-robot communication system based on the observation of robot-robot communication. The robots express their otherwise invisible communication by visible and audible means, such as speech and gestures. This expression indicates to humans that the robots can communicate with each other and interact with the surrounding environment. In the experiment, we verified our approach using robots that can express themselves physically. Humans who observed the communication between the robots and their interaction with the environment came to understand the robots' utterances easily. Moreover, the observation of the communicating robots made human-robot communication as natural and smooth as human-human communication. We consider this to be one of the basic social relationships between humans and communication robots.

Chapter 6

Application

As part of the constructive approach, it is necessary to actually apply communication robots in human society. Such real-world research will bring us plenty of knowledge about what we should study next. Whereas almost all social robots (pet robots, communication robots, etc.) work only in their laboratories, some researchers have already started real-world trials. Our robots were first assigned a communication task in the real world: education in an elementary school. In this chapter, we describe the application of communication robots, focusing on the elementary school experiment.

6.1 An Educational Task for Communication Robots

The developed communication robots perform foreign-language education in an elementary school; however, their task is not to replace human teachers but to behave as foreign children who can speak only their own language. The robots need not only to communicate with a particular child but also to behave socially among many children. This study is a trial for developing communication robots that can work in human society.

This task is motivated by the poorness of Japanese people's English

skill, despite students having taken six years of English education before entering university. We consider this to be due to a lack of motivation and of opportunities to speak. Many children do not think it important or useful to study English; in fact, children have little need to use English in Japan. Even English teachers usually speak in Japanese, and what is worse, there are few foreign people in the children's daily lives who cannot speak Japanese. Thus, it seems many children have little motivation to study English.

6.2 Hypothesis

We intend to address these problems by using the developed robots in English education. In this setting, our robots speak and understand only English, and they try to perform daily communication in English as children do. We believe that, owing to their ample physical expression ability, children can guess what a robot is doing and will try to interact with it. Even if they cannot understand the robot's language, they will gradually become accustomed to hearing and speaking English. Our hypotheses for the experiment are summarized as follows: owing to the robots' physical expression ability, children can interact with the robots; they can establish communicative relationships with the robots that will continue for a long time; they will have many chances to use English in the interaction; and they will learn the English words and sentences the robots use.

6.3 Experiment Method

We performed two sessions of the experiment in an elementary school, where each session continued for two weeks, including 9 school days. Subjects of the experiment were three classes each of first grade and sixth grade students. There were 119 first grade students (59 boys and 60 girls) in the first session and 109 sixth grade students (53 boys and 56 girls) in the second session. Two robots with identical hardware and software were set in a corridor between the three classrooms (Figure 6.1), and children could freely interact with them during recesses. Two university students attended each robot for safety but never guided the interaction. Each child had a nameplate with an embedded wireless tag (Figure 6.2) so that each robot could identify who was interacting with it.

Figure 6.1: Environment of the elementary school

Figure 6.2: Wireless tags embedded in nameplates

For post-experiment analysis, we videotaped each session using 4 cameras and applied an English hearing test 3 times (before the session, one week after the beginning, and at the end of the session). The test includes 6 easy daily sentences that the robots used: "Hello," "Bye," "Shake hands please," "I love you," "Let's play together," and "This is the way I wash my face" (a phrase from a song the robots sing). Figure 6.3 shows an example of the questionnaire's questions.

Figure 6.3: Example of applied questionnaires about English sentences (these four drawings are used together with the utterance "bye-bye")

Through the experiment, we analyze the following: the long-term influence of the developed robot; daily interaction with multiple individuals (in contrast to interaction with a single person in an experiment room); and human-robot communication in a foreign language, with its effects on the learning of the foreign language.

Since it is difficult to control large-scale experiments (for example, by setting control and baseline groups), we analyzed the effects of the robots exploratorily. The behaviors of the robots are basically the same as those explained in Chapters 3 and 4, except that the robots use English instead of Japanese; that is, the robots continually move around and try to interact with students. We supplemented several behaviors with person identification. For example, the situated module that realizes a greeting behavior called the name of the child participating in the interaction, and the situated module that sings several songs remembered who had heard each song and chose suitable songs to sing based on how many times the interacting children had heard each one.

6.4 Results

Results for Long-Term Relationship: First Grade Students

Table 6.1 shows the changes in the relationships between the children and the robots during the two weeks for the first grade classes. We can divide the two weeks into the following three phases: (a) first day, (b) first week (except the first day), and (c) second week.

(a) First day: big excitement. On the first day, as many as 37 children gathered around each robot (Figure 6.4). They pushed one another to gain a position in front of the robot, tried to touch it, and spoke to it in loud voices. Since the corridor and classrooms were filled with their loud voices, it was not always possible to understand what the robots and children said. It seemed that almost all of the children wanted to interact with the robots. Many children watched the excitement around the robots and joined the interaction by switching places with the children around a robot. In total, 116 of the 119 students interacted with the robots on the first day.
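The person-identified song-selection behavior described above can be sketched as follows. This is an assumed design, not the actual situated-module code: the class name, song titles, and child IDs are hypothetical, and the wireless-tag system is abstracted to a list of IDs of the children currently present.

```python
# Sketch (assumed design) of the song-selection behavior: using wireless-tag
# IDs, the module counts how often each nearby child has heard each song and
# picks the song least familiar to the current audience.
from collections import defaultdict

class SongSelector:
    def __init__(self, songs):
        self.songs = songs
        # heard[child_id][song] -> times that child has heard that song
        self.heard = defaultdict(lambda: defaultdict(int))

    def choose(self, present_children):
        """Pick the song the current audience has heard the fewest times."""
        def audience_count(song):
            return sum(self.heard[c][song] for c in present_children)
        return min(self.songs, key=audience_count)

    def sing(self, present_children):
        """Choose a song, record who heard it, and return its title."""
        song = self.choose(present_children)
        for c in present_children:
            self.heard[c][song] += 1
        return song

selector = SongSelector(["wash_my_face", "abc_song", "hello_song"])
first = selector.sing(["child_A", "child_B"])  # least-heard song (ties by list order)
second = selector.sing(["child_A"])            # a song child_A has not yet heard
```

The same pattern, keying per-child state on the wireless-tag ID, also covers the greeting module that calls an interacting child by name.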

Figure 6.4: Scene of the experiment on the first day (first grade students)

(b) First week: stable interaction. The excitement of the first day soon quieted down, and the average number of simultaneously interacting children gradually decreased (graph in Figure 6.6). In the first week, someone was almost always interacting with the robots, so the rate of vacant time was still quite low. The interaction between the children and the robots became more like inter-human conversation: several children would come in front of a robot, touch it, and watch its response.

(c) Second week: satiation. Figure 6.5 shows a scene at the beginning of the second week, when satiation seemed to occur. At the beginning of the week, the vacant time around the robots suddenly increased, and the number of children who played with the robots decreased; near the end, there were no children around the robots during half of the daily experiment time. On average, 2.0 children were simultaneously interacting with a robot during the second week.

Figure 6.5: Scene of the experiment after the first week (first grade students)

This seemed advantageous for the robot, since it was easy for it to talk with a few children simultaneously. The way the children played with the robots seemed similar to the play style of the first week; thus, only the frequency of children playing with the robots decreased.

Results for Long-Term Relationship: Comparison with Sixth Grade

Table 6.1 also shows the results for the sixth grade classes. There were at most 17 children simultaneously around a robot on the first day, as shown in Figure 6.7. It seemed that the robots were less fascinating for sixth graders than for first graders. Then, similar to the first grade, vacant time increased and

Figure 6.6: Transition of interaction with children (1st grade)

the number of interacting children decreased at the beginning of the second week (Figure 6.9). Therefore, the three phases (first day, first week, and second week) exist for the sixth grade students as well as for the first grade students. In the second week (Figure 6.8), the average number of simultaneously interacting children was 4.4, which was larger than for the first grade. This is because many sixth grade students seemed to interact with the robot while accompanying their friends, which is analyzed in a later section.

The results suggest that the communicative relationships between the children and the robots did not, in general, endure for more than one week. However, some children developed sympathetic emotions for the robots. Child A said, "I feel pity for the robot because no other children are playing with it," and child B played with the robot for the same reason. We consider this an early form of a long-term relationship, similar to the sympathy extended to a new transfer student who has no friends.

Figure 6.7: Scene of the experiment on the first day (sixth grade students)

Table 6.1: Results on the change of children's behaviors at the elementary school, by grade (1st grade, 119 children; 6th grade, 109 children) and by week (1st week, 2nd week): number of children who interacted with the robots, average number of simultaneously interacting children, experiment time (min), rate of vacant time, number of English utterances, and English utterances per minute

Figure 6.8: Scene of the experiment after the first week (sixth grade students)

Results for Foreign Language Education: Speaking Opportunity

During the experiment, many children spoke English sentences and listened to the robots' English. We analyzed the spoken sentences; mainly, they were the simple daily conversation and the English the robots used, such as "Hello," "How are you," "Bye-bye," "I'm sorry," "I love you," and "See you again." Since the duration of the experiment differed each day, we compared the average number of English utterances per minute (described in Table 6.1: number of English utterances). Figure 6.10 illustrates the transition of the children's English utterances for both first grade and sixth grade students. In the first grade, the rate of English utterances per minute was highest during the first three days and then gradually decreased along with the increase in vacant time. As a result, 59% of the English utterances occurred during the first three days.

Figure 6.9: Transition of interaction with children (6th grade)

In the sixth grade, the rate of utterances per minute stayed roughly level during the first week and then decreased during the second week. This also seems to correspond to the vacant time of the robots: children talked to a robot when they wanted to interact with it, and after they became used to the robot, they did not speak to it or even greet it very often.

Results for Foreign Language Education: Hearing Ability

Figure 6.11 shows the relationship between three mutually exclusive groups of children and those groups' average English hearing test scores at three points during the experiment. Table 6.2 indicates the average and standard deviation of the three English test scores for each session, where "number of valid subjects" means, for each group of students, the number of students who took all three tests. The students were classified into three groups (1st day, 1st week, or 2nd week) based on when in the experiment

Figure 6.10: Transition of children's English utterances

they interacted with the robots the most. The 1st day group comprises the children who interacted with the robot more on the first day than in the total of all the other days; those children seemed to be attracted by the robots only in the beginning, perhaps merely joining the interaction due to the big excitement of phase (a) discussed above. The 1st week group consists of the children who interacted with the robot more during the first week than during the second week; that is, they mainly participated in phase (b). The 2nd week group is the remaining students, excluding the non-interacting individuals (children who did not interact with the robots at all); in other words, the 2nd week children seem to have hesitated to come during phases (a) and (b), and mainly interacted with the robot during phase (c). We excluded students who did not take every test (for example, due to lateness or absence from school); as a result, 5 first grade students and 12 sixth grade students were excluded from this analysis.

There are significant differences (denoted by boldface values in Table 6.2) between the 1-week and after-session test scores for the 1st day group of 1st grade students (F(1,54) = 5.86, p < .05), between the before and 1-week scores for the 1st day group of 6th grade students (F(1,26) = 8.03, p < .01), and

Table 6.2: Transition of scores on the English hearing test (understanding of sentences before the session, after the 1st week, and after the session at the end of the 2nd week), for each grade (1st, 6th) and play-period group (1st day, 1st week, 2nd week, non-interacted): number of valid subjects, average score, and standard deviation

between the 1-week and after-session scores for the 2nd week group of 6th grade students (F(1,19) = 5.52, p < .05). As for the sixth grade, the score after two weeks is higher than before the experiment (F(2,192) = 4.73, p < .05; the LSD method shows a significant difference between the two conditions). We believe these results show the positive effects of the robots on language education. In particular, for the sixth grade students, the robots helped the children who mainly interacted at the beginning of the first week to learn English (supported by the significant difference between the before and 1-week tests), and they also helped the children who interacted mainly in the second week (supported by the significant difference between the 1-week and after-session tests). The results suggest a similar trend among the first grade students: the 1st day group's score might have increased in the beginning and significantly decreased during the second week, while the 2nd week group's score might have increased during the second week. We regard these results, and the suggestions they make, as still a hypothesis for a future partner robot equipped with a powerful language-education ability, since the overall comparison (before / 1 week / after crossed with play-period type, for each of the 1st and 6th grades) showed no significance.

Figure 6.11: Transition of English hearing scores (left: first grade; right: sixth grade)

Children's Behaviors toward the Robots

We also investigated detailed aspects of the children's behaviors. Table 6.3 indicates the proportion of time children were accompanied by their friend(s) when they played with the robot. (Children wrote their friends' names on a questionnaire before the experiment, and these were compared with the ID information obtained through the wireless tag system.) First grade children interacted with the robot together with at least one friend 48% of the time, and sixth grade children 78% of the time; an ANOVA proved the difference between first grade and sixth grade significant (p < .01). We find a similar trend in the average number of simultaneously interacting children (Table 6.1). We think the first grade children came to the robot to communicate with the robot itself, while the sixth grade children used the robot as a way to play with their friends.

Table 6.3: Comparison of friend-related behaviors by grade (1st, 6th): interacted time (average, max, S.D.), friend time (average, S.D.), and friend rate (average, S.D.), where the friend time rate means the average over children of each child's friend time divided by his/her interacted time

By observing their interaction with the robots, we found several interesting cases. Child C could not understand English at all; however, once she heard her name called by the robot, she seemed very pleased, and after this she often interacted with the robot. Children D and E counted how many times the robot called their names; D's name was called more often, and D proudly told E that the robot preferred D over E. Child F passed by the robot; he did not intend to play with it, but since he saw another child, G, playing with the robot, he joined the interaction. These behaviors suggest that the robots' name-calling behavior strongly affected and attracted the children, and that the observation of interaction is connected to the desire to participate in it.

6.5 Summary

The developed interactive humanoid robots autonomously interact with humans by using their human-like bodies and various sensors, such as vision, auditory, and tactile sensors. They also have a mechanism to identify individuals and adapt their interactive behaviors to them. These robots were used in an

explorative experiment at an elementary school for foreign-language education. The experimental results show two important findings about partner robots:

1. Long-term relationships: The children actively interacted with the robots during the first week, but the relationship generally lasted only about a week; the children became satiated with the robots at the beginning of the second week.

2. A positive perspective on foreign-language education: The robots affected the children's acquisition of foreign-language ability. They elicited utterances in English from the children and may have encouraged the 6th graders to improve their hearing ability.

In addition to the major findings, we observed several interesting phenomena in the experimental results. This long-term experiment showed us the strong first impression the robots create, but also their weakness in maintaining long-term relationships. More specifically: children rushed around each of the robots on the first day; the vacant time of robot interaction rapidly increased at the beginning of the second week; and 58% of English utterances occurred during the first three days (59% in the first grade, 56% in the sixth grade). On the other hand, we made the following positive observations: the name-calling behavior of the robots provided an excellent chance to start interaction, and even in the second week several children continued to interact with the robots (some of them might have felt pity for a robot since it was alone).

We feel that the most difficult challenges in this experiment were coping with the loss of desire to interact with the robot over the long term and the

complexity of processing sensory data from a real human environment. Regarding the former challenge, it was an important finding that the children interacted with the robot for the duration of the first week; it is now necessary to identify methods to promote longer-lasting interactive relationships. With respect to the processing of sensory data, real-world data is vastly different from that produced in a well-controlled laboratory. In the classroom, many children ran around and spoke in very loud voices; the wireless person identification worked well, but speech recognition was not effective. It is vital for real-world robots to have more robust sensing abilities.

Meanwhile, education is not the only communication task. We also need to try other communication tasks to ensure the applicability of communication robots. For example, some researchers have started to apply communication robots to children with autism. We believe that applying robots in such directions is necessary and essential for communication robot research.


Chapter 7

Discussion

7.1 Future Direction of the Constructive Approach

Robotics is an integration of elemental technologies such as mechanics, sensor processing, and actuator control. Whereas traditional artificial-intelligence research focuses on proposing models for information processing in a certain symbolic world, robotics research develops architectures that work in the physical real world. In particular, social robots (including communication robots) need to work in human daily life, in contrast to the many trials for developing social robots that are performed in laboratory environments. Real environments are difficult to function in, with respect to both sensory processing and behavior selection. Regarding audition, there may be many people talking at the same time; with respect to vision, lighting conditions are complicated, and the shapes and colors of objects in the real environment are so complex that previous computer-vision techniques cannot recognize them robustly and stably. Therefore, one of the fundamental problems is how to control a robot in such a complex and unstable environment.

Our approach addresses this problem. There are plenty of sensor-processing methods for various kinds of sensors, such as visual, auditory,

tactile, sonar, and so forth. We believe that almost all sensor-processing methods are "situated"; in other words, the interpretation of a processing result depends on the situation. By using the sensor-processing method suitable for a certain situation, we can find a suitable sensor-processing model for a real-world robot. Based on this situated combination of sensory-processing elements, our robots can interact with humans, respond to human behaviors, and recognize their environments. The experimental results shown in Chapter 4 suggest that this method has sufficient performance for human-robot interaction. We believe our architecture will be a base for future real-world robots.

However, there are several mechanisms we have not implemented. Our robot does not have facilities for learning, adaptation, or a model of the interacting human. In particular, learning at the episode-rule level would be effective for creating dynamic interactions. Robots should also have a model of each person they interact with and adapt to that individual. We believe that body movement analysis (reported in Section 4.2) will be useful for autonomously determining a person's personality. We also need to assign other tasks to the developed robot. For example, the elementary school experiment showed the strong necessity of person identification. Other tasks will also create the need for new mechanisms in the software architecture. Thus, this is one of the necessary future works.

7.2 Future Directions of Human-Robot Interaction Analysis

Analysis and evaluation of human-robot interaction is necessary for developing communication robots. It brings us knowledge about human behaviors toward robots and about the effects of a robot's behaviors on humans. In Chapter 4, we reported the results of two experiments; from these, we see that one of the next steps is to find the relationships between human parameters and robot parameters.
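The situated selection of sensor processing discussed in Section 7.1 can be illustrated with a minimal sketch. All module names, sensor keys, and episode rules below are hypothetical illustrations under simplified assumptions, not the actual Robovie implementation: each situated module interprets sensor data only within the situation its own behavior has produced, and simple episode rules map interpretation results to transitions between modules.

```python
# Minimal sketch of situated sensor processing: each situated module
# pairs a robot-produced situation with the sensor interpretation that
# is only valid in that situation. Names and thresholds are hypothetical.

class SituatedModule:
    def __init__(self, name, behavior, interpret):
        self.name = name            # situation label, e.g. "handshake"
        self.behavior = behavior    # action that creates the situation
        self.interpret = interpret  # interpretation valid only here

def handshake_interpret(sensors):
    # A touch on the hand sensor means "human accepted the handshake"
    # only because the robot has just offered its hand.
    return "accepted" if sensors.get("hand_touch") else "ignored"

def greet_interpret(sensors):
    # A loud sound counts as a reply only in the greeting situation.
    return "replied" if sensors.get("sound_level", 0.0) > 0.5 else "silent"

MODULES = {
    "greet": SituatedModule("greet", "say_hello", greet_interpret),
    "handshake": SituatedModule("handshake", "extend_arm", handshake_interpret),
}

def step(current, sensors, episode_rules):
    """Interpret sensors within the current situation, then let episode
    rules ((situation, result) -> next module) choose the transition."""
    result = MODULES[current].interpret(sensors)
    nxt = episode_rules.get((current, result), current)
    return nxt, result

rules = {("greet", "replied"): "handshake",
         ("handshake", "accepted"): "greet"}
nxt, res = step("greet", {"sound_level": 0.8}, rules)
```

The point of the sketch is that `handshake_interpret` need not solve general touch recognition; it only disambiguates a touch in the narrow situation the robot itself created, which is what makes situated sensor processing tractable in noisy real environments.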
As discussed in Chapter 5, establishing an analytical method for the dynamic aspects of human-robot interaction is one of the important future directions. Another direction is to analyze the micro aspects of interaction. Such observation-based analysis is often performed in child psychology, and has already started in human-robot interaction research. For example, Dautenhahn and her colleagues observed the behaviors of children with autism toward an interactive robot, and found that some of them behaved differently to the robot with regard to eye gaze [Dautenhahn02]. Such detailed observation is an essential way of analyzing interaction.

Figure 7.1: Scene of an experiment on animate-inanimate distinction based on babies' imitation behaviors

The other important direction is to find the origin of human attitudes and behaviors toward a robot that is mechanically well developed and possesses a creature-like self. Human infants do not understand the concept of a robot, so they behave toward the robot based on their nature. For example, one interesting research work has shown that a human infant imitates another human's arm motion, but not that of a robot (a simple mechanical arm was used) [Meltzoff95]. In Rakison's survey [Rakison01], humans were

Figure 7.2: Scene of an experiment on animate-inanimate distinction based on children's theory-of-mind mechanism

theorized to possess a mechanism for distinguishing the animate from the inanimate. The fundamental animate-inanimate distinction is based on the movements and contingency of the object. Thus, it is possible that communication robots will be categorized as animate. Some researchers have already started such studies. We are also trying to examine children's categorization of a robot. Figure 7.1 shows a scene of an experiment in which babies observe the robot's behavior. As in Meltzoff's experiment, they will imitate the robot if they regard it as a kind of creature. Thus, we will be able to find out how the robot is categorized based on such criteria. Figure 7.2 shows a scene of another experiment, in which we apply a false-belief test to examine whether 4-to-6-year-old children believe there is a mind in the robot. We believe this direction is one of the essential studies of human-robot interaction.

Figure 7.3: Future directions and applications of social robots that participate in our daily life

7.3 Remaining Problems for Future Social Robots

Research on social robots (including communication robots) has just started. There are various research topics in this field, such as emotion, learning, and the acquisition of skills. Here, we need to define what a future social robot is. We consider that there are two robot directions, communication-oriented robots and interaction-oriented robots, and each of them has several different problems to be solved.


British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library. Published by Pan Stanford Publishing Pte. Ltd. Penthouse Level, Suntec Tower 3 8 Temasek Boulevard Singapore 038988 Email: editorial@panstanford.com Web: www.panstanford.com British Library Cataloguing-in-Publication

More information

Overview Agents, environments, typical components

Overview Agents, environments, typical components Overview Agents, environments, typical components CSC752 Autonomous Robotic Systems Ubbo Visser Department of Computer Science University of Miami January 23, 2017 Outline 1 Autonomous robots 2 Agents

More information

Evaluating 3D Embodied Conversational Agents In Contrasting VRML Retail Applications

Evaluating 3D Embodied Conversational Agents In Contrasting VRML Retail Applications Evaluating 3D Embodied Conversational Agents In Contrasting VRML Retail Applications Helen McBreen, James Anderson, Mervyn Jack Centre for Communication Interface Research, University of Edinburgh, 80,

More information

Cognitive Robotics 2017/2018

Cognitive Robotics 2017/2018 Cognitive Robotics 2017/2018 Course Introduction Matteo Matteucci matteo.matteucci@polimi.it Artificial Intelligence and Robotics Lab - Politecnico di Milano About me and my lectures Lectures given by

More information

Assignment 1 IN5480: interaction with AI s

Assignment 1 IN5480: interaction with AI s Assignment 1 IN5480: interaction with AI s Artificial Intelligence definitions 1. Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work

More information

An Integrated Modeling and Simulation Methodology for Intelligent Systems Design and Testing

An Integrated Modeling and Simulation Methodology for Intelligent Systems Design and Testing An Integrated ing and Simulation Methodology for Intelligent Systems Design and Testing Xiaolin Hu and Bernard P. Zeigler Arizona Center for Integrative ing and Simulation The University of Arizona Tucson,

More information

CS494/594: Software for Intelligent Robotics

CS494/594: Software for Intelligent Robotics CS494/594: Software for Intelligent Robotics Spring 2007 Tuesday/Thursday 11:10 12:25 Instructor: Dr. Lynne E. Parker TA: Rasko Pjesivac Outline Overview syllabus and class policies Introduction to class:

More information

Does a Robot s Subtle Pause in Reaction Time to People s Touch Contribute to Positive Influences? *

Does a Robot s Subtle Pause in Reaction Time to People s Touch Contribute to Positive Influences? * Preference Does a Robot s Subtle Pause in Reaction Time to People s Touch Contribute to Positive Influences? * Masahiro Shiomi, Kodai Shatani, Takashi Minato, and Hiroshi Ishiguro, Member, IEEE Abstract

More information

HAND-SHAPED INTERFACE FOR INTUITIVE HUMAN- ROBOT COMMUNICATION THROUGH HAPTIC MEDIA

HAND-SHAPED INTERFACE FOR INTUITIVE HUMAN- ROBOT COMMUNICATION THROUGH HAPTIC MEDIA HAND-SHAPED INTERFACE FOR INTUITIVE HUMAN- ROBOT COMMUNICATION THROUGH HAPTIC MEDIA RIKU HIKIJI AND SHUJI HASHIMOTO Department of Applied Physics, School of Science and Engineering, Waseda University 3-4-1

More information

Can a social robot train itself just by observing human interactions?

Can a social robot train itself just by observing human interactions? Can a social robot train itself just by observing human interactions? Dylan F. Glas, Phoebe Liu, Takayuki Kanda, Member, IEEE, Hiroshi Ishiguro, Senior Member, IEEE Abstract In HRI research, game simulations

More information

Visual Arts What Every Child Should Know

Visual Arts What Every Child Should Know 3rd Grade The arts have always served as the distinctive vehicle for discovering who we are. Providing ways of thinking as disciplined as science or math and as disparate as philosophy or literature, the

More information

Chapter 2 Intelligent Control System Architectures

Chapter 2 Intelligent Control System Architectures Chapter 2 Intelligent Control System Architectures Making realistic robots is going to polarize the market, if you will. You will have some people who love it and some people who will really be disturbed.

More information

Affordance based Human Motion Synthesizing System

Affordance based Human Motion Synthesizing System Affordance based Human Motion Synthesizing System H. Ishii, N. Ichiguchi, D. Komaki, H. Shimoda and H. Yoshikawa Graduate School of Energy Science Kyoto University Uji-shi, Kyoto, 611-0011, Japan Abstract

More information

Multi-Robot Cooperative System For Object Detection

Multi-Robot Cooperative System For Object Detection Multi-Robot Cooperative System For Object Detection Duaa Abdel-Fattah Mehiar AL-Khawarizmi international collage Duaa.mehiar@kawarizmi.com Abstract- The present study proposes a multi-agent system based

More information

The Relationship between the Arrangement of Participants and the Comfortableness of Conversation in HyperMirror

The Relationship between the Arrangement of Participants and the Comfortableness of Conversation in HyperMirror The Relationship between the Arrangement of Participants and the Comfortableness of Conversation in HyperMirror Osamu Morikawa 1 and Takanori Maesako 2 1 Research Institute for Human Science and Biomedical

More information

Humanoid robot. Honda's ASIMO, an example of a humanoid robot

Humanoid robot. Honda's ASIMO, an example of a humanoid robot Humanoid robot Honda's ASIMO, an example of a humanoid robot A humanoid robot is a robot with its overall appearance based on that of the human body, allowing interaction with made-for-human tools or environments.

More information

Expression of Emotion and Intention by Robot Body Movement

Expression of Emotion and Intention by Robot Body Movement Expression of Emotion and Intention by Robot Body Movement Toru NAKATA, Tomomasa SATO and Taketoshi MORI. Sato Lab., RCAST, Univ. of Tokyo, Komaba 4-6-1, Meguro, Tokyo, 153-8904, JAPAN. Abstract. A framework

More information

System of Recognizing Human Action by Mining in Time-Series Motion Logs and Applications

System of Recognizing Human Action by Mining in Time-Series Motion Logs and Applications The 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems October 18-22, 2010, Taipei, Taiwan System of Recognizing Human Action by Mining in Time-Series Motion Logs and Applications

More information

Realistic Robot Simulator Nicolas Ward '05 Advisor: Prof. Maxwell

Realistic Robot Simulator Nicolas Ward '05 Advisor: Prof. Maxwell Realistic Robot Simulator Nicolas Ward '05 Advisor: Prof. Maxwell 2004.12.01 Abstract I propose to develop a comprehensive and physically realistic virtual world simulator for use with the Swarthmore Robotics

More information

Booklet of teaching units

Booklet of teaching units International Master Program in Mechatronic Systems for Rehabilitation Booklet of teaching units Third semester (M2 S1) Master Sciences de l Ingénieur Université Pierre et Marie Curie Paris 6 Boite 164,

More information

Evolving Robot Empathy through the Generation of Artificial Pain in an Adaptive Self-Awareness Framework for Human-Robot Collaborative Tasks

Evolving Robot Empathy through the Generation of Artificial Pain in an Adaptive Self-Awareness Framework for Human-Robot Collaborative Tasks Evolving Robot Empathy through the Generation of Artificial Pain in an Adaptive Self-Awareness Framework for Human-Robot Collaborative Tasks Muh Anshar Faculty of Engineering and Information Technology

More information

INTERACTION AND SOCIAL ISSUES IN A HUMAN-CENTERED REACTIVE ENVIRONMENT

INTERACTION AND SOCIAL ISSUES IN A HUMAN-CENTERED REACTIVE ENVIRONMENT INTERACTION AND SOCIAL ISSUES IN A HUMAN-CENTERED REACTIVE ENVIRONMENT TAYSHENG JENG, CHIA-HSUN LEE, CHI CHEN, YU-PIN MA Department of Architecture, National Cheng Kung University No. 1, University Road,

More information