Motion Behavior and its Influence on Human-likeness in an Android Robot

Michihiro Shimada (michihiro.shimada@ams.eng.osaka-u.ac.jp)
Asada Project, ERATO, Japan Science and Technology Agency
Department of Adaptive Machine Systems, Graduate School of Engineering, Osaka University
2-1 Yamada-oka, Suita, Osaka 565-0871, Japan

Hiroshi Ishiguro (ishiguro@ams.eng.osaka-u.ac.jp)
Asada Project, ERATO, Japan Science and Technology Agency
Department of Adaptive Machine Systems, Graduate School of Engineering, Osaka University
2-1 Yamada-oka, Suita, Osaka 565-0871, Japan

Abstract

In human-robot interaction, both appearance and motion are essential aspects of the robot. We study human-robot interaction using an android that has a human-like appearance. Humans show unconscious behaviors, especially gaze behavior, when interacting with another human (A. McCarthy, K. Lee, & D. Muir, 2001). We expect the same behaviors to appear when interacting with a very human-like robot. We hypothesize that when the robot's motion changes, the human-likeness of the robot changes, and this change is reflected in gaze behavior. We therefore study how human gaze behavior changes when the robot's motion is changed. From the results of this experiment and previous experiments (M. Shimada, T. Minato, S. Itakura, & H. Ishiguro, 2006), we can infer that if either the human-likeness of motion or the human-likeness of appearance is lacking, humans do not recognize the android as a human. We consider that these results can help in evaluating the human-likeness of robots in the future, and can contribute to developing robots with which humans can become familiar. Furthermore, this research makes it possible to measure the sociality associated with each part of the human body, contributing also to the field of cognitive science.

Keywords: Human-robot Interaction; Android; Appearance and Behavior; Comparative Psychology; Gaze behavior.

Introduction

Our everyday impressions of intelligence are subjective phenomena that arise from our interactions with others.
The development of systems that support rich, multimodal interactions is of enormous value. Our research goal is to discover the principles underlying natural communication among individuals and to establish a methodology for the development of expressive humanoid robots. We adopt a constructivist approach that entails repeatedly developing and integrating behavioral models, implementing them in humanoid robots, analyzing their flaws, and then improving them (M. Asada, K. F. MacDorman, H. Ishiguro, & Y. Kuniyoshi, 2001). By following this constructivist approach, we have developed the humanoid robot Robovie, which has numerous situation-dependent behavior modules (H. Ishiguro, T. Ono, M. Imai, T. Kanda, & R. Nakatsu, 2001). This has enabled us to study how Robovie's behavior influences human-robot communication (T. Kanda, H. Ishiguro, T. Ono, M. Imai, & K. Mase, 2002). However, since human beings have evolved specialized neural centers for the detection of bodies and faces (e.g., D. I. Perrett, M. W. Oram, & E. Ashbridge, 1998), we can infer that a human-like appearance is also important. Apart from gestures, human beings also possess many biomechanical structures that support interaction, including scores of muscles for controlling facial expressions. Robovie's machine-like appearance will have an impact on interaction, thereby preventing us from isolating the effects of behavior. Other studies have also tended to focus only on behavior. However, in order to isolate the effects of behavior from those of appearance, it is necessary to develop an android robot that physically resembles a person. Let us define the term android as a robot that has almost the same appearance as a human: the position and number of the body parts (nose, eyes, and so on) are the same as in humans, and it can move in a human-like way.
Let us instead denote by the term humanoid robot a robot that does not have a human-like appearance, but has a head and arms that partly share the functions of human ones. According to this terminology, Robovie, ASIMO, and so on are humanoid robots. Our study addresses the appearance and behavior problem from the standpoint of both engineering and science and aims at exploring the essence of communication. Studies on androids have two research aspects: the development of a human-like robot based on mechanical and electrical engineering, robotics, control theory, pattern recognition, and artificial intelligence; and an analysis of human activity based on the cognitive and social sciences. These aspects interact closely with each other: to make the android human-like, we must investigate human activity from the standpoint of the cognitive and behavioral sciences as well as the neurosciences, and to evaluate human activity, we need to implement processes that support it in the android. In previous research we investigated the effect of the appearance of robots (M. Shimada et al., 2006). In that research, each subject interacted with one of three agents: a human, an android, or a humanoid robot. The agents asked the subjects questions, and the subjects answered them. Gaze behavior was measured while the subject was thinking about the answer; gaze behavior during thinking reflects sociality, as has been shown in several articles (A. McCarthy et al., 2001). During the experiment, all the agents behaved in a human-like way. As a result, the gaze
behavior of subjects toward the android was the same as toward another human. However, the gaze behavior of subjects toward the android was different from that toward the humanoid robot. From these results, it emerged that the gaze behavior of humans interacting with a robot that has a human-like appearance and human-like motions is almost the same as toward a human. We consider this similarity to be evidence for the android's human-likeness. Although we can evaluate an android in this way, we need to know which factors influence the evaluation. Therefore, in this research we paid attention to movement and investigated its effect in detail. There is a great deal of research on the effect of a robot's motion. For example, Kanda et al. investigated the effect of gaze control using a humanoid robot and found that gaze control influences impressions such as enjoyment, activity, and performance of the robot (T. Kanda, H. Ishiguro, & T. Ishida, 2001). Additionally, Kakio et al. investigated the effect of body swing using the humanoid robot Robovie-IV and found that the swing influences extroversion and agreeableness (M. Kakio, T. Miyashita, N. Mitsunaga, H. Ishiguro, & N. Hagita, 2006). These studies considered only motion; they did not consider appearance. The purpose of this research is to investigate the human-likeness of the movement of each human body part using an android. This cannot be done with a humanoid robot, although even with an android we cannot remove the effect of appearance completely. However, a previous study shows that an android is treated as a human, at least unconsciously (M. Shimada et al., 2006). Therefore, we hypothesize the following: even if the appearance of a robot is human-like, if the movement of the robot is not human-like, the robot is not treated as human, or the human-likeness of the robot is lost.
We need to know the relationship between the android's motion and the subjects' gaze behavior, and which motion is perceived as human-like. This research contributes not only to the field of robotics but also to the field of cognitive science, by helping to understand how important each human body part is for conveying sociality and human-likeness.

Experiment

Android robot used in the experiment and its motions

In the experiments we used Repliee Q2, the robot shown in Fig.1.

Figure 1: Repliee Q2.

The appearance of this android is based on an actual Japanese woman. Silicone skin covers its head, neck, hands, and forearms; the other parts are covered by clothes. The silicone skin is a soft material, which is useful for increasing the human-likeness. The android has 42 degrees of freedom (DOF) in the upper body: 13 DOF in the head, 3 DOF in the neck, 11 DOF in each of the two arms and hands, and 4 DOF in the waist. The actuators are pneumatic: air is compressed by an air compressor and sent to each actuator. The air compressor is loud, but since it is placed in a sound booth we can assume that its noise hardly influences the subjects. The android is controlled by an external computer that sends signals corresponding to the position of each joint. We developed human-like motions by observing a sitting human in the previous study (M. Shimada et al., 2006) (Condition (1)). For instance, blinking is performed every three seconds, the eyelids move up and down, the mouth moves according to the voice, and the chest moves to resemble breathing. To investigate the human-likeness of motion, we removed human-likeness according to two policies. The first policy is to remove human-likeness completely. First of all, we made a robot-like motion (Condition (2)) by changing the human-like motion to linear motion. For example, although the human-like motion used 11 DOF when moving the arm, the robot-like one used only two DOF (shoulder and elbow).
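As a rough illustration of what such a linearized, reduced-DOF motion could look like in code, here is a hypothetical sketch; this is not the android's actual control software, and all joint names and values are invented:

```python
def linear_motion(start, goal, n_frames):
    """Linearly interpolate joint commands from start to goal.

    start, goal: dicts mapping joint name -> command value (e.g. degrees).
    Joints omitted from `goal` simply keep their start value, which models
    the robot-like motion using only a subset of the available DOF.
    """
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1)  # interpolation parameter, 0.0 .. 1.0
        frame = dict(start)
        for joint, g in goal.items():
            frame[joint] = start[joint] + t * (g - start[joint])
        frames.append(frame)
    return frames

# Only shoulder and elbow move, as in the robot-like arm motion;
# the wrist stays fixed at its start value.
start = {"shoulder": 0.0, "elbow": 0.0, "wrist": 10.0}
goal = {"shoulder": 40.0, "elbow": 90.0}
traj = linear_motion(start, goal, n_frames=5)
print(traj[2])  # midpoint -> {'shoulder': 20.0, 'elbow': 45.0, 'wrist': 10.0}
```

Every joint moves at a constant rate along a straight line in command space, which is what gives the motion its mechanical character compared with the observed human trajectories.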
Additionally, we used just one DOF for the mouth movement, so the mouth only opens and closes and does not move its lips or mouth corners. The neck moves only in a nodding motion. In addition, the robot-like motion does not blink. Next, we made a motion between the human-like motion and the robot-like motion (Condition (3)) by averaging the command values of the human-like motion and the robot-like motion. Finally, we used a stopped motion (Condition (4)) in which the android does not move at all; for instance, when the android speaks, its mouth does not move. With these three levels, we can explore how human-likeness is reduced and how gaze behavior changes as the motion changes. The second policy is the partial removal of human-likeness: we stop the motion of one part that is usually recognized as a human-like feature. We paid attention to the eyes as a part usually identified as important for human-likeness. The eyes serve as a communication tool, and there is much research on the function of eye contact (A. Kendon, 1967). For example, it has been found that eye contact reflects the relationship between humans (M. Argyle & J. Dean, 1965). These findings show that the eyes are a part that conveys sociality and
thus a part that influences human-likeness. Therefore, we chose the eyes and eyelids and stopped these parts (Condition (5)). We considered the waist to be another part that strongly conveys human-likeness, because it generates the swinging motion that humans exhibit unconsciously; this motion influences human-likeness and lifelikeness (M. Kakio et al., 2006). Therefore we chose the waist and stopped that part (Condition (6)). In other words, by stopping various parts of the android's body we could explore how much each of them contributes to human-likeness.

Table 1: Questions the questioner asked.

Figure 2: The experimental scene and the eight averted gaze directions.

Evaluation method

How do we quantify similarity, and how do we evaluate human-robot interaction? To answer these questions, several main research issues need to be addressed. Human-robot interaction can be evaluated by its degree of naturalness; therefore, it is necessary to compare human-human and human-robot interactions. There are qualitative approaches that measure a mental state using methods such as the semantic differential method (T. Kanda et al., 2002). There are also quantitative methods that observe an individual's largely unconscious behavior, such as gaze behavior, interpersonal distance, and vocal pitch. These observable responses reflect cognitive processes that we might not be able to infer from responses to a questionnaire. In this research, we adopt the quantitative method, because with an introspective evaluation the subjects would not have evaluated human-likeness accurately, since they noticed that the android is a robot. Previous research found that gaze behavior during thinking includes sociality, so we can measure human-likeness by measuring gaze behavior (A. McCarthy et al., 2001). Therefore, we measure the human-likeness of the robot's motion by comparing the gaze behavior of subjects during thinking.
Procedure

The subjects were made to sit in front of the questioner (Fig.2(a)), and their eye movements were measured while they thought about the answers to questions posed by the questioner. There were two types of questions: know questions and think questions (Table 1). The know questions, used as a control condition, were those to which the subjects already knew the answers. The think questions, on the other hand, were those to which the subjects did not already know the answers, so that they were compelled to derive an answer.

Think questions:
- Do you know a flower whose name is also used as a first name given to women?
- If your mother's brother or sister has a child, what is the relation between that child and you?
- Please name a fruit which is red inside.
- Please say the word "shikayama" backward.
- Six times three divided by two.
- Please tell me a word that consists of eight letters.
- What distance can a car travel in 1.5 hours, when its speed is 60 kilometers an hour?
- Please enumerate all the months that have 30 days.
- Name a color that doesn't appear in the rainbow.
- How many occurrences of "n" are in the phrase "ni ho n ka n jo u si n ri ga ku ka i"?

Know questions:
- What is the name of this university?
- How many sides does a square have?
- When a traffic signal is red while driving a car, what should you do?
- What is the sweet substance called that bees make?
- What is tofu made from?
- How old are you?
- What year is it currently in the Gregorian calendar?
- Who is the prime minister of Japan?
- What is the name of the animal that makes the sound "moo"?
- What is the capital of Japan?

In this experiment, we did not analyze the gaze behavior for the know questions, because we wanted to investigate gaze behavior during thinking; we therefore analyzed only the gaze behavior for the think questions. The subjects were asked 10 know questions and 10 think questions in random order, so they could not predict when a think question or a know question would be asked.
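The randomized interleaving of the two question types can be sketched as follows; this is illustrative code only, and the question strings and seed are placeholders, not the experiment's materials:

```python
import random

def make_question_order(know, think, seed=None):
    """Shuffle know and think questions together so the subject cannot
    predict which type comes next, as in the procedure described above."""
    questions = [("know", q) for q in know] + [("think", q) for q in think]
    rng = random.Random(seed)  # seeded RNG makes the order reproducible
    rng.shuffle(questions)
    return questions

# Placeholder lists standing in for the 10 + 10 real questions.
know = ["know question %d" % i for i in range(1, 11)]
think = ["think question %d" % i for i in range(1, 11)]
order = make_question_order(know, think, seed=42)
print(len(order))  # 20 questions in an unpredictable order
```

Because each type contributes equally and the full list is shuffled, the subject cannot infer the question type from its position in the session.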
Their faces were videotaped, and their gaze direction was coded from the end of each question to the beginning of the answer. Although there are several methods for tracking eye movement (Morimoto & Mimica, 2005), we adopted video recording and manual coding because this method is the most accurate. The video records of each subject's eye movements were analyzed frame by frame, and the average duration of gaze in the eight directions shown in Fig.2(b) was then calculated. We analyze only the average duration of gaze in order to compare the results to previous studies. The detailed experimental procedure was as follows. The experimenter explained the purpose of the experiment to the subjects, saying: "The purpose of this experiment is to investigate how
you answer questions when you are asked them in interview style. The questioner will ask you several questions. Please answer the questions. Do you have any questions?" During the explanation by the experimenter, the questioner also moved slightly. After this explanation, the experimenter exited the experimental environment to start the experiment. The questioner addressed the subject with: "Hello, let's start now." Then the questioner asked the series of questions. After the last question, the questioner told the subject: "That's all. Thank you." Finally, the experimenter told the subject that the experiment was finished. The subjects were 141 Japanese adults. The number of subjects in each condition and their average age are shown in Table 2. To avoid a subject being asked the same questions more than once, each subject participated in only one questioner condition.

Table 2: The number of subjects in each condition and their average age.
Condition | Male | Female | Age Mean | Age SD
(1) Human-like motion | 1 | 13 | 3.2 | 8.2
(2) Robot-like motion | 12 | 14 | 31.8 | 9.7
(3) Between (1) and (2) | 8 | 13 | 32.9 | 8.93
(4) Without motion | 11 | 13 | 31.9 | 8.86
(5) Not moving eyes | 1 | 13 | 33.6 | 9.96
(6) Not moving waist | 8 | 11 | 3.4 | 9.16

Result and discussion

We analyze the gaze behavior while subjects broke eye contact, as in previous studies. The results are shown in Fig.3, Fig.4, Fig.5, Fig.6, Fig.9, and Fig.10. In each figure, the red line indicates know questions, while the blue one indicates think questions.

Figure 3: Average percentage of duration of gaze in eight averted directions for an android with human-like motion (the percentage of duration for each direction is indicated by the distance from the center to the corresponding point).

Figure 4: Average percentage of duration of gaze in eight averted directions for an android with motion between human-like motion and robot-like motion.
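The frame-by-frame coding described in the procedure reduces to counting frames per direction. Here is a minimal sketch of turning coded frames into the percentage of gaze duration in each of the eight averted directions; this is an assumed illustration, not the authors' analysis code, and the frame labels are made up:

```python
from collections import Counter

# The eight averted gaze directions of Fig.2(b) (names are assumed).
DIRECTIONS = ["up", "up-right", "right", "down-right",
              "down", "down-left", "left", "up-left"]

def gaze_percentages(frames):
    """frames: list of direction labels, one per coded video frame.

    Returns the percentage of averted-gaze frames spent in each direction.
    """
    counts = Counter(frames)
    total = sum(counts[d] for d in DIRECTIONS)
    if total == 0:
        return {d: 0.0 for d in DIRECTIONS}
    return {d: 100.0 * counts[d] / total for d in DIRECTIONS}

# Example coding: 6 frames down, 3 left, 1 right.
frames = ["down"] * 6 + ["left"] * 3 + ["right"] * 1
pct = gaze_percentages(frames)
print(pct["down"])  # -> 60.0
```

Averaging such per-trial profiles over subjects would give the radar-plot values shown in the figures.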
In addition, the results for the human and for the humanoid robot are shown in Fig.7 and Fig.8 (M. Shimada et al., 2006).

Figure 5: Average percentage of duration of gaze in eight averted directions for an android with robot-like motion.

Figure 6: Average percentage of duration of gaze in eight averted directions for an android without motion.

Figure 7: Average percentage of duration of gaze in eight averted directions for a human.

Figure 8: Average percentage of duration of gaze in eight averted directions for a humanoid robot.

From these figures, it can be seen that subjects tended to avert their eyes to the left and right in Condition (1). Furthermore, subjects tended to reduce the percentage of leftward and rightward gaze in Condition (3); moreover, the shape of Fig.4 was unique in the downward direction. In Condition (2), the percentage of sideward gaze remained almost unchanged, but the percentage of downward gaze increased. In Condition (4), the subjects looked downward as often as in Condition (3). To investigate the similarity of the graph shapes, we calculated the correlations between the human-like motion and the other motions. The coefficient of correlation between the human-like motion and the motion between human-like and robot-like is 0.67 (p < .1), which is not a very strong correlation. Moreover, the coefficient of correlation between Condition (1) and Condition (2) is 0.1 (p > .1), and that between Condition (1) and Condition (4) is 0.16 (p > .1); that is, these have very low correlations. From these results, we found that the motion between human-like and robot-like has the highest similarity, while the robot-like motion and the stopped android have low similarity. We can then infer that when the motion of the android was changed from the human-like motion to the robot-like or stopped motion, the human-likeness was reduced, and this change influenced the gaze behavior. Moreover, the coefficient of correlation between the human questioner and Condition (1) is 0.77 (p < .05). Therefore, the gaze behavior obtained with the human-like motion has a strong correlation with that obtained with a human questioner, meaning that the human-like motion we developed really does look human-like. The subjects often looked downward during thinking in the case of the humanoid robot in the previous experiment (M. Shimada et al., 2006). The gaze behavior with the robot-like motion and the stopped motion in this experiment was almost the same as that with the humanoid robot. The coefficient of correlation between the humanoid robot case and Condition (2) is 0.93 (p < .01), while that between the humanoid robot case and Condition (4) is 0.92 (p < .01). Both of these coefficients are very high, so we can infer that even though the appearance of the robot is very human-like, humans treat the android as a robot if its motion is robot-like or stopped.

Figure 9: Average percentage of duration of gaze in eight averted directions for an android with natural movement except eye movement.

From Fig.9 and Fig.10, it can be seen that subjects tended to reduce the percentage of leftward and rightward gaze in both cases and to increase the percentage of downward gaze. We also calculated the coefficients of correlation for these conditions. The coefficient of correlation between Condition (5) and Condition (1) is 0.62 (p > .1).
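The similarity between two gaze profiles is measured here by a correlation coefficient. A self-contained sketch of the Pearson correlation over eight-direction profiles follows; the numbers below are made up for illustration and are not the experimental data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two hypothetical eight-direction gaze profiles (percent of duration).
a = [5, 2, 20, 3, 10, 4, 18, 6]   # e.g. human questioner
b = [6, 3, 19, 2, 12, 5, 17, 7]   # e.g. android with human-like motion
r = pearson(a, b)
# r is close to 1 for profiles with similar shapes.
```

With eight directions there are only eight data points per profile, which is why even moderately large coefficients can fail to reach significance, as in the comparisons reported above.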
The coefficient of correlation between Condition (6) and Condition (1) is 0.66 (p < .1). Therefore, both conditions have rather weak correlations with the human-like motion. Moreover, we calculated the coefficient of correlation between interaction with a human questioner and Condition (6), obtaining 0.7 (p < .05). The coefficient of correlation between the human questioner and Condition (5) is 0.7 (p < .1). These conditions thus still show high correlations with the human questioner; however, the values are lower than the correlation between the human questioner and the android questioner with human-like motion. From these results, we consider that the android is not treated as a robot even if just one type of movement is removed. Through this experiment and the previous experiment, we found that if either the human-likeness of motion or the human-likeness of appearance is lacking, humans do not recognize the android as a human.

Figure 10: Average percentage of duration of gaze in eight averted directions for an android with natural movement except waist movement.

Conclusion

In this research, we varied the motion in several ways and measured the gaze behavior toward the android. In a previous experiment (M. Shimada et al., 2006) we found that during thinking the gaze behavior toward the android with human-like motion is almost the same as toward a human. The motion behaviors used for this analysis can be categorized as: robot-like motion; a motion that shares parts of both human-like and robot-like behavior; stopped motion; human-like motion without eye movement; and human-like motion without waist movement. As a result, the gaze behavior in all these conditions differed from that in the case of human-like motion. Therefore, even small lacks of human-likeness cause the android not to be treated as a human. The gaze behavior toward androids with robot-like motion or without motion is almost the same as toward humanoid robots; such androids are therefore treated as mechanical robots. Moreover, based on the degree of similarity, eye movement influences human-likeness more than waist movement. We could therefore verify that even if the appearance of a robot is human-like, if the movement of the robot is not human-like, the robot is not treated as human, or the human-likeness of the robot is lost. In this experiment, the subjects were only Japanese adults. However, it is generally known that the gaze behavior of children and infants differs from that of adults, so we need to investigate their gaze behavior as well. There might also be differences across gender and culture. In our experiment, we measured the degree of human-likeness of the motion and identified an order in which individual body parts influence human-likeness. These results may influence not only the robotics field.
In future work we need to make the appearance of the robot even more human-like and to develop more human-like motions through further experiments, in order to evaluate sociality accurately. Again, this could then also contribute to the field of cognitive science.

Acknowledgments

We developed the android in collaboration with Kokoro Company, Ltd. This research was supported by JST ERATO. We want to thank Associate Prof. Shoji Itakura at Kyoto University and Prof. Kang Lee at the University of Toronto, who gave insightful comments and suggestions.

References

A. Kendon. (1967). Some functions of gaze-direction in social interaction. Acta Psychologica, 26, 22-63.
A. McCarthy, K. Lee, & D. Muir. (2001). Eye gaze displays that index knowing, thinking and guessing. In Annual Conference of the American Psychological Society. Toronto, Ontario, Canada.
D. I. Perrett, M. W. Oram, & E. Ashbridge. (1998). Evidence accumulation in cell populations responsive to faces: an account of generalization of recognition without mental transformations. Cognition, 67, 111-145.
H. Ishiguro, T. Ono, M. Imai, T. Kanda, & R. Nakatsu. (2001). Robovie: an interactive humanoid robot. International Journal of Industrial Robot, 28.
M. Argyle, & J. Dean. (1965). Eye-contact, distance and affiliation. Sociometry, 28, 289-304.
M. Asada, K. F. MacDorman, H. Ishiguro, & Y. Kuniyoshi. (2001). Cognitive developmental robotics as a new paradigm for the design of humanoid robots. Robotics and Autonomous Systems, 37, 185-193.
M. Kakio, T. Miyashita, N. Mitsunaga, H. Ishiguro, & N. Hagita. (2006). Natural reflexive behavior for wheeled inverted pendulum type humanoid robots. In 15th IEEE International Workshop on Robot and Human Interactive Communication (pp. 41-46). Hatfield, United Kingdom.
Morimoto, C. H., & Mimica, M. R. (2005). Eye gaze tracking techniques for interactive applications. Computer Vision and Image Understanding, 98, 4-24.
M. Shimada, T. Minato, S. Itakura, & H. Ishiguro. (2006).
Evaluation of android using unconscious recognition as criterion. In 2006 IEEE-RAS International Conference on Humanoid Robots. Genova, Italy.
T. Kanda, H. Ishiguro, & T. Ishida. (2001). Psychological analysis on human-robot interaction. In IEEE International Conference on Robotics and Automation (pp. 4166-4173). Seoul, Korea.
T. Kanda, H. Ishiguro, T. Ono, M. Imai, & K. Mase. (2002). Development and evaluation of an interactive robot Robovie. In Proceedings of the IEEE International Conference on Robotics and Automation (pp. 1848-1855). Washington, DC, USA.