EEG-Based Mu Rhythm Suppression to Measure the Effects of Appearance and Motion on Perceived Human Likeness of a Robot
Goh Matsuda, Kyoto Prefectural University of Medicine
Kazuo Hiraki, The University of Tokyo, CREST JST
Hiroshi Ishiguro, Osaka University, Advanced Telecommunications Research

We performed two electroencephalogram (EEG) experiments to examine how humanoid robot appearance and motion affect human subjects' perception of the robots. Mu rhythm suppression of EEG, which is considered to reflect mirror neuron system (MNS) activation, was regarded as a neurological indicator of perceived human likeness. Video clips depicting upper-body actions performed by three agents (human, android, and mechanical robot) were presented in experiment 1. In experiment 2, point-light motion (PLM) stimuli generated from the experiment 1 stimuli were presented. The results of experiment 1 revealed that only the human and android actions evoked significant mu suppression. No PLM stimulus elicited significant mu suppression in experiment 2. These findings suggest that an overall human-like form is necessary to activate the MNS.

Keywords: humanoid robot, android, EEG, mirror neuron, mu suppression

1. Introduction

In recent years, many types of humanoid robots have been developed all over the world (Zhao, 2006). They are mainly designed to interact with humans and can be divided into two groups: those with extremely human-like appearances, such as Geminoid (Nishio, Ishiguro, & Hagita, 2007) and HRP-4C (Kaneko et al., 2009), and robots with metallic or plastic surfaces, such as ASIMO (Honda Motor Co., Ltd., Japan) and QRIO (Sony Corporation, Japan). Robots belonging to the former type are generally called androids. Because humans' most frequent communicative partners are undoubtedly other human beings, making robots more human-like is considered a shortcut to developing robots that serve as realistic companions to humans.
If this hypothesis is correct, a highly human-like android would have greater potential as a social partner than mechanical humanoids.

Authors retain copyright and grant the Journal of Human-Robot Interaction right of first publication, with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal. Journal of Human-Robot Interaction, Vol. 5, No. 1, 2016, Pages 68-81, DOI /JHRI.5.1.Matsuda
In the current study, we examined the effects of humanoid robot appearance and motion on the perception of human likeness. Although some behavioral studies have already investigated the same issue (Minato, Shimada, Itakura, Lee, & Ishiguro, 2006; Oztop, Franklin, Chaminade, & Cheng, 2005), we used a neurophysiological method with the expectation of revealing new objective findings beyond traditional psychological methods (Chaminade et al., 2010; Hirai & Hiraki, 2007; Krach et al., 2008; Saygin, Chaminade, Ishiguro, Driver, & Frith, 2011; Urgen, Plank, Ishiguro, Poizner, & Saygin, 2013). In a sense, a humanoid robot is a controllable human. Because we can modify its appearance and motion more strictly than those of a real human, such a robot has great potential to be a useful tool in examining our cognition and behavior toward other human beings (Ishiguro, 2006). We focused on the human mirror neuron system (MNS) as a possible neurophysiological index of the quality of human likeness. The MNS involves brain regions that are activated both by self-performed actions and by the observation of similar actions performed by others. Many brain imaging studies have indicated that the MNS is located in the premotor cortex, including Broca's area, and the inferior parietal lobule (Buccino et al., 2001; Buccino, Binkofski, & Riggio, 2004; Hamzei et al., 2003; Rizzolatti & Craighero, 2004). The direct matching of visual and motor representations by the MNS is thought to play an important role in understanding the actions and intentions of others, which is essential for social interactions (Iacoboni, 2009; Rizzolatti & Fabbri-Destro, 2008). The most important characteristic of the MNS with regard to this study is that it seems to be tuned for biological agents (Press, 2011). That is, the MNS is activated more strongly by the observation of human movements than non-human movements.
For example, a positron emission tomography (PET) study reported that the left premotor cortex, which is a part of the MNS, was activated when participants observed a grasping action performed by a human model but not a robotic arm (Tai, Scherfler, Brooks, Sawamoto, & Castiello, 2004). Furthermore, a functional magnetic resonance imaging (fMRI) study measuring brain activity during dance observation showed that premotor cortex activation was greater while observing a human dance compared to a robot dance (Miura et al., 2010). These studies suggest that MNS activity reflects the perception of human likeness. In the present study, we compared MNS activities measured during the observation of actions performed by three types of agents: a real human, an android, and a mechanical humanoid robot. Because the android is modeled on a real human, it can be used as a control stimulus in place of a human. Additionally, since the mechanical humanoid was made by stripping away the clothing and silicone skin from the android, it can be used as a control stimulus in place of the android. We focused on changes in mu rhythm (8–13 Hz) power in electroencephalogram (EEG) oscillation recordings as a measure of MNS activity. Mu rhythm power recorded over the sensorimotor cortex is maximal during the resting state in the absence of any overt body movement, and it is attenuated during body movement. Moreover, mu power is also reduced during observation of another's action compared to the resting state (Cochin, Barthelemy, Roux, & Martineau, 1999; Pineda, 2005). Thus, suppression of mu power is considered to reflect activation of the MNS, including the premotor cortex, and it has been used as an index of MNS activity in many studies concerning the human MNS (Muthukumaraswamy, Johnson, & McNair, 2004; Oberman, McCleery, Ramachandran, & Pineda, 2007; Perry & Bentin, 2009; Ulloa & Pineda, 2007). The current study comprised two experiments with EEG measurements.
The apparatus and procedure of both experiments were identical except for the stimuli. In experiment 1, we employed prerecorded video clips in which the three types of agents performed upper-body actions, and the MNS responses were compared. Of course, the human stimulus has a human-like appearance and motion, whereas the android stimulus has a human-like appearance and somewhat awkward robotic motion. Therefore, we expected that the comparison of MNS activities elicited
by the human and android stimuli could reveal the effect of human likeness of motion on MNS activity. On the other hand, the comparison between the android and robot conditions was expected to elucidate the effects of human likeness of appearance, because the mechanical robot has a robotic appearance and motion. The results of experiment 1 suggested the possibility that an agent's appearance affected the perceived human likeness of its motion; therefore, stimuli with different appearances could not separate the effects of appearance and motion. For this reason, experiment 2 was designed to extract the effect of motion on MNS activity. For this purpose, point-light motion (PLM) stimuli generated from the video clips used in experiment 1 were presented to participants. Standard PLM stimuli consist of only moving dots and are often adopted in studies concerning motion perception (Hirai, Fukushima, & Hiraki, 2003; Johansson, 1973; Saygin, Wilson, Hagler, Bates, & Sereno, 2004; Ulloa & Pineda, 2007). We expected they could elucidate the pure effect of motion on the perceived human likeness of robots.

2. Experiment 1

2.1 Method

2.1.1 Participants

The participants were 9 males and 9 females (mean age, 21.1 years; range, years) who were right handed and had normal or corrected-to-normal vision. Informed consent was obtained prior to participation in the study, which was approved by the ethics committee of the University of Tokyo. According to a post hoc oral interview, no participant had previously interacted with the robots used in this experiment.

2.1.2 Stimuli

The stimuli were seven different video clips ( pixels, 30 frames/s). Black-and-white videos were used to eliminate the effect of the agents' color information and to extract the effects of their appearance and motion more purely. One of them showed visual white noise and was used as a baseline condition (Fig. 1).
The remaining six clips depicted one of three agents (human, android, or mechanical robot) performing one of two actions (reaching or wiping) with their right arm. These clips were made from the videotaped data used in a previous fMRI study (Saygin et al., 2011). For the human-reaching stimulus, a human female reached her right hand toward a tube of facial wash, grasped it for a moment, and then moved her hand back to the original position. In the human-wiping stimulus, she wiped a table with a cloth by moving her right hand from side to side. Her left hand remained on her left thigh the entire time. In the android and robot stimuli, a female humanoid robot, Repliee Q2 (Osaka University & KOKORO Co. Ltd., Tokyo, Japan), and a mechanical humanoid robot, respectively, performed the same two types of actions as in the human stimuli. The Repliee Q2 was modeled on the actor in the human stimuli, and its upper body is moved with actuators. The physical size and motion of both humanoid robots were almost the same, because their inner architectures were identical. Although their motions were programmed to imitate the human's actions to the greatest extent possible, the resulting motions were somewhat awkward due to mechanical limitations. Overall, six video clips depicting three types of agents (human, android, and robot) performing two types of actions (reaching and wiping) and one white noise video clip were employed in this experiment. Fig. 1 shows example frames of each video clip, all of which were approximately 3.5 s in duration (104 frames). The second half of each clip consisted of the first half played
backwards. A white cross was always displayed at the center of each video as a fixation point. The videos were presented at the center of a 17-inch CRT monitor, and the clip size was 9.7 cm wide × 9.4 cm high. The distance between the monitor and participant was maintained at around 90 cm.

Figure 1. Sample frames from each video clip used in experiment 1.

2.1.3 Procedure

The EEG measurements consisted of 14 blocks. In each block, participants observed 1 of the 7 video clips a total of 24 times with no inter-stimulus interval, such that they saw the same movement repeatedly performed by the same agent for a total duration of approximately 87 s. Each of the seven different videos was presented in the first set of seven blocks; the order of presentation was randomized across participants. The videos were presented in the second set of seven blocks in the same order as in the first. A total of 174 s of EEG data were recorded per video clip. A continuous performance task was also used to confirm that participants attended to the stimuli during EEG recordings. The color of the fixation point turned yellow for five frames between two and four times in each block. Participants were asked to report the number of times that the color changed at the end of each block. One-minute breaks were taken between blocks. This procedure was designed based on previous EEG studies concerning mu rhythm suppression (Oberman et al., 2007; Ulloa & Pineda, 2007). After EEG recording, the participants answered a paper questionnaire concerning the human likeness of the appearance and motion of each agent (human, android, and robot). A seven-point rating scale was used, with scores of 1 to 7 representing "not human-like at all" to "extremely human-like," respectively. A score of 4 represented "neither," and no corresponding wording was assigned to the other scores (2, 3, 5, 6).
2.1.4 EEG recordings

EEGs were acquired with a 64-channel high-density sensor net system (Net Station, Electrical Geodesics Inc., Eugene, OR). The sampling rate was 500 Hz. Cz was used as the online reference, and electrode impedance was maintained below 100 kΩ.

2.1.5 Data analysis

For subjective ratings of appearance and motion, we performed a separate repeated-measures analysis of variance (ANOVA) with agent type (human, android, and robot) as the factor. We used BrainVision Analyzer (Brain Products GmbH, Gilching, Germany) to analyze the EEG data. The raw EEG data were band-pass filtered (1–30 Hz) and re-referenced to the average of the left and right mastoid electrodes. Data from three electrode sites representing the sensorimotor area (C3, Cz, and C4 of the international system) were selected for the mu power analyses. Each block was segmented into 512-data-point epochs to calculate the power of the mu rhythm (8–13 Hz) using a fast Fourier transformation. The first and last 10 s of each block were excluded from the analyses because of possible attentional transients associated with stimulus initiation and termination. Artifacts were rejected by excluding segments in which the difference between the maximal and minimal amplitudes was greater than 150 μV. For each electrode and each agent's action, mu suppression was calculated as the log-transformed ratio of mu power relative to the mu power during the baseline condition, meaning that a log ratio less than zero indicates mu suppression during observation of an agent's action. The log ratio of mu power was first analyzed by a three-way repeated-measures ANOVA with electrode (C3/Cz/C4), agent (human/android/robot), and action (grasping/wiping) as the factors.
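The core of this pipeline (512-sample FFT segments at 500 Hz, 8–13 Hz band power, 150 μV peak-to-peak artifact rejection, and a log ratio against the white-noise baseline) can be sketched as follows. This is our own illustration with hypothetical function names, not the authors' actual analysis code, which was run in BrainVision Analyzer:

```python
import numpy as np

FS = 500           # sampling rate (Hz), as in the EEG recordings
SEG = 512          # segment length in samples
MU_BAND = (8, 13)  # mu rhythm band (Hz)

def mu_power(segment, fs=FS):
    """Mean spectral power in the 8-13 Hz band of one segment (via FFT)."""
    spectrum = np.abs(np.fft.rfft(segment)) ** 2
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    band = (freqs >= MU_BAND[0]) & (freqs <= MU_BAND[1])
    return spectrum[band].mean()

def clean_segments(signal, seg=SEG, max_pp=150.0):
    """Split into non-overlapping segments; drop any whose peak-to-peak
    amplitude exceeds max_pp (the 150 uV artifact-rejection criterion)."""
    n = len(signal) // seg
    segs = signal[: n * seg].reshape(n, seg)
    keep = (segs.max(axis=1) - segs.min(axis=1)) <= max_pp
    return segs[keep]

def mu_log_ratio(condition, baseline):
    """log10 of condition mu power over baseline mu power; < 0 means
    mu suppression during observation relative to the baseline video."""
    cond = np.mean([mu_power(s) for s in clean_segments(condition)])
    base = np.mean([mu_power(s) for s in clean_segments(baseline)])
    return np.log10(cond / base)
```

For example, a condition signal whose 10 Hz component has half the amplitude of the baseline's yields one quarter of the band power, i.e. a log ratio of about -0.6 (suppression).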
Because the ANOVA did not reveal significant main effects or interactions, the log ratio of mu power for each agent (human, android, and robot) was averaged across the three electrodes and the two actions, and then it was compared with zero using a one-tailed one-sample t-test to confirm the emergence of mu suppression. Bonferroni corrections were applied for multiple comparisons. To elucidate the relationship between the explicitly perceived human likeness of agents and the MNS response, correlations (Spearman's ρ) between the log ratios of mu power and the subjective ratings of appearance and motion were analyzed.

2.2 Results

2.2.1 Subjective ratings

Fig. 2 shows the mean rating scores of human likeness of appearance and motion for each agent. There were main effects of agent for both the appearance and motion ratings (F[1.27, 21.5] = 197, p < 0.01, and F[1.58, 27.0] = 57.8, p < 0.01, respectively; Greenhouse–Geisser correction was applied). Multiple comparisons revealed significant differences among all three agents for both appearance and motion ratings. The scores for the human stimuli were the highest (M = 6.94, SD = 0.24 for appearance; M = 5.78, SD = 1.43 for motion), and those for the android stimuli (M = 5.56, SD = 1.04 for appearance; M = 3.17, SD = 0.86 for motion) were higher than those for the robot stimuli (M = 1.67, SD = 0.77 for appearance; M = 2.22, SD = 1.11 for motion). In other words, the human and robot received the highest and lowest scores, respectively, for both ratings. Despite the nearly identical motions of the android and robot, the ratings of their motions were significantly different.
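The Spearman correlation used above to relate mu log ratios to human-likeness ratings is simply a Pearson correlation computed on tie-averaged ranks. A minimal numpy sketch of that computation (our own illustration; the function names are hypothetical, and in practice a routine such as scipy.stats.spearmanr would be used):

```python
import numpy as np

def rankdata(x):
    """1-based ranks with tied values given their average rank."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x)
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    for v in np.unique(x):          # average the ranks of tied values
        mask = x == v
        ranks[mask] = ranks[mask].mean()
    return ranks

def spearman_rho(a, b):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    ra, rb = rankdata(a), rankdata(b)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra**2).sum() * (rb**2).sum()))
```

With no ties this reduces to the textbook formula rho = 1 - 6*sum(d^2)/(n(n^2-1)), where d is the rank difference per pair; swapping the last two of five ranks, for instance, gives rho = 0.9.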
Figure 2. Subjective ratings of human likeness of appearance (left) and motion (right) for each agent. Error bars represent standard error (SE) values.

2.2.2 Mu suppression

The respective mean (SD) log ratios of mu rhythm power in the human, android, and robot conditions were (0.071), (0.060), and (0.061) (Fig. 3). Mu suppression significantly below zero (p < 0.05, corrected) was observed in the human and android conditions but not in the robot condition (t(17) = 2.52 for human, t(17) = 2.68 for android, and t(17) = 1.89 for robot). There was no correlation between the log ratios of mu power and the subjective ratings (ρ = -0.14, p = 0.31 for appearance; ρ = , p = 0.98 for motion; N = 54 for both).

Figure 3. Change in mu rhythm power during the observation of three types of agents compared with baseline. The types of action (reaching or wiping) were collapsed. Error bars represent SE values.

2.3 Discussion

In experiment 1, we measured mu suppression in participants viewing video clips of actions performed by three types of agents (human, android, and mechanical robot). Significant mu suppression was only observed in the human and android conditions, which indicates that both the human and android robustly activated the MNS, whereas the robot did not. Because the android
and robot were in fact the same robot and thus their motions were almost the same, the difference in the level of mu suppression between the two suggests that their appearance, not their motion, affected MNS activity. However, this difference was not statistically significant. A previous EEG study using the same video clips (Urgen et al., 2013) also reported no significant difference in mu suppression among the three agents. In contrast to the mu suppression analysis result, there were significant differences in the subjective human-likeness ratings for each agent. The scores for the human and robot were the highest and lowest, respectively, for both the appearance and motion ratings. Whereas the result of the appearance rating is not surprising, that of the motion rating is worthy of further attention. Because the android and robot have almost the same kinematics, the human-likeness rating of motion for both agents should be the same if the participants objectively evaluated the physical properties of the agents' motion. Therefore, the significant difference in the motion rating between the android and robot demonstrates that the perceived human likeness of motion is influenced by an agent's appearance. It is likely that the participants were biased to rate a robot with a more human-like appearance as having more human-like motion and vice versa. However, this bias was not very strong, because the mean motion scores for both robots were less than 4, indicating that their motions looked awkward compared to those of a human. This weak bias in the subjective ratings might be associated with the slight gap in mu suppression between the android and robot. These results imply that an agent's appearance affected the impression of its motion.
Because we used actual images as stimuli, we might not have been able to extract the pure effects of motion in experiment 1; therefore, we employed point-light motion (PLM) clips without appearance information in experiment 2.

3. Experiment 2

3.1 Method

3.1.1 Participants

The participants were 10 males and 9 females who did not participate in experiment 1 (mean age, 19.6 years; range, years). All of the subjects were right handed and had normal or corrected-to-normal vision. Informed consent was obtained prior to participation in the study, which was approved by the ethics committee of the University of Tokyo.

3.1.2 Stimuli

The stimuli were four black-and-white PLM clips generated from the live-action videos used in experiment 1, plus the white noise video. Only the human and robot videos (i.e., human reaching, human wiping, robot reaching, and robot wiping) were selected as the models for the PLM clips, because the motions of the android and robot were almost identical. Fig. 4 shows example frames of the PLM clips. The eight white dots in the frames correspond to the forehead, neck, and right and left shoulders, wrists, and elbows of each agent. The dots were manually placed frame by frame using Adobe Photoshop (Adobe Systems, San Jose, CA). The second half of each video was the first half of the video played in reverse. A white cross was displayed at the center of each video as a fixation point. The size and duration of the PLM videos were completely identical to the original clips. In order to exclude the effect of the participants' belief in the agent's identity, the model of each PLM was not revealed to the participants until the end of the experiment.
Figure 4. Sample frames from each PLM stimulus used in experiment 2.

3.1.3 Procedure

The procedure of experiment 2 was almost the same as that of experiment 1. Briefly, 10 blocks of EEG measurement were performed. Participants observed 1 of the 5 videos a total of 24 times with no inter-stimulus interval in each block. The five different videos were presented in the first five blocks, and the same five videos were presented in the second five blocks in the same order as in the first. One-minute breaks were taken between blocks. A total of 174 s of EEG data was recorded per video. The continuous performance task was also used during EEG recordings to confirm participant attention to the stimuli. After EEG recording, participants filled out a paper questionnaire with two types of questions concerning each PLM while watching them one by one again. One question asked about the human likeness of motion with the same seven-point rating scale as in experiment 1. The other asked about putative models of each PLM with the following wording: "Did you imagine a model? If your answer is 'yes,' please give a detailed description of it." Although some participants answered that they had imagined a female or an older male as the PLM model, no one imagined that the model was a robot.

3.1.4 EEG recordings and data analysis

The apparatus and settings for EEG recordings were identical to those described for experiment 1. The 64-channel sensor net was employed with Cz as the online reference and a 500-Hz sampling rate. For the subjective ratings of motion, a two-way repeated-measures ANOVA with factors of agent (human/robot) and action (reaching/wiping) was performed. EEG analyses were performed in the same manner as described for experiment 1. The log ratio of mu power relative to the white noise condition was calculated for each PLM video using data from three electrodes (C3, Cz, and C4).
Because a three-way repeated-measures ANOVA with the factors of electrode location (C3/Cz/C4), agent (human/robot), and action (reaching/wiping) showed no significant effect or interaction, the log ratio of mu power for each agent was collapsed across the three electrodes and the two actions. Finally, a one-sample t-test against zero was performed to determine mu suppression. Bonferroni corrections were applied for multiple comparisons. Correlations (Spearman's ρ) between the log ratios of mu power during the observation of each PLM and the subjective ratings of motion were also analyzed.
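The one-sample t-test against zero used in both experiments reduces to comparing the t statistic of the collapsed log ratios with a critical value. A minimal sketch under our own naming (in practice a library routine such as scipy.stats.ttest_1samp would supply the p-value; here the caller passes a critical value taken from a t table for the Bonferroni-adjusted alpha):

```python
import numpy as np

def one_sample_t(x):
    """t statistic and degrees of freedom for H0: population mean = 0."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    t = x.mean() / (x.std(ddof=1) / np.sqrt(n))
    return t, n - 1

def suppression_significant(log_ratios, t_crit):
    """One-tailed test that the mean log ratio is below zero.
    t_crit must be the positive critical value for the Bonferroni-
    adjusted alpha (e.g., alpha/3 when three agents are tested)."""
    t, _ = one_sample_t(log_ratios)
    return bool(t < -t_crit)
```

For instance, with 18 participants (df = 17) and three agents, the per-test alpha under Bonferroni correction is 0.05/3, and only agents whose t statistic falls below the corresponding negative critical value would be reported as showing significant suppression.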
3.2 Results

3.2.1 Subjective ratings

Mean rating scores for each PLM are presented in Fig. 5. The two-way ANOVA showed main effects of agent (F[1, 18] = 10.21, p < 0.01) and action (F[1, 18] = 20.84, p < 0.01). Participants gave higher ratings to the human's actions (M = 4.47, SD = 1.54 for reaching; M = 5.37, SD = 1.30 for wiping) than the robot's actions (M = 3.67, SD = 1.50 for reaching; M = 5.00, SD = 1.37 for wiping) and higher ratings for the wiping action than for the reaching action. No interaction was found.

Figure 5. Rating scores of human-likeness for each point-light motion. Scores of 1 and 7 mean "not human-like at all" and "extremely human-like," respectively. Error bars represent SE values.

3.2.2 Mu suppression

Fig. 6 illustrates the mean log ratios for the human and robot stimuli. Significant mu rhythm suppression was not observed in either condition. There was no significant correlation between the log ratios of mu power and the subjective ratings of motion (ρ = 0.12, p = 0.31, N = 76).

Figure 6. Change in mu rhythm power during the observation of PLM stimuli compared with baseline. Types of action (reaching or wiping) were collapsed. Error bars represent SE values.

3.3 Discussion

Significant mu suppression was not observed for the human or robot PLM conditions. Although a study investigating MNS activity in response to human PLMs revealed significant mu suppression regardless of action type (Ulloa & Pineda, 2007), we did not observe this for the human condition.
One possible reason for this is the number of point lights. Our PLM stimuli represented upper-body motion with 8 lights, whereas the previous study showed full-body motion (a single kick or jumping jacks) with 12 points. Because the MNS has somatotopic representation and the brain regions activated by the observation of hand and foot motions are different (Buccino et al., 2001), full-body stimuli would activate the MNS more widely than observed here. It is possible that upper-body motion was too weak to significantly suppress mu power. With regard to the subjective rating for each PLM stimulus, there were main effects of both agent and action. The score of human-likeness was higher for the human than the robot, and higher for wiping than for reaching. This lower score for the reaching action presumably arises from the action's repetition. Whereas wiping is an intrinsically repetitive action characterized by moving one's hand from side to side, reaching is not usually repeated and would look unnatural to the participants.

4. General Discussion

Significant mu suppression was elicited by the human motions with human appearance in experiment 1. In contrast, the human motions with point-light appearance did not have the same effect in experiment 2. If only motion information affected MNS activity, significant mu suppression would have been induced by the human motion stimuli in experiment 2. Therefore, the combined results of the two experiments suggest that MNS activity is affected by human likeness of appearance. More specifically, appearance information that is at least more human-like than several dots was necessary to significantly activate the MNS. Human-like appearance in this case does not mean detailed features of appearance, such as skin texture, facial parts, and hair; rather, it refers to the global form of human beings. It is likely that the MNS is not so sensitive to details of appearance.
In fact, it has been reported that an agent's skin color does not modulate MNS activity (Desy & Lepage, 2013). Similar MNS responses to a human and a humanoid robot have been reported in other previous studies. An EEG study that compared mu rhythm power while participants observed movements performed by human and robotic hands demonstrated that both hands induced significant mu suppression, with no significant difference between them (Oberman et al., 2007). An fMRI study also showed common activation of MNS regions, including the premotor cortex, while subjects watched human and robot hand movements (Gazzola, Rizzolatti, Wicker, & Keysers, 2007). The authors additionally mentioned that the adoption of a stricter significance level in the statistical analysis eliminated the significance of the MNS activity triggered by the robot hand movements, which implies that there was a slight difference in the MNS responses between the human and robot hands. Moreover, another fMRI study found that both human and robot facial expressions activated the premotor cortex (Gobbini et al., 2011). The robot used in that study had human-like facial features, such as eyes and a mouth, but it had a gray mechanical surface. Taken together with these findings, our results suggest that the MNS, especially the premotor cortex, can respond similarly to human beings and to non-human agents that have at least a human-like form. The MNS is thought to be a neural basis of action understanding and imitation that mediates the direct mapping of observed actions onto one's own motor representations (Rizzolatti & Craighero, 2004). If the observed agent has a human-like form that allows us to match its body with our own, the MNS would be activated regardless of whether or not the agent is an actual human being. Thus, the reason why there were no significant differences in induced mu suppression among the three agents used in the current study probably lies in their common human-like body.
Of course, this interpretation does not necessarily mean that details of appearance have no impact on MNS activity. The fact that the MNS responses to mechanical robots were slightly, but not significantly, weaker than those to actual humans in both a previous study (Gazzola et al., 2007) and in our present study suggests that details of appearance have some influence on MNS activity.

5. Limitations and Future Work

In this study, we attempted to compare the perception of the human likeness of an actual human and highly human-like robots as objectively as possible by using mu rhythm suppression of EEG. Nevertheless, we could not demonstrate obvious differences between them except on subjective ratings, probably because of the generalization ability of the MNS. Because the MNS can be activated by any agent who has a human-like body, it would be difficult to discriminate among highly human-like agents, such as an android, by measuring MNS activity alone. Naturally, MNS activity is not the sole neurological index of perceived human likeness. For example, medial prefrontal cortex (MPFC) activity is another possible index of perceived human likeness. Because this region is related to the inference of the mental states of others, its activity may reflect whether a person regards an observed agent as an intelligent entity. Indeed, an fMRI study in which participants played a prisoner's dilemma game with four types of opponents (a laptop PC, a functional robot, an anthropomorphic robot, and a real human) revealed that medial prefrontal cortex activity increased linearly with greater human likeness of the opponents (Krach et al., 2008). Collectively, the evidence indicates that a realistic robot appearance will likely engage higher cognitive functions, such as understanding the mental states of others. In addition, the MPFC is known as an empathy-related region. It has been reported that this region is activated when viewing a person in painful situations but not when viewing the same situation with the belief that the person is not a real human but rather a plastic doll (Jackson, Brunet, Meltzoff, & Decety, 2006).
If a humanoid robot is regarded as a human-like entity, the observation of it in a painful situation may activate the MPFC. Furthermore, our stimuli may have been too weak to activate the MNS significantly, for two reasons. One reason is the embodiment of the agents. According to previous brain imaging studies concerning the observation of human actions, the motor area is activated more strongly by live presentation than by video presentation (Järveläinen, Schürmann, Avikainen, & Hari, 2001; Shimada & Hiraki, 2006). Our previous study using another mechanical humanoid named Robovie (Ishiguro et al., 2001) also revealed that the left ventral premotor cortex was activated more strongly during live presentation of the robot than during a video presentation of it (Matsuda, Kanda, Ishiguro, & Hiraki, 2012). Therefore, it is possible that live presentation (i.e., embodiment) of the android may lead to a different result, although there are technical difficulties associated with testing this. The other reason is the degree of motion. As mentioned in the discussion section of experiment 2, a previous EEG study (Ulloa & Pineda, 2007) reported significant mu suppression during the observation of PLMs depicting full-body actions. Although we had to use single-handed actions due to the mechanical limitations of the android in this study, it may be necessary to use full-body actions to capture robust mu suppression and to detect its small differences among the three agents. In this study, we focused on the physical features of humanoid robots to investigate their human likeness and showed that they can activate the human MNS in the same way, whether they have a skin-like silicone body or a metallic body. However, many other factors may influence the perception of the human likeness of robots. An infant study using Robovie reported that infants regard Robovie as a communicative agent only after watching interactions between a human and Robovie (Arita, Hiraki, Kanda, & Ishiguro, 2005).
This finding implies that interactive functions of robots can improve their human likeness. Further investigations into various perspectives of human likeness are necessary to find key factors affecting the human likeness of robots. Neurological indices have potential as powerful tools for this purpose. 78
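The mu suppression measure referred to throughout is conventionally computed as the log ratio of 8-13 Hz EEG power over sensorimotor electrodes (e.g., C3, C4) during action observation relative to a baseline period, with negative values indicating suppression. The sketch below illustrates this computation on synthetic single-channel data; it is not the authors' analysis pipeline, and the function name and parameters are illustrative.

```python
import numpy as np
from scipy.signal import welch

def mu_suppression_index(obs, base, fs=256.0, band=(8.0, 13.0)):
    """Log ratio of mu-band (8-13 Hz) power: observation vs. baseline.

    Negative values indicate mu suppression (event-related
    desynchronization during action observation).
    """
    def band_power(x):
        # Welch PSD with 2-second segments, then sum power within the band.
        f, psd = welch(x, fs=fs, nperseg=int(fs * 2))
        mask = (f >= band[0]) & (f <= band[1])
        return psd[mask].sum() * (f[1] - f[0])
    return np.log(band_power(obs) / band_power(base))

# Synthetic demo: a 10 Hz rhythm that is attenuated during observation.
rng = np.random.default_rng(0)
t = np.arange(0, 4, 1 / 256.0)  # 4-second epochs sampled at 256 Hz
baseline = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
observation = 1.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(mu_suppression_index(observation, baseline))  # negative => suppression
```

In practice such an index would be computed per electrode and per condition, averaged over trials, and tested against zero, as in the EEG studies cited above.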
Acknowledgements

This work was supported by Japan Society for the Promotion of Science KAKENHI.

References

Arita, A., Hiraki, K., Kanda, T., & Ishiguro, H. (2005). Can we talk to robots? Ten-month-old infants expected interactive humanoid robots to be talked to by persons. Cognition, 95(3), B49-B57.

Buccino, G., Binkofski, F., Fink, G. R., Fadiga, L., Fogassi, L., Gallese, V., Seitz, R. J., ... Freund, H. J. (2001). Action observation activates premotor and parietal areas in a somatotopic manner: An fMRI study. European Journal of Neuroscience, 13(2).

Buccino, G., Binkofski, F., & Riggio, L. (2004). The mirror neuron system and action recognition. Brain and Language, 89(2).

Chaminade, T., Zecca, M., Blakemore, S. J., Takanishi, A., Frith, C. D., Micera, S., ... Umilta, M. A. (2010). Brain response to a humanoid robot in areas implicated in the perception of human emotional gestures. PLoS ONE, 5(7).

Cochin, S., Barthelemy, C., Roux, S., & Martineau, J. (1999). Observation and execution of movement: Similarities demonstrated by quantified electroencephalography. European Journal of Neuroscience, 11(5).

Desy, M. C., & Lepage, J. F. (2013). Skin color has no impact on motor resonance: Evidence from mu rhythm suppression and imitation. Neuroscience Research, 77(1-2).

Gazzola, V., Rizzolatti, G., Wicker, B., & Keysers, C. (2007). The anthropomorphic brain: The mirror neuron system responds to human and robotic actions. NeuroImage, 35(4).

Gobbini, M. I., Gentili, C., Ricciardi, E., Bellucci, C., Salvini, P., Laschi, C., Guazzelli, M., & Pietrini, P. (2011). Distinct neural systems involved in agency and animacy detection. Journal of Cognitive Neuroscience, 23(8).

Hamzei, F., Rijntjes, M., Dettmers, C., Glauche, V., Weiller, C., & Buchel, C. (2003). The human action recognition system and its relationship to Broca's area: An fMRI study. NeuroImage, 19(3).

Hirai, M., Fukushima, H., & Hiraki, K. (2003). An event-related potentials study of biological motion perception in humans. Neuroscience Letters, 344(1).

Hirai, M., & Hiraki, K. (2007). Differential neural responses to humans vs. robots: An event-related potential study. Brain Research, 1165.

Iacoboni, M. (2009). Imitation, empathy, and mirror neurons. Annual Review of Psychology, 60.

Ishiguro, H. (2006). Android science: Conscious and subconscious recognition. Connection Science, 18(4).

Ishiguro, H., Ono, T., Imai, M., Maeda, T., Kanda, T., & Nakatsu, R. (2001). Robovie: An interactive humanoid robot. Industrial Robot: An International Journal, 28(6).

Jackson, P. L., Brunet, E., Meltzoff, A. N., & Decety, J. (2006). Empathy examined through the neural mechanisms involved in imagining how I feel versus how you feel pain. Neuropsychologia, 44(5).

Järveläinen, J., Schurmann, M., Avikainen, S., & Hari, R. (2001). Stronger reactivity of the human primary motor cortex during observation of live rather than video motor acts. NeuroReport, 12(16).

Johansson, G. (1973). Visual perception of biological motion and a model for its analysis. Perception & Psychophysics, 14(2).

Kaneko, K., Kanehiro, F., Morisawa, M., Miura, K., Nakaoka, S., & Kajita, S. (2009). Cybernetic human HRP-4C. In Proceedings of the 9th IEEE-RAS International Conference on Humanoid Robots.

Krach, S., Hegel, F., Wrede, B., Sagerer, G., Binkofski, F., & Kircher, T. (2008). Can machines think? Interaction and perspective taking with robots investigated via fMRI. PLoS ONE, 3(7), e2597.

Matsuda, G., Kanda, T., Ishiguro, H., & Hiraki, K. (2012). A humanoid robot activates the human mirror neuron system (in Japanese). Cognitive Studies: Bulletin of the Japanese Cognitive Science Society, 19(4).

Minato, T., Shimada, M., Itakura, S., Lee, K., & Ishiguro, H. (2006). Evaluating the human likeness of an android by comparing gaze behaviors elicited by the android and a person. Advanced Robotics: The International Journal of the Robotics Society of Japan, 20(10).

Miura, N., Sugiura, M., Takahashi, M., Sassa, Y., Miyamoto, A., Sato, S., ... Kawashima, R. (2010). Effect of motion smoothness on brain activity while observing a dance: An fMRI study using a humanoid robot. Social Neuroscience, 5(1).

Muthukumaraswamy, S. D., Johnson, B. W., & McNair, N. A. (2004). Mu rhythm modulation during observation of an object-directed grasp. Cognitive Brain Research, 19(2).

Nishio, S., Ishiguro, H., & Hagita, N. (2007). Geminoid: Teleoperated android of an existing person. In A. C. de Pina Filho (Ed.), Humanoid Robots: New Developments.

Oberman, L., McCleery, J., Ramachandran, V., & Pineda, J. (2007). EEG evidence for mirror neuron activity during the observation of human and robot actions: Toward an analysis of the human qualities of interactive robots. Neurocomputing, 70(13-15).

Oztop, E., Franklin, D. W., Chaminade, T., & Cheng, G. (2005). Human-humanoid interaction: Is a humanoid robot perceived as a human? International Journal of Humanoid Robotics, 2(4).

Perry, A., & Bentin, S. (2009). Mirror activity in the human brain while observing hand movements: A comparison between EEG desynchronization in the mu-range and previous fMRI results. Brain Research, 1282.

Pineda, J. A. (2005). The functional significance of mu rhythms: Translating "seeing" and "hearing" into "doing". Brain Research Reviews, 50(1).

Press, C. (2011). Action observation and robotic agents: Learning and anthropomorphism. Neuroscience and Biobehavioral Reviews, 35(6).

Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annual Review of Neuroscience, 27.

Rizzolatti, G., & Fabbri-Destro, M. (2008). The mirror system and its role in social cognition. Current Opinion in Neurobiology, 18(2).

Saygin, A. P., Chaminade, T., Ishiguro, H., Driver, J., & Frith, C. (2011). The thing that should not be: Predictive coding and the uncanny valley in perceiving human and humanoid robot actions. Social Cognitive and Affective Neuroscience.

Saygin, A. P., Wilson, S. M., Hagler, D. J., Bates, E., & Sereno, M. I. (2004). Point-light biological motion perception activates human premotor cortex. Journal of Neuroscience, 24(27).

Shimada, S., & Hiraki, K. (2006). Infant's brain responses to live and televised action. NeuroImage, 32(2).

Tai, Y. F., Scherfler, C., Brooks, D. J., Sawamoto, N., & Castiello, U. (2004). The human premotor cortex is 'mirror' only for biological actions. Current Biology, 14(2).

Ulloa, E. R., & Pineda, J. A. (2007). Recognition of point-light biological motion: Mu rhythms and mirror neuron activity. Behavioural Brain Research, 183(2).

Urgen, B. A., Plank, M., Ishiguro, H., Poizner, H., & Saygin, A. P. (2013). EEG theta and mu oscillations during perception of human and robot actions. Frontiers in Neurorobotics, 7, 19.

Zhao, S. Y. (2006). Humanoid social robots as a medium of communication. New Media & Society, 8(3).

Contact author: Kazuo Hiraki, The University of Tokyo, Graduate School of Arts and Sciences, Tokyo, Japan. khiraki@idea.c.u-tokyo.ac.jp
Design and Evaluation of Tactile Number Reading Methods on Smartphones Fan Zhang fanzhang@zjicm.edu.cn Shaowei Chu chu@zjicm.edu.cn Naye Ji jinaye@zjicm.edu.cn Ruifang Pan ruifangp@zjicm.edu.cn Abstract
More informationDoes a Robot s Subtle Pause in Reaction Time to People s Touch Contribute to Positive Influences? *
Preference Does a Robot s Subtle Pause in Reaction Time to People s Touch Contribute to Positive Influences? * Masahiro Shiomi, Kodai Shatani, Takashi Minato, and Hiroshi Ishiguro, Member, IEEE Abstract
More informationThe MARCS Institute for Brain, Behaviour and Development
The MARCS Institute for Brain, Behaviour and Development The MARCS Institute for Brain, Behaviour and Development At the MARCS Institute for Brain, Behaviour and Development, we study the scientific bases
More informationPopObject: A Robotic Screen for Embodying Video-Mediated Object Presentations
PopObject: A Robotic Screen for Embodying Video-Mediated Object Presentations Kana Kushida (&) and Hideyuki Nakanishi Department of Adaptive Machine Systems, Osaka University, 2-1 Yamadaoka, Suita, Osaka
More informationThe Influence of Visual Illusion on Visually Perceived System and Visually Guided Action System
The Influence of Visual Illusion on Visually Perceived System and Visually Guided Action System Yu-Hung CHIEN*, Chien-Hsiung CHEN** * Graduate School of Design, National Taiwan University of Science and
More informationTECHNICAL APPENDIX FOR: POSSIBLE PARANORMAL COMPONENTS OF ANTICIPATION: PSYCHOPHYSIOLOGICAL EXPLORATIONS
Return to: Paranormal Phenomena Articles TECHNICAL APPENDIX FOR: POSSIBLE PARANORMAL COMPONENTS OF ANTICIPATION: PSYCHOPHYSIOLOGICAL EXPLORATIONS J. E. Kennedy 1979 This appendix is published on the internet
More informationMobile robot control based on noninvasive brain-computer interface using hierarchical classifier of imagined motor commands
Mobile robot control based on noninvasive brain-computer interface using hierarchical classifier of imagined motor commands Filipp Gundelakh 1, Lev Stankevich 1, * and Konstantin Sonkin 2 1 Peter the Great
More informationA Constructive Approach for Communication Robots. Takayuki Kanda
A Constructive Approach for Communication Robots Takayuki Kanda Abstract In the past several years, many humanoid robots have been developed based on the most advanced robotics technologies. If these
More informationNeuroImage 56 (2011) Contents lists available at ScienceDirect. NeuroImage. journal homepage:
NeuroImage 56 (2011) 2356 2363 Contents lists available at ScienceDirect NeuroImage journal homepage: www.elsevier.com/locate/ynimg Differential selectivity for dynamic versus static information in face-selective
More informationGenerating Natural Motion in an Android by Mapping Human Motion
Generating Natural Motion in an Android by Mapping Human Motion Daisuke Matsui, Takashi Minato, Karl F. MacDorman, and Hiroshi Ishiguro Department of Adaptive Machine Systems, Graduate School of Engineering,
More informationAGING AND STEERING CONTROL UNDER REDUCED VISIBILITY CONDITIONS. Wichita State University, Wichita, Kansas, USA
AGING AND STEERING CONTROL UNDER REDUCED VISIBILITY CONDITIONS Bobby Nguyen 1, Yan Zhuo 2, & Rui Ni 1 1 Wichita State University, Wichita, Kansas, USA 2 Institute of Biophysics, Chinese Academy of Sciences,
More information