Generating Personality Character in a Face Robot through Interaction with Human


F. Iida, M. Tabata and F. Hara
Department of Mechanical Engineering, Science University of Tokyo, Kagurazaka, Shinjuku-ku, Tokyo, Japan

Abstract

In human-robot communicative interaction, it is difficult to design robot behavior that a human finds preferable. In this paper we therefore propose an approach in which a personality character of a face robot is generated by reinforcement learning under human instruction, and we investigate its characteristics through real human-robot interaction. We found that the face robot's personality character is generated through communicative interaction learning driven by the human partner's instruction, and we point out the importance of the learning algorithm and of the human instruction procedure.

1. Introduction

The face robot looks like a human face and is equipped with a CCD-camera vision system and a motor-output system of artificial facial-expression muscles [1]. It seems difficult for such a robot to learn by herself what kinds of actions, such as facial expressions, nodding, or voice, should be taken in complicated human-robot communicative interaction. We thus think that supervised instruction by a human partner may be useful for the robot to acquire this kind of intelligence, since improvement in human behavior is usually achieved through such supervision. Based on this psychological speculation, we developed an approach to generating a personality character in the face robot by applying a reinforcement learning algorithm during communicative interaction between the face robot and a human partner. Some research on the use of human instruction in reinforcement learning has been done, but in that work the instruction is given as an example of the correct answer [2][3]. It is thus difficult for a robot to learn its behavior in cases where the answer cannot be shown by the human.
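The distinction drawn here, instruction as an example answer versus instruction as a reward, can be sketched in code. This is only an illustrative contrast under assumed names and encodings; it is not taken from the paper or the cited work.

```python
# Illustrative contrast (names and encodings assumed): supervised teaching
# requires the human to demonstrate the correct action, while reward-based
# instruction only requires a +1/-1 judgment of the robot's own action.

def supervised_update(policy, state, correct_action):
    # usable only when the answer can be shown by the human
    policy[state] = correct_action

def reward_update(q, state, action, reward, alpha=0.1):
    # usable even when the answer cannot be shown: the human only judges
    q[(state, action)] = (1 - alpha) * q.get((state, action), 0.0) + alpha * reward
```

The second form is the one adopted in this paper: the human never specifies which action is correct, only whether the action just taken was preferable.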
For emotional behavior in a robot, such as facial expressions, or for sophisticated behavior that needs a large number of degrees of freedom, the answers are difficult to determine accurately and to demonstrate to the robot. This paper therefore presents an approach to generating a face robot personality character in which the human's instruction/evaluation of the robot's behavior is reflected in the reward of the learning algorithm. We prepared three experiments to investigate the face robot personality character that appears in the responsive behavior to the behavior of the human partner. In the first and second experiments the test subjects observed the process of the face robot's interaction with a human partner and classified the face robot's personality character. In the third experiment we investigated the personality characters generated in the face robot by a variety of test subjects.

2. Face robot and its learning algorithm

The face robot used in the experiments has the following functions: (1) The face robot is equipped with 8 small pneumatic actuators at proper locations inside the face structure to generate the 6 typical facial expressions (anger, surprise, fear, disgust, sadness, and happiness), and has degrees of freedom in eyeball rotation and in head rotation. (2) The face robot is implemented with a real-time vision system to identify the location of the face robot's partner and his/her facial expressions. (3) The face robot is equipped with a conventional voice-synthesizing system. The size of the face robot is almost that of a normal human head, and her appearance is that of a young woman's face, as shown in Fig. 2.

[Fig. 1: Environment of the experiments. Fig. 2: Appearance of the face robot.]

For the learning algorithm, we employed reinforcement learning to generate proper responses to the partner's behavior, such as approaching or leaving her, in which the reward essentially needed in reinforcement learning is given by the partner for each response of the face robot to the partner's behavior, i.e., to changes in the partner's position. This means that when the response made by the face robot is preferable to the partner, the partner gives a positive reward to the learning algorithm, and when it is not preferable, a negative one. The internal representation from state to action of the robot is described as Q(s,a), where s is a state and a is an action. In a state s, the robot inclines to take the action whose Q(s,a) is maximal. When a reward is given by the partner, Q(s,a) is recalculated as follows:

    Q(s,a) := (1 - α)Q(s,a) + αr    (1)

where α is a learning rate and r is a reward value that is +1 when a positive reward is given and -1 when a negative one is given. As a whole, after a certain number of runs of the interactive learning test with a human partner, the learning may converge to a certain sensory-input to action-output coordination, i.e., a personality character of the face robot.
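The update rule above, Q(s,a) := (1 - α)Q(s,a) + αr, together with the greedy action choice, can be sketched as a small tabular implementation. The learning-rate value, the dict-based Q table, and all names here are assumptions for illustration; the paper does not give implementation details.

```python
# Illustrative sketch of the update rule and the greedy action choice.
# ALPHA's value, the (state, action)-keyed dict, and all names are assumed.

ALPHA = 0.1  # learning rate (value assumed, not reported in the paper)

def update_q(q, state, action, reward, alpha=ALPHA):
    """Q(s,a) := (1 - alpha) * Q(s,a) + alpha * r, with r in {+1, -1}."""
    q[(state, action)] = (1 - alpha) * q.get((state, action), 0.0) + alpha * reward

def select_action(q, state, actions):
    """Take the action whose Q(s,a) is maximal in the current state."""
    return max(actions, key=lambda a: q.get((state, a), 0.0))

# One instruction step: the partner judges the robot's response and rewards it.
q = {}
update_q(q, state=3, action="turn_head_to_3", reward=+1)   # preferable response
update_q(q, state=3, action="keep_head_still", reward=-1)  # not preferable
```

After these two rewards, `select_action(q, 3, ["turn_head_to_3", "keep_head_still"])` picks the positively rewarded action for that state.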
3. Experiment 1

To analyze the personality character in terms of behavior patterns of the face robot, we initially employed the following experiment. We prepared 9 small sections in a fan-like region in front of the face robot, and a human partner changed his position from one section to another after the face robot took a responsive action of turning her head and saying "hello" according to the recognized position of the partner, as shown in Fig. 1. We set three cases of responsive behavior patterns for the face robot. The first (pattern A) is a follower-type personality: the face robot tries to look toward the partner and say "hello" to him at each change in the partner's position. The second (pattern B) is a selfish personality that ignores the partner's movement while saying "hello". For the third (pattern C), we implemented the reinforcement learning algorithm in the face robot's computer and, by means of the partner's instruction, a follower personality was generated; the initial state of the face robot's personality was set to the selfish one. Observing the responsive behavior of the face robot, the test subjects (college students) were asked to classify the personality according to the categories usually employed in Y-G personality analysis [4]. The results are shown in Fig. 3, which shows the distribution of scores with respect to the axes of Y-G personality analysis.

[Fig. 3: Evaluated personality character for patterns A, B, and C, plotted on the Y-G axes (Sociable, Pessimistic, Whim, Leadership, Discouraged, Rough, Anxious, Optimistic, Subjective, Selfish, Lively, Short Temper).]

Patterns A and B show the same personality characters as those designed, respectively. Pattern C shows a distribution similar to that of pattern A, which indicates that the learning successfully generated a personality character in the face robot through communicative interaction with a human. We then investigated the major factors influencing the evaluation of the personality character by analyzing the subjects' answer data. Table 1 shows what kinds of features of the face robot contributed to the evaluation of its responsive behavior from the viewpoint of personality character. The most influential factor was the manner of interaction of the face robot with the human partner; a dynamic hardware feature of the face robot, such as the speed of head rotation, was another major factor.

[Table 1: Number of answered features (Static, Dynamic, Interaction, Learning) for patterns A, B, and C.]

4. Experiment 2

According to Experiment 1, the personality character can be analyzed in terms of the interaction feature and the dynamic hardware feature. In the second experiment, therefore, we pay attention to three aspects of the face robot's response, based on the dynamic hardware and interaction features of Experiment 1, as indicators of personality character. The first, voice, measures how happy the subjects feel about the voice of the face robot during the communicative interaction.
The second, speed, measures how fast the subjects feel the head rotation to be. The third, face direction, evaluates whether the subjects can recognize the face robot's tendency to look toward the partner. The happier the voice sounds, the faster the rotation feels, or the more strongly the face robot tends to look toward the partner, the higher the score given to the corresponding attention point. To investigate the transition of the personality character of the face robot, the test subjects were asked to observe the same communicative interaction between the face robot and a human partner as in Experiment 1 and to rate each of these attention points on a step scale; a number of responsive-action steps of the face robot was employed for each of patterns A, B, and C. Fig. 4 shows the average scores of the attention points for each pattern. The results of patterns A and B are clearly distinguishable: in pattern A the score is positive and high for these aspects, while in pattern B it is negative and lower. This result indicates that these attention points influence the evaluation of the personality character of the face robot.

[Fig. 4: Step-wise change of the average score for the attention points voice, speed, and face direction, for patterns A, B, and C.]

In addition, with the increase in the learning steps shown for pattern C in Fig. 4, the personality character of the face robot changed from an evaluation similar to that of pattern B to one similar to pattern A, and it was finally evaluated as pattern A. This also supports the result obtained in Experiment 1, i.e., the personality character tends to be evaluated according to the convergent responsive behavior.

5. Experiment 3

To clarify the variety of personality characters generated in the face robot through instruction given by different partners, in this experiment the subjects were asked not only to score each of the attention points defined in Experiment 2, but also to play the role of the robot's partner, i.e., to change their position and give a positive or negative reward in response to the action taken by the face robot. The subjects were not told when to give a positive or negative reward, or which reward to give; they were told only the task: "Try to make the face robot look toward you and say hello to you by judging the responsive action of the face robot in terms of a positive or negative reward at each interaction step." This task comprises two sub-tasks for the face robot: (1) to turn her face toward the point where the partner is, and (2) to produce the proper voice.
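The interaction loop of this experiment can be sketched as a small simulation. Everything below is an assumption for illustration rather than the paper's implementation: the state is the partner's section (0-8), the action is the section the robot turns her head toward, actions are chosen greedily with random tie-breaking, the learning-rate value is invented, and the simulated partner rewards exactly when the face-direction sub-task is achieved.

```python
import random

# Minimal simulation of the interaction loop (assumptions as stated above):
# the partner moves, the robot responds, the partner instructs with +1/-1,
# and the tabular Q value is updated as in eq. (1).
random.seed(0)
ALPHA = 0.5              # learning rate (value assumed)
SECTIONS = list(range(9))

def select_action(q, s):
    """Greedy choice among the 9 head directions, random tie-breaking."""
    best = max(q.get((s, a), 0.0) for a in SECTIONS)
    return random.choice([a for a in SECTIONS if q.get((s, a), 0.0) == best])

q = {}
for _ in range(2000):
    s = random.choice(SECTIONS)            # the partner moves to a section
    a = select_action(q, s)                # the robot turns her head
    r = 1 if a == s else -1                # the partner's instruction
    q[(s, a)] = (1 - ALPHA) * q.get((s, a), 0.0) + ALPHA * r

# With this rewarding strategy the learning converges to the follower type.
follower = all(select_action(q, s) == s for s in SECTIONS)
```

In this sketch `follower` ends up true; a rewarding strategy that gives +1 for other responses as well would lock in different behavior, which is the effect the experiment examines.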
The learning algorithm implemented in the face robot did not converge to the follower-type personality when the positive reward was not given upon the face robot achieving both sub-tasks. Table 2 shows the distribution of the numbers of positive and negative rewards for each subject, given for each combination of sub-task achievement and failure. In the table there are two distinctive types of reward distribution patterns: one corresponds to positive rewarding when either sub-task was achieved (for example, subjects A and B), the other to positive rewarding when both sub-tasks were achieved. Because of the reinforcement learning mechanism, these reward distributions affect the number of achieved tasks, shown in Fig. 5: the numbers of achieved tasks of subjects A and B are markedly smaller than those of subjects C, D, and E.

[Table 2: Number of rewards (positive/negative) given for each combination of sub-task (face direction, voice) accomplishment and failure, for test subjects A-E.]

Fig. 5 also shows the scores of the attention points for each subject. The distribution of the scores obtained for subjects C, D, and E is similar to that of pattern C in Experiment 2, whereas the distribution for subjects A and B is clearly not the same, which means that the personality characters generated in the face robot by the instruction of subjects A and B were evaluated as different from those generated by subjects C, D, and E. It is thus found that the responsive behavior of the face robot was changed by the different reward distribution patterns made by different subjects. As a result, the personality characters of the face robot generated by different partners were evaluated as different from each other.

6. Conclusion

The personality character of the face robot, which may change its responsive behavior, can be generated by means of the supervised learning algorithm under communicative interaction with her partner, and it is evaluated as a personality similar to the designed behavior. The evaluation of the personality character was mainly driven by the interaction factor, i.e., the responsive behavior of the face robot.
When the face robot was trained by different partners, its responsive behavior changed and its personality character was evaluated as different. This suggests that the personality character of the face robot may become one fitted to the preference of the partner when it is trained properly by him. Further discussion of the generality of the human instruction and of the learning algorithm will be needed.

References

[1] F. Hara, H. Kobayashi, "State-of-the-Art in Component Technology for an Animated Face Robot - Its Component Technology Development for Interactive Communication with Humans", Advanced Robotics (The International Journal of the Robotics Society of Japan), No. 6, 1997.
[2] Long-Ji Lin, "Programming Robots Using Reinforcement Learning and Teaching", Proc. of the 9th National Conference on Artificial Intelligence (AAAI-91), pp. 781-786, 1991.
[3] Jeffery A. Clouse, Paul E. Utgoff, "A Teaching Method for Reinforcement Learning", Proc. of the 9th International Conference on Machine Learning, 1992.
[4] Hiroshi Watanabe, Introduction to Psychological Test, Fukumura-shuppan, 1997 (in Japanese).

[Fig. 5: Step-wise change of the score for each attention point (voice, speed, face direction) for test subjects A-E in Experiment 3.]