Some essential skills and their combination in an architecture for a cognitive and interactive robot.

Sandra Devin, Grégoire Milliez, Michelangelo Fiore, Aurélie Clodic and Rachid Alami
CNRS, LAAS, Univ de Toulouse, INP, 7 avenue du Colonel Roche, Toulouse, France
CNRS, LAAS, Univ de Toulouse, INSA, 7 avenue du Colonel Roche, Toulouse, France
CNRS, LAAS, Univ de Toulouse, LAAS, 7 avenue du Colonel Roche, Toulouse, France

Abstract— The topic of joint action has been deeply studied in the context of human-human interaction in order to understand how humans cooperate. Creating autonomous robots that collaborate with humans is a complex problem, where it is relevant to apply what has been learned in the context of human-human interaction. The question is which skills to implement and how to integrate them in order to build a cognitive architecture allowing a robot to collaborate efficiently and naturally with humans. In this paper, we first list a set of skills that we consider essential for joint action, then we analyze the problem from the robot's point of view and discuss how these skills can be instantiated in human-robot scenarios. Finally, we open the discussion on how to integrate such skills into a cognitive architecture for human-robot collaborative problem solving and task achievement.

I. INTRODUCTION

To interact with its environment, an autonomous robot needs to be able to reason about its surroundings and extract a set of facts that form a semantic representation of the world state. To act on its environment, the robot also needs to decide what actions to take and when to act. Adding humans to the environment requires enhancing the robot with specific skills. Indeed, as humans are social agents, considering them merely as moving obstacles, or in the same way as another robot, would lead to dangerous or at least unpleasant interactions for the humans involved. Thus, to estimate the quality of a human-robot collaborative task execution, the robot should take into account not only task completion but also the comfort and safety of its human partners, as well as their acceptance of its actions. One way to increase human comfort when interacting with the robot is to give it human-like social skills. In this paper, we present a set of skills that we consider essential for joint action, and then discuss how they can be implemented and combined into a coherent control architecture.

II. REQUESTED SKILLS FOR JOINT ACTIONS

Research in psychology and philosophy has led to a good understanding of human behavior during joint action and collaboration. As discussed in [1], the analysis of these studies can help to identify key elements for human-robot joint action. The first step when people have to act together is to share a goal and the intention to achieve it. In [2], Tomasello et al. define a goal as the representation of a desired state and an intention as an action plan chosen in order to reach a goal. Bratman adds that agents who have a shared intention to perform an action should agree on the meshing subparts of a shared plan [3]. To do so, agents engaged in joint action need to be able to negotiate the shared plan [4], which requires a common ground [5]. During the execution of the shared plan, agents need to be able to perceive and represent the other agents involved in the task. In [6], Sebanz et al. present three necessary skills concerning the perception and representation of others during joint action:

- Joint attention: the ability to direct the attention of a partner to a common reference in order to put an element of the interaction (e.g. an entity, a task) in focus.
- Action observation: agents need to be able to recognize others' actions and to understand their effects on the environment and the task. Concerning this, Pacherie discusses two types of prediction in [7]: goal-to-action and action-to-goal prediction.
- Co-representation: it is also necessary for an agent to have a representation of others' abilities. Knowing what others know, their goals and their capacities, allows an agent to better predict and understand their actions. In other words, agents need to have a theory of mind [8].

Finally, during a joint action, agents need to coordinate with each other. In [9], Knoblich et al. define two types of coordination:

- Emergent coordination: non-voluntary coordination such as entrainment (for example, two people in rocking chairs will synchronize their movements) or coordination coming from affordances [10] and from access to resources.
- Planned coordination: when people voluntarily change their behavior in order to coordinate. They can do so by adding what Vesper et al. call coordination smoothers [11] or by using verbal or non-verbal communication.

III. FROM THE ROBOT'S POINT OF VIEW

To have an intuitive, natural and fluid interaction with its human partners, a robotic system needs to integrate the skills described in the previous section. However, these skills need to be adapted, since a robot may have different capacities than humans, and since human behavior may differ when interacting with a robot compared to interacting with another human.

A. Building and maintaining common ground

The first skill the robot needs in order to interact properly with a human is the means to understand and be understood in the situated context of the joint activity. The robot and the human need to share a common ground, meaning that each can identify, in its own world representation, the actions and objects referred to by its counterpart. Robotic systems rely on sensors to recognize and localize entities (humans, robots or objects) in order to build a world state. These sensors produce coordinates that position the objects relative to a given frame. For example, a stereo camera with object recognition software can tell that a mug is at a given position x, y, z with orientation θ, φ, ψ. Humans, instead, use relations between objects to describe their position: to indicate the location of a mug, a human would say that it is on the kitchen table, without giving its coordinates or orientation. To understand human references and to generate understandable utterances, the robot therefore needs to build a semantic representation of the world, based on the geometric data it collects from sensors, as done in [12]. We have developed a module based on spatial and temporal reasoning to generate facts about the current state of the world [13]. A fact is a property describing the current state of an entity (e.g. MUG isNextTo BOTTLE, MUG isFull TRUE). This framework generates facts related to the entities' positions and facts about affordances, to know, for instance, what is visible or reachable for each agent (human and robot). It also generates facts about agent postures, to know whether an agent is pointing toward an object or where an agent is looking. When the robot tries to understand the human, it should also use these data to improve the information-grounding process.

In addition to the world state (which can be considered as the robot's belief state), our situation assessment module maintains a separate and consistent belief state for each human, updated by estimating what the human is able to perceive of the world state. This belief state is an estimation of the list of facts that the agent believes to be true; in psychology, this ability is called conceptual perspective taking. In [14] we used this framework together with a dialogue system to implement situated dialogue and consequently improve dialogue in terms of efficiency and success rate. A minimal sketch of both mechanisms is given below.
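To make these two mechanisms concrete, here is a minimal illustrative sketch, not the actual LAAS module: deriving symbolic facts from geometric data, and refreshing a per-agent belief state only with what that agent can currently perceive. The `Fact` structure, the 30 cm proximity threshold and the 60-degree visibility half-angle are our own simplifying assumptions.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str
    predicate: str
    value: str

def generate_facts(positions, agents):
    """Derive symbolic facts from geometric data.

    positions: {entity: (x, y)}; agents: {agent: ((x, y), heading_rad)}.
    All thresholds are illustrative assumptions.
    """
    facts = set()
    names = sorted(positions)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if math.dist(positions[a], positions[b]) < 0.3:  # assumed 30 cm
                facts.add(Fact(a, "isNextTo", b))
    for agent, (pos, heading) in agents.items():
        for obj, obj_pos in positions.items():
            angle = math.atan2(obj_pos[1] - pos[1], obj_pos[0] - pos[0])
            diff = abs((angle - heading + math.pi) % (2 * math.pi) - math.pi)
            if diff < math.pi / 3:  # assumed 60-degree half-angle field of view
                facts.add(Fact(obj, "isVisibleBy", agent))
    return facts

def update_belief(belief, world_facts, agent):
    """Refresh only the part of the agent's belief about entities he can
    currently see; facts about unseen entities keep their old value."""
    seen = {f.subject for f in world_facts
            if f.predicate == "isVisibleBy" and f.value == agent}
    kept = {f for f in belief if f.subject not in seen}
    fresh = {f for f in world_facts if f.subject in seen}
    return kept | fresh
```

For example, if BOB turns his back while the mug is moved, `update_belief` keeps BOB's now-outdated `Fact("MUG", "isNextTo", "BOTTLE")` until the mug becomes visible to him again; this is exactly the kind of divergent belief that the dialogue and execution components described later exploit.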
B. Joint goal establishment

Once the common ground is established and maintained, the robot needs to share a goal with its human partners. This goal can be imposed by a human (a direct order), but the robot should also be able to proactively propose its help when needed. To do so, the robot needs to be able to infer high-level goals by observing and reasoning on its human partners' activities. This process is called plan recognition or, when a bigger focus is put on Human-Robot Interaction (HRI) aspects, intention recognition. There is a rich literature on plan recognition, using approaches such as classical planning [15], probabilistic planning [16] or logic-based techniques [17]. In the context of intention recognition, works such as [18] and [19] introduced theory-of-mind aspects into plan recognition. Our system provides an intention recognition component that uses a Bayesian network as inference model, linking contextual information, intentions, actions and low-level observations. A key aspect of the system is the use of Markov Decision Processes (MDPs) to link intentions to human actions. Each MDP represents an intention and is able to compute the action that, in a given situation, a rational agent would execute to achieve the associated goal. To properly infer a human agent's intention, it is important to consider his current estimated mental beliefs in the recognition process, as the actions carried out by the human may be consistent with his intention in his mental representation of the world but not in the actual world state. Our system is able to track and maintain, as discussed earlier, a belief state for each human, which is used as the current state for the MDPs when computing the action rewards.
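The following toy sketch (our illustration, not the actual system, which also conditions on context through a Bayesian network) shows the core of this mechanism: one small deterministic MDP per candidate intention, a greedy policy computed by value iteration, and a Bayesian update of the posterior over intentions from the observed human action, evaluated in the human's estimated belief state rather than in the true world state. The MDP encoding and the `rationality` parameter are assumptions.

```python
def greedy_action(mdp, state, horizon=10, gamma=0.9):
    """Best action in `state` by finite-horizon value iteration over a small,
    closed, deterministic MDP given as {"states", "actions", "step", "reward"}."""
    values = {s: 0.0 for s in mdp["states"]}
    for _ in range(horizon):
        values = {s: max(mdp["reward"](s, a) + gamma * values[mdp["step"](s, a)]
                         for a in mdp["actions"])
                  for s in mdp["states"]}
    return max(mdp["actions"],
               key=lambda a: mdp["reward"](state, a)
                             + gamma * values[mdp["step"](state, a)])

def update_posterior(posterior, mdps, believed_state, observed_action,
                     rationality=0.8):
    """Bayesian update over intentions: a mostly-rational human usually takes
    the greedy action of the MDP of his actual intention, computed in his
    *believed* state, not in the true world state."""
    scores = {}
    for intention, prior in posterior.items():
        best = greedy_action(mdps[intention], believed_state)
        likelihood = rationality if observed_action == best else 1.0 - rationality
        scores[intention] = prior * likelihood
    total = sum(scores.values()) or 1.0
    return {i: s / total for i, s in scores.items()}
```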

C. Elaborating and negotiating shared plans

After establishing a joint goal, the robot should be able to create and negotiate a shared plan [20] with its human partners in order to achieve it. There are several possibilities: the human can choose a specific plan, which the robot needs to be able to understand, or the robot can compute a plan on its own, which may then be negotiated with the human.

1) Shared plan elaboration: In our system, the robot can generate plans using a high-level task planner, the Human-Aware Task Planner (HATP) [21], which computes a shared plan for a given goal in a given context. HATP can plan actions not only for the robot but also for its human partners. The computed plan takes into account social rules (such as effort sharing) as well as the knowledge of the other agents. For example, in some applications and contexts, the system could try to teach the human new ways of solving a problem. In that case, the robot will favor plans where the human has to perform new tasks, so that he learns by experience. The system could also decide that efficiency is more important than learning in a given situation, in which case HATP will generate a plan that minimizes the number of unknown tasks to be performed by the human [22]. A toy version of this trade-off is sketched after this subsection.

2) Sharing and negotiation: Once the plan is found, it needs to be shared and negotiated with the collaborator in order to be accepted. When dealing with simple plans, infants can cooperate without language (using shared attention, intention recognition and social cues); in situations requiring more complex plans, language is the preferred modality [23], [24]. Two situations have been studied in HRI. First, the human has a plan and needs to share it with the robot: research in robotics has studied how a system can acquire knowledge of plan decompositions from a user [25] and how dialogue can be used to teach new plans to the robot and to modify them [26]. Second, when the robot is the one that has to share its generated plan with the human, to ensure the human's acceptance and comfort it should allow the user to contest the plan and to formulate requests about it. To this end, we have devised a module able to present the plan's higher-level tasks and to ask for the user's approval [22]. The user can also refuse the plan and ask to perform (or not to perform) specific tasks, which results in the robot generating a new plan that tries to take the user's preferences concerning task allocation into account.
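HATP itself is an HTN planner well beyond the scope of a snippet, but the following toy allocation (entirely our own simplification; `human_knows` and the cost weights are assumptions) illustrates the two behaviors described above: balancing effort between partners, and either penalizing or favoring tasks unknown to the human depending on whether efficiency or teaching is preferred.

```python
from itertools import product

def allocate_tasks(tasks, human_knows, teach_mode=False):
    """Choose who performs each task, trading effort balance against the
    human's knowledge of the tasks (weights are illustrative)."""
    best, best_cost = None, float("inf")
    for assignment in product(("human", "robot"), repeat=len(tasks)):
        effort = abs(assignment.count("human") - assignment.count("robot"))
        unknown = sum(1 for t, who in zip(tasks, assignment)
                      if who == "human" and not human_knows[t])
        # Teaching rewards unknown tasks for the human; efficiency penalizes them.
        cost = effort + (-2 * unknown if teach_mode else 3 * unknown)
        if cost < best_cost:
            best, best_cost = dict(zip(tasks, assignment)), cost
    return best
```

For instance, `allocate_tasks(["pick", "glue"], {"pick": True, "glue": False})` assigns the unknown "glue" task to the robot, while the same call with `teach_mode=True` assigns it to the human so that he learns by experience.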
D. Executing shared plans and reacting to humans

1) Maintaining and executing shared plans: When both agents agree on a plan, the execution process may start. During this execution, the robot needs to coordinate with its human partner. This must be achieved both at the task-planning level, by synchronizing the robot's plan with the human's, and at a lower level, by adapting the robot's actions to the human. In the literature, executives like Pike [27] and Chaski [28] explicitly model humans in their plans and allow the robot to adapt its behavior to their actions. In our system, as explained in [29], we use monitoring and task preconditions to perform plan-level coordination. The robot executive decides when to execute the robot's actions and ensures, at each step, that the plan is still feasible. In case of an unexpected human behavior that invalidates the plan, the robot is able to quickly find a new plan and adapt to the new situation.

Moreover, the robot should take the human's knowledge into account when managing the shared plan. During the execution, it may happen that a human does not know how to perform a task. In this case, the robot should be able to guide the user through the requested task. To make this possible, and to adapt the explanation to the human's knowledge of the task, we have created a component that models and maintains the human's knowledge of each task. When a human has to perform a task, the system checks whether he knows how to perform it and adapts its behavior accordingly, explaining the task or not [22]. The robot executive also models and maintains humans' mental states concerning goals, plans and actions. For example, at each step, the robot estimates whether its partners are aware of the current plan and whether they know which actions have been performed and which ones remain to be done. The robot then uses these mental states to detect when a divergent belief can affect the smooth execution of the shared plan (for example, if a human does not know that he has to perform an action) and communicates the needed information to the human, thereby correcting his belief [30].

2) Action recognition, execution and coordination: At a lower level, the robot needs to correctly interpret human signals and actions, perform its own actions in a legible, safe and acceptable way and, when an action is a joint action (e.g. a handover), coordinate its execution with its human partner. To perform actions, the robot in our system is equipped with a human-aware geometric task and motion planner [31] (both for navigation and manipulation tasks). It also has real-time control algorithms to react quickly and stop its movements. For example, when both the human and the robot are trying to execute manipulation operations in the same workspace, the robot waits for the human to finish his action, to prevent any dangerous situation for the human.

To recognize human actions, the robot needs to observe human movements and infer which action is being performed. An interesting idea is to use the robot's own internal models to recognize actions and predict user intents, as shown by the HAMMER system in [32]. In our system we perform this process with geometric reasoning, linking the positions and movements of human joints to points and areas in the environment, such as objects and rooms. When the action is a joint action, like a handover, our system uses Partially Observable Markov Decision Processes (POMDPs) to achieve action-level coordination, estimate from observations the human's engagement level in the action, and react accordingly [29]. This mechanism was applied in [33] to guide a human to a destination in a socially acceptable way, for example by adapting the robot's speed to the human's needs. Our robot is also able to produce proactive behavior when executing joint actions, such as suggesting a solution by extending its arm when it needs to perform a handover [34].

Another way to coordinate is non-verbal communication. In [35], Breazeal et al. demonstrate that explicit communication (expressive social cues) as well as implicit communication (conveying mental states through non-verbal behaviors) is useful and necessary for a fluent human-robot interaction. The robot needs to be able both to detect and interpret the non-verbal cues given by its partners and to produce non-verbal signals in an understandable and pertinent way. However, these signals need to be adapted to the physiognomy of the robot: for example, for a robot with a head but without eyes, it has been established that gaze signals can be replaced by the orientation of the head [36].
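As a concrete illustration of the geometric reasoning mentioned above, here is a minimal sketch (ours; the 10 cm grasp threshold and the pick-only action vocabulary are simplifying assumptions) that classifies a human hand trajectory against object positions.

```python
import math

def recognize_action(hand_trajectory, objects, grasp_dist=0.1):
    """Toy geometric action recognition: if the hand moved toward an object
    and ended within grasp range, report a pick on that object.

    hand_trajectory: list of (x, y, z); objects: {name: (x, y, z)}.
    """
    start, end = hand_trajectory[0], hand_trajectory[-1]
    best, best_d = None, float("inf")
    for name, pos in objects.items():
        d_end = math.dist(end, pos)
        approaching = d_end < math.dist(start, pos)
        if approaching and d_end < grasp_dist and d_end < best_d:
            best, best_d = name, d_end
    return ("pick", best) if best else ("unknown", None)
```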

IV. AN ARCHITECTURE FOR A COGNITIVE AND INTERACTIVE ROBOT

We discuss below how the skills described above can be combined into the key elements of a cognitive architecture for a collaborative robot. Pacherie describes a three-level architecture used to monitor human-human joint actions [7]. It consists of a Shared Distal level, which deals with joint intention issues (commitment, creation and monitoring of a shared plan); a Shared Proximal level, which deals at a high level with the execution of the plan actions coming from the Distal level; and a Coupled Motor Intention level, which ensures coordination in space and time during action execution. In robotics, Alami et al. [37] proposed an architecture with three similar levels, which has since been developed in the context of human-robot interaction [38], [39], [29], integrating more and more human-aware skills. Starting from these architectures, we have tried to design a cognitive system for human-robot interaction. This architecture (Fig. 1) is intended to equip the robot with all the skills described in the previous section.

Fig. 1: A cognitive architecture for Human-Robot Interaction.

First, the architecture requires a Situation Assessment component to understand the environment in which the interaction takes place. This component needs to collect data from sensors and to perform spatial and temporal reasoning to describe the environment in terms of predicates and properties. These properties should describe the situation of each entity in terms of relational position (BOB isIn KITCHEN) and state (MUG isEmpty FALSE). They should also describe the situation of agents in terms of posture (BOB isPointing MUG), affordance (BOB canReach MUG), and motion or action (BOB isMovingToward MUG).

We previously described how conceptual perspective taking is important to understand humans' utterances and to assess humans' intentions. To build and maintain a separate and consistent mental state for each agent, a Mental State Management component is needed. Each mental state consists of a set of facts related to the environment, like those generated by the Situation Assessment component, but also of facts concerning what the agent knows about other agents' knowledge of the goal, the plan and the tasks included in the plan, and about the agents' capacities.

The architecture also requires a component dealing with Communication for Joint Actions. Based on the information concerning the agents' mental states, this component should provide situated dialogue and perspective-taking skills when discussing with humans or sharing and negotiating a plan. It should also allow the robot to produce human-like non-verbal signals and to correctly recognize and interpret the signals of its human partners.

The architecture should also involve a component to determine the robot's current goal: the Intention Prediction component. This component should first be able to recognize humans' intentions, based on the estimation of their mental states and their communicative feedback. It should then estimate whether the robot can and should help the humans, and choose the most appropriate goal to pursue. During the execution of the chosen goal, this component should estimate whether the humans involved in the goal are still committed to it.

Once a goal is chosen, a Shared Plan Elaboration component allows the robot to agree on a shared plan with its partners. This component should contain a high-level task planner, like HATP, capable of computing human-aware plans that consider social costs and other agents' mental states. It should also allow the robot to share and negotiate a plan with humans.

Finally, the architecture should contain a component, which we call Shared Plan Achievement, that ensures the smooth execution of the shared plan. This component should consider other agents' mental states in order to coordinate with them and communicate information when needed. It sends requests to the Action Achievement component to perform robot actions, and gathers information from the Human Actions Monitoring component, which interprets humans' movements and recognizes their actions. The Action Achievement component should execute the robot's actions in a legible, safe and acceptable way and, when those actions are joint actions (e.g. handovers), ensure proper coordination with the human partners.
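To summarize the dataflow between these components, the skeleton below (our sketch; every component interface is an assumption, and each component would hide substantial machinery) wires them into a single perception-decision-action loop following Fig. 1.

```python
# A minimal wiring sketch of the components of Fig. 1. Component internals
# are stubbed; only the dataflow between them follows the text above.

class CognitiveArchitecture:
    def __init__(self, situation, mental_states, intention, planner,
                 executive, action, human_monitor, communication):
        self.situation = situation          # Situation Assessment
        self.mental_states = mental_states  # Mental State Management
        self.intention = intention          # Intention Prediction
        self.planner = planner              # Shared Plan Elaboration
        self.executive = executive          # Shared Plan Achievement
        self.action = action                # Action Achievement
        self.human_monitor = human_monitor  # Human Actions Monitoring
        self.communication = communication  # Communication for Joint Actions

    def tick(self, sensor_data):
        # Perceive: geometric data -> symbolic facts -> per-agent beliefs.
        world = self.situation.assess(sensor_data)
        beliefs = self.mental_states.update(world)
        # Decide: recognize intentions, pick a goal, elaborate/negotiate a plan.
        goal = self.intention.current_goal(beliefs)
        if goal and not self.executive.has_plan(goal):
            plan = self.planner.elaborate(goal, beliefs)
            self.communication.negotiate(plan)
            self.executive.adopt(plan)
        # Act: execute robot steps, monitor human steps, repair divergent beliefs.
        human_action = self.human_monitor.interpret(world)
        for step in self.executive.next_steps(world, beliefs, human_action):
            if step.agent == "robot":
                self.action.execute(step)
            else:
                self.communication.inform_if_needed(step, beliefs)
```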
V. CONCLUSION

In this paper, we briefly reviewed a set of skills that we consider essential for joint action. We then discussed how to apply these skills to HRI and how to integrate them into a cognitive architecture. However, in the current robotics context, we are still far from a complete cognitive architecture able to support fluid and friendly human-robot joint actions. Each of these skills can be more or less sophisticated, and each of them is a research topic in itself. The next step should be to design and implement, with the robotics community, a complete cognitive architecture inspired by the social sciences and neuroscience but adapted to the capacities of a robot and to the behavior of humans when they interact with robots.

REFERENCES

[1] A. Clodic, R. Alami, and R. Chatila, "Key elements for human-robot joint action," Frontiers in Artificial Intelligence and Applications, vol. 273.
[2] M. Tomasello, M. Carpenter, J. Call, T. Behne, and H. Moll, "Understanding and sharing intentions: The origins of cultural cognition," Behavioral and Brain Sciences, vol. 28, no. 5.
[3] M. E. Bratman, "Shared intention," Ethics.
[4] D. G. Pruitt, Negotiation Behavior. Academic Press.
[5] H. H. Clark, R. Schreuder, and S. Buttrick, "Common ground and the understanding of demonstrative reference," Journal of Verbal Learning and Verbal Behavior, vol. 22, no. 2.
[6] N. Sebanz, H. Bekkering, and G. Knoblich, "Joint action: bodies and minds moving together," Trends in Cognitive Sciences, vol. 10, no. 2.
[7] E. Pacherie, "The phenomenology of joint action: Self-agency vs. joint-agency," in Joint Attention: New Developments.
[8] S. Baron-Cohen, A. M. Leslie, and U. Frith, "Does the autistic child have a theory of mind?" Cognition, vol. 21, no. 1.
[9] G. Knoblich, S. Butterfill, and N. Sebanz, "Psychological research on joint action: theory and data," Psychology of Learning and Motivation: Advances in Research and Theory, vol. 54, p. 59.
[10] J. J. Gibson, "The theory of affordances," Hilldale, USA.
[11] C. Vesper, S. Butterfill, G. Knoblich, and N. Sebanz, "A minimal architecture for joint action," Neural Networks, vol. 23, no. 8.
[12] S. Lemaignan, R. Ros, E. A. Sisbot, R. Alami, and M. Beetz, "Grounding the interaction: Anchoring situated discourse in everyday human-robot interaction," International Journal of Social Robotics, vol. 4, no. 2.
[13] G. Milliez, M. Warnier, A. Clodic, and R. Alami, "A framework for endowing an interactive robot with reasoning capabilities about perspective-taking and belief management," in Int. Symp. on Robot and Human Interactive Communication, IEEE.
[14] E. Ferreira, G. Milliez, F. Lefèvre, and R. Alami, "Users' belief awareness in reinforcement learning-based situated human-robot dialogue management," in IWSDS, 2015.
[15] M. Ramírez and H. Geffner, "Plan recognition as planning," in Proceedings of the 21st International Joint Conference on Artificial Intelligence, Morgan Kaufmann Publishers Inc.
[16] H. H. Bui, "A general model for online probabilistic plan recognition," in IJCAI, vol. 3.
[17] P. Singla and R. J. Mooney, "Abductive Markov logic for plan recognition," in AAAI.
[18] C. Breazeal, J. Gray, and M. Berlin, "An embodied cognition approach to mindreading skills for socially intelligent robots," The International Journal of Robotics Research, vol. 28, no. 5.
[19] C. L. Baker and J. B. Tenenbaum, "Modeling human plan recognition using Bayesian theory of mind," in Plan, Activity, and Intent Recognition: Theory and Practice.
[20] B. J. Grosz and C. L. Sidner, "Plans for discourse," tech. rep., DTIC Document.
[21] R. Lallement, L. De Silva, and R. Alami, "HATP: An HTN planner for robotics," arXiv preprint.
[22] G. Milliez, R. Lallement, M. Fiore, and R. Alami, "Using human knowledge awareness to adapt collaborative plan generation, explanation and monitoring," in ACM/IEEE International Conference on Human-Robot Interaction, HRI'16, New Zealand, March 7-10.
[23] F. Warneken, F. Chen, and M. Tomasello, "Cooperative activities in young children and chimpanzees," Child Development.
[24] F. Warneken and M. Tomasello, "Helping and cooperation at 14 months of age," Infancy, vol. 11, no. 3.
[25] A. Mohseni-Kabir, C. Rich, S. Chernova, C. L. Sidner, and D. Miller, "Interactive hierarchical task learning from a single demonstration," in Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI'15, ACM.
[26] M. Petit et al., "The coordinating role of language in real-time multimodal learning of cooperative tasks," IEEE Transactions on Autonomous Mental Development, vol. 5, pp. 3-17.
[27] E. Karpas, S. J. Levine, P. Yu, and B. C. Williams, "Robust execution of plans for human-robot teams," in Twenty-Fifth International Conference on Automated Planning and Scheduling.
[28] J. Shah, J. Wiken, B. Williams, and C. Breazeal, "Improved human-robot team performance using Chaski, a human-inspired plan execution system," in International Conference on Human-Robot Interaction.
[29] M. Fiore, A. Clodic, and R. Alami, "On planning and task achievement modalities for human-robot collaboration," in The 2014 International Symposium on Experimental Robotics.
[30] S. Devin and R. Alami, "An implemented theory of mind to improve human-robot shared plans execution," in ACM/IEEE International Conference on Human-Robot Interaction, HRI'16, New Zealand, March 7-10.
[31] E. A. Sisbot and R. Alami, "A human-aware manipulation planner," IEEE Transactions on Robotics, vol. 28, no. 5.
[32] Y. Demiris, "Prediction of intent in robotics and multi-agent systems," Cognitive Processing, vol. 8, no. 3.
[33] M. Fiore, H. Khambhaita, G. Milliez, and R. Alami, "An adaptive and proactive human-aware robot guide," in Social Robotics, Springer International Publishing.
[34] A. K. Pandey, M. Ali, and R. Alami, "Towards a task-aware proactive sociable robot based on multi-state perspective-taking," International Journal of Social Robotics, vol. 5, no. 2.
[35] C. Breazeal, C. D. Kidd, A. L. Thomaz, G. Hoffman, and M. Berlin, "Effects of nonverbal communication on efficiency and robustness in human-robot teamwork," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), IEEE.
[36] J.-D. Boucher, U. Pattacini, A. Lelong, G. Bailly, F. Elisei, S. Fagel, P. F. Dominey, and J. Ventre-Dominey, "I reach faster when I see you look: gaze effects in human-human and human-robot face-to-face cooperation," Frontiers in Neurorobotics, vol. 6.
[37] R. Alami, R. Chatila, S. Fleury, M. Ghallab, and F. Ingrand, "An architecture for autonomy," The International Journal of Robotics Research, vol. 17, no. 4.
[38] R. Alami, M. Warnier, J. Guitton, S. Lemaignan, and E. A. Sisbot, "When the robot considers the human...," in Proceedings of the 15th International Symposium on Robotics Research.
[39] R. Alami, "On human models for collaborative robots," in Collaboration Technologies and Systems (CTS), 2013 International Conference on, IEEE, 2013.


More information

Investigation of Navigating Mobile Agents in Simulation Environments

Investigation of Navigating Mobile Agents in Simulation Environments Investigation of Navigating Mobile Agents in Simulation Environments Theses of the Doctoral Dissertation Richárd Szabó Department of Software Technology and Methodology Faculty of Informatics Loránd Eötvös

More information

Interactive Robot Learning of Gestures, Language and Affordances

Interactive Robot Learning of Gestures, Language and Affordances GLU 217 International Workshop on Grounding Language Understanding 25 August 217, Stockholm, Sweden Interactive Robot Learning of Gestures, Language and Affordances Giovanni Saponaro 1, Lorenzo Jamone

More information

Task-Based Dialog Interactions of the CoBot Service Robots

Task-Based Dialog Interactions of the CoBot Service Robots Task-Based Dialog Interactions of the CoBot Service Robots Manuela Veloso, Vittorio Perera, Stephanie Rosenthal Computer Science Department Carnegie Mellon University Thanks to Joydeep Biswas, Brian Coltin,

More information

Unit 1: Introduction to Autonomous Robotics

Unit 1: Introduction to Autonomous Robotics Unit 1: Introduction to Autonomous Robotics Computer Science 4766/6778 Department of Computer Science Memorial University of Newfoundland January 16, 2009 COMP 4766/6778 (MUN) Course Introduction January

More information

Neural Models for Multi-Sensor Integration in Robotics

Neural Models for Multi-Sensor Integration in Robotics Department of Informatics Intelligent Robotics WS 2016/17 Neural Models for Multi-Sensor Integration in Robotics Josip Josifovski 4josifov@informatik.uni-hamburg.de Outline Multi-sensor Integration: Neurally

More information

Human-Robot Interaction: Tackling the AI Challenges

Human-Robot Interaction: Tackling the AI Challenges Human-Robot Interaction: Tackling the AI Challenges Séverin Lemaignan, Mathieu Warnier, E. Akin Sisbot, Rachid Alami CNRS, LAAS, 7 avenue du Colonel Roche, F-31400 Toulouse, France Univ de Toulouse, LAAS,

More information

Using Computational Cognitive Models to Build Better Human-Robot Interaction. Cognitively enhanced intelligent systems

Using Computational Cognitive Models to Build Better Human-Robot Interaction. Cognitively enhanced intelligent systems Using Computational Cognitive Models to Build Better Human-Robot Interaction Alan C. Schultz Naval Research Laboratory Washington, DC Introduction We propose an approach for creating more cognitively capable

More information

Real-time Adaptive Robot Motion Planning in Unknown and Unpredictable Environments

Real-time Adaptive Robot Motion Planning in Unknown and Unpredictable Environments Real-time Adaptive Robot Motion Planning in Unknown and Unpredictable Environments IMI Lab, Dept. of Computer Science University of North Carolina Charlotte Outline Problem and Context Basic RAMP Framework

More information

A CYBER PHYSICAL SYSTEMS APPROACH FOR ROBOTIC SYSTEMS DESIGN

A CYBER PHYSICAL SYSTEMS APPROACH FOR ROBOTIC SYSTEMS DESIGN Proceedings of the Annual Symposium of the Institute of Solid Mechanics and Session of the Commission of Acoustics, SISOM 2015 Bucharest 21-22 May A CYBER PHYSICAL SYSTEMS APPROACH FOR ROBOTIC SYSTEMS

More information

Autonomous Task Execution of a Humanoid Robot using a Cognitive Model

Autonomous Task Execution of a Humanoid Robot using a Cognitive Model Autonomous Task Execution of a Humanoid Robot using a Cognitive Model KangGeon Kim, Ji-Yong Lee, Dongkyu Choi, Jung-Min Park and Bum-Jae You Abstract These days, there are many studies on cognitive architectures,

More information

Development of an Interactive Humanoid Robot Robovie - An interdisciplinary research approach between cognitive science and robotics -

Development of an Interactive Humanoid Robot Robovie - An interdisciplinary research approach between cognitive science and robotics - Development of an Interactive Humanoid Robot Robovie - An interdisciplinary research approach between cognitive science and robotics - Hiroshi Ishiguro 1,2, Tetsuo Ono 1, Michita Imai 1, Takayuki Kanda

More information

HUMAN-ROBOT COLLABORATION TNO, THE NETHERLANDS. 6 th SAF RA Symposium Sustainable Safety 2030 June 14, 2018 Mr. Johan van Middelaar

HUMAN-ROBOT COLLABORATION TNO, THE NETHERLANDS. 6 th SAF RA Symposium Sustainable Safety 2030 June 14, 2018 Mr. Johan van Middelaar HUMAN-ROBOT COLLABORATION TNO, THE NETHERLANDS 6 th SAF RA Symposium Sustainable Safety 2030 June 14, 2018 Mr. Johan van Middelaar CONTENTS TNO & Robotics Robots and workplace safety: Human-Robot Collaboration,

More information