Task-Based Dialog Interactions of the CoBot Service Robots

Manuela Veloso, Vittorio Perera, Stephanie Rosenthal
Computer Science Department, Carnegie Mellon University

Thanks to Joydeep Biswas, Brian Coltin, Tom Kollar, Daniele Nardi, Robin Soetens, and Yichao Sun.

Abstract

As we incorporate autonomous service robots into our environments, we realize the need and opportunity for them to interact with humans through speech and language. Our CoBot robots incorporate such natural interaction within their task-oriented behaviors. Our robots combine perception, planning, execution, natural spoken interaction, and learning to accomplish a variety of service tasks. We believe the combination of these multiple features to be well tuned with the varied interests of this JGC60 Festschrift celebration, and are very pleased to present a brief overview of the dialog interactions in particular.

1 Introduction

These days, in the Gates-Hillman building, it is common to see a CoBot robot roaming around executing requested service tasks (see Figure 1). Our research and continuous indoor deployment of the CoBot robots in multi-floor office-style buildings (Biswas and Veloso, 2013) provides multiple contributions, including: robust real-time autonomous localization (Biswas et al., 2011), based on WiFi data (Biswas and Veloso, 2010) and on depth camera information (Biswas and Veloso, 2012); symbiotic autonomy, in which the deployed robots overcome their perceptual, cognitive, and actuation limitations by proactively asking for help from humans (Rosenthal et al., 2010; Rosenthal et al., 2011b) and, in ongoing experiments, from the web (Kollar et al., 2012; Samadi et al., 2012) and from other robots (Aguero and Veloso, 2012; Hristoskova et al., 2013); human-centered planning, in which models of humans are explicitly used in robot task and path planning (Rosenthal et al., 2011a); semi-autonomous telepresence, enabling the combination of rich remote visual and motion control with autonomous robot localization and navigation (Coltin et al., 2011a); and web-based user task selection and information interfaces, also as input to creative multi-robot task scheduling and execution (Coltin et al., 2011b).

[Figure 1: Two of the CoBot robots, (a) CoBot-1 and (b) CoBot-2. The hardware was designed and built by Michael Licitra.]

Our robots purposefully include a modest variety of sensing and computing devices, including the Microsoft Kinect depth camera, vision cameras for telepresence and interaction, and a small Hokuyo LIDAR for obstacle avoidance and localization comparison studies (no longer needed or present in our most recent CoBot-4). Finally, and of particular relevance to this paper, the robots include a touch-screen and speech-enabled tablet, with its microphones and speakers, to support the dialog-based human-robot interaction.

The CoBot robots perform multiple task types:

- A single-destination task, in which the user asks the robot to go to a specific location (the Go-To-Room task) and, in addition, possibly to deliver a given spoken message (the Deliver-Message task);
- An item-transport task (the Transport task), in which the user requests that the robot retrieve an item from a location and deliver it to a destination; this task also serves to accompany a person between locations;
- A person-guiding task (the Escort task), in which the robot waits for a person at the elevator hall on the floor of the destination location and guides the person to the location, and the Visitor Companion task, in which the robot accompanies an all-day visitor;
- A semi-autonomous Telepresence task, in which a user remotely selects destinations on a displayed robot view or map, to which CoBot autonomously navigates; the robot's motion and camera are controlled through a rich web interface (Coltin et al., 2011a).

The above tasks are equivalent from a navigational point of view, as they are achieved by the same navigation planner generating plans to reach destinations in the building and to pick up or deliver items at locations (see Figure 2).

[Figure 2: Snapshots of CoBot's execution: (a) navigation; (b) object delivery.]

For the performance of service tasks over their long lifetime, the robots need to interact with humans in their environments. We have researched and developed multiple modalities for the human-robot interaction, namely through a touch-screen based interface, through restricted-vocabulary speech interaction via a dedicated headset, and through a general microphone-based, open-environment, and unrestricted-vocabulary input. All the interactions are assumed to be related to the tasks the robot can perform. We briefly present three aspects related to language and speech: the interaction within the Visitor Companion task, the learning of grounding locations through speech interaction with human task requesters, and our ongoing work on the processing of complex command sentences.

2 Companion for All-Day Visitors

The All-Day Visitor-Companion task, developed as CoBot's first complete task (Rosenthal, 2012), offers an interesting opportunity for the robot to naturally interact with a human, as a visitor has no or limited knowledge of the overall environment. The robot therefore has the chance to inform the visitor about locations, people, and research, producing an interaction useful to the human and consisting of two main modules:

- the core interaction, to (i) provide information about the day's schedule and visit, (ii) inform the visitor about the next appointment, and (iii) escort the visitor to meeting locations;
- the elective interaction, to (i) offer extra unsolicited information, (ii) respond to requests from the visitor, and (iii) handle schedule delays or changes.

The core interaction algorithm is a predefined parameterized policy, which gets instantiated with the specific given schedule and hosts. The policy includes states and interaction actions, where the states are detected by the robot's localization and its internal timing, and the actions consist of informative acts. The robot proceeds by executing the interaction per the instantiated policy (Rosenthal et al., 2010). The elective interaction is also a parameterized policy, but gets instantiated from spoken input from the visitor. This All-Day Visitor-Companion task uses a predefined command-like language. The input affects both the underlying task policy and future interactions with the visitor.
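As a hedged illustration of such a parameterized policy, the Python sketch below instantiates a schedule and maps the robot's state (its internal time and localized position) to an informative act. The Appointment and CorePolicy names, and the specific rules, are our own simplification, not CoBot's implementation.

```python
# Illustrative sketch of a parameterized core-interaction policy, assuming a
# simple (state -> informative act) mapping instantiated from a schedule.
# Appointment, CorePolicy, and the rules are hypothetical, not CoBot's API.
from dataclasses import dataclass
from datetime import time

@dataclass
class Appointment:
    start: time
    end: time
    host: str
    room: str

class CorePolicy:
    def __init__(self, schedule):
        # Instantiate the policy with the visitor's given schedule and hosts.
        self.schedule = sorted(schedule, key=lambda a: a.start)

    def action(self, now, location):
        """Map the state (internal time, localized position) to an informative act."""
        for appt in self.schedule:
            if now < appt.start and location != appt.room:
                return f"Your next meeting is with {appt.host} in room {appt.room}; follow me."
            if appt.start <= now <= appt.end and location == appt.room:
                return f"We have arrived; your meeting with {appt.host} runs until {appt.end}."
        return "Your schedule for today is complete."

policy = CorePolicy([Appointment(time(9, 0), time(10, 0), "Pat Smith", "3123")])
print(policy.action(time(8, 50), "3412"))  # announces the upcoming meeting
```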
A delay, in particular, is detected when the robot does not receive, by the expected meeting end time, an indication that the visitor is ready to continue the visit. The robot could take the initiative of interrupting the meeting, though it currently does not. Further elective interactions include providing unsolicited information that matches the visitor's gathered research interests, modeled as broad keywords matched against the robot's knowledge base of locations. Finally, the visitor can initiate requests for more information about a meeting, as well as for other services such as printing papers and retrieving coffee between meetings. The robot proceeds by adding tasks to its own schedule in accordance with both the requests and the visitor's meeting schedule (Rosenthal et al., 2010).
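The interplay between elective requests, meeting timing, and delay notification can likewise be sketched as two small rules, shown below under the assumption that meetings are (start, end) datetime pairs; schedule_request and host_should_be_notified are hypothetical names, not CoBot's scheduler. Under these rules, a coffee request made at 8:50 AM is executed during the 9:00-10:00 meeting.

```python
# Illustrative sketch: slot an elective visitor request (e.g., fetching coffee)
# into a meeting window so the robot is back before the meeting ends, and
# notify the host when the visitor overruns a meeting. All names are assumed.
from datetime import datetime, timedelta

def schedule_request(meetings, task_minutes, now):
    """Return the first time at which the elective task fits before a meeting ends."""
    for start, end in meetings:
        window_start = max(now, start)
        if end - window_start >= timedelta(minutes=task_minutes):
            return window_start
    return None  # no window: the request has to wait

def host_should_be_notified(meeting_end, visitor_ready, now,
                            grace=timedelta(minutes=5)):
    """Delay rule: no 'ready' signal by the expected meeting end time (plus grace)."""
    return (not visitor_ready) and now > meeting_end + grace

day = datetime(2013, 5, 6)  # arbitrary illustrative date
meetings = [(day.replace(hour=9), day.replace(hour=10))]
print(schedule_request(meetings, task_minutes=20, now=day.replace(hour=8, minute=50)))
```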

[Figure 3: Example trace of the All-Day Visitor-Companion path. Snapshots: (a) CoBot leads Alice to her first meeting; (b) CoBot requests assistance from a staff member to prepare a cup of coffee; (c) Alice graciously accepts her cup of coffee during her meeting; (d) CoBot notifies Alice of a lab of interest on their way to a meeting.]

In this task, we have successfully used a headset for the All-Day Visitor, as the voice remains the same for the duration of the visit. In our tests with state-of-the-art speech recognition systems, we also use a brief initial training phase for the visitor, with a limited vocabulary confined to the theme of the visit. Table 1 shows an illustrative trace of CoBot's interaction with an All-Day Visitor, as also shown in Figure 3. This trace represents one of many possible sequences of events and interactions. In general, both the visitor and CoBot can initiate dialog. The robot cannot predict most of the visitor's needs, so the visitor makes requests explicitly. However, CoBot maintains state about the visitor (Rosenthal et al., 2010). Using the dialog history, CoBot can determine when it is appropriate to offer an amenity such as coffee, if the visitor has not requested it recently. If the visitor makes a request that CoBot cannot answer, CoBot has the ability to perform Internet searches and to request help from other humans through dialog or e-mail (Samadi et al., 2012; Coltin et al., 2011b).

Initial State - CoBot and the visitor, Alice, start in the initial location, Room 3412, and both know the meeting schedule. Alice spends 5 minutes training CoBot's speech recognizer. CoBot explains that it will help her get to each meeting. Visitor: Alice; Interests: Robotics; Schedule: 9:00-10:00 AM, Pat Smith, Room 3123; 10:00-11:00 AM, Chris Jones; 11:00 AM-12:00 PM, Dana Adams.
8:50 AM - Alice asks CoBot for a cup of coffee. Elective: Visitor Request - CoBot plans the subtask of getting coffee after navigating Alice in time to her meeting in Room 3123 (Figure 3a).
9:00 AM - CoBot and Alice arrive at Room 3123. Elective: Task Execution - CoBot executes the plan to get coffee while Alice is in the meeting. It dialogs with a human for help to get coffee (Figure 3b).
9:55 AM - Alice tells CoBot she is ready. Elective: Task Execution - CoBot announces it has the coffee and the visitor takes it. Core: Schedule - CoBot announces the next meeting room and navigates to it.
10:00 AM - Alice asks about host Chris Jones. Elective: Visitor Request - CoBot stops and displays the host's website.
10:05 AM - CoBot notifies Alice about a robot lab. Elective: Unsolicited Notification - Based on Alice's interests (Robotics), CoBot describes a robot lab on the way (Figure 3d). CoBot and Alice arrive in time to the next meeting room.
11:00 AM - CoBot e-mails Dana that Alice is late. Elective: Task Delay - CoBot reached a timeout waiting for Alice. CoBot e-mails Dana that Alice is late. When she is ready, CoBot navigates to Dana's office.

Table 1: Illustrative human-robot interaction.

We have also researched having the robot generate an interesting dialog during its all-day interaction with the visitor. We have, in particular, investigated two different techniques to keep the interaction engaging by reducing repetitiveness. First, rather than repeating the same notifications about the same locations throughout the day, CoBot increases the level of detail of the information given at specific locations, as a function of the number of times it visits the locations. Second, our future work in CoBot includes probabilistically choosing an utterance from a set of synonyms so that similar notifications do not sound repetitious, a technique we have used previously for the robot soccer CMCast commentators (Veloso et al., 2008). As there are clearly many ways for CoBot to respond to a request, offer help, and ask for help, it can vary the utterances to ensure that the dialog does not become boring and predictable, by weighing the responses by multiple factors, e.g., the frequency and recency of the utterance, and possibly feedback from the visitor.

3 Learning from Spoken Interaction

In speech-based interaction, users solicit tasks and interact with the robot through open speech. In addition to the clear speech understanding challenges, the main issue is to extract a task request and its arguments from the spoken language. To attack such a challenge, the robot dialogs with the human and is capable of learning groundings of language to action types and arguments, namely locations of language references. Such groundings are learned and accumulated in a knowledge base. Figure 4 shows a sample of the learned knowledge base, which contains the learned groundings (currently thousands) for language references to actions and to arguments (Kollar et al., 2013). Not shown in the snapshot are the learned object groundings acquired by accessing the web to determine the most probable location of an object requested to be transported (Samadi et al., 2012).

[Figure 4: A sample snapshot of the knowledge base learned from interaction with humans.]

The knowledge base is used by CoBot to infer the action and the parameters of the action from the spoken dialog. When the language cannot be resolved by using the learned facts, the robot engages in further dialog or accesses the web, and learns new facts. Figure 5 shows snapshots of the spoken interaction with the robot. The CoBot Task Executor executes the actions to ask for help in performing transport tasks, solicited through speech-based requests. In Figure 5(a), the user requests a transport task of an object to a specific location. In Figures 5(b) and 5(c), after speech understanding, access to its knowledge base for action and parameter grounding, and actions to ask for help for parameters, including confirmation from the user of the robot's understanding, the robot has filled in all the parameters of its action primitives, and proceeds to plan and execute the route to its pickup location. In Figures 5(d) and 5(e), the robot asks for help for actuation to put the object in its basket and asks for help for the condition of detecting that the object is in the basket; and in Figure 5(f), the robot autonomously navigates to the destination location and delivers the object, asking for help for the actuation of removing the object from its basket.

[Figure 5: Panels (a)-(f). The user requests a task through spoken language, and the action parameters are identified and processed by the Task Planner and Task Executor with the corresponding Ask Help actions.]

The speech-based interaction allows users to request tasks, and allows any person to interrupt the robot and have the robot answer common questions about its status and capabilities. Moreover, the speech interface enables the scheduling of a task without the need to access a computing device, and provides the ability to dialog to clarify the spoken request.
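To illustrate this resolve-or-ask loop, the following minimal sketch grounds a spoken location reference against learned facts, asks a clarifying question when grounding fails, and accumulates the answer as a new fact. The dictionary-based knowledge base and the ground_location function are our assumptions for illustration, not the actual structure shown in Figure 4.

```python
# Illustrative sketch: ground a spoken location reference against learned facts,
# falling back to dialog when no fact matches, and learning the new grounding.
# The knowledge-base layout and room numbers here are assumptions.
knowledge_base = {
    "the lab": "GHC 7412",
    "the kitchen": "GHC 7602",
}

def ground_location(reference, ask_user):
    """Resolve a language reference to a room; on failure, dialog and learn."""
    room = knowledge_base.get(reference.lower())
    if room is None:
        room = ask_user(f"I do not know where '{reference}' is. Which room is it?")
        knowledge_base[reference.lower()] = room  # accumulate the new fact
    return room

# The first call triggers a clarification dialog; the second reuses the learned fact.
print(ground_location("Jaime's office", lambda q: "6271"))
print(ground_location("Jaime's office", lambda q: "unused"))
```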

4 Requests with Complex Sentences

As discussed, CoBot is able to process task requests as single spoken commands, e.g., "Go to the lab" or "Bring me a soda from the kitchen." Moreover, the robot can engage in a dialog with the user if some part of the command is not clear or unknown, as illustrated in Figure 6.

USER: Take me to Jaime's office.
COBOT: I did not understand, should I go somewhere?
USER: Yes.
COBOT: Ok, where should I go?
USER: To Jaime Carbonell's office.
COBOT: Where can I find it? Please spell the room number.
USER: 6271.
COBOT: I am going to room 6271. Is it correct?
USER: Yes.

Figure 6: Direct-request task dialog.

CoBot is able to understand single commands, but we understand that spoken interactions about tasks and requests may be more complex. A user may want to request that the robot perform a sequence of tasks, a user may not be sure about some part of the command, or the user may want to give different options to the robot. For instance, a user could ask "Look for Professor Carbonell in his office or in the meeting room." We are pursuing research to handle this kind of complex command, concretely as sentences with conjunction and disjunction.

To understand a spoken command, our goal is to map input sentences to one or more semantic frames representing the robot's task. When a command is given to the robot, it is analyzed using a free-form speech recognizer. The output of the ASR is a set of strings representing possible transcriptions of the command. To recover a frame from these strings, the algorithm needs to parse each sentence and extract an action and a variable number of arguments. For actions, we are interested in the referring expressions for tasks that the robot can execute, such as "go to" or "bring me"; the arguments can be of three different types: object, person, and location. An example of a parsed sentence is shown in Figure 7. The model to obtain such parses was trained using a conditional random field (CRF); the features used to train this model are binary and include both the words and the parts of speech for the current, previous, and next word.

[Bring me]_Action [a cup of coffee]_Object [from the kitchen]_Location

Figure 7: Parsed sentence.

This model enables the algorithm to recover the role of each chunk in a sentence, but it also needs to recover the frame structure and identify how many frames there are in a sentence. To do this, we trained a second model, still using a CRF, where in addition to the features used to train the first model, we used the labels from the first parse. The goal of this second model is to extract a common root (RR) in a sentence and a set of conjunctions (∧) or disjunctions (∨). An example of the result of this second parse is shown in Figure 8.

[Look for Professor Carbonell]_RR [in his office] ∨ [in the meeting room]
[Bring me to the lab] ∧ [then go back to my office]

Figure 8: Parsed disjunctive and conjunctive sentences.

Once the input sentences have been parsed using both models, a partial frame is filled using the frame elements contained in the part of the sentence labeled as root. This partial frame, potentially empty, is then filled once for each ∧ or ∨ chunk. Figure 9 shows both parses for a sentence, the partial frame, and the fully filled frames.

(a) Parses:
[Bring me]_Action [a cup of coffee]_Object or [some tea]_Object
[Bring me]_RR [a cup of coffee] ∨ [some tea]
(b) Partial frame: Frame: Bringing; Object: ...
(c) Frame one: Frame: Bringing; Object: cup of coffee
(d) Frame two: Frame: Bringing; Object: some tea

Figure 9: Examples of the steps to parse and extract task parameters of complex sentences.

Once the full frames are recovered, the robot can execute the required task or, if more options are available, pick one according to its scheduler. If, after analyzing all the conjuncts or disjuncts, no frame is entirely filled, the robot can still engage in a dialog similar to the one illustrated in Figure 6.
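The final frame-filling step lends itself to a compact sketch: starting from the partial frame contributed by the root (RR) chunk, one frame is completed per conjunct or disjunct, as in Figure 9. The chunk and frame representations below are hypothetical simplifications; the chunk labels themselves would come from the two CRF models described above.

```python
# Illustrative sketch of frame filling from the second-stage parse: the partial
# frame comes from the RR (root) chunk and is completed once per conjunct or
# disjunct. Chunks are (text, role) pairs; this is not the trained CRF model.
def fill_frames(root_chunks, branch_chunks_list):
    """Build one frame per conjunct/disjunct, starting from the root's partial frame."""
    partial = {role: text for text, role in root_chunks}     # e.g., {"Action": "Bring me"}
    frames = []
    for branch in branch_chunks_list:
        frame = dict(partial)                                # copy the partial frame
        frame.update({role: text for text, role in branch})  # fill the remaining slots
        frames.append(frame)
    return frames

# "Bring me a cup of coffee or some tea" -> two Bringing frames, as in Figure 9.
root = [("Bring me", "Action")]
branches = [[("cup of coffee", "Object")], [("some tea", "Object")]]
print(fill_frames(root, branches))
# [{'Action': 'Bring me', 'Object': 'cup of coffee'},
#  {'Action': 'Bring me', 'Object': 'some tea'}]
```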

5 Conclusion

Our CoBot robots include task-based dialog to allow users to request tasks and have general spoken interactions with people. We have researched different levels of vocabulary, from menu-based to free speech interpreted within a task. The robot is capable of learning mappings from language to its known task components, such as locations, actions, and objects. We have also devised a solution to process complex sentences composed of conjunctions and disjunctions. Our goal is to continue enriching the understanding of spoken interaction, with the next steps being to process conditional commands, e.g., "Bring me a cup of coffee, only if it's freshly brewed," and time expressions, e.g., "Go to the lab and wait there until someone arrives."

Our overall approach to dialog understanding and generation is strongly based on the fact that the human-robot interaction and the underlying dialog are bounded by the known predefined capabilities of the robot, the grounded physical information known and perceived, and the actions the robot can perform. Such a task-based framework enables rich spoken interaction, bounded by the robot's finite built-in or learned task parameters.

References

C. Aguero and M. Veloso. 2012. Transparent multi-robot communication exchange for executing robot behaviors. In International Conference on Practical Applications of Agents and Multi-Agent Systems (PAAMS 2012).

J. Biswas and M. Veloso. 2010. WiFi localization and navigation for autonomous indoor mobile robots. In Robotics and Automation (ICRA), 2010 IEEE International Conference on. IEEE.

J. Biswas and M. Veloso. 2012. Depth camera based indoor mobile robot localization and navigation. In Proceedings of ICRA'12, the IEEE International Conference on Robotics and Automation.

J. Biswas and M. Veloso. 2013. Localization and navigation of the CoBots over long-term deployments. International Journal of Robotics Research, 32(14), December.

J. Biswas, B. Coltin, and M. Veloso. 2011. Corrective gradient refinement for mobile robot localization. In Intelligent Robots and Systems (IROS), 2011 IEEE International Conference on. IEEE.

B. Coltin, J. Biswas, D. Pomerleau, and M. Veloso. 2011a. Effective semi-autonomous telepresence. In Proceedings of the RoboCup Symposium, July.

B. Coltin, M. Veloso, and R. Ventura. 2011b. Dynamic user task scheduling for mobile robots. In Workshops at the Twenty-Fifth AAAI Conference on Artificial Intelligence.

A. Hristoskova, C. Aguero, M. Veloso, and F. De Turck. 2013. Heterogeneous context-aware robots providing a personalized building tour. International Journal of Advanced Robotic Systems, January.

T. Kollar, M. Samadi, and M. Veloso. 2012. Enabling robots to find and fetch objects by querying the web. In Proceedings of AAMAS'12, the Eleventh International Joint Conference on Autonomous Agents and Multi-Agent Systems.

T. Kollar, V. Perera, D. Nardi, and M. Veloso. 2013. Learning environmental knowledge from task-based human-robot dialog. In Proceedings of ICRA'13, the IEEE International Conference on Robotics and Automation, June.

S. Rosenthal, J. Biswas, and M. Veloso. 2010. An effective personal mobile robot agent through symbiotic human-robot interaction. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems, volume 1.

S. Rosenthal, M. Veloso, and A. K. Dey. 2011a. Is someone in this office available to help me? Journal of Intelligent & Robotic Systems.

S. Rosenthal, M. Veloso, and A. K. Dey. 2011b. Task behavior and interaction planning for a mobile service robot that occasionally requires help. In Workshops at the Twenty-Fifth AAAI Conference on Artificial Intelligence.

S. Rosenthal. 2012. Human-Centered Planning for Effective Task Autonomy. Ph.D. thesis, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, May. Available as technical report CMU-CS.

M. Samadi, T. Kollar, and M. Veloso. 2012. Using the web to interactively learn to find objects. In Proceedings of the Twenty-Sixth Conference on Artificial Intelligence (AAAI-12), Toronto, Canada, July.

M. Veloso, N. Armstrong-Crews, S. Chernova, E. Crawford, C. McMillen, M. Roth, D. Vail, and S. Zickler. 2008. A team of humanoid game commentators. International Journal of Humanoid Robotics, 5(3).
