Teaching robots: embodied machine learning strategies for networked robotic applications


Artur Arsenio
Departamento de Engenharia Informática, Instituto Superior Técnico / Universidade Técnica de Lisboa
Institute for Human Studies and Intelligent Sciences
artur.arsenio@ist.utl.pt

Conference Topic CT 13

Abstract

A plethora of network applications exists nowadays for users to download to their mobile devices, game consoles, or personal computers, and interactivity is increasingly a must in such applications. This paper presents several interactive applications, especially suited for human-robot interaction, to be downloaded from an application server and deployed on robotic platforms. We introduce learning into robot applications, aiming to include robots increasingly and pervasively in our society, at our homes or on the move. We argue for establishing social interactions, treating robots as humans, using theories from child development as a basis for developmental learning on robots. Hence, by downloading interactive applications, and equipped with a series of children's toys, a robot can socially interact with humans, offering an increasingly satisfying experience as it learns new knowledge.

Key Words: Machine Learning, Robotics, Distributed Systems, Networked Applications, Developmental Learning

1. Introduction

Children love toys. Human caregivers often employ learning aids, such as books, educational videos, drawing boards, and musical or textured toys, to teach a child. These social interactions should be extrapolated to robots as well, for them to learn and to interact with humans [1]. With the current advent of network applications distributed over the internet (with some components running on sensors or robots, and others on network servers [2]), there is a need to download applications into robots. These applications must be automatically installed on robots, enabling them to interact with humans locally or even remotely through the internet.

1.1 Development of Biologically Inspired Interacting Applications that Learn

We humans can be seen as biological machines. However, we do not treat children as machines, i.e., automatons. Yet this view is still widely employed in industry to build robots. Building robots indeed involves the hardware setup of sensors, actuators, metal parts, cables, and processing boards, as well as the software development. Such engineering might be viewed as the robot's genotype. But equally important in a child is the phenotype: the developmental acquisition of information in a social and cultural context. Inspired by infant development, we aim at developing a robot's perceptual system through the use of learning aids, so that a robot learns about the world according to a child's developmental phases, by socially interacting with humans. The human caregiver plays a very important role in a robot's learning process (as with children), performing educational and play activities with the robot (such as drawing, painting, or playing with a toy train on a railway), facilitating the robot's perception and learning.

1.2 Networked and Interactive Applications

We aim at enculturating robots - introducing robots into our society and treating them as one of us - using child development as a metaphor for developmental learning on a robot.
In doing so, we developed a plethora of interactive applications that, once downloaded from a server platform to a robot and installed, provide humans with new levels of interactivity when communicating with the robot.

2. Machine Learning

For an autonomous robot to be capable of developing and adapting to its environment, it needs to be able to learn. The field of machine learning offers many powerful algorithms, but these require training data to operate. Infant development research suggests ways to acquire such training data from simple contexts, and to use this experience to bootstrap to more complex contexts. We need to identify situations that enable the robot to temporarily reach beyond its current perceptual abilities, giving the opportunity for development to occur [1]. This led us to create child-like learning scenarios for teaching a robot. These learning experiments are used for transmitting information to the humanoid robot Cog (see Figure 1) through interactive applications, so that it learns about objects' multiple visual and auditory representations from books, other learning aids, musical instruments, and educational activities such as drawing and painting.

Figure 1 - Cog going through many different social interactions with a human, in order to extract information and perform different tasks according to different interactive applications.

Our strategy for the development of these applications relies heavily on human-robot interactions. It is essential to have a human in the loop to introduce objects from a book to the robot (as a human caregiver does with a child). A richer, more complete human-robot communication interface results from adding other aiding tools to the robot's portfolio (which facilitate children's learning process as well). This is achieved by having an application that selectively attends to the human actuator (hand or finger). Indeed, primates have specific brain areas to process the visual appearance of the hand [3]. Inspired by human development studies, emphasis is placed on facilitating perception through the action of a human instructor. Multi-modal object properties are learned using these children's educational tools and inserted into several recognition applications, which are then applied to developmentally acquire new object representations. The goal is for a robot, employing new interactive applications, to see the world through the caregiver's eyes.

2.1 Robot Skill Augmentation through Cognitive Artifacts

A human caregiver can introduce a robot to a rich world of visual information concerning objects' visual appearance and shape. But cognitive artifacts can also be applied to improve perception over other perceptual modalities, such as auditory processing. We exploit repetition (rhythmic motion, repeated sounds) to achieve segmentation and recognition across multiple senses. Hence, we aim at detecting conditions that repeat at some roughly constant rate, where that rate is consistent with what a human can easily produce and perceive. This is not a very well defined range, but we consider anything above 10 Hz to be too fast, and anything below 0.1 Hz to be too slow. Repetitive signals in this range are considered events in our system: waving a flag is an event, but the vibration of a violin string is not an event (too fast), and neither is the daily rise and fall of the sun (too slow). Abrupt motions, such as a poking movement, which involve large variations of movement, are also used to extract percepts [1,4]. Such restrictions are related to the idea of natural kinds, where perception is based on the physical dimensions and practical interests of the observer.
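The event-rate band above translates directly into a simple periodicity test. The following is a minimal sketch (not the paper's implementation) that estimates a tracked signal's dominant frequency with an FFT and accepts it as an event only in the 0.1-10 Hz range; the frame rate and example signal are illustrative assumptions.

```python
import numpy as np

def dominant_frequency(signal, sample_rate_hz):
    """Return the strongest non-DC frequency component of a 1-D signal."""
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()              # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)
    return freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin

def is_event(signal, sample_rate_hz, low_hz=0.1, high_hz=10.0):
    """True if the signal repeats at a humanly producible/perceivable rate."""
    return low_hz <= dominant_frequency(signal, sample_rate_hz) <= high_hz

# Example: a flag waved at about 1 Hz, observed at 30 frames per second.
t = np.arange(0, 4, 1.0 / 30)                    # 4 s of tracking data
waving_x = np.sin(2 * np.pi * 1.0 * t)           # x-coordinate of the flag
print(is_event(waving_x, sample_rate_hz=30))     # True: this is an event
```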

3. Teaching Applications using Books and Toys

Learning aids are often used by human caregivers to introduce the child to a diverse set of (in)animate objects, exposing the latter to an outside world of colors, forms, shapes and contrasts that otherwise might not be available to a child (such as images of whales and cows). Since these learning aids help to expand the child's knowledge of the world, they are a potentially useful tool for introducing new informative percepts to a robot. Children's learning is often aided by audiovisual materials, and especially books, during social interactions with their mother or caregiver. Indeed, humans often paint, draw, or simply read books to children during their childhood. Books are thus a useful tool for teaching robots different object representations and for communicating properties of unknown objects to them.

3.1 Learning Book Images

A human-aided perceptual grouping application acquires informative percepts from picture books (made of different materials: fabric, cardboard, or foam; experimental results for regular books are shown in Figure 2) by tracking a periodically moving human actuator (a tapping finger), to extract the visual appearance of objects from background pages, as shown in Figure 3. It is applied as follows [5]:

1. Color segmentation of the stationary image (over a sequence of consecutive frames).
2. The human actor waves on top of the object to be segmented. The motion of skin-tone pixels is tracked over a time interval, and the energy per frequency content is determined for each trajectory point.
3. Periodic, skin-tone points are grouped together into the finger mask.
4. The target object's template is given by the union of all color regions of the stationary image which intersect the finger's trajectory.

Figure 2 - Object templates extracted from books.

Figure 3 - Human-aided perceptual grouping algorithm (stationary image and color segmentation; periodicity detection of the finger mask; intersection yields the object mask).
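As a hedged illustration of step 4 above, the sketch below computes an object template as the union of all stationary color regions touched by the finger trajectory; the region-label image and trajectory mask are assumed to come from steps 1-3, which are not reproduced here.

```python
import numpy as np

def object_template(region_labels, trajectory_mask):
    """Union of color regions intersected by the finger trajectory.

    region_labels   : (H, W) int array, one label per color region (0 = none)
    trajectory_mask : (H, W) bool array, True where the periodic finger passed
    returns         : (H, W) bool mask covering the segmented object
    """
    touched = np.unique(region_labels[trajectory_mask])
    touched = touched[touched != 0]          # drop the background label
    return np.isin(region_labels, touched)
```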

Teaching robots from books may become an interesting market application. The software required is computationally inexpensive, and it could easily be incorporated on a micro-chip. Integration of such technology with cross-modal association (which is also computationally inexpensive) is also possible, enabling further applications. Just imagine a child showing a tree in a book to a baby robot (or a Sony AIBO robot) while saying "tree". The posterior visual perception of the tree would enact the production of the sound "tree" by the robot, or listening to the repetitive sound "tree" would trigger visual search behaviors for that object.

3.2 Matching Geometric Patterns: Drawings, Paintings, Pictures...

Object descriptions may come in different formats - drawings, paintings, photos, etc. Hence, the link between an object representation in a book and real objects recognized in the surrounding world is established through object recognition. Objects are recognized using geometric hashing, a widely used recognition technique. The algorithm operates on three different sets of features: chrominance and luminance topological regions, and shape [1] (determined by an object's edges). Except for a description contained in a book, which was previously segmented, the robot had no other knowledge concerning the visual appearance or shape of such an object. Additional possibilities include linking different object descriptions in a book, such as a drawing, as demonstrated by the results presented in Figure 4. A sketch of an object contains salient features concerning its shape, and therefore there are advantages to learning, and linking, these different representations. This framework is also a useful tool for linking other object descriptions in a book, such as a photo, a painting, or a print [1].

Figure 4 - Matching objects from books to real world objects and drawings.
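Since the paper names geometric hashing but gives no listing, the following minimal 2-D sketch illustrates the general technique on bare point features: model points are hashed in similarity-invariant basis frames, and a scene votes for the model bases it is consistent with. The quantization step and point-set features are assumptions; the system described above operates on richer chrominance, luminance, and shape features.

```python
import numpy as np
from collections import defaultdict
from itertools import permutations

def basis_coords(points, p0, p1):
    """Express points in the frame where p0 maps to (0,0) and p1 to (1,0)."""
    v = p1 - p0
    rot = np.array([[v[0], v[1]], [-v[1], v[0]]]) / np.dot(v, v)
    return (points - p0) @ rot.T

def build_table(model, quant=0.25):
    """Hash every model point under every ordered basis pair."""
    table = defaultdict(list)
    for i, j in permutations(range(len(model)), 2):
        for q in basis_coords(model, model[i], model[j]):
            table[tuple(np.round(q / quant).astype(int))].append((i, j))
    return table

def strongest_vote(scene, table, quant=0.25):
    """Votes for model bases consistent with one chosen scene basis."""
    votes = defaultdict(int)
    for q in basis_coords(scene, scene[0], scene[1]):
        for basis in table[tuple(np.round(q / quant).astype(int))]:
            votes[basis] += 1
    return max(votes.values()) if votes else 0

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
table = build_table(square)
moved = square @ np.array([[0.6, -0.8], [0.8, 0.6]]) + 5.0   # rotated, shifted
print(strongest_vote(moved, table) >= len(square))           # True: recognized
```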

3.3 Toys Applications

A plethora of educational tools are widely used by educators to teach children, helping them to develop. Examples of such tools are toys (such as drawing boards), educational TV programs, and educational videos. The Baby Einstein collection includes videos that introduce infants and toddlers to colors, music, literature and art. Famous painters and their artistic creations are displayed to children in the Baby Van Gogh video from that collection. This inspired the design of learning experiments in which a robot is introduced to art using an artificial display (a computer monitor) [1]. The image of a painting by Vincent Van Gogh, Road with Cypress and Star, 1890, is displayed on a computer screen (Figure 5). Paintings are contextually different from pictures or photos, since the painter's style changes the elements in the figure considerably. Van Gogh, a post-impressionist, painted with an aggressive use of brush strokes. But individual painting elements can still be grouped together by having a human actor tap on their representation on the computer screen. Other videos, such as one of the humanoid robot Cog sawing a piece of wood, were also shown to the robot, and visual templates of objects were extracted from them. Drawing boards are also very useful for designing geometric shapes while interacting with a child.

Figure 5 - Van Gogh's Road with Cypress and Star (1890) displayed on a computer screen.

4. Educational, Learning Applications

A common pattern of early human-child interactive communication is through activities that stimulate the child's brain, such as drawing or painting. Children are able to extract information from such activities while they are being performed. This capability motivated the implementation of three parallel processes which receive input data from three different sources: from an attentional tracker, which tracks the attentional focus and is attracted to new salient stimuli; from a multi-target tracking algorithm, implemented to track multiple targets simultaneously; and from an algorithm that selectively attends to the human actuator.

4.1 Learning Hand Gestures

Standard hand gesture recognition algorithms require an annotated database of hand gestures, built off-line. Common approaches, such as Space-Time Gestures [6], rely on dynamic programming. Others [7] developed systems for children to interact with lifelike characters and play virtual instruments by classifying optical flow measurements. Other classification techniques include state machines, dynamic time warping, and Hidden Markov Models. We follow a fundamentally different approach: periodic hand trajectories are mapped into geometric descriptions of objects. Figure 6 reports an experiment in which a human repetitively draws a geometric shape on a sheet of paper with a pen. The robot learns what was drawn by matching one period of the hand gesture to a previously learned shape (here, the hand gesture is recognized as circular). Hence, the geometry of a periodic hand trajectory is recognized on-line against the geometry of objects in an object database, instead of being mapped to a database of annotated gestures.

4.2 Object Recognition from Hand Gestures

The problem of recognizing objects in a scene can be framed as the dual of the hand gesture recognition problem. Instead of using previously learned object geometries to recognize hand gestures, hand gesture trajectories are now applied to recover the geometric shape (defined by a set of lines) and appearance (given by an image template enclosing such lines) of a scene object (as seen by the robot). Visual geometries in a scene (such as circles) are recognized as such from hand gestures having the same geometry (as is the case for circular gestures). Figure 6 shows results for a set of tasks. The robot learns what was painted by matching the hand gesture to the shape defined by the ink on the paper. This algorithm is useful for identifying shapes from drawing, painting, or other educational activities.
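To make the gesture-to-shape matching idea concrete, here is an illustrative sketch in which one period of the hand trajectory is reduced to a centroid-distance signature and compared against signatures of previously learned object geometries; the descriptor and error metric are assumptions, not the paper's exact method.

```python
import numpy as np

def signature(points, n=64):
    """Resample a closed 2-D trajectory to n centroid-distance samples."""
    pts = np.asarray(points, dtype=float)
    idx = np.linspace(0, len(pts) - 1, n).astype(int)
    d = np.linalg.norm(pts[idx] - pts.mean(axis=0), axis=1)
    return d / d.mean()                      # invariant to scale

def match_gesture(gesture_period, shape_db):
    """Name of the stored shape closest to one period of the hand gesture."""
    g = signature(gesture_period)
    def error(shape):
        s = signature(shape)
        # minimum over circular shifts: invariance to the starting point
        return min(np.mean((np.roll(s, k) - g) ** 2) for k in range(len(s)))
    return min(shape_db, key=lambda name: error(shape_db[name]))

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
s = np.linspace(-1, 1, 50, endpoint=False)
square = np.concatenate([np.c_[s, -np.ones(50)], np.c_[np.ones(50), s],
                         np.c_[-s, np.ones(50)], np.c_[-np.ones(50), -s]])

# A noisy, larger circular hand gesture still matches the circle template.
rng = np.random.default_rng(0)
gesture = 3.0 * circle + rng.normal(0.0, 0.05, circle.shape)
print(match_gesture(gesture, {"circle": circle, "square": square}))  # circle
```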
4.3 Shape from Human Cues

This same framework is applied to extract object boundaries from human cues. Indeed, human manipulation provides the robot with extra perceptual information concerning objects, by actively describing (using human arm/hand/finger trajectories) object contours or the hollow parts of objects, such as a cup (Figure 6). Tactile perception of objects from the robot's grasping activities has also been actively pursued [8]. Although more precise, such techniques require hybrid position/force control of the robot's manipulator end-effector so as not to damage or break objects.

Figure 6 - Sample of experiments for object and shape recognition from hand gestures.

4.4 Functional Constraints

Hand gestures are not the only means of detecting interesting geometric shapes in the world as seen by the robot. For instance, certain toys, such as trains, move periodically on rail tracks, with a functional constraint fixed both in time and space. Therefore, one might obtain information concerning the rail tracks by observing the train's visual trajectory. To accomplish this goal, objects are visually tracked by an attentional tracker which is modulated by an attentional system [1]. The algorithm starts by masking the input image to regions inside the moving object's visual trajectory (or outside but near its boundary). Lines modelling the object's trajectory are then mapped into lines fitting the scene edges. The output is the geometry of the stationary object which imposes the functional constraint on the moving object. Figure 6 shows experimental results for the specific case of extracting templates for train rail tracks from the train's motion (which is constrained by the railway's circular geometry).

4.5 Language Learning Applications

Auditory processing is also integrated with visual processing to extract the names and properties of objects. However, hand visual trajectory properties and sound properties might be independent - while tapping on books, it is not the interacting human caregiver's hand that generates sound, but the caregiver's vocal system pronouncing sounds such as the object's name. Therefore, cross-modal events are associated under a weak requirement: visual segmentations from periodic signals and sound segmentations are bound together if they occur temporally close [4]. This strategy is also well suited to sound patterns correlated with the hand's visual trajectory (such as playing musical tones by shaking a rattle).
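The weak binding requirement above admits a very small sketch: a visual segmentation and a sound segmentation are bound whenever their timestamps fall within a short window. The 0.5 s window below is an assumed value; the paper does not state one.

```python
def bind_events(visual_events, audio_events, window_s=0.5):
    """Pair visual and audio segmentations whose timestamps are close.

    Each event is a (timestamp_s, payload) tuple; returns payload pairs.
    """
    return [(v, a)
            for tv, v in visual_events
            for ta, a in audio_events
            if abs(tv - ta) <= window_s]     # temporally close => bound

# Example: a tapped book page bound to the spoken word "tree".
visual = [(12.3, "template:tree")]
audio = [(12.5, "sound:tree"), (47.0, "sound:car")]
print(bind_events(visual, audio))            # [('template:tree', 'sound:tree')]
```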

5. Cross-Modal Sensing Applications

Different objects have distinct acoustic-visual patterns which are a rich source of information for object recognition, if we can recover them. The relationship between object motion and the sound generated varies in an object-specific way. A hammer causes sound after striking an object. A toy truck causes sound while moving rapidly with wheels spinning; it is quiet when changing direction (see Figure 7). These statements are truly cross-modal in nature. Features extracted from the visual and acoustic segmentations are what is needed to build an object recognition system [4]. The feature space for recognition consists of:

- Sound/visual period ratios: the sound energy of a hammer peaks once per visual period, while the sound energy of a car peaks twice.
- Visual/sound peak energy ratios: the hammer upon impact creates high peaks of sound energy relative to the amplitude of the visual trajectory.

Figure 7 - The car and the cube, both moving, both making noise. The line overlaid on the spectrogram (bottom) shows the cutoff, determined automatically, between the high-pitched bell in the cube and the low-pitched rolling sound of the car. The frequencies of both visual signals are half those of the audio signals.

Dynamic programming is applied to match the sound energy to the visual trajectory signal. Formally, let $S = (S_1, \ldots, S_n)$ and $V = (V_1, \ldots, V_m)$ be sequences of sound and visual trajectory energies, segmented from $n$ and $m$ periods of the sound and visual trajectory signals, respectively. Due to noise, $n$ may differ from $m$. If the estimated sound period is half the visual one, then $V$ corresponds to energies segmented from $2m$ half-periods (given by the distance between maximum and minimum peaks). A matching path $P = (P_1, \ldots, P_l)$ defines an alignment between $S$ and $V$, where $\max(m, n) \le l \le m + n - 1$, and $P_k = (i, j)$ is a match $k$ between sound cluster $j$ and visual cluster $i$. The matching constraints are:

- Boundary conditions: $P_1 = (1, 1)$ and $P_l = (m, n)$.
- Temporal continuity: $P_{k+1} \in \{(i+1, j+1), (i+1, j), (i, j+1)\}$, i.e., steps are adjacent elements of $P$.

The cost function $c_{ij}$ is given by the square difference between the $V_i$ and $S_j$ periods. The best matching path $W$ can be found efficiently using dynamic programming, by incrementally building an $m \times n$ table caching the optimum cost at each cell, together with the link leading to that optimum. The binding $W$ then results from tracing back through these links, as in the Viterbi algorithm.

Figure 8 - Object recognition from cross-modal cues (features: sound/visual peak energy and period ratios, plotted as log(acoustic period/visual period) versus log(visual peak energy/acoustic peak energy)). The confusion matrix for a four-class recognition experiment is shown. Objects are recognized based on cross-modal features.
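A minimal sketch of the alignment just defined, assuming plain squared-difference costs over the per-period energies: the $m \times n$ cost table is filled incrementally and the optimal path is recovered by tracing back through the cached links, Viterbi-style.

```python
import numpy as np

def align(V, S):
    """Minimum-cost alignment path between visual (V) and sound (S) energies."""
    m, n = len(V), len(S)
    cost = np.full((m, n), np.inf)
    back = {}
    cost[0, 0] = (V[0] - S[0]) ** 2                 # boundary: P_1 = (1, 1)
    for i in range(m):
        for j in range(n):
            if i == 0 and j == 0:
                continue
            preds = [p for p in [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
                     if p[0] >= 0 and p[1] >= 0]    # temporal continuity
            prev = min(preds, key=lambda p: cost[p])
            cost[i, j] = (V[i] - S[j]) ** 2 + cost[prev]
            back[(i, j)] = prev
    path, cell = [(m - 1, n - 1)], (m - 1, n - 1)   # boundary: P_l = (m, n)
    while cell != (0, 0):                           # Viterbi-style trace-back
        cell = back[cell]
        path.append(cell)
    return path[::-1]

# Example: a car-like object whose sound peaks twice per visual period.
print(align(V=[1.0, 0.9, 1.1], S=[0.5, 0.6, 0.4, 0.5, 0.6, 0.5]))
```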

Figure 8 shows cross-modal features for a set of four objects. The system was evaluated by randomly selecting 10% of the data for validation and using the remaining data for training. This process was randomly repeated 15 times. The recognition rates averaged over all these runs were, by object category: 86.7% for the cube rattle, 100% for both the car and the snake rattle, and 83% for the hammer. The overall recognition rate was 92.1%. Such results demonstrate the potential for object recognition applications using cross-modal cues.

6. Networked Robotic Platform

We have presented a collection of interactive applications for human-robot interactions. The architecture used to deploy such applications over the network into robots is shown in Figure 9. Notice that applications may be deployed as distributed applications, having interactive components running on the robot and other components running on remote servers (or even on other robots or machines, as in peer-to-peer systems [9]). Indeed, the learning modules are often more suitable to run on more powerful network machines, while live object recognition should run on the robot platform. In addition, network bandwidth should be appropriate, so that quality of service for such near real-time applications is assured.

Figure 9 - Architecture for networked interactive applications. A user accesses a Web portal and selects applications to be deployed on a robot. Such applications are stored on an application server (or a cluster of network servers) and are transmitted through the internet. Although not shown explicitly, distributed applications are possible, with application components running on internet servers (the application server or others). Also possible is a P2P solution, in which some application components may even run on other robots' hardware.
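The paper does not specify the download protocol, so the following is a purely hypothetical sketch of the robot-side flow in Figure 9: fetch a selected application package from the application server and unpack it locally. The server URL, archive format, and install path are all illustrative assumptions.

```python
import io
import urllib.request
import zipfile

APP_SERVER = "http://appserver.example.org"      # hypothetical server URL

def deploy(app_name, install_dir="/opt/robot/apps"):
    """Fetch an application archive from the server and unpack it locally."""
    url = f"{APP_SERVER}/applications/{app_name}.zip"   # assumed layout
    with urllib.request.urlopen(url) as response:
        archive = zipfile.ZipFile(io.BytesIO(response.read()))
    target = f"{install_dir}/{app_name}"
    archive.extractall(target)
    return target            # the caller then launches the app from here

# e.g. deploy("book-learning") before a teaching session with picture books
```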

7. Other Potential (Networked) Applications

This paper covered a collection of interactive applications that were implemented and that can be deployed remotely from a server into a robot. But a larger number of interactive applications are possible; an overview of potential future applications is presented hereafter.

7.1 Human-Robot Interaction through More Complex Mechanisms

Our work could also be extended to account for multi-link object recognition, carrying out the identification of: parallel kinematic mechanisms (e.g., two pendulums balancing on a bar); constrained kinematic mechanisms (e.g., a slider-crank mechanism); mechanisms constrained by pulleys, belts, chains of rotating cranks, or pendulums; the different moving links of an object (e.g., a car with wheels); or a combination of them. Such a scheme could then be applied developmentally: for instance, having learned the kinematics of two rolling objects, such as two capstans connected by a cable, the robot might use such knowledge to determine a configuration of one capstan rolling around the other.

7.2 Processing of Tactile Information

The addition of tactile sensors to a robot enables downloaded network applications to interact with the robot in richer ways. One possibility is the cross-modal integration of this sensory modality with the other modalities.

7.3 Integration of Parsing Structures for Language Processing

This paper describes methods to learn about sounds and first words. A babbling language was previously developed on the robot Cog at MIT CSAIL [10], as well as a grounded language framework to learn about activities [11]. The integration of these components would add flexibility for interacting with a robot. In addition, there are strong correlations between learning the execution of motor tasks and speech, which would be interesting to exploit.

7.4 Caregiver-Children Games

This work could be extended by further employing learning by scaffolding from educational activities between a robot and a helping caregiver. Good examples of such future learning include games in which caregivers position babies facing them in order to play roll-the-ball or peek-a-boo games [12]. Another interesting example occurs when caregivers help toddlers solve their first puzzles by orienting the pieces in the right direction beforehand. In each case the children receive very useful insights that help them learn their roles in these social interactions, making future puzzles and games easier to solve. Solving puzzles is therefore an interesting problem involving object recognition, feature integration and object representation, to which this paper's framework could be extended. In addition, hand gesture recognition from non-repetitive gestures (still without a database of annotated gestures) would be of interest, since a lot of information can be conveyed to a robot by detecting human gesturing. An interesting possibility would be for the robot to play games with humans, or with other robots - which would likewise have to download the network application from a server platform (or employ peer-to-peer technology) through the internet, in addition to the physical human-robot interaction.

8. Conclusions

Teaching a robot information concerning its surrounding world is a difficult task, one which takes several years for a child, equipped with evolutionary mechanisms stored in its genes, to accomplish.

Learning aids are often used by human caregivers to introduce the child to a diverse set of (in)animate objects, exposing the latter to an outside world of colors, forms, shapes and contrasts that otherwise might not be available to a child (such as the image of a panda). A learning aid expands the child's knowledge of its surrounding world, and it is therefore a potentially useful tool for introducing new informative percepts to a robot.

If robots are to behave like humans in the future, a promising avenue towards this goal is to treat them as such, and initially as children. Learning aids such as books, and educational activities that stimulate a child's brain, are important tools that caregivers apply extensively to communicate with children, and we exploited such tools to develop interactive applications. There are already some companies providing remote applications to be downloaded into robots, such as Lego Mindstorms, among others, but such applications still lack complex levels of interactivity. This paper described work on interactive applications targeted at human-robot interactions. Such applications are not static in their capabilities: by employing developmental learning strategies, as a robot interacts with humans it learns new knowledge, which enables it to demonstrate richer levels of interactivity later on.

References

1. A. Arsenio, Cognitive-Developmental Learning for a Humanoid Robot: A Caregiver's Gift, Ph.D. thesis, MIT, 2004.
2. A. Arsenio, Intelligent Algorithms and Clever Applications in User Centric Networks, in J. Crowcroft, J. Kempf, P. Mendes, R. Sofia (eds.), Abstracts Collection and Report: User-Centric Networking.
3. D. I. Perrett, A. J. Mistlin, M. H. Harries, A. J. Chitty, Understanding the visual appearance and consequence of hand action, in Vision and Action: The Control of Grasping (Ablex, Norwood, NJ, 1990).
4. P. Fitzpatrick, A. Arsenio, Feel the beat: using cross-modal rhythm to integrate robot perception, in International Workshop on Epigenetic Robotics (2004).
5. A. Arsenio, Teaching a humanoid robot from books, in International Symposium on Robotics (2004).
6. T. Darrell, A. Pentland, Space-time gestures, in IEEE Conference on Computer Vision and Pattern Recognition (New York, NY, 1993).
7. R. Cutler, M. Turk, View-based interpretation of real-time optical flow for gesture recognition, in Int. Conference on Automatic Face and Gesture Recognition (1998).
8. K. Rao, G. Medioni, H. Liu, G. A. Bekey, Shape description and grasping for robot hand-eye coordination, IEEE Control Systems Magazine 9(2) (1989).
9. R. Schollmeier, A definition of peer-to-peer networking for the classification of peer-to-peer architectures and applications, in Proceedings of the First International Conference on Peer-to-Peer Computing, IEEE (2001).
10. P. Varchavskaia, P. Fitzpatrick, C. Breazeal, Characterizing and processing robot-directed speech, in Proceedings of the Second International Conference on Humanoid Robotics, Tokyo, Japan (2001).
11. P. Fitzpatrick, From First Contact to Close Encounters: A Developmentally Deep Perceptual System for a Humanoid Robot, Ph.D. thesis, MIT (2003).
12. R. Hodapp, E. Goldfield, C. Boyatzis, The use and effectiveness of maternal scaffolding in mother-infant games, Child Development 55 (1984).


More information

Spatialization and Timbre for Effective Auditory Graphing

Spatialization and Timbre for Effective Auditory Graphing 18 Proceedings o1't11e 8th WSEAS Int. Conf. on Acoustics & Music: Theory & Applications, Vancouver, Canada. June 19-21, 2007 Spatialization and Timbre for Effective Auditory Graphing HONG JUN SONG and

More information

A Robust Neural Robot Navigation Using a Combination of Deliberative and Reactive Control Architectures

A Robust Neural Robot Navigation Using a Combination of Deliberative and Reactive Control Architectures A Robust Neural Robot Navigation Using a Combination of Deliberative and Reactive Control Architectures D.M. Rojas Castro, A. Revel and M. Ménard * Laboratory of Informatics, Image and Interaction (L3I)

More information

Overview Agents, environments, typical components

Overview Agents, environments, typical components Overview Agents, environments, typical components CSC752 Autonomous Robotic Systems Ubbo Visser Department of Computer Science University of Miami January 23, 2017 Outline 1 Autonomous robots 2 Agents

More information

Controlling Humanoid Robot Using Head Movements

Controlling Humanoid Robot Using Head Movements Volume-5, Issue-2, April-2015 International Journal of Engineering and Management Research Page Number: 648-652 Controlling Humanoid Robot Using Head Movements S. Mounica 1, A. Naga bhavani 2, Namani.Niharika

More information

VICs: A Modular Vision-Based HCI Framework

VICs: A Modular Vision-Based HCI Framework VICs: A Modular Vision-Based HCI Framework The Visual Interaction Cues Project Guangqi Ye, Jason Corso Darius Burschka, & Greg Hager CIRL, 1 Today, I ll be presenting work that is part of an ongoing project

More information

Visual Interpretation of Hand Gestures as a Practical Interface Modality

Visual Interpretation of Hand Gestures as a Practical Interface Modality Visual Interpretation of Hand Gestures as a Practical Interface Modality Frederik C. M. Kjeldsen Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Graduate

More information

INTRODUCTION TO DEEP LEARNING. Steve Tjoa June 2013

INTRODUCTION TO DEEP LEARNING. Steve Tjoa June 2013 INTRODUCTION TO DEEP LEARNING Steve Tjoa kiemyang@gmail.com June 2013 Acknowledgements http://ufldl.stanford.edu/wiki/index.php/ UFLDL_Tutorial http://youtu.be/ayzoubkuf3m http://youtu.be/zmnoatzigik 2

More information

Changing and Transforming a Story in a Framework of an Automatic Narrative Generation Game

Changing and Transforming a Story in a Framework of an Automatic Narrative Generation Game Changing and Transforming a in a Framework of an Automatic Narrative Generation Game Jumpei Ono Graduate School of Software Informatics, Iwate Prefectural University Takizawa, Iwate, 020-0693, Japan Takashi

More information

EXPLORING SENSING-BASED KINETIC DESIGN

EXPLORING SENSING-BASED KINETIC DESIGN EXPLORING SENSING-BASED KINETIC DESIGN Exploring Sensing-based Kinetic Design for Responsive Architecture CHENG-AN PAN AND TAYSHENG JENG Department of Architecture, National Cheng Kung University, Taiwan

More information

Humanoid Robots: A New Kind of Tool

Humanoid Robots: A New Kind of Tool Humanoid Robots: A New Kind of Tool Bryan Adams, Cynthia Breazeal, Rodney Brooks, Brian Scassellati MIT Artificial Intelligence Laboratory 545 Technology Square Cambridge, MA 02139 USA {bpadams, cynthia,

More information

Lecture 23: Robotics. Instructor: Joelle Pineau Class web page: What is a robot?

Lecture 23: Robotics. Instructor: Joelle Pineau Class web page:   What is a robot? COMP 102: Computers and Computing Lecture 23: Robotics Instructor: (jpineau@cs.mcgill.ca) Class web page: www.cs.mcgill.ca/~jpineau/comp102 What is a robot? The word robot is popularized by the Czech playwright

More information

Smart Robotic Assistants for Small Volume Manufacturing Tasks

Smart Robotic Assistants for Small Volume Manufacturing Tasks Smart Robotic Assistants for Small Volume Manufacturing Tasks Satyandra K. Gupta Director, Center for Advanced Manufacturing Smith International Professor Aerospace and Mechanical Engineering Department

More information

Driver Assistance for "Keeping Hands on the Wheel and Eyes on the Road"

Driver Assistance for Keeping Hands on the Wheel and Eyes on the Road ICVES 2009 Driver Assistance for "Keeping Hands on the Wheel and Eyes on the Road" Cuong Tran and Mohan Manubhai Trivedi Laboratory for Intelligent and Safe Automobiles (LISA) University of California

More information

The Science In Computer Science

The Science In Computer Science Editor s Introduction Ubiquity Symposium The Science In Computer Science The Computing Sciences and STEM Education by Paul S. Rosenbloom In this latest installment of The Science in Computer Science, Prof.

More information

User Interaction and Perception from the Correlation of Dynamic Visual Responses Melinda Piper

User Interaction and Perception from the Correlation of Dynamic Visual Responses Melinda Piper User Interaction and Perception from the Correlation of Dynamic Visual Responses Melinda Piper 42634375 This paper explores the variant dynamic visualisations found in interactive installations and how

More information

Term Paper: Robot Arm Modeling

Term Paper: Robot Arm Modeling Term Paper: Robot Arm Modeling Akul Penugonda December 10, 2014 1 Abstract This project attempts to model and verify the motion of a robot arm. The two joints used in robot arms - prismatic and rotational.

More information