Extracting Multimodal Dynamics of Objects Using RNNPB
Tetsuya Ogata*, Hayato Ohba*, Jun Tani**, Kazunori Komatani*, and Hiroshi G. Okuno*

*Graduate School of Informatics, Kyoto University, Kyoto, Japan
**Brain Science Institute, RIKEN, Saitama, Japan

[Received February 4, 2005; accepted April 30, 2005]

Dynamic features play an important role in recognizing objects that have similar static features in color or shape. This paper focuses on active sensing that exploits the dynamic features of an object. An extended version of the robot Robovie-IIs uses its arms to move an object and determine its dynamic features. At issue is how to extract symbols from different temporal states of the object. We use a recurrent neural network with parametric bias (RNNPB) that generates self-organized nodes in parametric bias space. We trained an RNNPB with 42 neurons using data on sounds, trajectories, and tactile sensors generated while the robot was moving or hitting an object with its arm. Clusters of 20 types of objects were self-organized. Experiments with unknown (untrained) objects showed that our proposal configured them appropriately in PB space, demonstrating its generalization.

Keywords: active sensing, humanoid robot, recurrent neural network

1. Introduction

We are developing techniques to enable robots to manipulate tools designed for human use. Conventional robots manipulate specific tools designed for robot hands, and it remains difficult for mechanical systems to handle the dynamics of objects and to generate adaptive behavior by learning a dynamic environment. In tool manipulation, for example, robots identify objects but do not yet predict an object's dynamics. Recognizing objects is crucial to tool manipulation, and several studies have been done on active sensing [1-5]. Noda et al. reported using a humanoid robot, Wamoeba-2Ri, which grasps objects with its hand and recognizes them by integrating multiple sensory data on size, weight, and color imaging [2].
This study used a three-layered self-organizing map (SOM) [6], which deals only with static features, and required over a thousand neurons for processing the multimodal sensory data, making it difficult for the robot to apply recognition results in motion planning. Arsenio et al. focused on rhythmic motion as object dynamics and merged audiovisual sensory data to recognize objects using the humanoid robot Cog [3]. This study showed that cross-modal dynamics are essential for object recognition and manipulation, but it targeted only rhythmic motion generated by human operators, rather than by the robot, and so did not enable the robot to plan more generalized tool manipulation. Fukano et al. presented an example in which a multifingered hand grasps a target object, observes the resulting motion through proprioception, and detects motion constraints [4]. This study yielded interesting results on active sensing with stochastic techniques, and the unknown objects were bottles of different sizes. The common problem of the above studies is that their target was recognition of fewer than 10 objects designed and/or selected for the robot. We propose a novel active-sensing method using object dynamics: a recurrent neural net (RNN) is trained using multimodal sensory data generated while a robot moves or collides with objects. The RNN enables robots to use dynamic features for recognizing different objects and predicting motion. Our proposal is generalizable enough to configure unknown (untrained) objects appropriately. Section 2 introduces the recurrent neural network model used for learning. Section 3 details the actual design of active sensing, e.g., motion design, target objects, sensors, and the neural network configuration. Section 4 presents experiments and results of our proposal. Section 5 discusses the characteristics of our proposal and compares them to those of conventional recognition. Section 6 concludes this paper and reviews projected work on motion generation.
2. Learning Algorithm

This section details how robots deal with the dynamic features of sensory information during active sensing. Statistical techniques represented by the hidden Markov model (HMM) process time-sequence data efficiently, but require huge amounts of data for learning. This becomes a problem when real robots conduct experiments to collect data, due to hardware durability. The HMM also deals only with known objects, which could be fatal to adaptability in a real dynamic environment. We used a deterministic method represented by an artificial neural net (ANN) to solve this problem; an advantage of this approach is that an RNN self-organizes (acquires) contextual information [7]. We use the forwarding forward (FF) model proposed by Tani [8], also called the RNN with parametric bias (RNNPB) model. It articulates complex motion sequences into motion units, which are encoded as limit-cycling or fixed-point dynamics of the RNN. We previously reported a study of human-robot interaction based on quasi-symbols acquired by the RNNPB [9].

Fig. 1. RNNPB network configuration.
Fig. 2. External ears and hand.

2.1. RNNPB Model

The RNNPB model has the same architecture as the conventional Jordan RNN model [10] except for the parametric bias (PB) nodes in the input layer. Unlike other input nodes, PB nodes take a constant value throughout each time sequence and are used to implement a mapping between fixed-length vectors and time sequences. The network configuration of the RNNPB model is shown in Fig.1. As with the Jordan RNN model, the RNNPB model learns data sequences in a supervised manner. The difference is that in the RNNPB model, the values encoding the sequences are self-organized in the PB nodes during learning. Common structural properties of the training data sequences are acquired as connection weights by using the back propagation through time (BPTT) algorithm [11], as in the conventional RNN. The specific properties of each individual time sequence are simultaneously encoded as PB values, so the RNNPB model self-organizes a mapping between PB values and time sequences.

2.2. PB Vector Learning

The learning algorithm for the PB vectors is a variant of the BPTT algorithm. The step length of a training sequence is denoted by l. For each sensory-motor output, the back-propagated errors for the PB nodes are accumulated and used to update the PB values. The update equations for the ith unit of the PB at step t in the sequence are as follows:

Δρ_t = k_bp Σ_{step = t−l/2}^{t+l/2} δ_t^bp + k_nb (ρ_{t+1} − 2ρ_t + ρ_{t−1})   . . . (1)

p_t = sigmoid(ρ_t / ζ)   . . . (2)

In Eq.(1), the update Δρ_t of the internal value of the PB p_t is obtained by summing two terms. The first term is the delta error δ_t^bp back-propagated from the output nodes to the PB nodes; it is integrated over the period from step t − l/2 to step t + l/2. Integrating the delta error prevents local fluctuations in the output errors from significantly affecting the temporal PB values. The second term is a low-pass filter that inhibits frequent rapid changes in the PB values. k_bp, k_nb, and ζ are coefficients. The current PB values are obtained as the sigmoidal outputs of the internal values in Eq.(2).

After learning the sequences, the RNNPB model generates a sequence from the corresponding PB values, so the model can be used for both recognition and sequence generation. For a given sequence, the corresponding PB value is obtained by applying the update rules for the PB values without updating the connection weights. This inverse operation of generation is regarded as recognition. The RNNPB model acquires the relational structure among the training sequences in PB space through learning. This generalization enables the RNNPB model to generate and recognize unseen sequences without additional learning. By learning several cyclic time sequences of different frequencies, for example, it can generate novel time sequences of intermediate frequencies. In the RNNPB, the step length l should be determined by the dynamic properties of the target sequences, but the acquired PB patterns were almost the same for a wide range of l, so the adjustment is not strict in practical sequence learning.

3. Active Sensing by Moving Objects

3.1. Adding New Functions to Robovie-IIs

We refined the humanoid robot Robovie-IIs [12] as a platform for our experiments. Robovie-IIs itself is a refinement of Robovie-II, developed at ATR [13]. The original Robovie-II has three degrees of freedom (DOF) at the neck and four DOF at each arm. It uses two CCD cameras on the head.
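Before turning to the robot's sensors, the PB update rule of Eqs. (1)-(2) can be illustrated concretely. The following is a minimal NumPy sketch, not the authors' implementation: the function name, coefficient values, and boundary handling at the sequence edges are our own assumptions.

```python
import numpy as np

def update_pb(rho, delta_bp, t, l, k_bp=0.1, k_nb=0.05, zeta=1.0):
    """One update of the internal PB value rho[t], following Eqs. (1)-(2).

    rho      : internal PB values over the sequence (modified in place)
    delta_bp : error back-propagated to the PB node at each step
    t, l     : current step and integration window length
    """
    lo = max(0, t - l // 2)
    hi = min(len(delta_bp) - 1, t + l // 2)
    # First term of Eq. (1): delta error integrated over [t - l/2, t + l/2]
    error_term = k_bp * delta_bp[lo:hi + 1].sum()
    # Second term of Eq. (1): low-pass filter suppressing rapid PB changes
    prev = rho[t - 1] if t > 0 else rho[t]
    nxt = rho[t + 1] if t + 1 < len(rho) else rho[t]
    smooth_term = k_nb * (nxt - 2.0 * rho[t] + prev)
    rho[t] += error_term + smooth_term
    # Eq. (2): current PB value as the sigmoidal output of the internal value
    return 1.0 / (1.0 + np.exp(-rho[t] / zeta))
```

With zero back-propagated error and a flat ρ, the internal value is unchanged and the PB value stays at sigmoid(0) = 0.5, which matches the role of the two terms described above.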
Tactile sensors of soft silicone rubber cover Robovie-IIs and distinguish among three types of contact (collision, rubbing, and touch) by detecting changes in pressure. For the experiment on active sensing, we added two new functions to Robovie-IIs (Fig.2): two external ears on the head and two 1-DOF hands on the arms.

682 Journal of Robotics and Mechatronics Vol.17 No.6, 2005
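The sensory streams from these devices arrive at very different rates (audio at 48 kHz, vision at 30 frame/s, tactile at 4.3 Hz, as detailed below) and are normalized to [0-1] on a common 9 frame/s clock before training. A minimal sketch of that preprocessing, assuming min-max scaling and linear interpolation (both our assumptions; the paper does not specify the resampling method):

```python
import numpy as np

def to_unit_range(x):
    """Min-max normalize one sensory channel to [0, 1]."""
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo) if hi > lo else np.zeros_like(x)

def resample(x, src_rate, dst_rate=9.0):
    """Linearly interpolate a channel onto the common dst_rate clock."""
    t_src = np.arange(len(x)) / src_rate       # original sample times
    t_dst = np.arange(0.0, t_src[-1], 1.0 / dst_rate)
    return np.interp(t_dst, t_src, x)

# e.g. a 30 frame/s visual channel resampled to 9 frame/s, then normalized
vision_x = np.array([3.0, 4.0, 6.0, 9.0, 7.0, 5.0])
frame = to_unit_range(resample(vision_x, src_rate=30.0))
```

Each modality is processed this way independently, so samples from microphones, cameras, and tactile sensors line up step for step in the training sequences.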
Fig. 3. Experiment in active sensing.
Fig. 4. Recognition targets.
Fig. 5. Object recognition system configuration.

3.2. Motion of Active Sensing and Target Objects

Perception using only static features such as visual images is not sufficient for distinguishing among objects having similar sizes, shapes, and colors, and such a recognition framework is not applicable to dynamic motion planning. Perception must be designed for sensory-motor coordination [14]. We focused on an active-sensing motion in which a robot moves or hits an object on a table with its arm. Infants often touch and hit unknown objects in front of them, thereby acquiring skill in manipulating them. Moving an object enables its dynamic features to be exploited, e.g., the tactile pressure needed to move it, the actual trajectory, and the sound patterns generated by collision with the table. Figure 3 shows an experiment in which Robovie touched and moved an object by rotating a shoulder motor (roll axis) at constant velocity (60°/s). As the robot moved an object, it collected sound, object trajectory, and touch pressure using its microphones, cameras, and tactile sensors. Figure 4 shows the 20 types of objects used as recognition targets: rubber ball, plastic ball, ceramic cup, 2 types of plastic cup, glass, can, moneybox, stuffed doll, Rubik's cube, toy car, funnel, pen tray, scrub brush, soft brush, water dumbbell, and shampoo container. The water dumbbell and shampoo container were each used full and empty, two conditions not distinguishable statically, for example, in visual images.

3.3. RNNPB Configuration and Learning

The following sensory data was normalized ([0-1]) and synchronized (9 frame/s) across the different modalities for use by the RNNPB model:

1) Audio Information (5): An audio signal was detected by the microphones in the robot's ears (48kHz). Five signal features were extracted using a Mel Filter Bank.

2) Visual Information (4): The center (x, y) and color (R, B) were detected by a CCD camera (30 frame/s).

3) Tactile Information (1): The input voltage from the tactile skin sensors was used (4.3Hz).

The system configuration is shown in Fig.5. The RNNPB works as a prediction system whose input is the current sensory data s(t) and whose output is the next sensory state s(t+1). It consists of 42 neurons: 10 in the input layer, 20 in the middle layer, 10 in the context layer, and 2 as PB. A training sequence of the RNNPB was segmented when the changes in all sensory inputs were less than a threshold. In the experiments, the sensory sequence lengths L_s were 15 to 40 steps. Our goal was to acquire specific parameters corresponding to each object for recognition and motion generation. To fix the parameters during sensing, Eq.(1) was simplified in our RNNPB model training as follows:

Δρ_t = k_bp Σ_{step = t−l/2}^{t+l/2} δ_t^bp   . . . (3)

Equation (2), which normalizes the parameters, was not used in our experiments, to simplify analyzing the acquired PB.

4. Experiments and Results

4.1. Self-Organization of PB Space and Modality Differences

We conducted an experiment using the 20 objects in the previous section to confirm clustering by our proposal using the dynamic features of objects. Robovie moved each object five times (20 × 5 = 100 sequence data), and the RNNPB was trained for 100,000 iterations, which
required approx. 1 hour using a PC with a Pentium IV processor (2.8GHz). Figure 6 shows the sequences of tactile pressure, object position (x coordinate), and sound when Robovie moved (a) a glass and (b) a scrub brush. Black lines indicate the RNNPB input (real values) and gray lines the RNNPB output (prediction). We confirmed that the RNNPB predicts each sequence well; the average prediction error is less than 1.5%.

Fig. 6. Sensor flow and prediction output of the RNNPB: (a) example of a glass; (b) example of a scrub brush.

Figure 7 shows the PB space acquired by each sensor modality. The two PB parameters of the RNNPB before normalization correspond to the X-Y axes of the space, as follows:

1) PB space acquired by the tactile sensor: Fig.7(a) shows the PB space when only the tactile sensors were used. Though most objects were not categorized, heavy objects tended to be mapped in the upper space.

2) PB space acquired by the sound signal: Fig.7(b) shows the space when only the sound signal was used. In this space, objects that did not make a sound were mapped at the upper right. Although some vague clusters can be seen, the sharpness of the separation is quite low.

3) PB space acquired by visual data: Fig.7(c) shows the space when only visual information was used. In this space, almost all objects were separated, but objects with similar trajectories, such as the can and the glass or the moneybox and the pen tray, were not separated.

4) PB space acquired by all sensory modalities: Fig.7(d) shows the PB space self-organized when all sensory modalities were used. We confirmed that the RNNPB acquires clusters for all types of objects.

Fig. 7. PB spaces acquired by each sensor modality: (a) tactile sensor; (b) audio signal; (c) visual image; (d) all sensor modalities.

4.2. Clustering of Unknown Objects

We conducted further experiments to confirm our proposal's generalization by recognizing unknown (untrained) objects. In this experiment, we used 8 objects: rubber ball, glass, moneybox, pen tray, scrub brush, soft brush, and empty and full shampoo containers. The RNNPB was trained in two different ways. RNNPB-1 was trained using the multimodal sensory data of all 8 objects in active sensing. RNNPB-2 was trained using only the rubber ball, glass, scrub brush, and empty shampoo container; for RNNPB-2, the moneybox, pen tray, soft brush, and full shampoo container were unknown. Both RNNPBs were trained for 100,000 iterations.

Figure 8 shows the PB space of (a) RNNPB-1 and (b) RNNPB-2. Trained-for objects are shown by white plots and untrained-for objects by black plots. The two PB parameters corresponding to the unknown objects were determined by renewing only the parameters, without updating the synaptic weights (recognition). Renewing the PB values only 1000 times completed recognition. Clusters were self-organized corresponding to all objects in the PB space of RNNPB-1. Specifically:

1) Objects that moved easily were mapped at the upper left,
2) Objects making sounds were mapped in the upper area,
3) Blue objects were mapped at the upper right.

Fig.8(b) shows that RNNPB-2 acquired almost the same map as that of RNNPB-1 in Fig.8(a), even though it was trained with data on only four objects. This means that RNNPB-2 configured its PB space with a structure similar to that of RNNPB-1, except for the different area size of each cluster.

Fig. 8. Comparison of the two RNNPBs (generalization analysis): (a) PB space acquired by RNNPB-1, trained on 8 objects; (b) PB space acquired by RNNPB-2, trained on 4 objects, with recognition results for the 4 untrained objects.

5. Discussion

5.1. Motion Design and Multimodality

Motion patterns applicable to different objects are difficult to prepare. Most research on active sensing has selected touching or grasping motions that focus on tactile and joint-angle sensing. Although these motions guarantee reliable data on the shape, size, and weight of objects, they require skill in detecting the accurate position of objects in order to pick them up. Even human infants, however, have difficulty manipulating objects with their hands. We selected the motion of moving or hitting objects with the robot's arm for active perception. This motion can be completed without precise sensor data, and many types of dynamic features of objects are extractable, including the moving trajectory and sound. Metta et al. used the motion of moving an object for active sensing [15], but ignored the important sensor modality of sound. The sound signal reflects many properties of objects, such as shape, material, and internal structure; visual devices alone cannot obtain these properties, as shown in Fig.7(c). A larger number and variety of motion patterns, moving or hitting objects at different speeds and in different directions, would enable the extraction of an even greater variety of dynamic features of objects.

5.2. Clustering of the RNNPB

As mentioned in Section 1, most conventional studies on active sensing have dealt with small numbers of objects as recognition targets, and it is difficult for them to prepare a recognition system that can handle different objects. Typical neural networks for time-sequence data processing, represented by the TDNN [16], require an impractically large number of neurons and a long learning time for the problem treated in this paper, because they are designed to store all the time sequences of sensory data in the input layer. In contrast, the number of neurons in our RNNPB was only 42, because it uses self-organizing contextual information in the context layer.

5.3. Generalization of the RNNPB

We confirmed that the RNNPB has superior generalization for clustering dynamic sequences: it expresses different objects and their relationships in a PB space self-organized through training with a few objects. As mentioned in Section 2, our proposal is more suited to robot learning than stochastic learning because real robot systems have limitations in hardware durability. A deterministic learning approach known as the mixture of experts, represented by MOSAIC [17], also works well in dealing with multiple dynamic patterns (attractors). It usually consists of several dynamic recognizers that categorize and learn target sequences individually (local expression). In contrast, the RNNPB acquires multiple attractors overlapping in a single network by changing the parameters that represent the boundary condition (distributed expression). In the RNNPB, all neurons and synaptic weights participate in representing all trained patterns. In a local expression, interference between patterns is minimized because a novel pattern is allocated to an additional recognizer. In a distributed expression, however, memory interference occurs because memories share network resources. By embedding multiple attractors in a distributed network, we attained a global structure that handles both learned patterns and unknown (unlearned) patterns.
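The compact network described above (10 input, 20 middle, 10 context, and 2 PB units; 42 neurons in all) and the recognition procedure of Section 4.2 (renewing only the PB values while the synaptic weights stay frozen) can be sketched together. This is a minimal NumPy illustration, not the authors' implementation: the weights are random stand-ins for trained ones, all names and coefficients are our assumptions, and a numeric gradient stands in for the BPTT-based PB update.

```python
import numpy as np

N_IN, N_MID, N_CTX, N_PB = 10, 20, 10, 2       # 42 neurons, as in the paper
rng = np.random.default_rng(0)
W_mid = rng.normal(0.0, 0.3, (N_MID, N_IN + N_PB + N_CTX))  # frozen weights
W_out = rng.normal(0.0, 0.3, (N_IN + N_CTX, N_MID))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(seq, pb):
    """One-step-ahead predictions s(t) -> s(t+1) with a fixed PB vector."""
    ctx, preds = np.full(N_CTX, 0.5), []
    for s_t in seq[:-1]:
        out = sigmoid(W_out @ sigmoid(W_mid @ np.concatenate([s_t, pb, ctx])))
        preds.append(out[:N_IN])
        ctx = out[N_IN:]                       # Jordan-style context loop
    return np.array(preds)

def recognize(seq, iters=200, lr=0.1, eps=1e-4):
    """Recognition: fit only the 2 PB values to an observed sequence."""
    pb = np.full(N_PB, 0.5)
    for _ in range(iters):
        base = np.mean((predict(seq, pb) - seq[1:]) ** 2)
        grad = np.zeros(N_PB)
        for i in range(N_PB):                  # numeric gradient per PB unit
            p = pb.copy()
            p[i] += eps
            grad[i] = (np.mean((predict(seq, p) - seq[1:]) ** 2) - base) / eps
        pb -= lr * grad                        # weights W_mid, W_out untouched
    return pb
```

Because the weights never change during `recognize`, an unseen sequence is mapped into the existing PB space, which is the distributed-expression property discussed above.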
We believe this is why the RNNPB shows generalization in recognizing the unknown objects in Section 4.2.

6. Conclusions and Projected Work

We have proposed an active-sensing method for a humanoid robot that uses a recurrent neural network to solve the problem of object recognition. Specifically, we trained an RNNPB model with only 42 neurons on data on sounds, trajectories, and tactile senses generated while the robot was moving or hitting an object with its hands. Clusters of the 20 types of objects were self-organized in PB space. Experiments using unknown (untrained) objects demonstrated that our proposal configures these unknown objects appropriately in PB space, which demonstrates its generalization.

An interesting challenge for projected work is to achieve robot motion planning using our proposal. The targets of our experiments in this paper were different objects, but the motion pattern was the same in all active sensing. Our proposal easily shifts its target from different objects to different motions, and the RNNPB configuration can easily be redesigned to treat motor output. We expect that our robot will be able to generate arm motion by using the RNNPB output. The RNNPB could, for example, associate arm motion patterns with observed object trajectories and sounds. This association relates to the discussion of imitation based on behavioral primitives corresponding to the PB in our study.

Acknowledgements

This research was supported by the RIKEN Brain Science Institute and the Scientific Research on Priority Areas program "Informatics Studies for the Foundation of IT Evolution."

References:

[1] R. Bajcsy, "Active Perception," Proceedings of the IEEE, Special Issue on Computer Vision, Vol.76, No.8.
[2] K. Noda, M. Suzuki, N. Tsuchiya, Y. Suga, T. Ogata, and S. Sugano, "Robust modeling of dynamic environment based on robot embodiment," IEEE ICRA 2003.
[3] A. Arsenio and P. Fitzpatrick, "Exploiting cross-modal rhythm for robot perception of objects," Int. Conf.
on Computational Intelligence, Robotics, and Autonomous Systems.
[4] R. Fukano, Y. Kuniyoshi, T. Kobayashi, and T. Otani, "Statistical Manipulation Learning of Unknown Objects by a Multi-Fingered Robot Hand," Humanoids 2004, paper #65.
[5] P. Dario, M. Rucci, C. Guadagnini, and C. Laschi, "Integrating Visual and Tactile Information in Disassembly Tasks," Int. Conf. on Advanced Robotics.
[6] T. Kohonen, "Self-Organizing Maps," Springer Series in Information Science, Vol.30, Springer, Berlin, Heidelberg, New York.
[7] L. Lin and T. Mitchell, "Efficient Learning and Planning within the Dynamic Framework," SAB 92.
[8] J. Tani and M. Ito, "Self-Organization of Behavioural Primitives as Multiple Attractor Dynamics: A Robot Experiment," IEEE Transactions on SMC Part A, Vol.33, No.4.
[9] T. Ogata, M. Matsunaga, S. Sugano, and J. Tani, "Human-Robot Collaboration Using Behavioral Primitives," IEEE/RSJ IROS 2004.
[10] M. Jordan, "Attractor dynamics and parallelism in a connectionist sequential machine," Eighth Annual Conference of the Cognitive Science Society, Erlbaum, Hillsdale, NJ.
[11] D. Rumelhart, G. Hinton, and R. Williams, "Learning internal representation by error propagation," in D. E. Rumelhart and J. L. McClelland (eds.), Parallel Distributed Processing, MIT Press, Cambridge, MA.
[12] H. Ishiguro, T. Ono, M. Imai, T. Maeda, T. Kanda, and R. Nakatsu, "Robovie: an interactive humanoid robot," Int. Journal of Industrial Robotics, Vol.28, No.6.
[13] T. Miyashita, T. Tajika, K. Shinozawa, H. Ishiguro, K. Kogure, and N. Hagita, "Human Position and Posture Detection based on Tactile Information of the Whole Body," IEEE/RSJ IROS 2004 Workshop.
[14] R. Pfeifer and C. Scheier, "Understanding Intelligence," MIT Press, Cambridge, MA.
[15] G. Metta and P. Fitzpatrick, "Better Vision through Manipulation," Adaptive Behavior, Vol.11, No.2.
[16] A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K.
Lang, "Phoneme Recognition Using Time-Delay Neural Networks," ATR Technical Report.
[17] M. Haruno, D. Wolpert, and M. Kawato, "MOSAIC model for sensorimotor learning and control," Neural Computation, Vol.13.
Tetsuya Ogata

Associate Professor, Graduate School of Informatics, Kyoto University
Brief Biographical History:
- Research Associate, Waseda University
- Research Scientist, Brain Science Institute, RIKEN
- Graduate School of Informatics, Kyoto University
Main Works:
- "Open-end Human-Robot Interaction from the Dynamical Systems Perspective: Mutual Adaptation and Incremental Learning," Advanced Robotics, VSP and Robotics Society of Japan, Vol.19, No.6, July.
Membership in Learned Societies:
- Information Processing Society of Japan (IPSJ)
- The Robotics Society of Japan (RSJ)
- The Japan Society of Mechanical Engineers (JSME)
- Japanese Society for Artificial Intelligence (JSAI)
- The Society of Biomechanisms (SOBIM)
- Society of Systems, Control and Information Engineers (ISCIE)
- The Institute of Electrical and Electronics Engineers (IEEE)

Jun Tani

Laboratory Head, Brain Science Institute, RIKEN
Brief Biographical History:
- Senior Scientist, Sony Computer Science Laboratory
- Laboratory Head, Brain Science Institute, RIKEN
Main Works:
- "The dynamical systems accounts for phenomenology of immanent time: An interpretation by revisiting a robotics synthetic study," Journal of Consciousness Studies, Vol.11, No.9, pp.5-24.

Hayato Ohba

Master Course Student, Graduate School of Informatics, Kyoto University
Membership in Learned Societies:
- The Robotics Society of Japan (RSJ)

Kazunori Komatani

Research Associate, Graduate School of Informatics, Kyoto University
Brief Biographical History:
- Research Associate, Kyoto University
Main Works:
- "User Modeling in Spoken Dialogue Systems to Generate Flexible Guidance," User Modeling and User-Adapted Interaction, Vol.15, No.1.
Membership in Learned Societies:
- Information Processing Society of Japan (IPSJ)
- Japanese Society for Artificial Intelligence (JSAI)
- The Association for Computational Linguistics (ACL)
Hiroshi G. Okuno

Professor, Graduate School of Informatics, Kyoto University
Brief Biographical History:
- Principal Researcher, Nippon Telegraph and Telephone Corp.
- Professor, Science University of Tokyo
- Professor, Kyoto University
Main Works:
- "Advanced Lisp Technology," Taylor & Francis.
- "Computational Auditory Scene Analysis," Lawrence Erlbaum Associates.
- "Utilizing the Internet," Iwanami Science Library.
- "Intelligent Programming," Ohm Publisher.
Membership in Learned Societies:
- Association for Computing Machinery (ACM)
- American Association for Artificial Intelligence (AAAI)
- Acoustical Society of America (ASA)
- Information Processing Society of Japan (IPSJ)
- Japanese Society for Software Science and Technology (JSSST)
- Japanese Society for Artificial Intelligence (JSAI)
- Japan Cognitive Science Society (JCSS)
- Institute of Electrical and Electronics Engineers (IEEE)
More informationEmergence of Interactive Behaviors between Two Robots by Prediction Error Minimization Mechanism
(Presented at IEEE Int. Conf. ICDL-Epirob 2016) Emergence of Interactive Behaviors between Two Robots by Prediction Error Minimization Mechanism Yiwen Chen, Shingo Murata, Hiroaki Arie, Tetsuya Ogata,
More informationSupplementary information accompanying the manuscript Biologically Inspired Modular Neural Control for a Leg-Wheel Hybrid Robot
Supplementary information accompanying the manuscript Biologically Inspired Modular Neural Control for a Leg-Wheel Hybrid Robot Poramate Manoonpong a,, Florentin Wörgötter a, Pudit Laksanacharoen b a)
More informationDevelopment of a Simulator of Environment and Measurement for Autonomous Mobile Robots Considering Camera Characteristics
Development of a Simulator of Environment and Measurement for Autonomous Mobile Robots Considering Camera Characteristics Kazunori Asanuma 1, Kazunori Umeda 1, Ryuichi Ueda 2, and Tamio Arai 2 1 Chuo University,
More informationLearning the Proprioceptive and Acoustic Properties of Household Objects. Jivko Sinapov Willow Collaborators: Kaijen and Radu 6/24/2010
Learning the Proprioceptive and Acoustic Properties of Household Objects Jivko Sinapov Willow Collaborators: Kaijen and Radu 6/24/2010 What is Proprioception? It is the sense that indicates whether the
More informationWirelessly Controlled Wheeled Robotic Arm
Wirelessly Controlled Wheeled Robotic Arm Muhammmad Tufail 1, Mian Muhammad Kamal 2, Muhammad Jawad 3 1 Department of Electrical Engineering City University of science and Information Technology Peshawar
More informationInteraction rule learning with a human partner based on an imitation faculty with a simple visuo-motor mapping
Robotics and Autonomous Systems 54 (2006) 414 418 www.elsevier.com/locate/robot Interaction rule learning with a human partner based on an imitation faculty with a simple visuo-motor mapping Masaki Ogino
More informationLearning Behaviors for Environment Modeling by Genetic Algorithm
Learning Behaviors for Environment Modeling by Genetic Algorithm Seiji Yamada Department of Computational Intelligence and Systems Science Interdisciplinary Graduate School of Science and Engineering Tokyo
More informationFigure 1. Artificial Neural Network structure. B. Spiking Neural Networks Spiking Neural networks (SNNs) fall into the third generation of neural netw
Review Analysis of Pattern Recognition by Neural Network Soni Chaturvedi A.A.Khurshid Meftah Boudjelal Electronics & Comm Engg Electronics & Comm Engg Dept. of Computer Science P.I.E.T, Nagpur RCOEM, Nagpur
More informationCB 2 : A Child Robot with Biomimetic Body for Cognitive Developmental Robotics
CB 2 : A Child Robot with Biomimetic Body for Cognitive Developmental Robotics Takashi Minato #1, Yuichiro Yoshikawa #2, Tomoyuki da 3, Shuhei Ikemoto 4, Hiroshi Ishiguro # 5, and Minoru Asada # 6 # Asada
More informationA developmental approach to grasping
A developmental approach to grasping Lorenzo Natale, Giorgio Metta and Giulio Sandini LIRA-Lab, DIST, University of Genoa Viale Causa 13, 16145, Genova Italy email: {nat, pasa, sandini}@liralab.it Abstract
More informationTransactions on Information and Communications Technologies vol 6, 1994 WIT Press, ISSN
Application of artificial neural networks to the robot path planning problem P. Martin & A.P. del Pobil Department of Computer Science, Jaume I University, Campus de Penyeta Roja, 207 Castellon, Spain
More informationLearning and Using Models of Kicking Motions for Legged Robots
Learning and Using Models of Kicking Motions for Legged Robots Sonia Chernova and Manuela Veloso Computer Science Department Carnegie Mellon University Pittsburgh, PA 15213 {soniac, mmv}@cs.cmu.edu Abstract
More information* Intelli Robotic Wheel Chair for Specialty Operations & Physically Challenged
ADVANCED ROBOTICS SOLUTIONS * Intelli Mobile Robot for Multi Specialty Operations * Advanced Robotic Pick and Place Arm and Hand System * Automatic Color Sensing Robot using PC * AI Based Image Capturing
More informationReal-Time Face Detection and Tracking for High Resolution Smart Camera System
Digital Image Computing Techniques and Applications Real-Time Face Detection and Tracking for High Resolution Smart Camera System Y. M. Mustafah a,b, T. Shan a, A. W. Azman a,b, A. Bigdeli a, B. C. Lovell
More informationTransactions on Information and Communications Technologies vol 1, 1993 WIT Press, ISSN
Combining multi-layer perceptrons with heuristics for reliable control chart pattern classification D.T. Pham & E. Oztemel Intelligent Systems Research Laboratory, School of Electrical, Electronic and
More informationArtificial Neural Network based Mobile Robot Navigation
Artificial Neural Network based Mobile Robot Navigation István Engedy Budapest University of Technology and Economics, Department of Measurement and Information Systems, Magyar tudósok körútja 2. H-1117,
More informationEvolving High-Dimensional, Adaptive Camera-Based Speed Sensors
In: M.H. Hamza (ed.), Proceedings of the 21st IASTED Conference on Applied Informatics, pp. 1278-128. Held February, 1-1, 2, Insbruck, Austria Evolving High-Dimensional, Adaptive Camera-Based Speed Sensors
More informationEmergence of Purposive and Grounded Communication through Reinforcement Learning
Emergence of Purposive and Grounded Communication through Reinforcement Learning Katsunari Shibata and Kazuki Sasahara Dept. of Electrical & Electronic Engineering, Oita University, 7 Dannoharu, Oita 87-1192,
More informationHumanoid robot. Honda's ASIMO, an example of a humanoid robot
Humanoid robot Honda's ASIMO, an example of a humanoid robot A humanoid robot is a robot with its overall appearance based on that of the human body, allowing interaction with made-for-human tools or environments.
More informationInsertion of Pause in Drawing from Babbling for Robot s Developmental Imitation Learning
2014 IEEE International Conference on Robotics & Automation (ICRA) Hong Kong Convention and Exhibition Center May 31 - June 7, 2014. Hong Kong, China Insertion of Pause in Drawing from Babbling for Robot
More informationNCCT IEEE PROJECTS ADVANCED ROBOTICS SOLUTIONS. Latest Projects, in various Domains. Promise for the Best Projects
NCCT Promise for the Best Projects IEEE PROJECTS in various Domains Latest Projects, 2009-2010 ADVANCED ROBOTICS SOLUTIONS EMBEDDED SYSTEM PROJECTS Microcontrollers VLSI DSP Matlab Robotics ADVANCED ROBOTICS
More informationArtificial Neural Network Engine: Parallel and Parameterized Architecture Implemented in FPGA
Artificial Neural Network Engine: Parallel and Parameterized Architecture Implemented in FPGA Milene Barbosa Carvalho 1, Alexandre Marques Amaral 1, Luiz Eduardo da Silva Ramos 1,2, Carlos Augusto Paiva
More informationPerception and Perspective in Robotics
Perception and Perspective in Robotics Paul Fitzpatrick MIT CSAIL USA experimentation helps perception Rachel: We have got to find out if [ugly naked guy]'s alive. Monica: How are we going to do that?
More informationSalient features make a search easy
Chapter General discussion This thesis examined various aspects of haptic search. It consisted of three parts. In the first part, the saliency of movability and compliance were investigated. In the second
More informationBody Movement Analysis of Human-Robot Interaction
Body Movement Analysis of Human-Robot Interaction Takayuki Kanda, Hiroshi Ishiguro, Michita Imai, and Tetsuo Ono ATR Intelligent Robotics & Communication Laboratories 2-2-2 Hikaridai, Seika-cho, Soraku-gun,
More informationCognition & Robotics. EUCog - European Network for the Advancement of Artificial Cognitive Systems, Interaction and Robotics
Cognition & Robotics Recent debates in Cognitive Robotics bring about ways to seek a definitional connection between cognition and robotics, ponder upon the questions: EUCog - European Network for the
More informationSwarm Intelligence W7: Application of Machine- Learning Techniques to Automatic Control Design and Optimization
Swarm Intelligence W7: Application of Machine- Learning Techniques to Automatic Control Design and Optimization Learning to avoid obstacles Outline Problem encoding using GA and ANN Floreano and Mondada
More informationFuzzy Logic Based Robot Navigation In Uncertain Environments By Multisensor Integration
Proceedings of the 1994 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MF1 94) Las Vega, NV Oct. 2-5, 1994 Fuzzy Logic Based Robot Navigation In Uncertain
More informationEvolved Neurodynamics for Robot Control
Evolved Neurodynamics for Robot Control Frank Pasemann, Martin Hülse, Keyan Zahedi Fraunhofer Institute for Autonomous Intelligent Systems (AiS) Schloss Birlinghoven, D-53754 Sankt Augustin, Germany Abstract
More informationHigh-Level Programming for Industrial Robotics: using Gestures, Speech and Force Control
High-Level Programming for Industrial Robotics: using Gestures, Speech and Force Control Pedro Neto, J. Norberto Pires, Member, IEEE Abstract Today, most industrial robots are programmed using the typical
More informationObject Exploration Using a Three-Axis Tactile Sensing Information
Journal of Computer Science 7 (4): 499-504, 2011 ISSN 1549-3636 2011 Science Publications Object Exploration Using a Three-Axis Tactile Sensing Information 1,2 S.C. Abdullah, 1 Jiro Wada, 1 Masahiro Ohka
More informationTeam KMUTT: Team Description Paper
Team KMUTT: Team Description Paper Thavida Maneewarn, Xye, Pasan Kulvanit, Sathit Wanitchaikit, Panuvat Sinsaranon, Kawroong Saktaweekulkit, Nattapong Kaewlek Djitt Laowattana King Mongkut s University
More informationConcept and Architecture of a Centaur Robot
Concept and Architecture of a Centaur Robot Satoshi Tsuda, Yohsuke Oda, Kuniya Shinozaki, and Ryohei Nakatsu Kwansei Gakuin University, School of Science and Technology 2-1 Gakuen, Sanda, 669-1337 Japan
More informationMel Spectrum Analysis of Speech Recognition using Single Microphone
International Journal of Engineering Research in Electronics and Communication Mel Spectrum Analysis of Speech Recognition using Single Microphone [1] Lakshmi S.A, [2] Cholavendan M [1] PG Scholar, Sree
More informationImitation based Human-Robot Interaction -Roles of Joint Attention and Motion Prediction-
Proceedings of the 2004 IEEE International Workshop on Robot and Human Interactive Communication Kurashiki, Okayama Japan September 20-22,2004 Imitation based Human-Robot Interaction -Roles of Joint Attention
More informationImplicit Fitness Functions for Evolving a Drawing Robot
Implicit Fitness Functions for Evolving a Drawing Robot Jon Bird, Phil Husbands, Martin Perris, Bill Bigge and Paul Brown Centre for Computational Neuroscience and Robotics University of Sussex, Brighton,
More informationComplex Continuous Meaningful Humanoid Interaction: A Multi Sensory-Cue Based Approach
Complex Continuous Meaningful Humanoid Interaction: A Multi Sensory-Cue Based Approach Gordon Cheng Humanoid Interaction Laboratory Intelligent Systems Division Electrotechnical Laboratory Tsukuba, Ibaraki,
More informationIntent Imitation using Wearable Motion Capturing System with On-line Teaching of Task Attention
Intent Imitation using Wearable Motion Capturing System with On-line Teaching of Task Attention Tetsunari Inamura, Naoki Kojo, Tomoyuki Sonoda, Kazuyuki Sakamoto, Kei Okada and Masayuki Inaba Department
More informationDevelopment and Evaluation of a Centaur Robot
Development and Evaluation of a Centaur Robot 1 Satoshi Tsuda, 1 Kuniya Shinozaki, and 2 Ryohei Nakatsu 1 Kwansei Gakuin University, School of Science and Technology 2-1 Gakuen, Sanda, 669-1337 Japan {amy65823,
More informationRobot: icub This humanoid helps us study the brain
ProfileArticle Robot: icub This humanoid helps us study the brain For the complete profile with media resources, visit: http://education.nationalgeographic.org/news/robot-icub/ Program By Robohub Tuesday,
More informationJane Li. Assistant Professor Mechanical Engineering Department, Robotic Engineering Program Worcester Polytechnic Institute
Jane Li Assistant Professor Mechanical Engineering Department, Robotic Engineering Program Worcester Polytechnic Institute State one reason for investigating and building humanoid robot (4 pts) List two
More informationConcept and Architecture of a Centaur Robot
Concept and Architecture of a Centaur Robot Satoshi Tsuda, Yohsuke Oda, Kuniya Shinozaki, and Ryohei Nakatsu Kwansei Gakuin University, School of Science and Technology 2-1 Gakuen, Sanda, 669-1337 Japan
More informationSensing the Texture of Surfaces by Anthropomorphic Soft Fingertips with Multi-Modal Sensors
Sensing the Texture of Surfaces by Anthropomorphic Soft Fingertips with Multi-Modal Sensors Yasunori Tada, Koh Hosoda, Yusuke Yamasaki, and Minoru Asada Department of Adaptive Machine Systems, HANDAI Frontier
More informationBehaviour-Based Control. IAR Lecture 5 Barbara Webb
Behaviour-Based Control IAR Lecture 5 Barbara Webb Traditional sense-plan-act approach suggests a vertical (serial) task decomposition Sensors Actuators perception modelling planning task execution motor
More informationConstructivist Approach to Human-Robot Emotional Communication - Design of Evolutionary Function for WAMOEBA-3 -
Constructivist Approach to Human-Robot Emotional Communication - Design of Evolutionary Function for WAMOEBA-3 - Yuki SUGA, Hiroaki ARIE,Tetsuya OGATA, and Shigeki SUGANO Humanoid Robotics Institute (HRI),
More informationPhysical and Affective Interaction between Human and Mental Commit Robot
Proceedings of the 21 IEEE International Conference on Robotics & Automation Seoul, Korea May 21-26, 21 Physical and Affective Interaction between Human and Mental Commit Robot Takanori Shibata Kazuo Tanie
More informationImage Recognition for PCB Soldering Platform Controlled by Embedded Microchip Based on Hopfield Neural Network
436 JOURNAL OF COMPUTERS, VOL. 5, NO. 9, SEPTEMBER Image Recognition for PCB Soldering Platform Controlled by Embedded Microchip Based on Hopfield Neural Network Chung-Chi Wu Department of Electrical Engineering,
More informationSimulating development in a real robot
Simulating development in a real robot Gabriel Gómez, Max Lungarella, Peter Eggenberger Hotz, Kojiro Matsushita and Rolf Pfeifer Artificial Intelligence Laboratory Department of Information Technology,
More informationAffordance based Human Motion Synthesizing System
Affordance based Human Motion Synthesizing System H. Ishii, N. Ichiguchi, D. Komaki, H. Shimoda and H. Yoshikawa Graduate School of Energy Science Kyoto University Uji-shi, Kyoto, 611-0011, Japan Abstract
More informationToward Interactive Learning of Object Categories by a Robot: A Case Study with Container and Non-Container Objects
Toward Interactive Learning of Object Categories by a Robot: A Case Study with Container and Non-Container Objects Shane Griffith, Jivko Sinapov, Matthew Miller and Alexander Stoytchev Developmental Robotics
More informationReading human relationships from their interaction with an interactive humanoid robot
Reading human relationships from their interaction with an interactive humanoid robot Takayuki Kanda 1 and Hiroshi Ishiguro 1,2 1 ATR, Intelligent Robotics and Communication Laboratories 2-2-2 Hikaridai
More informationNavigation of Transport Mobile Robot in Bionic Assembly System
Navigation of Transport Mobile obot in Bionic ssembly System leksandar Lazinica Intelligent Manufacturing Systems IFT Karlsplatz 13/311, -1040 Vienna Tel : +43-1-58801-311141 Fax :+43-1-58801-31199 e-mail
More informationIN MOST human robot coordination systems that have
IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 54, NO. 2, APRIL 2007 699 Dance Step Estimation Method Based on HMM for Dance Partner Robot Takahiro Takeda, Student Member, IEEE, Yasuhisa Hirata, Member,
More informationNeural Models for Multi-Sensor Integration in Robotics
Department of Informatics Intelligent Robotics WS 2016/17 Neural Models for Multi-Sensor Integration in Robotics Josip Josifovski 4josifov@informatik.uni-hamburg.de Outline Multi-sensor Integration: Neurally
More informationHAND-SHAPED INTERFACE FOR INTUITIVE HUMAN- ROBOT COMMUNICATION THROUGH HAPTIC MEDIA
HAND-SHAPED INTERFACE FOR INTUITIVE HUMAN- ROBOT COMMUNICATION THROUGH HAPTIC MEDIA RIKU HIKIJI AND SHUJI HASHIMOTO Department of Applied Physics, School of Science and Engineering, Waseda University 3-4-1
More informationRobots Learning from Robots: A proof of Concept Study for Co-Manipulation Tasks. Luka Peternel and Arash Ajoudani Presented by Halishia Chugani
Robots Learning from Robots: A proof of Concept Study for Co-Manipulation Tasks Luka Peternel and Arash Ajoudani Presented by Halishia Chugani Robots learning from humans 1. Robots learn from humans 2.
More informationBooklet of teaching units
International Master Program in Mechatronic Systems for Rehabilitation Booklet of teaching units Third semester (M2 S1) Master Sciences de l Ingénieur Université Pierre et Marie Curie Paris 6 Boite 164,
More informationGraphical Simulation and High-Level Control of Humanoid Robots
In Proc. 2000 IEEE RSJ Int l Conf. on Intelligent Robots and Systems (IROS 2000) Graphical Simulation and High-Level Control of Humanoid Robots James J. Kuffner, Jr. Satoshi Kagami Masayuki Inaba Hirochika
More informationIncorporating a Connectionist Vision Module into a Fuzzy, Behavior-Based Robot Controller
From:MAICS-97 Proceedings. Copyright 1997, AAAI (www.aaai.org). All rights reserved. Incorporating a Connectionist Vision Module into a Fuzzy, Behavior-Based Robot Controller Douglas S. Blank and J. Oliver
More informationMINE 432 Industrial Automation and Robotics
MINE 432 Industrial Automation and Robotics Part 3, Lecture 5 Overview of Artificial Neural Networks A. Farzanegan (Visiting Associate Professor) Fall 2014 Norman B. Keevil Institute of Mining Engineering
More informationLearning haptic representation of objects
Learning haptic representation of objects Lorenzo Natale, Giorgio Metta and Giulio Sandini LIRA-Lab, DIST University of Genoa viale Causa 13, 16145 Genova, Italy Email: nat, pasa, sandini @dist.unige.it
More informationRobot-Cub Outline. Robotcub 1 st Open Day Genova July 14, 2005
Robot-Cub Outline Robotcub 1 st Open Day Genova July 14, 2005 Main Keywords Cognition (manipulation) Human Development Embodiment Community Building Two Goals or a two-fold Goal? Create a physical platform
More informationThe Tele-operation of the Humanoid Robot -Whole Body Operation for Humanoid Robots in Contact with Environment-
The Tele-operation of the Humanoid Robot -Whole Body Operation for Humanoid Robots in Contact with Environment- Hitoshi Hasunuma, Kensuke Harada, and Hirohisa Hirukawa System Technology Development Center,
More informationSeparation and Recognition of multiple sound source using Pulsed Neuron Model
Separation and Recognition of multiple sound source using Pulsed Neuron Model Kaname Iwasa, Hideaki Inoue, Mauricio Kugler, Susumu Kuroyanagi, Akira Iwata Nagoya Institute of Technology, Gokiso-cho, Showa-ku,
More informationロボティクスと深層学習. Robotics and Deep Learning. Keywords: robotics, deep learning, multimodal learning, end to end learning, sequence to sequence learning.
210 31 2 2016 3 ニューラルネットワーク研究のフロンティア ロボティクスと深層学習 Robotics and Deep Learning 尾形哲也 Tetsuya Ogata Waseda University. ogata@waseda.jp, http://ogata-lab.jp/ Keywords: robotics, deep learning, multimodal learning,
More informationLearning to Recognize Human Action Sequences
Learning to Recognize Human Action Sequences Chen Yu and Dana H. Ballard Department of Computer Science University of Rochester Rochester, NY, 14627 yu,dana @cs.rochester.edu Abstract One of the major
More informationThe Basic Kak Neural Network with Complex Inputs
The Basic Kak Neural Network with Complex Inputs Pritam Rajagopal The Kak family of neural networks [3-6,2] is able to learn patterns quickly, and this speed of learning can be a decisive advantage over
More informationAdaptive Action Selection without Explicit Communication for Multi-robot Box-pushing
Adaptive Action Selection without Explicit Communication for Multi-robot Box-pushing Seiji Yamada Jun ya Saito CISS, IGSSE, Tokyo Institute of Technology 4259 Nagatsuta, Midori, Yokohama 226-8502, JAPAN
More informationSimultaneous Recognition of Speech Commands by a Robot using a Small Microphone Array
2012 2nd International Conference on Computer Design and Engineering (ICCDE 2012) IPCSIT vol. 49 (2012) (2012) IACSIT Press, Singapore DOI: 10.7763/IPCSIT.2012.V49.14 Simultaneous Recognition of Speech
More informationWIRELESS VOICE CONTROLLED ROBOTICS ARM
WIRELESS VOICE CONTROLLED ROBOTICS ARM 1 R.ASWINBALAJI, 2 A.ARUNRAJA 1 BE ECE,SRI RAMAKRISHNA ENGINEERING COLLEGE,COIMBATORE,INDIA 2 ME EST,SRI RAMAKRISHNA ENGINEERING COLLEGE,COIMBATORE,INDIA aswinbalaji94@gmail.com
More informationAutonomous Vehicle Speaker Verification System
Autonomous Vehicle Speaker Verification System Functional Requirements List and Performance Specifications Aaron Pfalzgraf Christopher Sullivan Project Advisor: Dr. Jose Sanchez 4 November 2013 AVSVS 2
More informationUsing Vision to Improve Sound Source Separation
Using Vision to Improve Sound Source Separation Yukiko Nakagawa y, Hiroshi G. Okuno y, and Hiroaki Kitano yz ykitano Symbiotic Systems Project ERATO, Japan Science and Technology Corp. Mansion 31 Suite
More informationSpeech Synthesis using Mel-Cepstral Coefficient Feature
Speech Synthesis using Mel-Cepstral Coefficient Feature By Lu Wang Senior Thesis in Electrical Engineering University of Illinois at Urbana-Champaign Advisor: Professor Mark Hasegawa-Johnson May 2018 Abstract
More information