A Data Collection of Infants Visual, Physical, and Behavioral Reactions to a Small Humanoid Robot
Rebecca Funke 1, Naomi T. Fitter 1, Joyce T. de Armendi 2, Nina S. Bradley 2, Barbara Sargent 2, Maja J. Mataric 1, and Beth A. Smith 2

Abstract— Exploratory movements during infancy help typically developing infants learn the connections between their own actions and desired outcomes. In contrast, infants who are at risk for developmental delays often have neuromotor impairments that negatively influence their motivation for movement. The goal of this work is to expand our understanding of infant responses to non-contact interactions with a small humanoid robot. In the initial work presented here, we focus on understanding how this type of robotic system might help to encourage typically developing infant motor exploration. A data collection with N = 9 infants compared infant reactions to four robot conditions: saying "yay" with arm movement, saying "kick" with leg movement, saying "yay" with no movement, and saying "kick" with no movement. The results indicate that infants visually gazed at the robot while it moved, looking specifically at the part of the robot that was moving. Infants tended to move more during periods of robot inactivity. When the robot was moving, the infants also seemed more alert. Overall, these results can inform future studies of how to develop interventions to encourage movement practice by typically developing and at-risk infants.

I. INTRODUCTION

Typically developing infants engage in exploratory movements that help them learn how their own actions are connected to different outcomes, from making a caregiver smile to grasping a favorite toy. Through this perception-action learning process, infants learn to control their bodies and interact with the environment. In contrast to typically developing (TD) infants, infants at risk (AR) for developmental delays often have neuromotor impairments involving strength, proprioception, and coordination.
These challenges can lead to greater difficulty with movement and decreased motivation for motor babbling. The goal of this work is to expand our understanding of infant responses to non-contact interactions with a small humanoid robot. Initially, these efforts focus on how this type of robotic system might help to encourage TD infant movement. This foundational work will also equip us with appropriate strategies for future interventions with the more vulnerable population of AR infants. A recent estimate determined that approximately 9% of all infants in the United States are AR and could benefit from early intervention services to address motor, cognitive, and/or social development [1]. Because of connections between these different types of development, motor, cognitive, and social domains can all be positively impacted by an intervention in just one of these areas [2]. The current standard of care for early intervention practice is to provide infrequent, low-intensity movement therapy or no intervention in infancy [3], [4]. However, early, intense, and targeted therapy intervention has the potential to improve neurodevelopmental structure and function [5]. Despite this potential gain, it can be challenging to find feasible and resource-efficient ways to deliver this type of intervention to AR infants. Non-contact infant-robot interactions that provide demonstrations and feedback are one possible solution for encouraging infant motor exploration. Before working with AR infants, we first aim to understand TD infant responses to non-contact interactions with a humanoid robot.

1 Interaction Lab, Department of Computer Science, University of Southern California, Los Angeles, CA 90089, USA {rfunke, nfitter, mataric}@usc.edu
2 Division of Biokinesiology and Physical Therapy, University of Southern California, Los Angeles, CA 90089, USA {jdearmen, nbradley, bsargent, beth.smith}@usc.edu
This paper summarizes related work (Section II), describes our methods for conducting a data collection on TD infant interactions with a small humanoid robot (Section III), presents the results of the data collection (Section IV), and discusses our findings (Section V).

II. RELATED WORK

Overall, the work presented in this paper aims to explore how TD infants respond to non-contact interactions with a small humanoid robot. Because of the humanoid form of the robot, one goal of this work is to determine if infants might be inclined to imitate the robot's motions. Related literature also suggests that robot motion might capture the attention of infants and provide contingent rewards for motion exploration. The related work discussed in the following sections motivates the design of the data collection, demonstrates the potential of socially assistive robots, and explains possible future applications of this work.

A. Infant Motor Learning and Adaptation

Infants acquire motor skills through a dynamic process of exploration and discovery during which the spontaneous movements of infancy modulate into task-specific actions such as reaching, crawling, and walking [6], [7]. The process by which task-specific action emerges from spontaneous movement is a fundamental topic of study in infant development. One previously studied paradigm of infant behavior is infant replication of demonstrated actions. This replication qualifies as imitation if the infant repeats the action more with the demonstration than without it [8].
Although some debate exists on when and why imitation behavior emerges in humans [9], several previous works suggest that infants use imitation as one mechanism for the acquisition of new behaviors, skills, and actions [10], [11], [12]. It is not yet known if infants perceive human actions differently from humanoid robot actions resembling human behavior. Infants appear more likely to initiate certain actions toward objects if they are first modeled by humans. For example, in one object-directed action study, six-month-old infants who observed a researcher removing and replacing a mitten on a puppet were more likely to perform this action than infants who did not observe this behavior [10]. On the other hand, infants appear unlikely to imitate a select object-directed action if it is performed by one object on another object. For example, in a study of nine-month-old infants, participants who watched a claw grasping and moving an object did not imitate this action, but infants who watched a human demonstration did imitate this action [13]. In this initial data collection, we aim to understand infant responses to robot sound and motion. After we understand infant imitation tendencies in this scenario, we may also be able to design appropriate robot-based feedback. Contingent feedback is one technique used to study the emergence of task-specific learning from spontaneous movement. A mobile paradigm is one example of contingent feedback in which specific arm or leg movements of an infant are reinforced with sound and motion from an overhead mobile [14]. Historically, this technique was used to study learning and memory in early infancy [15], but we may be able to leverage this paradigm to reward and encourage particular types of infant exploratory motion.

B. Robot Design for Intervention

Human-robot interaction is a rapidly expanding research area.
Robots have demonstrated capabilities to assist people in applications ranging from socially-motivated physical therapy for stroke survivors [16] to behavioral therapy for children with autism spectrum disorder [17]. Robots present opportunities to assist people as broadly replicable platforms for effective, personalized, and socially engaging intervention. Past work has also shown that physically present robots can persuade and motivate people more than on-screen agents do [18], [19]. Behavioral cues produced by objects with humanoid morphological features (e.g., a face and eyes) appear to be salient stimuli for motivating infant action [20]. Abstract behavioral cues such as biological motion [21] and self-propulsion [22] trigger infants' innate detection of object-directed actions. These past results, in combination with the knowledge that infants of a certain age tend to imitate nearby people, led to our decision to use a humanoid robot for this work. This research aims to leverage a small humanoid robot's embodiment to engage infants in imitating specific physical movements. Personalized models appropriate for each infant participant will be developed based on findings from this initial work.

C. Infant-Robot Intervention

Motion demonstrations from a humanoid robot have several unique advantages for studying infant motion adaptation. Since infants attend preferentially to faces [23], interactive humanoid robots may capture and maintain the attention of infants more than inanimate toys. Past work has shown that children from six to fourteen months old attend to a humanoid robot for longer than to an android or a person [24]. Furthermore, small humanoid robots can produce motions similar to those of infants. This ability may help the robot to inspire infants to imitate desired patterns of motion. Since each infant is unique in their development, interactions should be personalized to each user.
The benefits of personalized one-on-one instruction are well supported in psychology research [25], and these results have been replicated in human-computer interaction work [26] and human-robot interaction studies [19]. The data collected by our infant-robot system will enable us to achieve real-time personalization and adaptation of robot behaviors. The infants studied in this data collection can help us to understand how infant responses and behaviors might vary across system users. Post-hoc analyses of these infants' behaviors will help us to equip future iterations of the infant-robot system with appropriate personalization and adaptation capabilities.

III. METHODS

To better understand infant responses to a small humanoid robot, we conducted an exploratory data collection. We brought a robot and an infant together in a laboratory setting, varied the robot's actions, and tracked the following aspects of the infant's state: visual responses (i.e., eye gaze), physical responses (i.e., motion measured by inertial sensors), and behavioral responses (i.e., infant alertness level). For this work, we chose to use the Aldebaran Nao robot shown in Fig. 1 because of its humanoid form.

Fig. 1. The Nao robot used in this data collection.

During the data collection, each infant sat across from the Nao robot in
a small white room. Participants interacted with the robot as described in the following sections. The data collection procedure was approved by the University of Southern California Institutional Review Board under protocol #HS.

TABLE I
INFORMATION ABOUT THE DEMOGRAPHICS AND OPENING ASSESSMENT MEASUREMENTS FOR THE SIX INFANTS WHO SUCCESSFULLY COMPLETED THE EXPERIMENT.

Infant | Gender | Age (months, days) | Gestation Age at Birth (weeks) | Healthy at Delivery? | AIMS Score | AIMS Percentile | Head Circ. (cm) | Weight (kg) | Height (cm)
TD2 | M | 4m 13d | 38 | No | | | | |
TD3 | M | 4m 29d | 41 | Yes | | | | |
TD4 | M | 4m 24d | 40 | Yes | | | | |
TD6 | M | 4m 26d | 40 | Yes | | | | |
TD7 | F | 5m 11d | 40 | Yes | | | | |
TD9 | M | 2m 28d | 40 | Yes | | | | |

Fig. 2. The experimental setup in which an infant observes and reacts to a Nao robot. The labeled sensors capture information about the infant-robot interaction for post-hoc analyses. A camcorder and Kinect behind the robot (not shown in image) provide additional infant behavior data.

A. Participants

We recruited nine TD infants between the ages of 2 months 28 days and 5 months 11 days. The infants were recruited from the Greater Los Angeles Area. The data presented in this paper exclude three infants: two were excluded due to excessive crying (nonstop for more than two minutes) and one was excluded for having an Alberta Infant Motor Scales (AIMS) score below the tenth percentile. Table I displays the age, size, and development information for each infant included in the data analysis. Each family received $20 of compensation for participating in the data collection.

B. Procedure

When the infant first entered the experiment space, the child's parent or legal guardian received a written overview and verbal explanation of the procedures. This caregiver signed an informed consent form prior to their infant's participation. A researcher administered the AIMS assessment to quantify infant motor development status [27] and measured the weight, length, and head circumference of the infant.
We affixed one Opal inertial movement sensor [28] to each infant leg and arm using custom-made leg warmers with pockets. Past work has validated that these sensors can accurately record the quantity of infant limb movements [29]. The infant also wore a head-mounted eye tracker that allowed us to analyze their visual gaze. Infants came to the lab for a single session that lasted approximately one hour. At the start of the infant-robot interaction, the robot remained still for ten seconds while baseline visual, physical, and behavioral measurements were recorded from the infant. Figure 2 illustrates the setup of this interaction. For the next eight minutes, the infant engaged with the Nao robot in the procedure described by Fig. 3. The caregiver was seated next to the infant in the experiment setup, but they were asked to refrain from socially interacting with the infant during the data collection procedure.

C. Conditions

All infants experienced four interaction conditions during which the Nao robot behaved in each of the following ways:
1) raising arms and saying "yay"
2) kicking legs and saying "kick"
3) stationary and saying "yay"
4) stationary and saying "kick"
The LED lights in the robot's eyes flashed during each robot behavior (movement and/or speech). The button on the robot's chest was dimly lit throughout the full eight-minute experiment procedure. In this initial data collection, the robot's actions were preprogrammed and not contingent on the infant's actions so that we might gain some initial understanding of how infants respond to robot activity. As illustrated in Fig. 3, during conditions 1, 3, and 4, robot behavior lasted six seconds and inactivity lasted nine seconds per interaction cycle. During condition 2, robot behavior lasted seven seconds and inactivity lasted eight seconds.
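This cycle and block structure can be sketched as a simple schedule generator. The following is an illustrative reconstruction only; the function names, data representation, and random seeding are our own assumptions, not the authors' actual control software:

```python
import random

# Condition timing as described in the text; labels are shorthand.
CONDITIONS = {
    1: {"label": "arms + 'yay'",  "active_s": 6, "inactive_s": 9},
    2: {"label": "legs + 'kick'", "active_s": 7, "inactive_s": 8},
    3: {"label": "still + 'yay'", "active_s": 6, "inactive_s": 9},
    4: {"label": "still + 'kick'", "active_s": 6, "inactive_s": 9},
}
CYCLES_PER_TRIAL = 4  # four 15 s cycles make one 1 min condition trial

def build_session(seed=None):
    """Return (condition, phase, duration_s) steps for one 8 min session:
    two 4 min blocks, each presenting all four conditions in random order,
    each condition as four active/inactive cycles."""
    rng = random.Random(seed)
    steps = []
    for _ in range(2):  # two blocks
        for cond in rng.sample(list(CONDITIONS), k=4):
            spec = CONDITIONS[cond]
            for _ in range(CYCLES_PER_TRIAL):
                steps.append((cond, "active", spec["active_s"]))
                steps.append((cond, "inactive", spec["inactive_s"]))
    return steps

schedule = build_session(seed=0)
assert sum(d for _, _, d in schedule) == 8 * 60  # full session is 8 min
```

Every cycle sums to fifteen seconds regardless of condition, so the total session length is fixed even though the active/inactive split differs for condition 2.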
Thus, the duration of a full cycle of the active and inactive states together for any given condition was fifteen seconds; this sequence was repeated four times to make up each minute-long interaction condition trial. The experiment was composed of two four-minute blocks during which each condition was presented in a random counterbalanced order.

D. Data Collection

In addition to the recordings captured by the inertial movement sensors and head-mounted eye tracker, an RGB camera and a Microsoft Kinect One sensor (both positioned behind the robot) captured information about the infant's body and face throughout child-Nao interactions. A trained video coder annotated the visual gaze of the infant using the eye tracker data. There were seven possible gaze annotations: robot face, robot arms, robot legs, robot trunk, not robot, eyes
closed, and eye movement. A different trained video coder completed annotations of the infant's behavioral state using footage of the infant from the camera behind the robot. This rater used a five-point arousal scale with the following anchor points: drowsy, alert and inactive, alert and active, fussy, and crying. This coding system is consistent with previous infant behavior research [30].

Fig. 3. Graphic explaining the flow of the experiment and the makeup of trials within each experiment session. The full study session lasted eight minutes and was divided into two four-minute blocks. Each condition occurred once during a block in a random counterbalanced order. Condition behaviors repeated in fifteen-second cycles of robot activity (6-7 seconds) and inactivity (8-9 seconds), repeated four times to make up a condition trial.

IV. RESULTS

The goal of this data collection was to determine how a humanoid robot can elicit visual, physical, and behavioral responses from an infant in preparation for future interventions.

A. Visual Responses

Infants spent a larger percentage of their time looking at the robot when the robot was active (74.69%) compared to when the robot was inactive (53.13%). Figure 4 displays the breakdown of the infants' average attention on and off of the robot for the active and inactive periods of each condition. Overall, during the active state of either condition involving robot motion (moving arms or legs in conditions 1 and 2), the infants exhibited more visual attention on the robot compared to when the robot was active in the non-moving conditions (saying "yay" or "kick" in conditions 3 and 4) and all inactive periods. The infants spent more time observing the robot during and after robot motion than during conditions when the robot was not moving.
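Percentages like these follow directly from the coded gaze annotations: any of the four on-robot labels counts toward time on the robot. The sketch below is a minimal illustration, assuming a simple (label, duration) interval format rather than the coders' actual annotation-tool output:

```python
# Gaze labels from the coding scheme; the first four count as "on robot".
ON_ROBOT = {"robot face", "robot arms", "robot legs", "robot trunk"}

def percent_on_robot(intervals):
    """intervals: list of (label, duration_s) gaze annotations for one phase
    (e.g., all 'active' periods of one condition). Returns the percent of
    annotated time spent looking at any part of the robot."""
    total = sum(d for _, d in intervals)
    on = sum(d for label, d in intervals if label in ON_ROBOT)
    return 100.0 * on / total if total else 0.0

example = [("robot face", 3.0), ("robot legs", 1.5),
           ("not robot", 1.0), ("eyes closed", 0.5)]
# 4.5 of 6.0 annotated seconds fall on the robot
```

Averaging this quantity per infant and then across infants, separately for active and inactive phases of each condition, yields figures in the style of Fig. 4.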
Figure 5 illustrates that infant gaze was directed to the specific limb that was moving; when the legs moved, the infant was drawn to watch the legs, and arm motion promoted a similar tendency. After the motion of the respective limbs stopped, infant visual focus continued to be directed to the limbs that had been moving. The infants' visual attention was more focused on the robot's face when the condition was purely verbal, and attention on the face continued during the inactive state of the robot in these conditions.

B. Physical Responses

We found that the infants tended to move more when the robot was immobile or inactive. We detected leg movements using an algorithm that has been validated for typically developing infants aged one to twelve months old using the same Opal sensor [29]. Figure 6 illustrates infant motion during the active and inactive phases of each condition. We did not find a relationship between which limb the robot moved and which limb the infant moved. Infant arm movement data were not analyzed; leg movement activity was used to represent overall physical activity, as infants tended to move all limbs concurrently or not at all.

Fig. 4. Percent of time that infants focused on any part of the robot during its activity and inactivity over the discussed conditions. For the rest of the time, the infant was blinking, moving their eyes, or focusing on something other than the robot. The error bars represent one standard deviation.

C. Behavioral Responses

Using the collected camera data, a member of the research team re-watched the experiment footage and conducted qualitative and quantitative analyses of the infant's behavioral state during the data collection. Typically, infants were more likely to be alert and engaged when the robot was active compared to when the robot was not active.
More specifically, infants maintained an alert and focused behavioral state for longer when the robot was active with movements compared to when the robot was active but immobile. Infants tended to become fussier during the intervals of robot inactivity. It is not clear if this was triggered by the cessation of robot behavior. It is possible, for example, that the infant was
not comfortable in the chair used in the experiment setup. When the robot stopped moving, the infants also started to move their legs more. With the exception of the two excluded infants, the participating infants spent little time crying. Overall, the infants were more alert when the robot was active (64.50%) compared to when the robot was inactive (47.50%). Figure 7 shows the average percent of time the infant was alert during each robot condition.

Fig. 5. The average amount of time the infants directed their gaze to a specific body region of the robot during each 6-7 seconds of robot behavior and the following 8-9 seconds of robot inactivity for each of the behavioral conditions. The error bars represent one standard deviation.

Fig. 6. The average number of right- and left-legged movements per second during the active and inactive phase of each condition. The error bars represent one standard deviation.

Fig. 7. Visualization of the percent of time infants spent being alert while the robot was active and inactive in each condition. The error bars represent one standard deviation.

V. DISCUSSION

In this work, we investigated the visual, physical, and behavioral responses of infants to a humanoid Nao robot. Understanding infant reactions to this type of robot is essential for designing future robot-mediated therapy interventions to encourage AR infants to make exploratory movements. One of our hypotheses was that infants would be inclined to imitate the small humanoid robot's motions. We also believed the robot motion might capture the attention of infants. In future work, this focus may help us to provide infants with contingent rewards for motion exploration. We found that infants directed their gaze to the Nao when it was active. Participants' eyes focused on the part of the robot that was moving. This visual tendency could promote both the success of robot motion demonstrations and the future strategy of robot motion as a contingent reward.
In future studies, it will be important to design interactions that help us discern whether infant gaze varies in this way because of salient events in the environment or because of the robot's form specifically. Infants tended to move more during periods of robot inactivity. This may mean that future intervention interactions should include pauses in robot activity to promote infant movement. We did not find a correlation between which limb the robot moved and which limb the infant moved. Thus, the studied duration of infant-robot interaction does not seem appropriate for an imitation paradigm in infants this young. It is possible that infants would adjust their movements to match the robot's demonstration during a longer interaction with a clear contingent reward. The infants' behavioral state generally appeared more focused when the robot was active. This is important because we hope to use the humanoid robot to model new behaviors or movements for the infant to repeat. Infants tended to be
physically still and visually engaged when the robot was moving, and when the robot stopped moving, the infants started moving their legs more. We hypothesize that they may have been either attempting to mimic the robot or expressing a desire to see the robot move again. This observation provides support for using the robot in a contingent manner to encourage infant movement. As this data collection involved very young infants, the session duration and number of conditions were constrained to minimize the possibility of participant stress or fatigue. We were also limited in the number of conditions included in the experimental design; additional pairings of present and absent LED lights, speech, and motion would help us determine what types of robot action are most salient for infants. Smaller and younger infants seemed to experience discomfort in the selected chair, which could result in crying and fussing. We will consider other chair options in future studies. We may also need to reconsider the appropriate spacing between the robot and infant. The moderate distance between the infant and robot may have caused the infant to lose focus and look around the room at other lights and objects. In the future, recruiting infants aged six months and older will allow us to test whether children imitate robot movements at a developmental time period during which this behavior typically begins to emerge. These results offer a first step into the exploration of infant-robot interactions, particularly focusing on two- to five-month-old infants' perceptions of a humanoid robot. Specifically, we found that infants direct their gaze toward the robot when it is active. Infants are more alert during robot activity. Infants also seem motivated to move during periods of robot inactivity. In the future, we will apply these findings to develop robot behavior that can motivate infant movement.
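The contingent-reward strategy proposed above could, in a future system, pair simple wearable-sensor movement detection with a robot reward. The following sketch is purely hypothetical: the threshold, the data format, and the trigger_robot_reward callback are invented for illustration, and the robot in this study was preprogrammed and non-contingent:

```python
import math

MOVE_THRESHOLD = 1.0  # m/s^2, gravity-removed magnitude (assumed value)

def detect_movement(window):
    """window: recent (ax, ay, az) accelerometer samples from a leg sensor.
    Returns True if any sample's magnitude crosses the movement threshold."""
    return any(math.sqrt(ax * ax + ay * ay + az * az) >= MOVE_THRESHOLD
               for ax, ay, az in window)

def contingent_loop(sensor_windows, trigger_robot_reward):
    """For each incoming sensor window, fire the robot reward (e.g., a leg
    demonstration with speech) when infant leg movement is detected.
    Returns the number of rewards delivered."""
    rewards = 0
    for window in sensor_windows:
        if detect_movement(window):
            trigger_robot_reward()
            rewards += 1
    return rewards
```

This mirrors the mobile paradigm [14] in structure: a specific infant movement is reinforced by sound and motion, here supplied by the robot rather than an overhead mobile.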
ACKNOWLEDGMENT

The authors acknowledge Elizabeth Cha for her assistance and advice in preparing this paper and Edward Kaszubski and Jeong-ah Kim for their contributions to the data collection.

REFERENCES

[1] S. A. Rosenberg, C. C. Robinson, E. F. Shaw, and M. C. Ellison, "Part C early intervention for infants and toddlers: Percentage eligible versus served," Pediatrics, vol. 131, no. 1.
[2] M. A. Lobo and J. C. Galloway, "Assessment and stability of early learning abilities in preterm and full-term infants across the first two years of life," Research in Developmental Disabilities, vol. 34, no. 5.
[3] G. Roberts, K. Howard, A. J. Spittle, N. C. Brown, P. J. Anderson, and L. W. Doyle, "Rates of early intervention services in very preterm children with developmental disabilities at age 2 years," Journal of Paediatrics and Child Health, vol. 44, no. 5.
[4] B. G. Tang, H. M. Feldman, L. C. Huffman, K. J. Kagawa, and J. B. Gould, "Missed opportunities in the referral of high-risk infants to early intervention," Pediatrics.
[5] R. L. Holt and M. A. Mikati, "Care for child development: Basic science rationale and effects of interventions," Pediatric Neurology, vol. 44, no. 4.
[6] E. J. Gibson and A. D. Pick, An Ecological Approach to Perceptual Learning and Development. Oxford University Press, USA.
[7] E. Thelen and L. Smith, A Dynamic Systems Approach to the Development of Cognition and Action. Cambridge, MA: The MIT Press.
[8] A. N. Meltzoff, "Infant imitation after a 1-week delay: Long-term memory for novel acts and multiple stimuli," Developmental Psychology, vol. 24, no. 4, p. 470.
[9] S. S. Jones, "The development of imitation in infancy," Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 364, no. 1528.
[10] R. Barr, A. Dowden, and H. Hayne, "Developmental changes in deferred imitation by 6- to 24-month-old infants," Infant Behavior and Development, vol. 19, no. 2.
[11] R. Barr and H. Hayne, "It's not what you know, it's who you know: Older siblings facilitate imitation during infancy," International Journal of Early Years Education, vol. 11, no. 1, pp. 7-21.
[12] H. Hayne, J. Boniface, and R. Barr, "The development of declarative memory in human infants: Age-related changes in deferred imitation," Behavioral Neuroscience, vol. 114, no. 1, p. 77.
[13] T. Hofer, P. Hauf, and G. Aschersleben, "Infants' perception of goal-directed actions performed by a mechanical device," Infant Behavior and Development, vol. 28, no. 4.
[14] C. K. Rovee-Collier and M. J. Gekoski, "The economics of infancy: A review of conjugate reinforcement," in Advances in Child Development and Behavior. Elsevier, 1979, vol. 13.
[15] R. Barr, C. Rovee-Collier, and A. Learmonth, "Potentiation in young infants: The origin of the prior knowledge effect?" Memory & Cognition, vol. 39, no. 4.
[16] M. J. Matarić, J. Eriksson, D. J. Feil-Seifer, and C. J. Winstein, "Socially assistive robotics for post-stroke rehabilitation," Journal of NeuroEngineering and Rehabilitation, vol. 4, no. 1, p. 5.
[17] J. Greczek, E. Kaszubski, A. Atrash, and M. Matarić, "Graded cueing feedback in robot-mediated imitation practice for children with autism spectrum disorders," in IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2014.
[18] W. A. Bainbridge, J. Hart, E. S. Kim, and B. Scassellati, "The effect of presence on human-robot interaction," in IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2008.
[19] D. Leyzberg, S. Spaulding, M. Toneva, and B. Scassellati, "The physical presence of a robot tutor increases cognitive learning gains," in Proceedings of the Annual Meeting of the Cognitive Science Society, vol. 34, no. 34.
[20] S. Carey and E. Spelke, "Domain-specific knowledge and conceptual change," Mapping the Mind: Domain Specificity in Cognition and Culture, vol. 169, p. 200.
[21] S. Baron-Cohen, Mindblindness: An Essay on Autism and Theory of Mind. MIT Press.
[22] A. M. Leslie, "ToMM, ToBy, and agency: Core architecture and domain specificity," Mapping the Mind: Domain Specificity in Cognition and Culture.
[23] M. C. Frank, E. Vul, and S. P. Johnson, "Development of infants' attention to faces during the first year," Cognition, vol. 110, no. 2.
[24] G. Matsuda, H. Ishiguro, and K. Hiraki, "Infant discrimination of humanoid robots," Frontiers in Psychology, vol. 6, p. 1397.
[25] B. S. Bloom, "The 2 sigma problem: The search for methods of group instruction as effective as one-to-one tutoring," Educational Researcher, vol. 13, no. 6, pp. 4-16.
[26] A. Corbett, "Cognitive computer tutors: Solving the two-sigma problem," in International Conference on User Modeling. Springer, 2001.
[27] M. C. Piper, J. Darrah, T. O. Maguire, and L. Redfern, Motor Assessment of the Developing Infant. Philadelphia: Saunders, 1994, vol. 1.
[28] APDM Wearable Technologies, Portland, OR, USA, "Opals."
[29] B. A. Smith, I. A. Trujillo-Priego, C. J. Lane, J. M. Finley, and F. B. Horak, "Daily quantity of infant leg movement: Wearable sensor algorithm and relationship to walking onset," Sensors, vol. 15, no. 8.
[30] B. Sargent, N. Schweighofer, M. Kubo, and L. Fetters, "Infant exploratory learning: Influence on leg joint coordination," PLoS One, vol. 9, no. 3, p. e91500, 2014.
MasterSeminar Machine Learning in Robot Assisted Therapy (RAT) M.Sc. Sina Shafaei http://www6.in.tum.de/ Shafaei@in.tum.de Office 03.07.057 SS 2018 Chair of Robotics, Artificial Intelligence and Embedded
More informationA SURVEY OF SOCIALLY INTERACTIVE ROBOTS
A SURVEY OF SOCIALLY INTERACTIVE ROBOTS Terrence Fong, Illah Nourbakhsh, Kerstin Dautenhahn Presented By: Mehwish Alam INTRODUCTION History of Social Robots Social Robots Socially Interactive Robots Why
More informationJane Li. Assistant Professor Mechanical Engineering Department, Robotic Engineering Program Worcester Polytechnic Institute
Jane Li Assistant Professor Mechanical Engineering Department, Robotic Engineering Program Worcester Polytechnic Institute Use an example to explain what is admittance control? You may refer to exoskeleton
More informationEvaluation of a Tricycle-style Teleoperational Interface for Children: a Comparative Experiment with a Video Game Controller
2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication. September 9-13, 2012. Paris, France. Evaluation of a Tricycle-style Teleoperational Interface for Children:
More informationArbitrating Multimodal Outputs: Using Ambient Displays as Interruptions
Arbitrating Multimodal Outputs: Using Ambient Displays as Interruptions Ernesto Arroyo MIT Media Laboratory 20 Ames Street E15-313 Cambridge, MA 02139 USA earroyo@media.mit.edu Ted Selker MIT Media Laboratory
More informationEvolving Robot Empathy through the Generation of Artificial Pain in an Adaptive Self-Awareness Framework for Human-Robot Collaborative Tasks
Evolving Robot Empathy through the Generation of Artificial Pain in an Adaptive Self-Awareness Framework for Human-Robot Collaborative Tasks Muh Anshar Faculty of Engineering and Information Technology
More informationInvited Speaker Biographies
Preface As Artificial Intelligence (AI) research becomes more intertwined with other research domains, the evaluation of systems designed for humanmachine interaction becomes more critical. The design
More informationRobot: icub This humanoid helps us study the brain
ProfileArticle Robot: icub This humanoid helps us study the brain For the complete profile with media resources, visit: http://education.nationalgeographic.org/news/robot-icub/ Program By Robohub Tuesday,
More informationDesigning Toys That Come Alive: Curious Robots for Creative Play
Designing Toys That Come Alive: Curious Robots for Creative Play Kathryn Merrick School of Information Technologies and Electrical Engineering University of New South Wales, Australian Defence Force Academy
More informationA Robust Neural Robot Navigation Using a Combination of Deliberative and Reactive Control Architectures
A Robust Neural Robot Navigation Using a Combination of Deliberative and Reactive Control Architectures D.M. Rojas Castro, A. Revel and M. Ménard * Laboratory of Informatics, Image and Interaction (L3I)
More informationHumanoid Robots. by Julie Chambon
Humanoid Robots by Julie Chambon 25th November 2008 Outlook Introduction Why a humanoid appearance? Particularities of humanoid Robots Utility of humanoid Robots Complexity of humanoids Humanoid projects
More informationJane Li. Assistant Professor Mechanical Engineering Department, Robotic Engineering Program Worcester Polytechnic Institute
Jane Li Assistant Professor Mechanical Engineering Department, Robotic Engineering Program Worcester Polytechnic Institute State one reason for investigating and building humanoid robot (4 pts) List two
More informationMIN-Fakultät Fachbereich Informatik. Universität Hamburg. Socially interactive robots. Christine Upadek. 29 November Christine Upadek 1
Christine Upadek 29 November 2010 Christine Upadek 1 Outline Emotions Kismet - a sociable robot Outlook Christine Upadek 2 Denition Social robots are embodied agents that are part of a heterogeneous group:
More informationBody Movement Analysis of Human-Robot Interaction
Body Movement Analysis of Human-Robot Interaction Takayuki Kanda, Hiroshi Ishiguro, Michita Imai, and Tetsuo Ono ATR Intelligent Robotics & Communication Laboratories 2-2-2 Hikaridai, Seika-cho, Soraku-gun,
More informationDrumming with a Humanoid Robot: Lessons Learnt from Designing and Analysing Human-Robot Interaction Studies
Drumming with a Humanoid Robot: Lessons Learnt from Designing and Analysing Human-Robot Interaction Studies Hatice Kose-Bagci, Kerstin Dautenhahn, and Chrystopher L. Nehaniv Adaptive Systems Research Group
More informationReinventing movies How do we tell stories in VR? Diego Gutierrez Graphics & Imaging Lab Universidad de Zaragoza
Reinventing movies How do we tell stories in VR? Diego Gutierrez Graphics & Imaging Lab Universidad de Zaragoza Computer Graphics Computational Imaging Virtual Reality Joint work with: A. Serrano, J. Ruiz-Borau
More informationImplicit Fitness Functions for Evolving a Drawing Robot
Implicit Fitness Functions for Evolving a Drawing Robot Jon Bird, Phil Husbands, Martin Perris, Bill Bigge and Paul Brown Centre for Computational Neuroscience and Robotics University of Sussex, Brighton,
More informationThe Effect of Display Type and Video Game Type on Visual Fatigue and Mental Workload
Proceedings of the 2010 International Conference on Industrial Engineering and Operations Management Dhaka, Bangladesh, January 9 10, 2010 The Effect of Display Type and Video Game Type on Visual Fatigue
More informationCYCLIC GENETIC ALGORITHMS FOR EVOLVING MULTI-LOOP CONTROL PROGRAMS
CYCLIC GENETIC ALGORITHMS FOR EVOLVING MULTI-LOOP CONTROL PROGRAMS GARY B. PARKER, CONNECTICUT COLLEGE, USA, parker@conncoll.edu IVO I. PARASHKEVOV, CONNECTICUT COLLEGE, USA, iipar@conncoll.edu H. JOSEPH
More informationDiscrimination of Virtual Haptic Textures Rendered with Different Update Rates
Discrimination of Virtual Haptic Textures Rendered with Different Update Rates Seungmoon Choi and Hong Z. Tan Haptic Interface Research Laboratory Purdue University 465 Northwestern Avenue West Lafayette,
More informationLEGO MINDSTORMS CHEERLEADING ROBOTS
LEGO MINDSTORMS CHEERLEADING ROBOTS Naohiro Matsunami\ Kumiko Tanaka-Ishii 2, Ian Frank 3, and Hitoshi Matsubara3 1 Chiba University, Japan 2 Tokyo University, Japan 3 Future University-Hakodate, Japan
More informationSocial Robots Research Reports Project website: Institute website:
Orelena Hawks Puckett Institute Social Robots Research Reports, 2013, Number 2, 1-5. Social Robots Research Reports Project website: www.socialrobots.org Institute website: www.puckett.org Social Robots
More informationROMEO Humanoid for Action and Communication. Rodolphe GELIN Aldebaran Robotics
ROMEO Humanoid for Action and Communication Rodolphe GELIN Aldebaran Robotics 7 th workshop on Humanoid November Soccer 2012 Robots Osaka, November 2012 Overview French National Project labeled by Cluster
More informationSven Wachsmuth Bielefeld University
& CITEC Central Lab Facilities Performance Assessment and System Design in Human Robot Interaction Sven Wachsmuth Bielefeld University May, 2011 & CITEC Central Lab Facilities What are the Flops of cognitive
More informationVIP I-Natural Team. Report Submitted for VIP Innovation Competition April 26, Name Major Year Semesters. Justin Devenish EE Senior First
VIP I-Natural Team Report Submitted for VIP Innovation Competition April 26, 2011 Name Major Year Semesters Justin Devenish EE Senior First Khoadang Ho CS Junior First Tiffany Jernigan EE Senior First
More informationFigure 1. Motorized Pediatric Stander Problem Statement and Mission. 1 of 6
Problem Statement/Research Question and Background A significant number of children are confined to a sitting position during the school day. This interferes with their education and self esteem by reducing
More informationThe role of physical embodiment in human-robot interaction
The role of physical embodiment in human-robot interaction Joshua Wainer David J. Feil-Seifer Dylan A. Shell Maja J. Matarić Interaction Laboratory Center for Robotics and Embedded Systems Department of
More informationHumanoid robot. Honda's ASIMO, an example of a humanoid robot
Humanoid robot Honda's ASIMO, an example of a humanoid robot A humanoid robot is a robot with its overall appearance based on that of the human body, allowing interaction with made-for-human tools or environments.
More informationVISUOMOTOR PROGRAM DESCRIPTION AND OVERVIEW
VISUOMOTOR PROGRAM DESCRIPTION AND OVERVIEW Overview System hardware consists of a touchscreen display (46-65 ), extremely portable stand and an Intel NUC running Windows 8. The display can be rotated,
More informationMSMS Software for VR Simulations of Neural Prostheses and Patient Training and Rehabilitation
MSMS Software for VR Simulations of Neural Prostheses and Patient Training and Rehabilitation Rahman Davoodi and Gerald E. Loeb Department of Biomedical Engineering, University of Southern California Abstract.
More informationTopic Paper HRI Theory and Evaluation
Topic Paper HRI Theory and Evaluation Sree Ram Akula (sreerama@mtu.edu) Abstract: Human-robot interaction(hri) is the study of interactions between humans and robots. HRI Theory and evaluation deals with
More information3-D-Gaze-Based Robotic Grasping Through Mimicking Human Visuomotor Function for People With Motion Impairments
2824 IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, VOL. 64, NO. 12, DECEMBER 2017 3-D-Gaze-Based Robotic Grasping Through Mimicking Human Visuomotor Function for People With Motion Impairments Songpo Li,
More informationVishnu Nath. Usage of computer vision and humanoid robotics to create autonomous robots. (Ximea Currera RL04C Camera Kit)
Vishnu Nath Usage of computer vision and humanoid robotics to create autonomous robots (Ximea Currera RL04C Camera Kit) Acknowledgements Firstly, I would like to thank Ivan Klimkovic of Ximea Corporation,
More informationEMERGENCE OF COMMUNICATION IN TEAMS OF EMBODIED AND SITUATED AGENTS
EMERGENCE OF COMMUNICATION IN TEAMS OF EMBODIED AND SITUATED AGENTS DAVIDE MAROCCO STEFANO NOLFI Institute of Cognitive Science and Technologies, CNR, Via San Martino della Battaglia 44, Rome, 00185, Italy
More informationHomeostasis Lighting Control System Using a Sensor Agent Robot
Intelligent Control and Automation, 2013, 4, 138-153 http://dx.doi.org/10.4236/ica.2013.42019 Published Online May 2013 (http://www.scirp.org/journal/ica) Homeostasis Lighting Control System Using a Sensor
More informationHumanoid Robots: A New Kind of Tool
Humanoid Robots: A New Kind of Tool Bryan Adams, Cynthia Breazeal, Rodney Brooks, Brian Scassellati MIT Artificial Intelligence Laboratory 545 Technology Square Cambridge, MA 02139 USA {bpadams, cynthia,
More informationStudy of Effectiveness of Collision Avoidance Technology
Study of Effectiveness of Collision Avoidance Technology How drivers react and feel when using aftermarket collision avoidance technologies Executive Summary Newer vehicles, including commercial vehicles,
More informationARMY RDT&E BUDGET ITEM JUSTIFICATION (R2 Exhibit)
Exhibit R-2 0602308A Advanced Concepts and Simulation ARMY RDT&E BUDGET ITEM JUSTIFICATION (R2 Exhibit) FY 2005 FY 2006 FY 2007 FY 2008 FY 2009 FY 2010 FY 2011 Total Program Element (PE) Cost 22710 27416
More informationLearning and Using Models of Kicking Motions for Legged Robots
Learning and Using Models of Kicking Motions for Legged Robots Sonia Chernova and Manuela Veloso Computer Science Department Carnegie Mellon University Pittsburgh, PA 15213 {soniac, mmv}@cs.cmu.edu Abstract
More informationBooklet of teaching units
International Master Program in Mechatronic Systems for Rehabilitation Booklet of teaching units Third semester (M2 S1) Master Sciences de l Ingénieur Université Pierre et Marie Curie Paris 6 Boite 164,
More informationDo You See What I See? Infants Perception of Face-like Objects. Nadia Islam. Distinguished Majors Thesis. University of Virginia.
INFANTS PERCEPTION OF FACE-LIKE OBJECTS 1 Running head: INFANTS PERCEPTION OF FACE-LIKE OBJECTS Do You See What I See? Infants Perception of Face-like Objects Nadia Islam Distinguished Majors Thesis University
More informationInteraction rule learning with a human partner based on an imitation faculty with a simple visuo-motor mapping
Robotics and Autonomous Systems 54 (2006) 414 418 www.elsevier.com/locate/robot Interaction rule learning with a human partner based on an imitation faculty with a simple visuo-motor mapping Masaki Ogino
More informationHere I present more details about the methods of the experiments which are. described in the main text, and describe two additional examinations which
Supplementary Note Here I present more details about the methods of the experiments which are described in the main text, and describe two additional examinations which assessed DF s proprioceptive performance
More informationRobot: Geminoid F This android robot looks just like a woman
ProfileArticle Robot: Geminoid F This android robot looks just like a woman For the complete profile with media resources, visit: http://education.nationalgeographic.org/news/robot-geminoid-f/ Program
More informationLearning relative directions between landmarks in a desktop virtual environment
Spatial Cognition and Computation 1: 131 144, 1999. 2000 Kluwer Academic Publishers. Printed in the Netherlands. Learning relative directions between landmarks in a desktop virtual environment WILLIAM
More informationAndroid (Child android)
Social and ethical issue Why have I developed the android? Hiroshi ISHIGURO Department of Adaptive Machine Systems, Osaka University ATR Intelligent Robotics and Communications Laboratories JST ERATO Asada
More informationRobots as Assistive Technology - Does Appearance Matter?
Proceedings of the 2004 IEEE International Workshop on Robot and Human Interactive Communication Kurashiki, Okayama Japan September 20-22,2004 Robots as Assistive Technology - Does Appearance Matter? Ben
More informationVOICE CONTROL BASED PROSTHETIC HUMAN ARM
VOICE CONTROL BASED PROSTHETIC HUMAN ARM Ujwal R 1, Rakshith Narun 2, Harshell Surana 3, Naga Surya S 4, Ch Preetham Dheeraj 5 1.2.3.4.5. Student, Department of Electronics and Communication Engineering,
More informationProprioception & force sensing
Proprioception & force sensing Roope Raisamo Tampere Unit for Computer-Human Interaction (TAUCHI) School of Information Sciences University of Tampere, Finland Based on material by Jussi Rantala, Jukka
More informationEffects of Framing a Robot as a Social Agent or as a Machine on Children s Social Behavior*
Effects of Framing a Robot as a Social Agent or as a Machine on Children s Social Behavior* Jacqueline M. Kory Westlund 1, Marayna Martinez 1, Maryam Archie 1, Madhurima Das 1, and Cynthia Breazeal 1 Abstract
More informationRobin Gaines Lanzi, PhD, MPH
Robin Gaines Lanzi, PhD, MPH SAAFE: Sexually Active Adolescent Focused Education Mobile Based Game to Promote Healthy Sexual Practices CFAR Behavioral and Community Science Core mhealth Panel: Innovative
More informationVirtual Reality in Neuro- Rehabilitation and Beyond
Virtual Reality in Neuro- Rehabilitation and Beyond Amanda Carr, OTRL, CBIS Origami Brain Injury Rehabilitation Center Director of Rehabilitation Amanda.Carr@origamirehab.org Objectives Define virtual
More informationGESTURE BASED HUMAN MULTI-ROBOT INTERACTION. Gerard Canal, Cecilio Angulo, and Sergio Escalera
GESTURE BASED HUMAN MULTI-ROBOT INTERACTION Gerard Canal, Cecilio Angulo, and Sergio Escalera Gesture based Human Multi-Robot Interaction Gerard Canal Camprodon 2/27 Introduction Nowadays robots are able
More informationPolicy Forum. Science 26 January 2001: Vol no. 5504, pp DOI: /science Prev Table of Contents Next
Science 26 January 2001: Vol. 291. no. 5504, pp. 599-600 DOI: 10.1126/science.291.5504.599 Prev Table of Contents Next Policy Forum ARTIFICIAL INTELLIGENCE: Autonomous Mental Development by Robots and
More informationBirth of An Intelligent Humanoid Robot in Singapore
Birth of An Intelligent Humanoid Robot in Singapore Ming Xie Nanyang Technological University Singapore 639798 Email: mmxie@ntu.edu.sg Abstract. Since 1996, we have embarked into the journey of developing
More informationRunning an HCI Experiment in Multiple Parallel Universes
Author manuscript, published in "ACM CHI Conference on Human Factors in Computing Systems (alt.chi) (2014)" Running an HCI Experiment in Multiple Parallel Universes Univ. Paris Sud, CNRS, Univ. Paris Sud,
More informationWRS Partner Robot Challenge (Virtual Space) is the World's first competition played under the cyber-physical environment.
WRS Partner Robot Challenge (Virtual Space) 2018 WRS Partner Robot Challenge (Virtual Space) is the World's first competition played under the cyber-physical environment. 1 Introduction The Partner Robot
More informationKeywords: Innovative games-based learning, Virtual worlds, Perspective taking, Mental rotation.
Immersive vs Desktop Virtual Reality in Game Based Learning Laura Freina 1, Andrea Canessa 2 1 CNR-ITD, Genova, Italy 2 BioLab - DIBRIS - Università degli Studi di Genova, Italy freina@itd.cnr.it andrea.canessa@unige.it
More informationInability of Five-Month-Old Infants to Retrieve a Contiguous Object: A Failure of Conceptual Understanding or of Control of Action?
Child Development, November/December 2000, Volume 71, Number 6, Pages 1477 1494 Inability of Five-Month-Old Infants to Retrieve a Contiguous Object: A Failure of Conceptual Understanding or of Control
More informationCybersickness, Console Video Games, & Head Mounted Displays
Cybersickness, Console Video Games, & Head Mounted Displays Lesley Scibora, Moira Flanagan, Omar Merhi, Elise Faugloire, & Thomas A. Stoffregen Affordance Perception-Action Laboratory, University of Minnesota,
More informationComparison of Three Eye Tracking Devices in Psychology of Programming Research
In E. Dunican & T.R.G. Green (Eds). Proc. PPIG 16 Pages 151-158 Comparison of Three Eye Tracking Devices in Psychology of Programming Research Seppo Nevalainen and Jorma Sajaniemi University of Joensuu,
More informationDevelopment and Validation of Virtual Driving Simulator for the Spinal Injury Patient
CYBERPSYCHOLOGY & BEHAVIOR Volume 5, Number 2, 2002 Mary Ann Liebert, Inc. Development and Validation of Virtual Driving Simulator for the Spinal Injury Patient JEONG H. KU, M.S., 1 DONG P. JANG, Ph.D.,
More informationOrchestration. Lighton Phiri. Supervisors: A/Prof. Hussein Suleman Prof. Dr. Christoph Meinel HPI-CS4A, University of Cape Town
Streamlined Orchestration Streamlined Technology-driven Orchestration Lighton Phiri Supervisors: A/Prof. Hussein Suleman Prof. Dr. Christoph Meinel HPI-CS4A, University of Cape Town Introduction Source:
More informationHRTF adaptation and pattern learning
HRTF adaptation and pattern learning FLORIAN KLEIN * AND STEPHAN WERNER Electronic Media Technology Lab, Institute for Media Technology, Technische Universität Ilmenau, D-98693 Ilmenau, Germany The human
More informationAssociated Emotion and its Expression in an Entertainment Robot QRIO
Associated Emotion and its Expression in an Entertainment Robot QRIO Fumihide Tanaka 1. Kuniaki Noda 1. Tsutomu Sawada 2. Masahiro Fujita 1.2. 1. Life Dynamics Laboratory Preparatory Office, Sony Corporation,
More informationGESTURE RECOGNITION SOLUTION FOR PRESENTATION CONTROL
GESTURE RECOGNITION SOLUTION FOR PRESENTATION CONTROL Darko Martinovikj Nevena Ackovska Faculty of Computer Science and Engineering Skopje, R. Macedonia ABSTRACT Despite the fact that there are different
More informationEvolving High-Dimensional, Adaptive Camera-Based Speed Sensors
In: M.H. Hamza (ed.), Proceedings of the 21st IASTED Conference on Applied Informatics, pp. 1278-128. Held February, 1-1, 2, Insbruck, Austria Evolving High-Dimensional, Adaptive Camera-Based Speed Sensors
More informationSocial Robots and Human-Robot Interaction Ana Paiva Lecture 12. Experimental Design for HRI
Social Robots and Human-Robot Interaction Ana Paiva Lecture 12. Experimental Design for HRI Scenarios we are interested.. Build Social Intelligence d) e) f) Focus on the Interaction Scenarios we are interested..
More informationNatural Interaction with Social Robots
Workshop: Natural Interaction with Social Robots Part of the Topig Group with the same name. http://homepages.stca.herts.ac.uk/~comqkd/tg-naturalinteractionwithsocialrobots.html organized by Kerstin Dautenhahn,
More informationWhat will the robot do during the final demonstration?
SPENCER Questions & Answers What is project SPENCER about? SPENCER is a European Union-funded research project that advances technologies for intelligent robots that operate in human environments. Such
More informationTouch Perception and Emotional Appraisal for a Virtual Agent
Touch Perception and Emotional Appraisal for a Virtual Agent Nhung Nguyen, Ipke Wachsmuth, Stefan Kopp Faculty of Technology University of Bielefeld 33594 Bielefeld Germany {nnguyen, ipke, skopp}@techfak.uni-bielefeld.de
More informationLearning and Using Models of Kicking Motions for Legged Robots
Learning and Using Models of Kicking Motions for Legged Robots Sonia Chernova and Manuela Veloso Computer Science Department Carnegie Mellon University Pittsburgh, PA 15213 {soniac, mmv}@cs.cmu.edu Abstract
More informationHow a robot s attention shapes the way people teach
Johansson, B.,!ahin, E. & Balkenius, C. (2010). Proceedings of the Tenth International Conference on Epigenetic Robotics: Modeling Cognitive Development in Robotic Systems. Lund University Cognitive Studies,
More informationQuantitative Comparison of Interaction with Shutter Glasses and Autostereoscopic Displays
Quantitative Comparison of Interaction with Shutter Glasses and Autostereoscopic Displays Z.Y. Alpaslan, S.-C. Yeh, A.A. Rizzo, and A.A. Sawchuk University of Southern California, Integrated Media Systems
More informationFATIGUE INDEPENDENT AMPLITUDE-FREQUENCY CORRELATIONS IN EMG SIGNALS
Fatigue independent amplitude-frequency correlations in emg signals. Adam SIEMIEŃSKI 1, Alicja KEBEL 1, Piotr KLAJNER 2 1 Department of Biomechanics, University School of Physical Education in Wrocław
More informationTAKING A WALK IN THE NEUROSCIENCE LABORATORIES
TAKING A WALK IN THE NEUROSCIENCE LABORATORIES Instructional Objectives Students will analyze acceleration data and make predictions about velocity and use Riemann sums to find velocity and position. Degree
More informationInforming a User of Robot s Mind by Motion
Informing a User of Robot s Mind by Motion Kazuki KOBAYASHI 1 and Seiji YAMADA 2,1 1 The Graduate University for Advanced Studies 2-1-2 Hitotsubashi, Chiyoda, Tokyo 101-8430 Japan kazuki@grad.nii.ac.jp
More informationOptical Marionette: Graphical Manipulation of Human s Walking Direction
Optical Marionette: Graphical Manipulation of Human s Walking Direction Akira Ishii, Ippei Suzuki, Shinji Sakamoto, Keita Kanai Kazuki Takazawa, Hiraku Doi, Yoichi Ochiai (Digital Nature Group, University
More informationSPQR RoboCup 2016 Standard Platform League Qualification Report
SPQR RoboCup 2016 Standard Platform League Qualification Report V. Suriani, F. Riccio, L. Iocchi, D. Nardi Dipartimento di Ingegneria Informatica, Automatica e Gestionale Antonio Ruberti Sapienza Università
More informationConverting Motion between Different Types of Humanoid Robots Using Genetic Algorithms
Converting Motion between Different Types of Humanoid Robots Using Genetic Algorithms Mari Nishiyama and Hitoshi Iba Abstract The imitation between different types of robots remains an unsolved task for
More informationTowards a Software Engineering Research Framework: Extending Design Science Research
Towards a Software Engineering Research Framework: Extending Design Science Research Murat Pasa Uysal 1 1Department of Management Information Systems, Ufuk University, Ankara, Turkey ---------------------------------------------------------------------***---------------------------------------------------------------------
More informationCONCURRENT AND RETROSPECTIVE PROTOCOLS AND COMPUTER-AIDED ARCHITECTURAL DESIGN
CONCURRENT AND RETROSPECTIVE PROTOCOLS AND COMPUTER-AIDED ARCHITECTURAL DESIGN JOHN S. GERO AND HSIEN-HUI TANG Key Centre of Design Computing and Cognition Department of Architectural and Design Science
More informationMultisensory virtual environment for supporting blind persons acquisition of spatial cognitive mapping, orientation, and mobility skills
Multisensory virtual environment for supporting blind persons acquisition of spatial cognitive mapping, orientation, and mobility skills O Lahav and D Mioduser School of Education, Tel Aviv University,
More informationThe Design and Assessment of Attention-Getting Rear Brake Light Signals
University of Iowa Iowa Research Online Driving Assessment Conference 2009 Driving Assessment Conference Jun 25th, 12:00 AM The Design and Assessment of Attention-Getting Rear Brake Light Signals M Lucas
More informationCombined effects of low frequency vertical vibration and noise on whole-body vibration sensation
Combined effects of low frequency vertical vibration and noise on whole-body vibration sensation Hiroshi MATSUDA and Nobuo MACHIDA 2, 2 College of Science and Technology, Nihon University, Japan ABSTRACT
More informationMultisensory Virtual Environment for Supporting Blind Persons' Acquisition of Spatial Cognitive Mapping a Case Study
Multisensory Virtual Environment for Supporting Blind Persons' Acquisition of Spatial Cognitive Mapping a Case Study Orly Lahav & David Mioduser Tel Aviv University, School of Education Ramat-Aviv, Tel-Aviv,
More information