Pre-collision safety strategies for human-robot interaction


Dana Kulić · Elizabeth Croft

Received: 4 February 2006 / Revised: 21 September 2006 / Accepted: 29 September 2006 / Published online: 20 October 2006
© Springer Science+Business Media, LLC 2006

Abstract Safe planning and control is essential to bringing human-robot interaction into common experience. This paper presents an integrated human-robot interaction strategy that ensures the safety of the human participant through a coordinated suite of safety strategies that are selected and implemented to anticipate and respond to varying time horizons for potential hazards and varying expected levels of interaction with the user. The proposed planning and control strategies are based on explicit measures of danger during interaction. The level of danger is estimated based on factors influencing the impact force during a human-robot collision, such as the effective robot inertia, the relative velocity and the distance between the robot and the human. A second key requirement for improving safety is the ability of the robot to perceive its environment, and more specifically, human behavior and reaction to robot movements. This paper also proposes and demonstrates the use of human monitoring information based on vision and physiological sensors to further improve the safety of the human-robot interaction. A methodology for integrating sensor-based information about the user's position and physiological reaction to the robot into medium and short-term safety strategies is presented. This methodology is verified through a series of experimental test cases where a human and an articulated robot respond to each other based on the human's physical and physiological behavior.

Keywords Human-robot interaction · Robot safety · Affective state estimation · Physiological signals

D. Kulić (corresponding author)
University of Tokyo
e-mail: dana@ynl.t.u-tokyo.ac.jp

E. Croft
University of British Columbia

1 Introduction

Robots have been successfully employed in industrial settings to improve productivity and perform dangerous or monotonous tasks. Recently, research has focused on the potential for using robots to aid humans outside the strictly industrial environment, in medical, office or home settings. To this end, robots are being designed to perform homecare/daily living tasks1 (Weiner et al., 1990) such as dish clearing (Bearveldt, 1993), co-operative load carrying (Arai et al., 2000; Fernandez et al., 2001) and feeding (Guglielmelli et al., 1996; Kawamura et al., 1995), and to provide social interaction (Breazeal, 2001; Wada et al., 2004). As robots move from isolated work cells to more unstructured and interactive environments, they will need to become better at acquiring and interpreting information about their environment (Pentland, 2000). One of the critical issues hampering the entry of robots into unstructured environments populated by humans is safety (Corke, 1999; Heinzmann and Zelinsky, 2003; Ikuta and Nokata, 2003). In particular, when the tasks of the interaction include manipulation tasks, such as picking up and carrying items (Kosuge and Hirata, 2004), assisting with dressing, opening and closing doors, etc., large, powerful robots will be required. Such robots (e.g., articulated robots) must be able to interact with humans in a safe and friendly manner while performing their tasks.

1 The five Activities of Daily Living (ADL) are: (i) transferring to and from bed, (ii) dressing, (iii) feeding, (iv) bathing, and (v) toileting.

1.1 Related work

Industrial safety standards (RIA/ANSI, 1999) focus on ensuring safety by isolating the robot away from humans, and are, therefore, not directly applicable to human-robot interaction applications. However, industrial experience has shown that eliminating hazards through mechanical design is often the most effective safety strategy. This approach has also been applied to interactive robots, for example, by applying a whole-body robot viscoelastic covering (Yamada et al., 1997), using spherical joints (Yamada et al., 1999), or by utilizing various compliance methods such as compliant joints (Bicchi et al., 2001; Bicchi and Tonietti, 2004) and distributed parallel actuation (Zinn et al., 2002, 2004). While these and other mechanical design approaches have made contributions to reducing the impact force during a collision, they do not attempt to prevent the collision from occurring. To ensure safe and human-friendly interaction in unstructured environments, additional safety measures, utilizing system control and planning, are required.

One approach is to attenuate reference commands to limit the maximum impact force that can be generated by the robot. Heinzmann and Zelinsky (2003) and Matsumoto et al. (1999) propose a control scheme based on impact force control for any point on the robot. The robot is controlled such that the impact force with a static object does not exceed a preset value. The impact force controller acts as a saturating filter between the motion control algorithm and the robot. As with mechanical design, impact force control aims to limit the impact force once a collision has already occurred, thus limiting the potential for human injury. In addition, inertia reduction and the force controller saturating filter act prior to impact in order to minimize the potential impact force, constituting a pre-collision safety strategy (Heinzmann and Zelinsky, 2003; Morita et al., 1999).

In order to prevent collisions, safeguarding-type controllers execute a safety strategy if a person is detected within the work envelope of the robot. If a human is detected in the safeguarded zone, the default robot control sequence is altered to ensure the safety of the human (Bearveldt, 1993; Yamada et al., 1997; Zurada et al., 2001). These methods consider a fixed distance around the robot as the safeguarded zone, at which point the reactive controller performs a safety action. A more sophisticated approach is to develop a dynamically sized safeguarded zone, based on an implicit or explicit evaluation of the current danger, namely, a danger index. For example, Traver et al. (2000) propose two human-friendly robotic designs. The elusive robot uses the distance between the robot and the human as the danger index. The ergonomic robot computes a danger factor based on the robot's velocity and posture, the human's direction of motion and eye gaze, and the rate of change of the distance between the robot and the human. The ergonomic robot is controlled to reduce the calculated danger index. Ikuta and Nokata (2003) developed a danger evaluation method using the potential impact force as an evaluation measure. In their work, the danger index is defined as a product of factors that affect the potential impact force between the robot and the human, such as relative distance, relative velocity, robot inertia and robot stiffness. Several design examples are presented, but no control-based implementation of the danger index is given.
Both safeguarding and danger evaluation approaches propose that robot behavior be modified based on the human location and motion during human-robot interaction. The safeguarding approaches define discrete behaviors, while the danger evaluation methods generate a continuum of behavior.

1.1.1 Planning for safety

Motion planning, coupled with the a priori identification of potentially hazardous situations, has received less attention than control-based (reactive) techniques as a means of reducing potential robot-safety hazards. However, safe planning is important for any interaction that involves motion in a human environment, especially those that may contain additional obstacles. Including safety criteria at the planning stage can place the robot in a better position to respond to unanticipated safety events. Planning is thus used to improve the control outcome, similar to using smooth trajectory design to improve trajectory tracking at the control level (Macfarlane and Croft, 2003).

The majority of planners implicitly consider the distance between the robot and the human as the measure of danger in an interaction, which must be minimized to generate the path (Brock and Khatib, 2002). Nokata et al. (2002) explicitly define a danger index based on the potential impact force between a human and the end effector. The danger index is the ratio of the potential force to the largest safe impact force (an impact force that does not cause injury to the human). The danger index is calculated based on factors such as the distance and velocity between the human and the manipulator end effector. However, their approach considers only the end effector motion, and not potential danger due to impacts with any other parts of the manipulator. In a more general context, Brock and Khatib (2002) describe the Elastic Strips framework for motion planning for highly articulated robots moving in a human environment. The potential field method in operational space is used to plan the motion starting from a pre-existing rough plan, and an additional posture potential field is defined to specify a preferred posture for the robot. Although their paper does not deal explicitly with safety, the posture potential can be used to formulate safety-based constraints.

1.1.2 Human monitoring for human-robot interaction

A key issue during human-robot interaction is the question of robot perception (Heinzmann and Zelinsky, 2003). During human-robot interaction, monitoring of the human can provide valuable information, which can enhance the safety of the interaction and provide a feedback signal for robot actions. Human-robot interaction systems frequently utilize vision-based tracking of the human in the interaction and use this data to guide the interaction. This can include visual tracking of the user's eye gaze (Traver et al., 2000; Heinzmann and Zelinsky, 1999; Matsumoto et al., 2000) and head position (Stiefelhagen et al., 2001), reading of the facial expression (Song et al., 2001), or hand gestures.

Physiological monitoring systems use signals from the user to extract information about the user's reaction to robot motion or actions. Many different physiological signals have been proposed for use in human-computer interfaces and human-robot interaction, including skin conductance, heart rate, pupil dilation, and brain and muscle neural activity. Although physiological signals have the potential to provide objective measures of the human's emotional response, they are difficult to interpret. These difficulties arise from the large variability in physiological response from person to person, and the variety of stimuli that cause physiological responses. Sarkar proposes using multiple physiological signals to estimate emotional state, and using this estimate to modify robotic actions to make the user more comfortable (Sarkar, 2002). Rani et al. (2002) propose analyzing the frequency content of the heart rate signal to distinguish different levels of anxiety during human-robot co-operation. The system was tested on two subjects, using video game playing to elicit the various levels of anxiety and the corresponding heart rate signal. In Rani et al. (2004), the frequency domain heart rate analysis is combined with skin conductance activity and corrugator and masseter muscle activity to measure human stress. These signals are analyzed with a fuzzy inference engine to estimate stress. The stress information is then used by an autonomous mobile robot to return to the human if the human is in distress. In this case, the robot is not directly interacting with the human; rather, physiological information is used to allow the robot to assess the human's condition in a rescue situation.

Research on the use of physiological signals for human-robot interaction is still at the earliest stages. However, physiological sensors present a promising area of research, and they offer several important advantages over other methods of estimating affective state, such as analysis of facial expressions from a video stream. Physiological signals, unlike facial expressions, are not under conscious control and are therefore harder to suppress or modify, and are not susceptible to cultural differences (Picard, 1997). Physiological signals can also be acquired and analyzed faster than a video stream, and are not hampered by visibility conditions such as lighting, obstructions, user location, etc. While research in this area is still new, in partnership with vision-based and other techniques, physiological signal monitoring has great potential for providing information that will improve the quality of human-robot interaction.
1.2 Multi-level approaches to safety during human-robot interaction

Safety strategies for human-robot interaction have been classified as either pre-collision or post-collision strategies (Heinzmann and Zelinsky, 2003; Morita et al., 1999). Most human-robot interaction systems propose a combination of one pre-collision and one post-collision safety strategy. However, in the pre-collision time frame, different strategies may be appropriate, depending on when the robot becomes aware of the hazard, i.e., the time remaining before a collision occurs. In the long term, path planning strategies can be employed to avoid any collisions, and to place the robot in a better posture to respond to an unanticipated collision. In the medium term, local re-planning and trajectory modification can be used to modify the robot motion and attempt to avoid the hazard. In the case of an immediate hazard, reactive strategies can be used to move the robot away from the potential collision.

In this paper, safety is considered in the context of a robot planning and control system that is user driven at the highest level. An overview of the system context is shown in Fig. 1. The system architecture is similar to the hybrid deliberative and reactive paradigm described in Bischoff and Graefe (2004). An approximate geometric path is generated in a slower outer loop, while the detailed trajectory planning and control are performed reactively in real time. Planning is divided into two parts: the global path planner and the local trajectory planner. The global path planner considers the long-term safety time horizon, by considering factors affecting safety that take a long time to change, such as the robot posture and inertia. The local trajectory planner generates the trajectory along the globally planned path, based on real-time information obtained during task execution. The trajectory planner generates the required control signal at each control point. During the interaction, the user is monitored to assess the user's level of approval of robot actions. The trajectory planner uses this information to modify the velocity of the robot along the planned path. The trajectory planner considers safety factors that are salient in the medium-term time horizon, such as the robot velocity and information about the user. The safety control module evaluates the safety of the plan generated by the trajectory planner at each control step.

Fig. 1 System component overview (user command interpreter, path planner, trajectory planner, safety control, low-level control, robot, user monitoring, user intent estimation, safety measure estimation, and recovery evaluator)

If a change in the environment is detected that threatens the safety of the interaction, the safety control module initiates a deviation from the planned path. This deviation will move the robot to a safer location. The safety control module considers short-term time horizon safety factors, such as the relative distance, velocity and inertia between the human and the robot at the potential collision point. Meanwhile, the recovery evaluator will initiate a re-assessment of the plan and re-planning if necessary.

This paper is organized as follows: In Section 2, proposed methods for safety at the long term, medium term and short term (real-time) levels for human-robot interaction are reviewed. Further, methods for human monitoring during the interaction are briefly described. This review provides the basis for Section 3, where a novel approach for integration of these methods into a real-time robot planning and control system is described. Section 4 describes test cases demonstrating the integrated operation of the combined safety strategies and human monitoring technology. Conclusions and directions for future work are presented in Section 5. To the authors' knowledge, this paper is the first to present an integrated approach using multi-level safety strategies based on multi-modal user monitoring with experimental test case results.

2 Approach

Three main components are developed for addressing safety at different time horizons: safe planning (long term safety), trajectory scaling (medium term safety), and reactive control (short term safety). User monitoring technology is also developed for sensing the position and reactions of the user. The user monitoring information is then used as input to the various safety modules.

2.1 Path planning

As already discussed, including safety criteria at the planning stage can place the robot in a better position to respond to unanticipated safety events. Herein, a similar approach to Nokata et al. (2002) is considered. However, in order to address safety in unstructured environments, the whole-arm configuration of the manipulator, rather than only the end-effector state, is considered in the planning stage. Within this context, an additional danger criterion is proposed, which must be minimized to find the safest path, using a motion planning framework similar to Brock and Khatib (2002). The danger criterion is formulated as a product of factors affecting the impact force during a collision. A two-stage planning approach is proposed to address issues of potentially conflicting planning criteria. By selecting safer configurations at the planning stage, potential hazards can be avoided, and the computational load for hazard response during real-time control can be reduced. The path planning strategy is described in detail in Kulić and Croft (2005a), and is briefly reviewed here as it relates to the overall system.

The planning module uses the best-first planning approach (Latombe, 1991). In cases when the number of degrees of freedom (DoF) of the robot affecting gross end-effector motion is small (less than 5), the best-first planning approach provides a fast and reliable solution (Latombe, 1991).
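To make the search concrete, the following is a minimal sketch of best-first planning over a discretized configuration space; it is not the authors' implementation. The cost function cost(q) stands in for the combined planning criterion developed below (Eq. (4)), and the grid resolution, neighbor model and expansion limit are illustrative assumptions.

```python
import heapq
from itertools import product

def best_first_plan(q_start, q_goal, cost, neighbors, max_expansions=100000):
    """Greedy best-first search over a discretized configuration space.

    q_start, q_goal: tuples of discretized joint values.
    cost(q): scalar planning cost (lower is better), e.g. the weighted sum of
             goal-seeking, obstacle-avoidance and danger criteria.
    neighbors(q): iterable of adjacent grid configurations.
    Returns the list of configurations from q_start to q_goal, or None.
    """
    frontier = [(cost(q_start), q_start)]
    parent = {q_start: None}
    for _ in range(max_expansions):
        if not frontier:
            break
        _, q = heapq.heappop(frontier)
        if q == q_goal:
            # Reconstruct the path by walking back through the parents.
            path = []
            while q is not None:
                path.append(q)
                q = parent[q]
            return list(reversed(path))
        for nq in neighbors(q):
            if nq not in parent:          # visit each grid cell at most once
                parent[nq] = q
                heapq.heappush(frontier, (cost(nq), nq))
    return None

def grid_neighbors(q, step=0.05):
    """Axis-aligned neighbors: move one joint at a time by +/- step [rad]."""
    for i, delta in product(range(len(q)), (-step, step)):
        nq = list(q)
        nq[i] = round(nq[i] + delta, 6)
        yield tuple(nq)
```

Because the frontier is ordered only by the configuration cost, the planner greedily descends the combined potential, which matches the fast, low-DoF behavior described above but does not guarantee global optimality.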
For highly redundant robots, a different search strategy can be employed, such as randomized planning (Barraquand and Latombe, 1991). However, the search criteria presented herein remain identical regardless of the search strategy used. The safest path is found by searching for contiguous regions that: (i) remain free of obstacles, (ii) lead to the goal, and (iii) minimize a measure of danger (the danger criterion). Since path planning (as opposed to trajectory planning) does not consider robot velocities, a configuration-based (quasi-static) danger criterion is required.

To be effective, the danger criterion should be constructed from measures that contribute to reducing the impact force in the case of an unexpected human-robot impact, as well as to reducing the likelihood of impact. Herein, the robot inertia and the relative distance between the robot and the user's center of mass are used. The robot stiffness was not included, as danger related to robot stiffness can be more effectively lowered through mechanical design (Ikuta and Nokata, 2003; Bicchi et al., 2001). Dynamic factors, such as the relative velocity and acceleration between the robot and the user, are handled by the trajectory planner and the safety module, discussed in Sections 2.2 and 3.2.

The factor due to inertia is defined as:

f_I(I_S) = I_S / I_max,    (1)

where I_max is the maximum safe value of the robot inertia, and I_S is the robot inertia in the sagittal plane.2 For a general robot architecture, the largest eigenvalue of the 3 x 3 inertia tensor may be used as the scalar measure.

2 The sagittal plane is the vertical plane (plane of symmetry) passing through the center of the outstretched robot arm.

The center of mass distance between the robot and the user is used to formulate the distance factor:

f_CM(D_CM) = k (1/D_CM - 1/D_max)^2,  for D_CM <= D_max,
f_CM(D_CM) = 0,                       for D_CM > D_max.    (2)

D_CM is the current distance between the robot center of mass and the user, D_min is the minimum allowable distance between the robot center of mass and the user, and D_max is the distance at which D_CM no longer contributes to the cost function (e.g., if no human is detected in the environment). The scaling constant k is used to scale the potential function such that the value of the potential function is zero when the distance between the user and the robot is larger than D_max, and is one when the distance between the user and the robot is the minimum allowable distance (D_min). Values of the distance factor above one indicate an unsafe distance.

The danger criterion is then computed as a product of these contributing factors:

DC = f_I(I_S) f_CM(D_CM).    (3)

The planning cost function is generated by combining the goal seeking, obstacle avoidance, and danger criteria. The planned path is generated by searching for a set of configurations that minimize the cost function:

J = W_G f_G(D_G) + W_O f_O(D_O) + W_D DC.    (4)

Here, f_G is the goal seeking criterion, based on the distance between the robot end effector and the goal, D_G. The obstacle avoidance criterion, f_O, is based on the distance between the robot and any obstacles, D_O. For the goal seeking and obstacle avoidance functions, quadratic potential field functions are used, as defined in Brock and Khatib (2002), Latombe (1991), and Khatib (1986). W_G, W_O, and W_D are the weighting factors for the goal seeking, obstacle avoidance, and danger criteria, respectively.

To avoid large local minima in the cost function caused by conflicting danger avoidance and goal seeking objectives, a two-stage planning approach is used. In the first stage, the danger criterion is weighted over the goal criterion, while in the second stage, the goal criterion is more strongly weighted, as described in Kulić and Croft (2005a). One can note that the safe planned path will generally not result in the minimum time path; thus, there is a tradeoff between speed and safety. The threshold used to switch between the two stages can be used to find the optimal solution with respect to the dual objectives of minimizing the time to complete the task and minimizing the danger criterion.
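As a concrete illustration, the sketch below evaluates the quasi-static danger criterion of Eqs. (1)-(3) and the combined planning cost of Eq. (4) for one candidate configuration. The quadratic goal and obstacle potentials and all parameter values (I_max, D_min, D_max, and the weights) are placeholder assumptions, not the values used in the paper.

```python
def inertia_factor(I_s, I_max):
    """Eq. (1): robot inertia in the sagittal plane over the maximum safe inertia."""
    return I_s / I_max

def cm_distance_factor(D_cm, D_min, D_max):
    """Eq. (2): quadratic potential on the robot/user centre-of-mass distance;
    zero beyond D_max, one at the minimum allowable distance D_min."""
    if D_cm > D_max:
        return 0.0
    k = 1.0 / (1.0 / D_min - 1.0 / D_max) ** 2   # scales the factor to 1 at D_min
    return k * (1.0 / D_cm - 1.0 / D_max) ** 2

def danger_criterion(I_s, D_cm, I_max=2.0, D_min=0.2, D_max=2.0):
    """Eq. (3): product of the inertia and centre-of-mass distance factors."""
    return inertia_factor(I_s, I_max) * cm_distance_factor(D_cm, D_min, D_max)

def planning_cost(D_goal, D_obs, I_s, D_cm, W_G=1.0, W_O=1.0, W_D=1.0):
    """Eq. (4): weighted sum of goal-seeking, obstacle-avoidance and danger
    criteria.  The quadratic goal/obstacle potentials below are illustrative."""
    f_goal = D_goal ** 2                                        # grows with distance to the goal
    f_obs = (1.0 / D_obs) ** 2 if D_obs > 0 else float("inf")   # grows near obstacles
    return W_G * f_goal + W_O * f_obs + W_D * danger_criterion(I_s, D_cm)

# Example: a low-inertia posture far from the user is cheap; the same posture
# close to the user is penalized by the danger term.
print(planning_cost(D_goal=0.5, D_obs=1.0, I_s=0.8, D_cm=1.5))
print(planning_cost(D_goal=0.5, D_obs=1.0, I_s=0.8, D_cm=0.4))
```

The two-stage strategy described above can be reproduced by calling planning_cost with a large W_D in the first stage and a large W_G in the second.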
Once the geometric path is generated, the trajectory module modifies the velocity and acceleration profile along the path during the interaction, based on an estimate of the danger due to dynamic factors. The trajectory module is described in Section 3.2.

2.2 Safe control

The safety control module evaluates the safety of the plan generated by the trajectory planner at each control step. If a change in the environment is detected that threatens the safety of the interaction, the safety control module initiates a deviation from the planned path. This deviation will move the robot to a safer location. The inputs into the safety module are: the proposed next configuration of the robot as provided by the trajectory planner, including the velocity and acceleration information; the current user configuration; and estimates of the user's reactions (as discussed in Section 2.3). In this section, the implementation of the safety module based on physical factors only (human and robot configurations) is reviewed (Kulić and Croft, 2006); the addition of human reaction information is described in Section 3.

Based on information about the robot configuration and human location, the safety module evaluates the current level of danger of the interaction. If the level of danger is low, the proposed plan can proceed; otherwise, a corrective decision is made and an alternate trajectory is generated and passed to the low-level controller. The alternate trajectory generated by the safety module lowers the estimated danger present in the interaction.

This approach is most similar to the impedance-type strategies presented in Khatib (1986) and Tsuji and Kaneko (1999). A virtual force, calculated by the safety module, pushes the robot away from the person or obstacle.

The key element of the safety module is the estimation of the level of danger, namely the danger index. Unlike the quasi-static danger criterion utilized in Section 2.1, the danger index is a dynamic quantity. Thus, the design and evaluation of the danger index must consider the effect of the index on the evolving robot trajectory, in particular, its effect on the stability of the motion. The danger index is constructed from measures that have an effect on the potential impact force during a collision. Since the robot is an articulated linkage of bodies, for the real-time control application in this work it is not sufficient to consider only the end-effector, as done in Ikuta and Nokata (2003); instead, the entire robot body must be considered as a potential source of impact. For each link, the point closest to the person is considered; this point is called a critical point. The danger index is estimated for each critical point. The factors included are the distance between the robot and the person at the critical point being considered, their relative velocity, and the effective inertia at the critical point.

The distance factor f_D is given in Eq. (5):

f_D(s) = k_D (1/s - 1/D_max)^2,  for s <= D_max,
f_D(s) = 0,                      for s > D_max,    (5)

where s is the distance from the critical point to the nearest point on the person. The scaling constant, k_D, is used to scale the distance factor function. This scaling is such that the value of the function is zero when the distance between the human and the robot is larger than D_max, the value at which the danger to the person is negligible, and is one when the distance between the human and the robot is the minimum allowable distance (D_min). Values of the distance factor above one indicate an unsafe distance.

The velocity factor f_V is based on the magnitude of the velocity component, v, between the critical point and the nearest point on the person along the line joining these two points (the approach velocity). The approach velocity, v, is defined to be positive when the robot and the human are moving towards each other:

f_V(v) = k_V (v - V_min)^2,  for v >= V_min,
f_V(v) = 0,                  for v < V_min.    (6)

The scaling constant, k_V, is used to scale the velocity factor function such that the value of the function is zero when the velocity is lower than V_min, and one when the velocity is V_max. V_min, a safe separation speed, is set to a negative value (i.e., f_V is zero when the robot is moving away from the person). Values of the velocity factor above one indicate an unsafe approach speed greater than V_max.

The inertia factor is defined as:

f_I(I_CP) = I_CP / I_max,    (7)

where I_CP is the effective inertia at the critical point, and I_max is the maximum safe value of the robot inertia.

The danger index is formulated as the product of the distance, velocity and inertia factors:

DI = f_D f_V f_I.    (8)

Once the danger index is calculated, it is used to generate the virtual force that pushes the robot away from the human, as in Khatib (1986) and Tsuji and Kaneko (1999). The resultant virtual force is calculated in the joint space, to ensure safe and stable behavior through robot singularities.
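A minimal sketch of the danger index of Eqs. (5)-(8) for a single critical point follows. The parameter values are placeholders, and the virtual-force generation and joint-space mapping of the full controller (Kulić and Croft, 2006) are deliberately not reproduced here.

```python
def distance_factor(s, D_min=0.2, D_max=2.0):
    """Eq. (5): scaled quadratic in the critical-point distance s [m];
    zero beyond D_max, one at the minimum allowable distance D_min."""
    if s > D_max:
        return 0.0
    k_D = 1.0 / (1.0 / D_min - 1.0 / D_max) ** 2
    return k_D * (1.0 / s - 1.0 / D_max) ** 2

def velocity_factor(v, V_min=-0.1, V_max=1.0):
    """Eq. (6): quadratic in the approach velocity v [m/s] (positive when robot
    and human move towards each other); zero below V_min, one at V_max."""
    if v < V_min:
        return 0.0
    k_V = 1.0 / (V_max - V_min) ** 2
    return k_V * (v - V_min) ** 2

def inertia_factor(I_cp, I_max=2.0):
    """Eq. (7): effective inertia at the critical point over the maximum safe inertia."""
    return I_cp / I_max

def danger_index(s, v, I_cp):
    """Eq. (8): product of the distance, velocity and inertia factors for one
    critical point; the controller evaluates this for every link and acts on
    the most endangered one."""
    return distance_factor(s) * velocity_factor(v) * inertia_factor(I_cp)

# Example: a heavy link approaching the person quickly at close range yields a
# danger index well above one, which would trigger the evasive virtual force.
print(danger_index(s=0.3, v=0.8, I_cp=1.5))
```

In the full controller, this index is what drives the virtual force described above; the sketch only illustrates how the factors combine multiplicatively, so the index stays at zero unless all hazard conditions are present.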
The resulting algorithm is analogous to a non-linear non-contact impedance controller, which results in faster and more localized control action than can be achieved with linear impedance control. The detailed algorithm and stability proof are presented in Kulić and Croft (2006).

2.3 Human monitoring

During human-human interaction, non-verbal communication signals are frequently exchanged in order to assess each participant's emotional state, focus of attention and intent. Many of these signals are indirect; that is, they occur outside of conscious control. By monitoring and interpreting indirect signals during an interaction, significant cues about the cognitive and emotional state of each participant can be recognized (Picard, 1997). Recently, research has focused on using non-verbal communication, such as eye gaze (Matsumoto et al., 1999, 2000), facial orientation (Stiefelhagen et al., 2001), facial expressions (Pantic and Rothkrantz, 2000) and physiological signals (Sarkar, 2002; Rani et al., 2004; Kulić and Croft, 2003; Picard, 2001), for human-robot and human-computer interaction. Robot vision is also important for detecting the location of the human in the environment, as well as the presence of any obstacles.

In this work, two human monitoring technologies are utilized to perceive the user: machine vision to detect the physical location and head orientation of the user, and physiological signal monitoring to detect the affective state of the user. A machine vision module is used to track the location and (pan/tilt) orientation of the human head, and to estimate the 3D location of the human body in the robot's workspace, using a set-of-spheres representation (Kulić, 2005). The vision algorithm is described in Kulić (2005), and the physiological sensing work is summarized in the section below.

The physiological sensing module estimates the user's affective state based on measured physiological signals such as heart rate, skin conductance and facial muscle contraction. Affective state is represented as a two-dimensional state of valence and arousal (Bradley, 2000). Valence measures the degree to which the emotion is positive or negative, and arousal measures the strength of the emotion. Three physiological signals were selected for measurement: skin conductance response (SCR), heart rate and corrugator muscle activity. These three signals have been shown to be the most reliable indicators of affective state in psychophysiological research (Picard, 1997; Bradley and Lang, 2000; Lang, 1995). Respiration rate was also considered in an early study (Kulić and Croft, 2003), but was rejected as unsuitable for on-line interaction applications due to the slow physiological response of the signal. Another key finding from psychophysiological research is that physiological responses can be highly variable between individuals, as well as vary for the same individual depending on the context of the response (Bradley and Lang, 2000; Brownley, 2000; Dawson, 2000). Pre-processing of the data is necessary prior to inference, in order to extract the relevant features of the signals and to normalize the signal features so that a single inference engine can be used across individuals. The pre-processing stage and fuzzy inference engine developed for this work are described in detail in Kulić and Croft (2005b).

An affective inference engine was also developed for estimating affective state during human-robot interaction. The inference engine was tested in a user trial with 36 subjects. In the user trial, subjects were shown motions that a robot would typically perform during hand-over type tasks. Two tasks were presented: a pick and place task, and a reach and retract task. In the pick and place task, the robot moved to pick up an object to the side of the subject, next moved to place the object in front of the subject, and finally retracted to the home position. In the reach and retract task, the robot moved to pick up an object in front of the person, and then retracted to the home position. Each task was planned with two different motion planners: a typical complete potential field planner (Latombe, 1991), and the safe planner described in Section 2.1. Each planned path was executed at three different speeds, resulting in 12 different trajectories. These were presented to subjects in random order. Following the presentation of each trajectory, subjects were asked to rate their level of anxiety, calm and surprise during the motion. At the same time, their physiological response to the robot motion was measured, and their affective state was estimated using the fuzzy inference engine.

The average anxiety response across trajectories is shown in Fig. 2, while the estimated arousal dimension of the affective state is shown in Fig. 3.

Fig. 2 Average subject reported anxiety for each trajectory
Fig. 3 Average estimated arousal for each trajectory

Detailed results and analysis can be found in Kulić (2005). The results of the user study indicate that subjects report significantly more anxiety for the potential field planned paths as compared to the safe planned paths at higher robot velocities.
This increase in anxiety during high velocity robot motions, which are perceived to be unsafe, can be reliably detected through physiological signal measurement and the developed inference engine, in terms of the estimated arousal. The estimated level of arousal can therefore be used as an indicator of user comfort with the robot, and can be utilized to modulate the robot velocity and behavior during interaction. The integration of this affective state indicator into a robot safety strategy is described in Section 3.1 below.

3 System integration

The objective of the integrated system is to improve the safety of human-robot interaction by reducing both the potential for a collision to occur and the impact force in the event of a collision. The strategy is to design and modify the motion prior to the occurrence of a collision. This objective is accomplished through three concurrent planning processes, each considering a different time horizon. The safe path planner, described in Section 2.1, considers the long-term safety strategy; namely, the robot path is planned prior to the start of the interaction to minimize the potential impact force along the path. This strategy places the robot in a better position to respond, should an unexpected hazard occur during real-time execution.

Once task execution begins, velocity scaling along the planned path, as described in Section 3.2, is used to address potential safety hazards that appear while the robot motion is in progress, but that are far enough away to give the robot time to decelerate to a safe approach speed along the planned path. This velocity scaling constitutes the medium-term strategy. Once the robot is close to the human, there will be very little time to react, should a sudden hazard occur. In this short time frame, the reactive safety controller, described in Section 2.2, is used for those cases where an imminent hazard is present, and the robot must deviate from the planned path to ensure the safety of the user.

Human monitoring information is used to enhance the robot's knowledge of the user's response (or lack of awareness) to the interaction. Since this information is not available prior to the start of motion, human monitoring information is only incorporated into the medium and short term planning, i.e., by using the human monitoring information during trajectory scaling and safety controller response. The user's behavior (e.g., awareness and anxiety level) during the interaction has important implications for the safety of the interaction; therefore, this information is used to modulate the estimate of the danger index. A summary of the modules and the criteria used for each module is provided in Table 1.

Table 1 Summary of modules and criteria used

Module             Time frame target   Criterion used
Safe planner       Long term           Danger criterion (Eq. (4))
Velocity scaling   Medium term         Modulated danger index (Eq. (11))
Safe control       Short term          Modulated danger index (Eq. (11))

3.1 Danger index modulation

The danger index describes the current level of danger in the interaction, and is used to decide when the robot needs to take corrective action and modify its course, as described in Section 2.2. During the subcomponent testing described in Kulić and Croft (2006), the danger index was calculated based only on the physical locations and velocities of the robot and user. However, if user monitoring information is available, the danger index can be improved by including this information. For example, analysis of industrial robot accidents has found that the majority of accidents in industry occur when the human operator is not aware of the robot's motion (Corke, 1999). Therefore, the danger index should be increased if the human is facing away from the robot, and is therefore less likely to be observing and aware of the robot motion.

In this implementation, two indices are used to modulate the danger index: the user head orientation and the affective state. Similarly to the kinematic factors comprising the danger index, the user status components are formulated as scaling factors. The user status components only affect robot behavior if the potential for a hazard already exists based on the kinematic danger factors. The head orientation scaling factor is formulated as a sigmoid function based on the horizontal angle ("pan angle") of the head. An orientation of zero degrees indicates that the user is facing the robot. The sigmoid function is used to ensure a smoothly differentiable scaling factor, and to adjust for differences in visibility at different head orientations.
For example, in the -15 to +15 degree range, changes to head orientation should not significantly impact the danger index, as robot visibility is not affected. Similarly, at large pan angles the scaling factor should be close to the maximum, as the robot will not be fully visible from such orientations. The factor changes most rapidly in the intermediate range, where changes to robot visibility occur. The scaling factor is given by:

K_OR = 1 + M_OR / (1 + e^(-S_OR (θ_h - θ_c))).    (9)

K_OR is the scaling factor due to head orientation, M_OR is the maximum increase in scaling due to the orientation, S_OR controls the slope of the sigmoid function, θ_h is the horizontal head orientation, and θ_c is the head orientation at the midpoint of the sigmoid. M_OR = 2, S_OR = 0.2, and θ_c = 30 degrees were used in all the test cases described in Section 4.

Similarly, for the user affective state, a sigmoid function was used to generate the scaling factor. The arousal measure was used as the estimate of the affective state, as the arousal was shown to be strongly correlated to the user-reported anxiety and surprise, as described in Kulić and Croft (2005b). Since the affective state estimation is less accurate at low arousal levels, the sigmoid function is used to filter out spurious arousal responses of low magnitude (arousal level less than 0.3). The scaling factor due to affective state is given by:

K_AS = 1 + M_AS / (1 + e^(-S_AS (a - a_c))).    (10)

K_AS is the scaling factor due to affective state, M_AS is the maximum increase in scaling due to the affective state, S_AS controls the slope of the affective state sigmoid function, a is the current level of arousal, and a_c is the midpoint of the arousal scale. During all of the experiments, M_AS = 2, S_AS = 20, and a_c = 0.5 were used.

The scaling factors modulate the danger index to generate the integrated danger index as follows:

DI_t = K_OR K_AS DI,    (11)

which is applied to the control strategy described in Section 2.2. Thus, the integrated danger index is a product of the physical interaction factors affecting the potential impact force, the degree of user awareness of the robot, as estimated by the head orientation, and the level of user anxiety. The product-of-factors formulation ensures that corrective action is only taken by the robot controller when all the conditions for a hazard are present. This means that the robot can continue to operate at maximum velocity far away from the user, regardless of the user's awareness of the robot or the user's affective state. Once the robot gets close to the user, lack of awareness or increased affective arousal will modify the robot behavior to ensure user safety. The robot behavior can be modified by activating the safety controller in the case of an immediate hazard, or, if more time is available, through trajectory scaling.
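The sketch below, written with the sigmoid parameters quoted above, computes the scaling factors of Eqs. (9) and (10) and the integrated danger index of Eq. (11); it is an illustration of the formulas rather than the authors' implementation.

```python
import math

def orientation_scale(theta_h, M_OR=2.0, S_OR=0.2, theta_c=30.0):
    """Eq. (9): sigmoid scaling with the horizontal head (pan) angle theta_h in
    degrees; approximately 1 when the user faces the robot, 1 + M_OR when the
    user faces away."""
    return 1.0 + M_OR / (1.0 + math.exp(-S_OR * (theta_h - theta_c)))

def affect_scale(a, M_AS=2.0, S_AS=20.0, a_c=0.5):
    """Eq. (10): sigmoid scaling with the estimated arousal a in [0, 1];
    low-magnitude arousal responses are filtered out by the sigmoid."""
    return 1.0 + M_AS / (1.0 + math.exp(-S_AS * (a - a_c)))

def integrated_danger_index(DI, theta_h, a):
    """Eq. (11): the kinematic danger index DI modulated by user awareness
    (head orientation) and affective state (arousal)."""
    return orientation_scale(theta_h) * affect_scale(a) * DI

# Example: a user facing away (60 deg) with elevated arousal (0.7) roughly
# triples each scaling factor, so the same kinematic danger index triggers the
# safety behaviors much earlier than for an attentive, calm user.
print(integrated_danger_index(0.4, 0.0, 0.1))
print(integrated_danger_index(0.4, 60.0, 0.7))
```

Because the modulation multiplies the kinematic danger index rather than adding to it, a distracted or anxious user has no effect while the robot is far away, exactly as described above.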

Fig. 4 Velocity scaling test case
Fig. 5 Velocity scaling test case danger index
Fig. 6 Velocity scaling test case trajectory scaling factor
Fig. 7 Velocity scaling test case reference trajectory

3.2 Trajectory scaling

The trajectory planning module generates the velocity and acceleration profile to be followed along the path generated by the path planner. The problem of trajectory planning is that of matching the end conditions for a set of coordinates over a series of path segments, while respecting the kinematic limits for each coordinate (i.e., robot joint axis). In this paper, the trajectories are generated to maximize the robot velocity along the path without violating kinematic limits. The trajectory is described in terms of the parameterized time r.

Fig. 8 Path Obstruction Test Case 1
Fig. 9 Obstruction Test Case 1 danger index
Fig. 10 Obstruction Test Case 1 trajectory scaling
Fig. 11 Obstruction Test Case 1 reference trajectory

Once the trajectory is generated for the maximum velocity trajectory, time scaling is applied during path execution to adjust the trajectory to meet interaction constraints, such as changes to the environment. Time scaling is performed by controlling the rate at which r increases (i.e., the rate at which the robot advances along the path). The rate of change of r can be modified during run-time at each execution step, according to Eq. (12):

r_i = r_(i-1) + c dt,    (12)

where c is the time scaling variable. Here, c = 1 indicates full speed, and c = 0 indicates a full stop. The magnitude of the time step is dt. The trajectory is described by a series of cubic and quintic polynomials, similar to Macfarlane and Croft (2003). The trajectory planner is implemented such that the scaling along the trajectory can be modified at any point, according to Eq. (12). At the same time, the trajectory planner must ensure that, during changes to the time scale variable c, the kinematic constraints (max/min velocity and acceleration) are respected, and that the acceleration stays continuous through the velocity changes. For this reason, changes to c are low-pass filtered with a second order filter. The cut-off frequency of the low-pass filter is selected such that the maximum acceleration (after a step change to c) is equal to the maximum robot acceleration.

Trajectory scaling is used to slow the robot down as it approaches a potential hazard, for example, when it approaches a person. By reducing the velocity to zero, the robot can also handle temporary obstructions of the planned path. The scaling constant c (see Eq. (12)) is calculated in real time, based on the integrated danger index:

c = V_max - K_D DI_t,    (13)

where V_max is the normalized maximum desired velocity (V_max = 1 is the physical robot maximum), K_D is a scaling constant, and DI_t is the integrated danger index (Eq. (11)). The value of c is limited to the range [0, 1].
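A sketch of the time-scaling update of Eqs. (12) and (13) is given below. The second-order low-pass filter is written here in a simple critically damped discrete form with an assumed cut-off frequency, whereas the paper selects the cut-off so that a step change in c never exceeds the maximum robot acceleration; the gains and time step are also illustrative.

```python
import math

class TrajectoryTimeScaler:
    """Advances the path parameter r at a rate set by the integrated danger
    index (Eqs. (12) and (13)), with c low-pass filtered for smoothness."""

    def __init__(self, V_max=1.0, K_D=1.0, dt=0.01, f_cutoff=2.0):
        self.V_max = V_max            # normalized maximum desired velocity
        self.K_D = K_D                # danger-index scaling constant
        self.dt = dt                  # control time step [s]
        self.r = 0.0                  # path parameter (parameterized time)
        self.omega = 2.0 * math.pi * f_cutoff   # filter natural frequency
        self.c = 0.0                  # filtered time-scaling variable
        self.c_dot = 0.0

    def step(self, DI_t):
        # Eq. (13): raw time-scaling command, clamped to [0, 1].
        c_cmd = min(1.0, max(0.0, self.V_max - self.K_D * DI_t))
        # Critically damped second-order low-pass filter keeps the
        # acceleration continuous through velocity changes.
        c_ddot = self.omega ** 2 * (c_cmd - self.c) - 2.0 * self.omega * self.c_dot
        self.c_dot += c_ddot * self.dt
        self.c += self.c_dot * self.dt
        # Eq. (12): advance the path parameter.
        self.r += self.c * self.dt
        return self.r, self.c

# Example: the robot accelerates along the path while the danger index is low
# and slows smoothly as the integrated danger index rises.
scaler = TrajectoryTimeScaler()
for DI_t in (0.0, 0.0, 0.2, 0.5, 0.9):
    print(scaler.step(DI_t))
```

The same filtered c = 0 state that halts the robot on a temporary obstruction lets it resume smoothly once the obstruction is removed, which is the behavior exercised in the obstruction test cases of Section 4.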

The use of the danger index for both trajectory scaling and trajectory modification allows for a smooth transition between the planned motion and a reactive trajectory in case of a hazard. When a hazard is first identified, the danger index will increase, thus lowering the velocity along the path. If the reduced velocity does not lower the danger index, then the hazard is likely caused by changes in the external environment, and a change in path is required.

The user head orientation and affective state are therefore used to modify both the short and medium term safety strategies. Lack of awareness by the user or an agitated affective state will result in the robot slowing down sooner and reacting more quickly to a potential hazard, thus improving user safety.

4 Experiments

4.1 Experimental test-bed

The system was tested with the CRS A460 6-DoF manipulator, controlled by an open-architecture controller. The video images used for human visual monitoring were obtained from a Point Grey Bumblebee stereo camera (Pt. Grey Bumblebee) mounted in front of the robot base and facing the approximate user location. The ProComp Infinity system from Thought Technology (Thought Technology Ltd.) was used to gather the user physiological data. A detailed description of the experimental test bed is provided in Kulić (2005).3

3 Videos of the experiments can be viewed at ubc.ca/dana/thesis.

Fig. 12 Path Obstruction Test Case 2
Fig. 13 Obstruction Test Case 2 danger index
Fig. 14 Obstruction Test Case 2 trajectory scaling
Fig. 15 Obstruction Test Case 2 reference trajectory

4.2 Test cases

4.2.1 Velocity scaling test case

The Velocity Scaling Test Case shows the effect of localized velocity scaling. In this case, the robot is executing a pick and place task with maximum velocity (V_max = 1). Figure 4 shows sample frames taken from the video sequence for this test case. The danger index, trajectory scaling factor and resulting trajectory of the first three joints are shown in Figs. 5-7, respectively. The motion is planned using the safe planner. The safe planner maximizes the robot's distance away from the user by reaching backward towards the pick location, as shown in Fig. 4(b) and (c). At locations in the path where the robot nears the user, the planner reduces the robot inertia by bringing the body towards the robot base, as shown in Fig. 4(c). The robot reaches the pick location in Fig. 4(d). The first half of the motion is executed at full velocity, as can be seen in Fig. 6. As the robot approaches the user to access the place location (directly in front of the user), the inertia along the planned path is again lowered, as seen in Fig. 4(e). As the robot approaches the user, the danger index increases, as shown in Fig. 5, resulting in a decreased velocity along the trajectory due to the trajectory scaling. The danger index rapidly increases from zero as the robot distance becomes less than D_max, as described in Eq. (5). As the robot approaches the person, the velocity initially decreases rapidly, and then levels out to zero as the robot approaches the target position, as shown in Fig. 4(e)-(h).

4.2.2 Obstruction test cases

The next two case studies demonstrate the vision-based user position tracking, and illustrate the interaction between the trajectory scaling and the reactive safety module. In both Obstruction Test Cases 1 and 2, the user moves his hand to obstruct the robot's planned path. In the first case, the user moves while the robot is still far away from the potential collision point, so that the robot has enough time to decelerate to a stop a safe distance away from the user while staying on the planned path. In this case, once the user removes the obstruction, the robot can continue with the planned motion.

Figure 8 shows selected frames from the video sequence for Obstruction Test Case 1. The robot's task is to approach a position above the table directly in front of the user, simulating a pick-up task. Once the robot path is generated by the safe planner, the robot begins executing the task, initially at the maximum normalized velocity. The robot first lowers its inertia while staying away from the user, and then approaches the user once the geometric danger criterion has been lowered, as described in Section 2.1. The danger index, trajectory scaling factor and the resulting joint trajectory are shown in Figs. 9-11, respectively. As the robot starts approaching the user, the user moves his hand such that it obstructs the robot's planned path (see Fig. 8(b) and (c)). The trajectory scaling factor decreases to zero, and the robot comes to a stop a safe distance away from the user (Fig. 8(d)). Once the user removes his hand from the path, the robot continues to finish execution of the path. Note that the robot's velocity is reduced as it approaches the user.

Fig. 16 Affective State Test Case
Fig. 17 Affective State Test Case danger index

In Obstruction Test Case 2, the user moves to obstruct the robot's path when the robot is already close to the user. In this case, there is not enough time for the robot to decelerate along the planned path and still maintain a safe distance from the user. The safety module is activated and generates a new path pushing the robot to a safe location away from the user. Figure 12 shows selected frames from the video sequence for this test case. Figures 13-15 show the danger index, the trajectory scaling, and the commanded trajectory for the first three joints, respectively. The planned path is the same as in Obstruction Test Case 1. Similarly to the previous case, the user moves his hand to obstruct the robot's path (Fig. 12(b) and (c)). However, the user moves his hand later in the path, so that it is not possible for the robot to decelerate to a stop a safe distance away from the user (Figs. 13 and 14). Even though the velocity is scaled to zero, the danger index continues to climb, as the robot cannot slow down fast enough along the planned path to maintain a safe distance. Once the danger index climbs above the safe threshold, the safety module generates an alternate trajectory seeking to minimize the danger index, as described in Section 2.2. The safety module is activated partway through the experiment; once activated, it acts to minimize the danger index (Fig. 13). The trajectory generated by the safety module moves all three lower joints, as shown in Fig. 15, and in Fig. 12(f)-(h).

4.2.3 Affective state test case

The Affective State Test Case demonstrates the impact of the user's affective state. The affective state was estimated using the fuzzy inference engine described in Section 2.3. The same pick-up task as above is used; however, the normalized maximum robot velocity is set to 0.85, in order to elicit a strong response from the user. Figure 16 shows sample frames from the video sequence taken during the test. Figures 17-19 show the integrated danger index, the velocity scaling, and the resulting joint trajectory, respectively.

Fig. 18 Affective State Test Case trajectory scaling
Fig. 19 Affective State Test Case joint trajectory
Fig. 20 Affective State Test Case estimated arousal
Fig. 21 Orientation Test Case video frames

Figure 20 shows the level of arousal estimated during the test case. The robot is initially moving at the maximum specified velocity (Fig. 16(a)-(d)). Following the user's affective reaction, as shown in Fig. 20, the robot is slowed down and then stopped (Fig. 16(e)-(g)). Note that there is approximately a 2 s delay between the start of the robot motion and the user's affective response. This is due to the fact that the estimated affective state is based on skin conductance response and heart rate changes, physiological processes that exhibit a 1 to 3 s delay in response from stimulus onset (Brownley, 2000; Dawson, 2000). Therefore, the use of physiological data is most suitable for medium-term strategies, i.e., trajectory scaling. Once the reaction of the user subsides, the robot completes its mission, at a lowered velocity, as shown in Fig. 16(h).

4.2.4 Orientation test case

The Orientation Test Case demonstrates system behavior during user head orientation changes. Figure 21 shows sample frames from a video sequence of the experiment. Figure 22 shows the user's horizontal head orientation in radians, as reported by the head pose estimation component of the user monitoring module. Figures 23-25 show the danger index, the velocity scaling, and the resulting joint trajectory. For this test case, the maximum normalized velocity of the robot (V_max) was set to a fixed value. The robot's task is to approach the area of the table directly in front of the user, simulating a pick-up task. Initially, the user is oriented towards the robot, with a horizontal head orientation angle of 0 degrees. After the robot motion has already started, the user turns away from the robot, as shown in Fig. 21(c) and in Fig. 22. Since the robot is still far away from the user, the motion proceeds, however at a decreased velocity, as can be seen from the decreased velocity scaling in Fig. 24 and the resulting joint trajectories in Fig. 25. As the robot approaches the person, the velocity scaling decreases to zero, stopping the robot at a safe distance from the user (Fig. 21(e)). At 13.93 s (Fig. 21(f)), the user turns back towards the robot. At this point, the danger index is lowered, and the velocity scaling is increased correspondingly. The robot can now proceed closer to the user and complete the planned task. Note that the velocity of the robot slows again as the robot approaches the user, since the danger index increases due to the decreased distance between the robot and the user.

Fig. 22 Orientation Test Case user head orientation
Fig. 23 Orientation Test Case danger index
Fig. 24 Orientation Test Case velocity scaling
Fig. 25 Orientation Test Case reference trajectory

5 Conclusion

In this work, a novel methodology for ensuring safety during human-robot interaction through planning and control has been presented, based on an explicit quantification of the level of danger in the interaction. The robot ensures human safety by planning and modifying its trajectory at three different time horizons: long-term path planning, medium-term trajectory planning and short-term reactive control. At each stage, a quantitative level of danger is used to guide the decision making process. The robot also has available information about the user location and head orientation, and an estimate of the user's affective state in terms of the level of arousal. This information is used to modulate the estimated level of danger, to further improve the safety and intuitiveness of the interaction.

Specifically, a methodology for assessing the level of danger at both the planning and control stages has been developed. Planning and control algorithms have been proposed for minimizing the estimated danger during the interaction. Further, a human monitoring system has been proposed for enhancing the safety of the interaction through the use of visual and human physiological information. All of the above methods have been developed into physical system components that have been integrated and validated on a robot platform. A human-robot interaction test-bed was developed and implemented, incorporating the safe planner, safe controller and human monitoring functions. A methodology for smooth integration of safety strategies for differing planning horizons was proposed. The integrated system was implemented and tested in a series of exemplar real-time human-robot interaction test cases. The prototype system demonstrates the smooth integration and effectiveness of the proposed safety strategies.

References

Arai, H., Takubo, T., Hayashibara, Y., and Tanie, K. 2000. Human-robot cooperative manipulation using a virtual nonholonomic constraint. In IEEE International Conference on Robotics and Automation.
Barraquand, J. and Latombe, J.-C. 1991. Robot motion planning: A distributed representation approach. The International Journal of Robotics Research, 10(6).
Bearveldt, A.J. 1993. Cooperation between man and robot: Interface and safety. In IEEE International Workshop on Robot Human Communication.
Bicchi, A. and Tonietti, G. 2004. Fast and soft-arm tactics. IEEE Robotics and Automation Magazine, 11(2).
Bicchi, A., Rizzini, S.L., and Tonietti, G. 2001. Compliant design for intrinsic safety: General issues and preliminary design. In IEEE/RSJ International Conference on Intelligent Robots and Systems.
Bischoff, R. and Graefe, V. 2004. HERMES: A versatile personal robotic assistant. Proceedings of the IEEE, 92(11).
Bradley, M.M. 2000. Emotion and motivation. In Handbook of Psychophysiology, 2nd edn. Cacioppo, J.T., Tassinary, L.G., and Berntson, G.G. (Eds.), Cambridge University Press, Cambridge.
Bradley, M.M. and Lang, P.J. 2000. Measuring emotion: Behavior, feeling and physiology. In Cognitive Neuroscience of Emotion, Lane, R.D. and Nadel, L. (Eds.), Oxford University Press, New York.
Breazeal, C. 2001. Socially intelligent robots: Research, development, and applications. In IEEE International Conference on Systems, Man and Cybernetics.
Brock, O. and Khatib, O. 2002. Elastic strips: A framework for motion generation in human environments. The International Journal of Robotics Research, 21(12).
Brownley, K.A. 2000. Cardiovascular psychophysiology. In Handbook of Psychophysiology, Cacioppo, J.T., Tassinary, L.G., and Berntson, G.G. (Eds.), Cambridge University Press, Cambridge.
Corke, P.I. 1999. Safety of advanced robots in human environments. Discussion Paper for IARP. Online.
Dawson, M.E. 2000. The electrodermal system. In Handbook of Psychophysiology, Cacioppo, J.T., Tassinary, L.G., and Berntson, G.G. (Eds.), Cambridge University Press, Cambridge.
Fernandez, V., Balaguer, C., Blanco, D., and Salichs, M.A. 2001. Active human-mobile manipulator cooperation through intention recognition. In IEEE International Conference on Robotics and Automation.
Guglielmelli, E., Dario, P., Laschi, C., and Fontanelli, R. 1996. Humans and technologies at home: From friendly appliances to robotic interfaces. In IEEE International Workshop on Robot and Human Communication.
Heinzmann, J. and Zelinsky, A. 1999. Building human-friendly robot systems. In International Symposium of Robotics Research.
Heinzmann, J. and Zelinsky, A. 2003. Quantitative safety guarantees for physical human-robot interaction. The International Journal of Robotics Research, 22(7-8).
Ikuta, K. and Nokata, M. 2003. Safety evaluation method of design and control for human-care robots. The International Journal of Robotics Research, 22(5).
Kawamura, K., Bagchi, S., Iskarous, M., and Bishay, M. 1995. Intelligent robotic systems in service of the disabled. IEEE Transactions on Rehabilitation Engineering, 3(1).
Khatib, O. 1986. Real-time obstacle avoidance for manipulators and mobile robots. The International Journal of Robotics Research, 5(1).
Kosuge, K. and Hirata, Y. 2004. Human-robot interaction. In International Conference on Robotics and Biomimetics.
Kulić, D. 2005. Safety for human-robot interaction. Ph.D. Thesis, University of British Columbia.
Kulić, D. and Croft, E. 2003. Estimating intent for human-robot interaction. In IEEE International Conference on Advanced Robotics.
Kulić, D. and Croft, E. 2005a. Safe planning for human-robot interaction. Journal of Robotic Systems, 22(7).
Kulić, D. and Croft, E. 2005b. Anxiety detection during human-robot interaction. In IEEE International Conference on Intelligent Robots and Systems.
Kulić, D. and Croft, E. 2006. Safety based control strategy for human-robot interaction. Journal of Robotics and Autonomous Systems, 54(1):1-12.
Lang, P.J. 1995. The emotion probe: Studies of motivation and attention. American Psychologist, 50(5).
Latombe, J.-C. 1991. Robot Motion Planning. Kluwer Academic Publishers, Boston, MA.
Macfarlane, S. and Croft, E. 2003. Jerk-bounded manipulator trajectory planning: Design for real-time applications. IEEE Transactions on Robotics and Automation, 19(1).
Matsumoto, Y., Heinzmann, J., and Zelinsky, A. 1999. The essential components of human-friendly robot systems. In International Conference on Field and Service Robotics.
Matsumoto, Y., Ogasawara, T., and Zelinsky, A. 2000. Behavior recognition based on head pose and gaze direction measurement. In IEEE/RSJ International Conference on Intelligent Robots and Systems.
Morita, T., Iwata, H., and Sugano, S. 1999. Development of a human symbiotic robot: WENDY. In IEEE International Conference on Robotics and Automation.
Nokata, M., Ikuta, K., and Ishii, H. 2002. Safety-optimizing method of human-care robot design and control. In Proceedings of the 2002 IEEE International Conference on Robotics and Automation.
Pantic, M. and Rothkrantz, L.J.M. 2000. Automatic analysis of facial expressions: The state of the art. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(12).
Pentland, A. 2000. Perceptual intelligence. Communications of the ACM, 43(3):35-44.

Picard, R. Affective Computing. MIT Press, Cambridge, Massachusetts.
Picard, R. Toward machine emotional intelligence: Analysis of affective physiological state. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(10).
Pt. Grey. Bumblebee. Online.
Rani, P., Sims, J., Brackin, R., and Sarkar, N. Online stress detection using psychophysiological signals for implicit human-robot cooperation. Robotica, 20.
Rani, P., Sarkar, N., Smith, C.A., and Kirby, L.D. Anxiety detecting robotic system towards implicit human-robot collaboration. Robotica, 22.
RIA/ANSI. RIA/ANSI R15.06 American National Standard for Industrial Robots and Robot Systems: Safety Requirements. American National Standards Institute, New York.
Traver, V.J., del Pobil, A.P., and Perez-Francisco, M. Making service robots human-safe. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2000).
Tsuji, T. and Kaneko, M. Noncontact impedance control for redundant manipulators. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 29(2).
Sarkar, N. Psychophysiological control architecture for human-robot coordination: Concepts and initial experiments. In IEEE International Conference on Robotics and Automation.
Song, W.K., Kim, D.J., Kim, J.S., and Bien, Z. Visual servoing for a user's mouth with effective attention reading in a wheelchair-based robotic arm. In IEEE International Conference on Robotics and Automation.
Stiefelhagen, R., Yang, J., and Waibel, A. Tracking focus of attention for human-robot communication. In IEEE-RAS International Conference on Humanoid Robots.
Thought Technology Ltd. Online.
Wada, K., Shibata, T., Saito, T., and Tanie, K. Effects of robot-assisted activity for elderly people and nurses at a day service center. Proceedings of the IEEE, 92(11).
Weiner, J.M., Hanley, R.J., Clark, R., and Van Nostrand, J.F. Measuring the activities of daily living: Comparisons across national surveys. Journal of Gerontology: Social Sciences, 45(6).
Yamada, Y., Hirasawa, Y., Huang, S., Umetani, Y., and Suita, K. Human-robot contact in the safeguarding space. IEEE/ASME Transactions on Mechatronics, 2(4).
Yamada, Y., Yamamoto, T., Morizono, T., and Umetani, Y. FTA-based issues on securing human safety in a human/robot coexistence system. In IEEE Systems, Man and Cybernetics.
Zinn, M., Khatib, O., Roth, B., and Salisbury, J.K. Towards a human-centered intrinsically safe robotic manipulator. In Workshop on Technology Challenges for Dependable Robots in Human Environments.
Zinn, M., Khatib, O., and Roth, B. A new actuation approach for human friendly robot design. In IEEE International Conference on Robotics and Automation.
Zurada, J., Wright, A.L., and Graham, J.H. A neuro-fuzzy approach for robot system safety. IEEE Transactions on Systems, Man and Cybernetics, Part C: Applications and Reviews, 31(1).

Dana Kulić received the combined B.A.Sc. and M.Eng. degree in electro-mechanical engineering, and the Ph.D. degree in mechanical engineering from the University of British Columbia, Canada, in 1998 and 2005, respectively. She is currently a post-doctoral fellow at the Nakamura-Yamane Laboratory in the Department of Mechano-Informatics at the University of Tokyo, Japan. Her research interests include human-robot interaction, robot learning, humanoid robotics and mechatronics.

Elizabeth A. Croft (M'95) received the B.A.Sc. degree in mechanical engineering in 1988 from the University of British Columbia, the M.A.Sc. degree from the University of Waterloo in 1992, and the Ph.D. degree from the University of Toronto, Canada. She is currently an Associate Professor in Mechanical Engineering at the University of British Columbia. Her research interests include human-robot interaction, industrial robotics, and mechatronics.
