Smooth collision avoidance in human-robot coexisting environment


The 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 18-22, 2010, Taipei, Taiwan

Smooth collision avoidance in human-robot coexisting environment

Yusuke Tamura, Tomohiro Fukuzawa, Hajime Asama

Abstract - In order for service robots to safely coexist with humans, collision avoidance with humans is the most important issue. On the other hand, working efficiency is also important and cannot be ignored. In this paper, we propose a method to estimate a pedestrian's behavior. Based on the estimation, we realize smooth collision avoidance between a robot and a human. The robot detects pedestrians by using a laser range finder and tracks them with a Kalman filter. We apply the social force model to the observed trajectory to determine whether the pedestrian intends to avoid a collision with the robot or not. The robot then selects an appropriate behavior based on the estimation results. We conducted experiments in which a robot and a person pass each other. Through the experiments, the usefulness of the proposed method was demonstrated.

I. INTRODUCTION

Service robots, such as delivery robots, security robots, and cleaning robots, are required to operate in environments in which humans live. In order for robots to safely coexist with humans, collision avoidance behavior is of extreme importance. Many researchers have studied obstacle avoidance in dynamic environments [1], [2]. In most such studies, humans were regarded as just moving obstacles, and the problem of how to avoid collision with moving obstacles was tackled. In other words, these studies considered only the robot avoiding the collision.

On the other hand, some studies treated humans distinctly from mere moving obstacles. Yoda and Shiota analyzed collision avoidance behavior between humans [3] and implemented a model emulating human avoidance behavior on a robot [4]. In reality, however, humans change their own behavior in response to the changing situation. Although not only robots but also humans inevitably avoid collisions with each other, these studies did not consider the effect of the existence of robots on humans. Matsumaru proposed a robot that presents its intended motion to the people around it [5], [6]. The robot does not change its motion to avoid collision; instead, it makes people change their motion. This idea works only if the people around the robot notice the preliminary announcement and comply with the robot's intention. If people do not notice the announcement, they may crash into the robot.

This work was part of the Intelligent Robot Technology Software Project supported by the New Energy and Industrial Technology Development Organization (NEDO), Japan. Y. Tamura, T. Fukuzawa, and H. Asama are with the Department of Precision Engineering, Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan. tamura@robot.t.u-tokyo.ac.jp

Fig. 1. Conceptual diagram of human-robot mutual estimation of each other's intention.

On the other hand, Murakami et al. proposed an intelligent wheelchair that determines whether a pedestrian notices it or not by observing his face direction [7]. The wheelchair decides its motion based on this determination; in other words, the wheelchair does not perform an avoidance behavior if the pedestrian notices its existence. From the perspective of working efficiency, this idea seems very good.
However, even if a pedestrian notices the wheelchair, he does not always change his motion to avoid the upcoming collision. For example, physically disabled or elderly persons may have difficulty avoiding the collision even if they notice the existence of a robot. In such a case, the robot should avoid the collision.

In this study, we assume that human intention is expressed in behavior. In order for a human and a robot to interact smoothly with each other, both of them should estimate each other's intention based on a model of the other (Fig. 1). In this study, therefore, we propose a method to predict whether or not a pedestrian will change his motion to avoid the collision with a robot by observing his walking trajectory. Moreover, we develop a robot that smoothly avoids the collision with a pedestrian based on this prediction.

In section II, an algorithm to detect and track pedestrian movement is presented. In section III, a method to predict the pedestrian's behavior is shown, together with a method to avoid the collision with the pedestrian. In section IV, experiments for verifying the proposed method are described and discussed. We conclude this paper and discuss future work in section V.

II. MEASUREMENT OF PEDESTRIAN BEHAVIORS

A. Detection and tracking

In order to detect pedestrians, we employ a laser range finder (LRF) to detect human legs. After detecting leg candidates, we pair two appropriate candidates and regard the pair as a person. In this study, we make three assumptions:
1) There is a certain amount of distance between a person's leg and any other object.
2) The width of a person's leg is within a certain definite range.
3) The distance between both legs of a single person is within a certain definite range.
These assumptions are reasonable for normal pedestrians.

Fig. 2. Measurement of the surrounding environment with a laser range finder.

Fig. 3. Detection of persons based on the three assumptions.

As shown in Fig. 2, the LRF measures $d_i$, the distance to an object, in each direction $\theta_i$. From the obtained data, persons are detected as follows (Fig. 3). The first assumption is represented by the following three equations:

$|d_{j-1} - d_j| \ge \varepsilon_1$   (1)
$|d_{l+1} - d_l| \ge \varepsilon_1$   (2)
$|d_{i+1} - d_i| < \varepsilon_1 \quad (j \le i \le l-1)$   (3)

where $j$ and $l$ are the detected ends of a single obstacle, and $\varepsilon_1$ is a constant threshold. Based on the second assumption, when a person's leg is regarded as a cylinder, the diameter of the cylinder is no shorter than $\varepsilon_2$ and no longer than $\varepsilon_3$. Assuming the angular resolution of the LRF is $2\pi/N$ [rad], the diameter of the cylinder can be approximated as follows:

$\frac{2\pi(l-j)}{N} d_{i_{\mathrm{end}}}$   (4)

Therefore, the second assumption is represented by the following equation:

$\varepsilon_2 \le \frac{2\pi(l-j)}{N} d_{i_{\mathrm{end}}} \le \varepsilon_3$   (5)

The variables $j$ and $l$ that satisfy equations (1), (2), (3), and (4) are stored as $i^k_{\mathrm{begin}} = j$ and $i^k_{\mathrm{end}} = l$, which define the $k$-th leg candidate. After that, we apply the third assumption. Similarly to equation (4), the distance between both legs is approximated as

$\frac{2\pi(i^{k+1}_{\mathrm{begin}} - i^{k}_{\mathrm{end}})}{N} d_{i^{k}_{\mathrm{end}}}$   (6)

and the assumption is represented as follows:

$\frac{2\pi(i^{k+1}_{\mathrm{begin}} - i^{k}_{\mathrm{end}})}{N} d_{i^{k}_{\mathrm{end}}} \le \varepsilon_4$   (7)

If the $k$-th and $(k+1)$-th leg candidates satisfy this equation, they are paired and detected as a person. The center between the two legs is regarded as the location of the person; $p_t$ denotes the location of the person at time $t$.

Based on the detection method stated above, we apply a Kalman filter [8] to the obtained data for tracking pedestrian movements. The filter estimates the current state at time $t$ by using only the previous state at $t-1$ and the current observation. Even if only one leg is observed, the filter can estimate the current state by using the previous state.

B. Accuracy verification of tracking

In order to verify the accuracy of the proposed tracking method, we conducted the following experiments. We used an LRF (UTM-30LX, Hokuyo Automatic). The LRF reports ranges from 20 [mm] to 30 [m] over a 240 [deg] arc. The distance resolution is 30 [mm] and the angular resolution is 0.25 [deg]. The LRF was installed at a height of 340 [mm], and the measurement interval was 125 [ms]. We conducted the following two experiments.
Crossing: A participant walks across in front of the LRF (Fig. 4).
Approaching: A participant walks towards the LRF (Fig. 5).
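To make the detection procedure above concrete, the following is a minimal Python sketch of the three-assumption leg detection applied to a single range scan. It is an illustration under stated assumptions rather than the authors' implementation: the function name detect_persons, the data layout (one range per beam on a uniform $2\pi/N$ grid), and the use of the paper's experimental thresholds as defaults are ours, and a real UTM-30LX only covers a 240 [deg] arc.

```python
import math
import numpy as np

def detect_persons(d, eps1=0.2, eps2=0.1, eps3=0.3, eps4=0.2):
    """Detect persons in one LRF scan d (ranges in meters, one per beam).

    Follows the three assumptions of section II-A:
      (1)-(3) a leg candidate is a run of beams whose range jumps by at
              least eps1 at both ends and varies by less than eps1 inside,
      (4)-(5) its approximate diameter lies between eps2 and eps3,
      (6)-(7) two consecutive candidates closer than eps4 form one person.
    Returns a list of (x, y) person locations in the sensor frame.
    """
    N = len(d)
    beam = 2.0 * math.pi / N               # angular resolution, as in the paper

    # --- leg candidates: segments bounded by range discontinuities ---
    candidates = []                        # list of (i_begin, i_end)
    j = 0
    for i in range(N - 1):
        if abs(d[i + 1] - d[i]) >= eps1:   # discontinuity: segment ends at i
            l = i
            width = beam * (l - j) * d[l]  # eq. (4): arc-length diameter
            if eps2 <= width <= eps3:      # eq. (5): leg-sized segment
                candidates.append((j, l))
            j = i + 1                      # next segment starts after the jump

    def center(b, e):
        """Cartesian point of the middle beam of a segment."""
        m = (b + e) // 2
        th = m * beam
        return np.array([d[m] * math.cos(th), d[m] * math.sin(th)])

    # --- pair consecutive candidates that are close enough: one person ---
    persons = []
    for (b1, e1), (b2, e2) in zip(candidates, candidates[1:]):
        gap = beam * (b2 - e1) * d[e1]     # eq. (7): between-leg distance
        if gap <= eps4:
            persons.append(tuple(0.5 * (center(b1, e1) + center(b2, e2))))
    return persons
```

The returned (x, y) locations would then be fed, scan by scan, into the Kalman filter described above to track each pedestrian.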

Fig. 4. Crossing case: a person walks from L to R and from R to L.

Fig. 5. Approaching case: a person walks from F to N.

Fig. 6. Differences between walking paths and observed trajectories.

Ten trials were conducted for each direction (L to R, R to L, and F to N). Here, the thresholds for detecting a pedestrian, ε_1, ε_2, ε_3, and ε_4, were set to 200 [mm], 100 [mm], 300 [mm], and 200 [mm], respectively. The average differences between the planned walking paths (straight lines) and the observed trajectories are shown in Fig. 6. The average difference in the crossing case was sufficiently small, and the standard deviations were about 0.22 m for both directions. The differences in the approaching case were larger than those in the crossing case. However, considering the comfort of a robot and a person passing each other [9], people generally prefer to keep a larger passing distance. Therefore, the proposed tracking method can be considered sufficiently accurate.

III. ACTION DECISION BASED ON THE PREDICTION OF PEDESTRIAN BEHAVIOR

A. Prediction of pedestrian behavior

In order to smoothly avoid a collision with a person, the robot determines whether the person is trying to avoid the collision or not. This includes not only the situation in which the person has not detected the robot, but also situations in which the person does not intend to avoid the collision himself or cannot avoid it for some reason. The determination is based on a model of pedestrian movement; we employ the social force model [10]. The social force model assumes that four types of virtual forces act on a pedestrian α:
- Acceleration: $F^0_\alpha$
- Repulsive effects of other pedestrians β: $F_{\alpha\beta}$
- Repulsive effects of obstacles B: $F_{\alpha B}$
- Attractive effects of other objects i: $F_{\alpha i}$
For simplicity, we consider only two of them: the acceleration and the repulsive effects of other pedestrians. In this paper, the case of a single person and a single robot is explained.

The acceleration term $F^0_\alpha$ is defined as follows:

$F^0_\alpha = \frac{1}{\tau_\alpha}(v^0_\alpha - v_\alpha)$   (8)

where $\tau_\alpha$ is the relaxation time and $v_\alpha$ is the current velocity. $v^0_\alpha$ is the desired velocity, defined by the following equation:

$v^0_\alpha = v^0_\alpha e_\alpha$   (9)

where $v^0_\alpha$ is the desired speed and $e_\alpha$ is the desired direction. The repulsive effect $F_{\alpha\beta}$ is defined as follows:

$F_{\alpha\beta} = -\nabla_{r_{\alpha\beta}} V_{\alpha\beta}(b)$   (10)

where

$b = \frac{1}{2}\sqrt{\left(\|r_{\alpha\beta}\| + \|r_{\alpha\beta} - v_\beta \Delta t\, e_\beta\|\right)^2 - (v_\beta \Delta t)^2}$   (11)

and

$V_{\alpha\beta}(b) = V^0_{\alpha\beta} \exp\left(-\frac{b}{\sigma}\right)$   (12)

Here, $V_{\alpha\beta}$ is the repulsive potential, and $V^0_{\alpha\beta}$ and $\sigma$ are constants. The social force model assumes that the resultant of these two effects acts on the pedestrian as follows (Fig. 7):

$F_\alpha = F^0_\alpha + w(e_\alpha, F_{\alpha\beta}) F_{\alpha\beta}$   (13)
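As a concrete illustration of how eqs. (8)-(13) can be evaluated, here is a short Python sketch of the virtual force acting on the pedestrian due to the robot. This is a sketch under assumptions, not the authors' code: the default parameter values (tau, V0, sigma, the 100-degree field of view, c = 0.5) are typical values from the social force literature rather than from this paper, the gradient of the potential is taken numerically for simplicity, and the eyesight weight uses w(e, F) as defined in eq. (14) below.

```python
import numpy as np

def social_force(p_a, v_a, v0_a, p_b, v_b,
                 tau=0.5, V0=2.1, sigma=0.3, dt=1.0, phi_deg=100.0, c=0.5):
    """Virtual force on pedestrian alpha due to the robot (treated as
    pedestrian beta), following eqs. (8)-(13).

    p_a, v_a : current position and velocity of the pedestrian (2D arrays)
    v0_a     : desired velocity of the pedestrian
    p_b, v_b : position and velocity of the robot
    """
    # acceleration toward the desired velocity, eq. (8)
    f0 = (v0_a - v_a) / tau

    # argument b of the repulsive potential, eq. (11)
    def b_of(r):
        speed_b = np.linalg.norm(v_b)
        e_b = v_b / speed_b if speed_b > 1e-9 else np.zeros(2)
        s = np.linalg.norm(r) + np.linalg.norm(r - speed_b * dt * e_b)
        return 0.5 * np.sqrt(max(s ** 2 - (speed_b * dt) ** 2, 0.0))

    # repulsive force, eqs. (10) and (12): F = (V0/sigma) exp(-b/sigma) grad(b),
    # with grad(b) approximated by central differences
    r_ab = p_a - p_b
    h = 1e-5
    grad_b = np.array([
        (b_of(r_ab + [h, 0.0]) - b_of(r_ab - [h, 0.0])) / (2 * h),
        (b_of(r_ab + [0.0, h]) - b_of(r_ab - [0.0, h])) / (2 * h),
    ])
    f_ab = (V0 / sigma) * np.exp(-b_of(r_ab) / sigma) * grad_b

    # eyesight weight w(e, F) of eq. (14): full weight inside the field of
    # view 2*phi, reduced weight c behind the pedestrian
    e_a = v0_a / (np.linalg.norm(v0_a) + 1e-9)
    phi = np.deg2rad(phi_deg)
    w = 1.0 if np.dot(e_a, f_ab) >= np.linalg.norm(f_ab) * np.cos(phi) else c

    return f0 + w * f_ab                      # eq. (13)
```

Integrating this force over time with the robot's planned states gives the predicted pedestrian trajectory used in the prediction step described next.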

where $w(e, F)$ denotes the weight factor of the repulsive effect, which models the effect of the pedestrian's field of view. The weight factor is defined as follows:

$w(e, F) = \begin{cases} 1 & \text{if } e \cdot F \ge \|F\| \cos\varphi \\ c & \text{otherwise } (0 < c < 1) \end{cases}$   (14)

where $2\varphi$ represents the field of view.

Fig. 7. Social force model acting on pedestrian α.

At first, the robot just tracks the pedestrian's movement, using the method proposed in the previous section, while the distance between the robot and the pedestrian is longer than L. Here, L is defined by the following equation:

$L = l + v_{\alpha\beta} \Delta t$   (15)

where $l$ is the distance at which a normal person starts avoiding a robot and $v_{\alpha\beta}$ is the relative speed of the pedestrian α with respect to the robot β. $l$ is about three to five meters [3], but it depends on the size and speed of the robot. Here, $\Delta t$ is set to 1 [s].

One second after the distance between the robot and the pedestrian becomes shorter than L, the robot calculates the pedestrian's velocity from the obtained position data. Because the desired velocity of the pedestrian cannot be observed, the robot regards the calculated velocity as the desired velocity $v^0_\alpha$ of the pedestrian. After that, the planned location and velocity of the robot are substituted into the social force model to calculate the virtual force acting on the pedestrian. Then, the location and velocity of the pedestrian at the next step are calculated according to the model. This process is conducted sequentially, and the robot finally obtains the predicted trajectory of the pedestrian. Here, we define the trajectory that assumes the existence of the robot as the avoiding trajectory $f_{\mathrm{avoid}}$ and the one that does not assume the existence of the robot as the unavoiding trajectory $f_{\mathrm{unavoid}}$.

Assuming $p_t$ is the observed location of the pedestrian at time $t$, the distances from $p_t$ to $f_{\mathrm{avoid}}(t)$ and $f_{\mathrm{unavoid}}(t)$ are defined as follows:

$D_{\mathrm{(un)avoid}}(t) = \|f_{\mathrm{(un)avoid}}(t) - p_t\|$   (16)

Here, $P^{\mathrm{(un)avoid}}_t$ denotes the likelihood that the pedestrian will perform an (un)avoidance behavior at $t$. These likelihood functions are defined as follows:

$P^{\mathrm{avoid}}_t = \gamma \sum_{\tau=0}^{t} \frac{D_{\mathrm{unavoid}}(\tau)}{D_{\mathrm{avoid}}(\tau) + D_{\mathrm{unavoid}}(\tau)}$   (17)

$P^{\mathrm{unavoid}}_t = \gamma \sum_{\tau=0}^{t} \frac{D_{\mathrm{avoid}}(\tau)}{D_{\mathrm{avoid}}(\tau) + D_{\mathrm{unavoid}}(\tau)}$   (18)

where $\gamma$ is a normalization factor.

B. Decision of a robot's behavior

If $P^{\mathrm{avoid}}_t$ is smaller than $P^{\mathrm{unavoid}}_t$, the robot determines that the pedestrian does not intend to avoid a collision, and decides to avoid the collision by itself. If $P^{\mathrm{avoid}}_t$ is larger than $P^{\mathrm{unavoid}}_t$, on the other hand, the robot does not change its behavior and continues moving toward its own goal while comparing these likelihoods. When the robot decides to avoid a collision, it must also decide whether to avoid by moving rightward or leftward. In the robot-centered coordinate frame, with the traveling direction of the robot aligned with the y-axis, the relationship between the robot and the pedestrian is shown in Fig. 8. Here, $q = (q_x, 0)$ denotes the intersection of $f_{\mathrm{unavoid}}$ with the x-axis. If $q_x$ is larger than 0, the robot avoids the collision by moving leftward, and vice versa. If $q_x$ is equal to 0, the robot randomly chooses left or right.

IV. EXPERIMENTS

A. Setup and procedure

In order to verify the proposed method, experiments were conducted. In the experiments, we used an omni-directional mobile robot (Fig. 9), which controls four wheels by using three actuators [11]. The robot was equipped with the LRF used in the experiments of section II. As shown in Fig. 9, the robot is an almost octagonal prism, 178 [mm] on a side, and its height is 912 [mm].
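The prediction and decision steps of eqs. (15)-(18) can be put together as in the following sketch. It rests on several assumptions of ours rather than on the paper's exact procedure: it reuses the social_force sketch given earlier, integrates the model with simple Euler steps at the 125 [ms] measurement interval, assumes all positions are expressed in the robot-centered frame described above (robot heading along +y), approximates the x-axis crossing of f_unavoid by the predicted point nearest the x-axis, and the function names and string return values are purely illustrative.

```python
import numpy as np

def predict_trajectories(p0, v0_a, robot_plan, dt=0.125):
    """Predicted avoiding / unavoiding trajectories of the pedestrian.

    p0         : pedestrian position when prediction starts (2D array)
    v0_a       : desired velocity estimated from the observed track
    robot_plan : sequence of (robot position, robot velocity), one per step
    """
    def rollout(with_robot):
        p, v, traj = p0.astype(float), v0_a.astype(float), []
        for p_b, v_b in robot_plan:
            if with_robot:
                f = social_force(p, v, v0_a, p_b, v_b)  # eqs. (8)-(13)
            else:
                f = (v0_a - v) / 0.5   # acceleration term only, tau as above
            v = v + f * dt
            p = p + v * dt
            traj.append(p.copy())
        return np.array(traj)

    return rollout(True), rollout(False)      # f_avoid, f_unavoid


def decide(observed, f_avoid, f_unavoid):
    """Compare the likelihoods of eqs. (17)-(18) and choose a behavior.

    observed : array of observed pedestrian positions p_tau, one per step,
               in the robot-centered frame (robot heading along +y).
    """
    d_av = np.linalg.norm(f_avoid - observed, axis=1)    # D_avoid(tau), eq. (16)
    d_un = np.linalg.norm(f_unavoid - observed, axis=1)  # D_unavoid(tau)
    p_avoid = np.sum(d_un / (d_av + d_un))    # eq. (17); gamma cancels here
    p_unavoid = np.sum(d_av / (d_av + d_un))  # eq. (18)

    if p_avoid >= p_unavoid:
        return "keep_course"                  # pedestrian appears to be avoiding
    # pedestrian not avoiding: swerve away from where f_unavoid crosses the x-axis
    q_x = f_unavoid[np.argmin(np.abs(f_unavoid[:, 1])), 0]
    if q_x == 0.0:
        return np.random.choice(["swerve_left", "swerve_right"])
    return "swerve_left" if q_x > 0 else "swerve_right"
```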
The travel speed of the robot was 400 [mm/s], and L was fixed at 8.0 [m]. In a single trial, the robot and a participant passed each other in an open space. At the start of a trial, the robot and the participant stood at a distance of 10 [m], as shown in Fig. 10. The goal of the robot was set to a sufficiently distant point on the line through the initial locations of the robot and the participant. The goal of the participant was set in the same manner. In the experiments, the robot started when the participant had moved about 1 [m].

Fig. 8. Decision of a robot's behavior. If q_x > 0, the robot swerves to the left, and vice versa.

Fig. 9. Appearance of the omni-directional robot.

Four healthy men (aged 22 to 24) participated in the experiments. For each participant, the following three types of trials were conducted, with five trials performed for each type.
(i-r) The participant swerved to the right.
(i-l) The participant swerved to the left.
(ii) The participant walked straight ahead.
Through the experiments, we evaluated whether the behavior of the robot was appropriate or not. In trials (i-r) and (i-l), if the robot does not swerve to either side, the behavior is regarded as a smooth avoidance. If the robot swerves to the side opposite the participant, the behavior is regarded as a safe avoidance: in this case it is not necessary for the robot to avoid the collision, so the behavior is not efficient, but the collision risk is quite low, and therefore it is not regarded as a failure. The other cases are regarded as failed avoidance. In trials (ii), if the robot avoids the participant, its behavior is regarded as a smooth avoidance. On the other hand, if the robot does not avoid the participant, moves straight ahead, and the distance between the participant and the robot becomes shorter than 1 [m], the behavior is regarded as a failed avoidance. The parameters of the social force model were predetermined for each participant based on preliminary experiments.

B. Results

TABLE I
SUCCESS RATE OF THE ROBOT'S AVOIDANCE BEHAVIOR

            Smooth   Safe   Failure
  (i-r)       40%     50%     10%
  (i-l)       70%     15%     15%
  (ii)        90%      -      10%
  Average     67%     22%     12%

An example of the experimental scenes is shown in Fig. 11. As shown in Table I, the rates of smooth avoidance in trials (i-r), (i-l), and (ii) were 40%, 70%, and 90%, respectively. The rates of safe avoidance in (i-r) and (i-l) were 50% and 15%, respectively; there is a large difference between (i-r) and (i-l). In the (i-l) situation, the participants tended to keep a longer distance from the robot. The dominant leg or eye of the participants may explain this result. The total rate of successful avoidance was 89%. The failures can be divided into two factors: one is attributed to tracking failures, and the other is caused by inconsistency between the pedestrian model and the observed trajectories. When the legs of a participant's trousers were very close to each other, the proposed leg-detection algorithm did not function properly, which decreased the tracking accuracy. In this study, we applied the social force model to model pedestrian behavior. However, the model cannot completely represent individual differences among pedestrians. Therefore, the observed trajectories of the participants were not always consistent with the model.

Fig. 10. Experimental placement of the robot and a participant.

Fig. 11. An example of the experimental scene.

V. CONCLUSION

In this study, we proposed a method to determine whether a pedestrian performs an avoidance behavior or not, and developed a robot that smoothly avoids a collision with the pedestrian. The usefulness of the proposed method was demonstrated through experiments. In the experiments, the behaviors of the participants were qualitatively controlled for the validation. However, in actual situations, persons may change their behavior in response to a robot's behavior. In future work, we will test a mobile robot using the proposed method in an actual human-robot coexisting environment.

REFERENCES

[1] Oussama Khatib, "Real-Time Obstacle Avoidance for Manipulators and Mobile Robots," The International Journal of Robotics Research, vol.5, no.1, pp.90-98, 1986.
[2] Animesh Chakravarthy and Debasish Ghose, "Obstacle Avoidance in a Dynamic Environment: A Collision Cone Approach," IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, vol.28, no.5, pp.562-574, 1998.
[3] Mitsumasa Yoda and Yasuhito Shiota, "Analysis of Human Avoidance Motion for Application to Robot," Proceedings of the IEEE International Workshop on Robot and Human Communication, pp.65-70, 1996.
[4] Mitsumasa Yoda and Yasuhito Shiota, "The Mobile Robot Which Passes a Man," Proceedings of the IEEE International Conference on Robot and Human Communication, pp.112-117, 1997.
[5] Takafumi Matsumaru, "Development of Four Kinds of Mobile Robot with Preliminary-Announcement and Display Function of Its Forthcoming Operation," Journal of Robotics and Mechatronics, vol.19, no.2, pp.148-159, 2007.
[6] Takafumi Matsumaru, "Experimental Examination in Simulated Interactive Situation between People and Mobile Robot with Preliminary-Announcement and Indication Function of Upcoming Operation," Proceedings of the IEEE International Conference on Robotics and Automation, pp.3487-3494, 2008.
[7] Y. Murakami, Y. Kuno, N. Shimada, and Y. Shirai, "Collision Avoidance by Observing Pedestrians' Faces for Intelligent Wheelchairs," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.2018-2023, 2001.
[8] G. Welch and G. Bishop, "An Introduction to the Kalman Filter," Dept. Comp. Sci., Univ. North Carolina, Chapel Hill, TR95-041, 1995.
[9] Elena Pacchierotti, Henrik I. Christensen, and Patric Jensfelt, "Evaluation of Passing Distance for Social Robots," Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication, pp.315-320, 2006.
[10] Dirk Helbing and Péter Molnár, "Social force model for pedestrian dynamics," Physical Review E, vol.51, no.5, pp.4282-4286, 1995.
[11] H. Asama, M. Sato, L. Bogoni, H. Kaetsu, A. Matsumoto, and I. Endo, "Development of an Omni-Directional Mobile Robot with 3 DOF Decoupling Drive Mechanism," Proceedings of the 1995 IEEE International Conference on Robotics and Automation, pp.1925-1930, 1995.